No, they don't, because they have brains.

I honestly don't understand why people can't accept that intelligence formed separately from a human brain can truly be alien: not in the hyperbolic sense of "that person is so unique it's like they're a different species", but "that thing does not have a brain, so it can have intelligence that is not human-like".

A human without a brain would die. An LLM doesn't have a brain and can do wondrous things.

It just does them in ways that require first accepting that no Homo sapiens thinks like an LLM.

We trained it on human language, so it often borrows our thought traces, so to speak, but effective agentic systems form when you first erase your preconceived notions of how intelligence works and actually study this non-human intelligence and find new ways to apply it.

It's like the early days of agents, when everyone thought that if you just made an agent for each job role in a company and stuck them in a virtual office handing off work to each other, it'd solve everything; but then Claude Code took off and showed that a simple brain-dead loop could outperform that.

Now subagents are almost always task-specific, not role-specific.
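
To make "brain-dead loop" concrete, here's roughly the shape of it: a minimal sketch, where call_llm and run_tool are hypothetical stand-ins for whatever model and tool APIs you're actually using, not any real vendor's interface:

    # Minimal sketch of the single-agent tool loop; call_llm and run_tool
    # are hypothetical stand-ins, not a real API.
    def agent_loop(task, max_steps=50):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = call_llm(history)             # model answers, or asks for a tool
            history.append({"role": "assistant", "content": reply.text})
            if reply.tool_call is None:           # nothing left to do
                return reply.text
            result = run_tool(reply.tool_call)    # execute the requested tool
            history.append({"role": "tool", "content": result})
        raise RuntimeError("step budget exhausted")

No org chart, no hand-offs: the model itself decides what to do next, and a task-specific subagent is just this same loop started with a narrower prompt.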

I feel like we could leap ahead a decade if people could divorce themselves from the idea that "we use language, and it uses language, so it is like us", but I think there's just something really challenging about that, because it's never been true.

Nothing that wasn't a human had this level of mastery over human language before. And funnily enough, the first times we even came close (like ELIZA), the exact same thing happened: so this seems like a persistent gap in how humans deal with non-humans using language.

reply
"I feel like we could leap ahead a decade if people could divorce "we use language, and it uses language so it is like us","

Or maybe just maybe... the thing should be much better designed around the human.

That's how personal computers made their way into homes. People like yourself are comical: you don't understand that widespread adoption is what lets people obtain the value the thing intrinsically possesses.

Firms literally exist to take care of the hassle so that the person can get value from the thing sooner rather than later. Like, hello...?

reply
You quote me then start speaking about things completely unrelated to anything I said.

We can't choose whether the LLM is like us, unless you want to go back 10-20 years and choose a new direction for AI/ML.

We stumbled upon an architecture with mostly superficial similarities to how we think and learn, and focused on throwing more compute and more data at it rather than on steering it toward anything human-like.

You're talking about ergonomics that exist at a completely different layer: even if you want to make LLM-based products for humans, around humans, you have to accept that it's not a human and won't make mistakes like a human (even if the mistakes look human).

If anything, you'll make something that burns most people if you blindly pretend it's human-like: a great example is products that give users a false impression of LLM memory to hide the nitty-gritty details.

In the early days, ChatGPT would silently truncate the context window at some point and bullshit its way through recalling earlier parts of the conversation.

With compaction it does better, but still degrades noticeably.
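
For anyone unfamiliar with the two behaviors being compared, a toy sketch; count_tokens and summarize are hypothetical helpers, and real systems are more involved:

    # Silent truncation: drop the oldest messages until the budget fits.
    # The model then confabulates when asked about the dropped parts.
    def truncate(history, budget):
        while count_tokens(history) > budget:
            history.pop(0)
        return history

    # Compaction: replace the oldest chunk with a summary. Still lossy,
    # just more gracefully so; that's why it degrades rather than breaks.
    def compact(history, budget):
        if count_tokens(history) > budget:
            half = len(history) // 2
            summary = summarize(history[:half])
            history = [{"role": "system", "content": summary}] + history[half:]
        return history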

If they'd exposed the concept of a context window to the user through top-level primitives (like being able to manage what's important, for example), maybe it'd have been a bit less clean of a product interface... but way more laypeople today would have a much better understanding of an LLM's very un-human equivalent of memory.
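
As a sketch of what such primitives could look like (purely hypothetical, reusing the made-up count_tokens from above; no vendor ships this interface):

    # Hypothetical user-facing context object: the user can pin what must
    # survive compaction and see how full the window is.
    class Context:
        def __init__(self, budget_tokens):
            self.budget = budget_tokens
            self.pinned = []   # user-marked "always keep"
            self.recent = []   # everything else, eligible for compaction

        def pin(self, message):
            # User says: this must survive any compaction.
            self.pinned.append(message)

        def usage(self):
            # Surface fullness instead of the backend silently deciding.
            used = count_tokens(self.pinned + self.recent)
            return used / self.budget

Even two primitives like "pin this" and "how full am I" would teach users that the model's memory is a finite window, not human-like recollection.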

Instead we still give users lossy, incomplete pictures of all this, with the backends silently deciding when to compact and what information to discard. Most people using the tools don't know this happens because they're never given an active role in the process.

reply
I think these are reasonable questions, but they assume that everything actually is a black box, rather than just being treated as one.

Despite what the headlines say, these systems aren’t inscrutable.

We know how these things work: we can build around and within them, change parameters and activation functions, etc., and actually apply experience, science, and guidance.

However, those are not technical problems; they are organizational, social, and, quite frankly, resource-allocation problems.

reply
I said the opposite of what your comment is replying to.

> but effective agentic systems form when you first erase your preconceived notions of how intelligence works and actually study this non-human intelligence and find new ways to apply it.

There's no reason you can't make good use of them and learn how to do it more reliably and predictably; it's just that chasing those gains through a human-intelligence-like model, because they use human language, leads to more false starts and local maxima than trying to understand them as their own systems.

I don't think it should even be a particularly contentious point: we humans think differently based on the languages we learn and grow up with, so what would you expect when you remove the entire common denominator of a human brain?

reply