> LLMs aren’t built around truth as a first-class primitive.

neither are humans

> They optimize for next-token probability and human approval, not factual verification.

while there are outliers, most humans also tend to tell people what they want to hear and to fit in.

> factuality is emergent and contingent, not enforced by architecture.

like humans; as far as we know, there is no "factuality" gene, and we lie to ourselves, to others, in politics, in scientific papers, to our partners, etc.

> If we’re going to treat them as coworkers or exoskeletons, we should be clear about that distinction.

I don't see the distinction. Humans exhibit many of the same behaviours.

reply
If an employee repeatedly makes factually incorrect statements, we will (or could) hold them accountable. That seems to be one difference.
reply
deleted
reply
Strangely, the GP replaced the ChatGPT-generated text you're commenting on by an even worse and more misleading ChatGPT-generated one. Perhaps in order to make a point.
reply
deleted
reply
There's a ground truth to human cognition in that we have to feed ourselves and survive. We have to interact with others, reap the results of those interactions, and adjust for the next time. This requires validation layers. If you don't see them, it's because they're so intrinsic to you that you can't see them.

You're just indulging in a sort of idle, cynical judgement of people. Lying well even takes a careful, truthful evaluation of the possible effects of that lie and the likelihood and consequences of being caught. If you yourself claim to have observed a lie, and can verify that it was a lie, then you understand a truth; you're confounding truthfulness with honesty.

So that's the (obvious) distinction. A distributed algorithm that predicts likely strings of words doesn't do any of that, and doesn't have any concerns or consequences. It doesn't exist at all (even if calculation is existence - maybe we're all reductively just calculators, right?) after your query has run. You have to save a context and feed it back into an algorithm that hasn't changed an iota from when you ran it the last time. There's no capacity to evaluate anything.
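
To make that concrete, here's a minimal sketch (plain Python, not any particular vendor's API; `frozen_model` is a hypothetical stand-in for a trained model) of what "save a context and feed it back" looks like from the caller's side:

```python
# Sketch of stateless inference: the model's weights are frozen, so any
# appearance of "memory" comes from the caller re-sending the accumulated
# context on every turn. The model itself changes nothing between calls.

def frozen_model(context: str) -> str:
    """Stand-in for a trained model: a pure function of its input."""
    return f"[reply conditioned on {len(context)} chars of context]"

context = ""                       # all "state" lives outside the model
for user_turn in ["Hello", "What did I just say?"]:
    context += f"User: {user_turn}\n"
    reply = frozen_model(context)  # same frozen weights on every call
    context += f"Assistant: {reply}\n"
    print(reply)
```

The point of the sketch is just that the loop, not the model, carries the history; nothing the model "experienced" last call persists inside it.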

You'll know we're getting closer to the fantasy abstract AI of your imagination when a system gets more out of the second time it trains on the same book than it did the first time.

reply
A much more useful tool is a technology that checks for our blind spots and bugs.

For example, fact-checking a news article and making sure what gets reported lines up with base reality.

I once fact-checked a virology lecture and found that the professor had confused two brothers for one individual.

I am sure the professor has a super solid grasp of how viruses work, but errors like these probably creep in all the time.

reply
Ethical realists would disagree with you.
reply
> Humans don’t have an internal notion of “fact” or “truth.” They generate statistically plausible text.

This doesn't jibe with reality at all. Language is a relatively recent invention, yet somehow Homo sapiens were able to survive in the world and even use tools before the appearance of language. You're saying they did this without an internal notion of "fact" or "truth"?

I hate the trend of downplaying human capabilities to make the wild promises of AI more plausible.

reply