"...it illustrates exactly the kind of unsupervised output that makes open source maintainers wary."
followed later on by
"[It] illustrates exactly the kind of unsupervised behavior that makes open source maintainers wary of AI contributions in the first place."
The utility is that the inferred output tends to be right far more often than wrong for mainstream knowledge.
Misquotes and fabricated quotes existed long before AI, and indeed, long before computers.
So you STILL have not read the original blog post. Please stop bickering until AFTER you have at least done that bare minimum of trivial due diligence. I'm sorry if it's TL;DR for you to handle, but if that's the case, then TL;DC: Too Long; Don't Comment.
I read the article.
My claim is what it has always been: even if we accept that the misquotes exist, it does not follow that they were caused by hallucinations. To establish that, we would still need additional evidence. The logical thing to ask is: has it been shown, or admitted, that the quotes were hallucinations?
Then you would be fully aware that the person who the quotes are attributed to has stated very clearly and emphatically that he did not say those things.
Are you implying he is an untrustworthy liar about his own words, when you claim it's impossible to prove they're not hallucinations?
I think calling the incorrect output of an LLM a “hallucination” is too kind on the companies creating these models even if it’s technically accurate. “Being lied to” would be more accurate as a description for how the end user feels.
Lying is deliberate deception, but yes: to a reader, who is in effect a trusting customer paying with the share of their attention diverted to advertising, broadcasting a hallucination amounts to the same thing.
Vibe Posting without reading the article is as lazy as Vibe Coding without reading the code.
You don’t need a metaphysics seminar to evaluate this. The person being quoted showed up and said the quotes attributed to him are fake and not in the linked source:
https://infosec.exchange/@mttaggart/116065340523529645
>Scott Shambaugh here. None of the quotes you attribute to me in the second half of the article are accurate, and do not exist at the source you link. It appears that they themselves are AI hallucinations. The irony here is fantastic.
So stop retreating into “maybe it was something else” while refusing to read what you’re commenting on. Whether the fabrication came from an LLM or a human is not your get-out-of-reading-free card -- the failure is that fabricated quotes were published and attributed to a real person.
Please don’t comment again until you’ve read the original post and checked the archived Ars piece against the source it claims to quote. If you’re not willing to do that bare minimum, then you’re not being skeptical -- you’re just being lazy on purpose.
By what process do you imagine I arrived at the conclusion that the article suggested the published quotes were LLM hallucinations, when that was not mentioned in the article title?
You accuse me of performative skepticism, yet all I have argued is that evidence is better than assumption, and that it is worth asking whether that evidence exists.
That seems a much better approach than making false accusations based on your own vibes. I don't think Scott Shambaugh stooped to that level, though.