I disagree with your first assumption. Well, mostly disagree. Let me explain.

It's true that AI-generated arguments can have true premises and valid inferences, some of the time. But the models are still too likely to hallucinate. Let's say that at some distant future date the hallucination rate gets down to just 10%, so there's a 90% chance that any given argument is well constructed. (Personally I doubt LLMs will ever get there as long as they remain statistically based; I think it will take a model grounded in facts and logical reasoning, rather than in the probability of the next word being "the" or "argument" or "premise", before they can reliably produce reasoning that follows actual logic.)

But here's the thing. When I'm reading an article, I'm not asking "is this 90% likely to be true?" I'm looking for 100%. If a source has a 10% chance of being wrong, I'm going to skip it in favor of a source that has a 0% chance of being wrong, or, if that's not possible, a 1% chance. Yes, that's a logical fallacy... if my goal were proving the argument wrong. But my goal is different. My goal is finding reliable information as quickly as I can, and to that end the genetic fallacy is actually useful to apply. Not as an "it's written by AI, so it's wrong" argument (that would be fallacious indeed), but as "it's written by AI, so I'm not going to spend time on it; I'll skip to another article that is less likely to contain hallucinations." That is a genuinely useful heuristic.
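To put rough numbers on why that heuristic pays off: even a modest per-claim error rate compounds quickly over an article that makes many claims. Here is a minimal sketch, under my own assumptions that claims are independent and that the error rates (1% for a carefully reviewed source, 10% for LLM output) are purely hypothetical:

```python
# Sketch: how a per-claim error rate compounds across an article.
# Assumptions (mine, for illustration): claims are independent, and the
# per-claim error rates below are hypothetical placeholders.

def chance_fully_correct(per_claim_error_rate: float, num_claims: int) -> float:
    """Probability that every claim in an article is correct."""
    return (1 - per_claim_error_rate) ** num_claims

# A hypothetical article making 10 factual claims:
for source, error_rate in [("human-reviewed docs", 0.01), ("LLM output", 0.10)]:
    p = chance_fully_correct(error_rate, num_claims=10)
    print(f"{source}: {p:.0%} chance the whole article is error-free")

# Prints roughly:
#   human-reviewed docs: 90% chance the whole article is error-free
#   LLM output: 35% chance the whole article is error-free
```

So "90% reliable per claim" does not mean "90% reliable per article"; for a ten-claim article it's closer to a coin flip weighted against you, which is why skipping the source entirely is a rational time-saver.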

I've had one too many cases where I asked an LLM, "Can product XYZ do ABC?" and it confidently told me, "Yes, you can do ABC with XYZ, and here's how to do it." Then I looked at the actual documentation for XYZ, which specifically said, "We can't do ABC; at some future point we plan to add it, and then you will be able to do this: (example code)." The example code the documentation gave for that unreleased feature was exactly what the LLM had spat out at me as proof that "yes, you can do ABC," when the truth was the opposite.

The maxim falsus in uno, falsus in omnibus doesn't really apply to LLMs, because they have no moral component. It applies to people: someone whose ethics forbid them to lie is reliable, while someone who is willing to lie about one thing is very likely willing to lie about other things, and is therefore unreliable as a source of information. LLMs have no sense of morality, and when they hallucinate they're not lying, per se, since lying requires knowing the truth and willingly saying the opposite (as opposed to being mistaken, where you think you're telling the truth even though you're stating an objective falsehood). LLMs don't know the truth; that concept simply isn't built into them. So they're not lying, and their willingness to "lie" once does not prove a moral defect. But the fact that they hallucinate a measurable percentage of the time makes them just as unreliable a source of information as a person who is willing to lie.

So while I do agree that AI-generated arguments can be logically correct, there is no guarantee that any particular one is. And while it would be fallacious to say "AI-generated, therefore false," it is still useful to say "AI-generated, therefore unreliable, so I'll seek out a different source of information."
