The downside to having labels on AI-written political comments, stellar reviews of bad products, speeches by a politician, or supposed photos of wonderful holiday destinations in ads targeted at old people is what, exactly?
Are you really arguing that putting a label on AI-generated content could somehow do more harm than leaving it (approximately) indistinguishable from the real thing?
I'm not arguing that we need to label everything that used gen AI in any capacity, but past the point of e.g. minor edits, yeah, it should be labeled.
People have been writing articles without the help of an LLM for decades.
You don't need an LLM for grammar and spell checking; arguably an LLM is less efficient and currently worse at it anyway.
The biggest help an LLM can provide is with research, but that is only because search engines have been artificially enshittified these days. Even there the usefulness is very limited because of hallucinations, so you might be better off without one.
There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high-quality content.
So no, don't believe the hype. There will still be plenty of journalists who don't use LLMs at all.
Californians have measurably lower concentrations of toxic chemicals than non-Californians, so very useless!