I just find phrases like this a bit obnoxious at times.
>You would not have had a problem with calling out a badly composed rambling article 5 years ago.
Then why not just say that? It's rambling, and so on. What's so hard about that? Why invent a reason for the issues, as if rambling articles weren't being written 5 years ago?
Like, no, whether it was written by an LLM is not the reason the article has no benchmarks or interpretability results. Those things would be there regardless, if the author were interested in them, so again, there seems to be little point in making such assertions.
But anyway, yes, I can also just move on to the next article. Most of the time I indeed do that.
The subtle ones like this I don't mind too much, as long as they get the content right, which in this case leaves quite a bit to be desired.
I'm also noticing that some people around me seem oblivious to LLM signals that bother me a lot, so people clearly consume media differently.
I absolutely do believe that AI-generated content needs to be called out, although at this point it's safe to say that pretty much all online content is LLM-written.