When humans are involved, "I took this data and made a really fancy interactive chart" means you put a lot more work in, and you can reasonably assume some of that effort also went into the accuracy of the data.
But with an LLM, the fancy chart takes barely any extra work. So what used to be a signifier of effort now misleads us into trusting data that got no extra effort.
(Humans have been exploiting this tendency to trust fancy graphics forever, of course.)
There has always been a bias towards form over function.
P.S. Credit to the poster, she posted a correction note when someone caught the issue: https://www.linkedin.com/posts/mariamartin1728_correction-on...
Honestly, people make up data just as often, or generate equally incorrect graphs.
It's about time our trust in random visualizations was destroyed, at least when the actual formulas and data behind them aren't exposed.
People find them quite easy to check - easier than the raw document. My angle with teams is to use these to check your processes. If the flow is wrong, it's either because the LLM has screwed up or because the policy is wrong or badly written. It's usually the latter. It's a good way to fix SOPs.
I still review each diagram afterward, but the great thing is that, unlike image-based diagrams, they remain fully text-readable and searchable. And you can even expose them as part of the knowledge base for the LLM to reference when needed going forward.
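For what it's worth, here's a minimal sketch of what "text-readable and searchable" buys you, assuming a Mermaid-style flowchart (the diagram and node names here are invented for illustration):

```python
# A Mermaid-style flowchart kept as plain text. Unlike an exported image,
# the source can be grepped, diffed, and indexed like any other file.
# The node names (Request/Review/Deploy) are made up for this example.
diagram = """\
flowchart TD
    Request --> Review
    Review -->|approved| Deploy
    Review -->|rejected| Request
"""

# Searching the diagram is just string matching on its source lines:
hits = [line.strip() for line in diagram.splitlines() if "Review" in line]
print(len(hits))  # three lines mention the Review step
```

The same plain-text source is what you'd drop into a knowledge base for the LLM to reference later.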
I'm finding more and more often the limiting factor isn't the LLM, it's my intuition. This goes a way towards helping with that.
https://www.reddit.com/r/dataisugly/comments/1mk5wdb/this_ch...
I mean, is it really that shocking that you can have an LLM generate structured data and shove that into a visualizer? The concern is whether it's reliable, which we know it isn't.
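To make the reliability concern concrete: here's a minimal sketch of a sanity check run on LLM-emitted chart data before it ever reaches a visualizer. The schema (label/value pairs meant to sum to 100) and the numbers are invented for illustration:

```python
import json

# Hypothetical LLM output for a pie chart. The values here deliberately
# don't add up, which a renderer would happily plot anyway.
llm_output = '[{"label": "A", "value": 70}, {"label": "B", "value": 45}]'

def validate_percentages(raw, tolerance=0.5):
    """Return (ok, total): ok is True only if values sum to ~100."""
    data = json.loads(raw)
    total = sum(item["value"] for item in data)
    return abs(total - 100) <= tolerance, total

ok, total = validate_percentages(llm_output)
print(ok, total)  # this example data sums to 115, so the check fails
```

A check like this catches only the crudest errors, of course; it can't tell you whether the numbers themselves were hallucinated.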
Passive questions generate passive responses.