https://www.youtube.com/watch?v=hNuu9CpdjIo
"I HAVE LLM SKILLS! I'M GOOD AT DEALING WITH THE LLMS!"
It is common, and IMO a mistake, to rely on the AI as the sole source for answers to follow-up questions. Better verification would have humans sign off on the veracity of the fundamental assumptions. But where does that sign-off live? Can an AI model be trusted to remember and rely on previous corrections? In a public cloud this seems impossible, or at least open to adversarial manipulation.