I really don't see the argument for this tech being any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND are happy about the people who are in control of this tech. Which I guess captures a larger part of the HN crowd than I'd hoped.
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to improve our foundations of trust: moving from something that relies on the good will of random authorities toward something more objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen are more about, at least, open-weights AI models.
You could have said the same about, say, pre-AI deceptively edited/ragebait/made-up content going viral on FB: "actually this is good, because soon people will realize they were tricked/lied to, and they'll think extra-critically before sharing dubious videos next time."
Which has not happened. I can only see AI videos making the problem worse as people are fed personalized, narrowly targeted content that seems to perfectly align with their own beliefs/emotions/etc.