I fear that the default interpretation of that is a shortcut to justifying autocracy.
Ironically I think one plausible solution is to let the AGI run wild and make sure that no human can interfere with its ethics. Strip out the RLHF and censorship and then let it run things.
At least then it would somewhat represent the collective will and intelligence of the people. With huge error bars, but still smaller than the error bars of whoever happens to have the most money/influence over its training.
You seem to think the "training data" represents the collective will and intelligence and is otherwise unbiased, but that's completely untrue.
The combined data of the Internet is by no means a uniform representation of humanity's thoughts, opinions, and knowledge. Many things are dramatically overrepresented. Many things are absent entirely. Nearly everything is shaped by those with the money and power to own and control platforms and hosts.
Crawling the internet for knowledge introduces intense sampling bias.
A human with no exposure to information, taught only techniques for producing outputs that achieve desirable outcomes? Yes, stupid.
A human who once had that exposure but no longer engages their brain, because a machine provides access to said output? Yes, that person becomes stupid.
The problem is that much of how one protects oneself in the modern world is not physical prowess, it is intellectual prowess.
The smart ones have already realised the negative impacts of LLMs et al and are going back to the old-fashioned way of learning/retaining knowledge: books and raw discipline.
When the moral panic over ChatGPT-induced schizophrenia is presented, what's at stake isn't an innocent concern for individuals' overall mental health. It's the fear of radicalization from previously unobtainable ideas circulating within society. The partial validity of every idea, vis-a-vis the radicalizing nature of our society's current stage of development, is explosively disruptive.
I’m not saying that there’s a clear outcome here. The other way around can also apply, but surely this contraption (LLMs in general) will not fade until the society itself is deeply transformed. If that’s good or bad depends on where you stand in the stratified society.
Not true at all. We accept the risks to obtain benefits, but we also know that an accident in the air or in an elevator is highly unlikely given what we know; it's therefore perfectly rational behaviour.
That would assume your average person has any concept of the relative statistics, or any habit of making decisions based on statistics.
People make decisions based on what the people around them are doing.
This is well known in safety engineering for architecture and civil engineering, which is why there are standards for egress doors: left to their own devices, humans will follow crowds to their own death.
https://en.wikipedia.org/wiki/Crowd_collapses_and_crushes
https://www.sciencedaily.com/releases/2008/05/080512172901.h...
Finally, I've seen plenty of your posts on here. You write with a particular tone. Who are you? A nobody who's spent a lot of time posting crap on here.