I see a lot of speculation by people who do not.
I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.
Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.
It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.
My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests including unit, integration, logging, microbenchmarks, fuzzing, memory-leak checks, etc.), self-assessments/code reviews, adverse AIs critiquing other AIs, and so on, with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." Ad nauseam.
BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.
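To make the idea concrete, here's a minimal, purely illustrative sketch in Python of that gating loop: generate, verify against objective checks, retry, and only accept what survives. `fake_model`, `checks`, and `gated_generate` are hypothetical names, and the "model" is just a stub that fails once before succeeding — it stands in for a real LLM integration, which this is not.

```python
import random

def fake_model(attempt: int):
    """Stand-in for an LLM returning a candidate add() implementation.
    The first attempt is deliberately wrong, mimicking a fabricated 'solution'."""
    if attempt == 0:
        return lambda a, b: a - b   # confidently wrong
    return lambda a, b: a + b       # corrected on retry

def checks(candidate):
    """Tiny battery standing in for unit tests plus crude fuzzing."""
    cases = [(1, 2, 3), (0, 0, 0), (-5, 5, 0)]
    for _ in range(20):  # random inputs as a poor man's fuzzer
        a, b = random.randint(-99, 99), random.randint(-99, 99)
        cases.append((a, b, a + b))
    return all(candidate(a, b) == want for a, b, want in cases)

def gated_generate(max_attempts=3):
    """Never accept output on faith: loop until a candidate passes every check."""
    for attempt in range(max_attempts):
        candidate = fake_model(attempt)
        if checks(candidate):
            return candidate
    raise RuntimeError("no candidate survived the control stack")

add = gated_generate()   # the wrong first attempt is rejected automatically
print(add(2, 2))
```

In practice the checks would be your real test suite and adversarial reviewers, and a human still signs off at the end — the point is only that acceptance is mechanical, not trust-based.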
----
Every day I deal with bad judgment calls from humans (sometimes my own!), but I don't screenshot them, because it's not polite.
I don't think we're at the top of the curve yet? Current AIs have only been able to write code _at all_ for less than 5 years.
Code in particular is a domain that should be reasonably amenable to RL, so I don't think there are any particular reasons why performance should top out at human levels or be limited by training data.
There are clearly some pressures making it worse. It's expensive to run, and, unbelievably, it still seems to be under-provisioned somehow.
Could you have looked at early Myspace and declared social media would only get better? By some measures it was already at its peak.
Social media "regressed" from the point of view of users because the success metric from the network's point of view was value extraction per eyeball-minute. As long as there continue to be strong financial incentives to have the strongest coding model I think we'll see progress.
----
Or (3) they disagree with you.
If you want me to admit that machines will never be conscious — that's fine — I just need you to admit that lots of humans are not conscious, then, either.
----
I have never had a better bookclub participant than an LLM — if becoming a great reader correlates with becoming a great writer, then no human can compare.
----
Michael Pollan recently released A World Appears [0], which explores consciousness from the minds of writers, scientists, philosophers, and plants (among other "inanimates").
I'm only on page 15, but his introduction explores distinctions between sentience, consciousness, and intelligence. Two of these are possible without brains – perhaps all three?
As usual, this author's footnotes keep you thinking: what is it like to be a sentient plant (e.g. the "chameleon vine" [1] which mimics its host leaf patterns/shape/color)?
[0] <https://www.amazon.com/World-Appears-Journey-into-Consciousn...>
----
Statistical approaches were already extremely unpopular socially and politically long before AI came around. Have you considered that they just don't work?
There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.
The burden of proof is on the side making the grand prophecies.