Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (What Computers Can't Do, 1972). All this demonstrates is that we need to be careful with claims about computer intelligence.
reply
Sure, but it was probably stuck at doing that one thing.

Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in a hundred languages and grasp tasks, concepts, and so on.

I mean, let's talk about what this 'hype' was once we see a clear ceiling appear and progress gets 'stuck', but until then I'll save my judgment for judgment day.

reply
It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.

Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.

reply
I'm pretty sure most people take issue with the AGI label because we've been raised in a culture to believe that AGI is a super entity that is a complete superset of humans and could never, ever be wrong about anything.

In some sense, this isn't really different from where society was headed anyway. The trend was already that more and more sections of the population were being deemed irrational, and that you're just stupid/evil for disagreeing with the state.

But that reality was still probably at least a century out without AI. With AI, you have people pushing that narrative right now. It makes me wonder whether these people really respect humanity at all.

Yes, you can poke at the slippery slope and go from "superintelligent beings exist" to effective totalitarianism, but you'll find plenty of dubious commitments along the way.

reply
No one who read science fiction in 1955 would call any of the various models we know "artificial intelligence". They would be impressed, even excited at first that this was it... until they'd had a chance to evaluate it.

Science fiction from that era even had the concept of what these models are... they'd call it an "oracle". I can think of at least three short stories (though I can't recall the authors at the moment). The concept was a device that could provide correct answers to any question. But these devices had no agency, were dependent on framing the question correctly, and were limited in other ways besides (I think in one story the device might chew on a question for years before providing an answer... mirroring that time around 9am PST when Claude has to keep retrying to send your prompt).

We've always known what we meant by artificial intelligence, at least until a few years ago when we started pretending that we didn't. Perhaps the label was poorly chosen (all those decades ago) and a better one could be chosen now (AGI isn't that better label; it's dumber still), but it's what we're stuck with. And we all know what we mean by it. We almost certainly do not want that artificial intelligence, because most of us are certain it will spell the doom of our species.

reply
Just don't move the goalposts. AGI was already here the day ChatGPT came out:

https://www.noemamag.com/artificial-general-intelligence-is-...

reply
If you didn't call GPT-3.5 AGI, I do not believe you when you claim you would have called 5.5 AGI.
reply
I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much more than what we have, and I don't know if they are ever going to get there. I'm not sure what's even there at this point, or what will justify their investments.
reply
... until you actually, like, use it and find out all the limitations it has.
reply
How is this relevant? Human General Intelligence has a lot of limitations as well, and we have managed to do a lot.
reply
This is like saying that talking about my financial limitations is irrelevant because Jeff Bezos also has financial limitations...
reply
GPT-4 was 3 years ago... since then it's been iterative enhancement.
reply
And I've been told my job (litigation attorney) is about to be replaced for over 3 years now; it has yet to come close.
reply
People always overestimate the impact of technology because they don't understand the human aspect of many businesses. Will this work eventually be replaced, or will its shape be completely different in the future? That's an easy yes. When is that future? That's a big unknown. In my experience, this kind of thing takes at least a decade (and possibly more in this case) to make a big impact like replacing all of X.
reply
deleted
reply
These models need orders-of-magnitude improvements before they can be more helpful than just "find me an example of [an extremely basic principle]", which most of the time they don't get right anyway.
reply
What kind of litigation attorney?

I've been working with a startup that I want to invest in. For the paperwork and all the nitty-gritty details, instead of spending $20k on lawyers and a whole lot more time going back and forth with them, the four of us (me, their CEO, my AI, and their AI) sat in a room together and hashed it out until both sides were equally satisfied with the contract. (There's some weird stuff, so a templated SAFE agreement wasn't going to work.) I'm not saying you're wrong, just that lawyers as a profession aren't going to be unchanged either.

reply
Maybe ask your LLM what a litigator is, because none of what you described (not) involving your attorney in is litigation.
reply
If you present ELIZA to people, some will think it is AGI today.
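
For context, ELIZA-style "understanding" is little more than keyword matching plus template substitution. Here is a minimal sketch of that idea in Python; the specific rules are invented for illustration and are not Weizenbaum's actual 1966 script, which was larger but not fundamentally more sophisticated:

    import re

    # A few keyword -> template rules in the spirit of ELIZA.
    # These particular patterns are illustrative, not the original script.
    RULES = [
        (r"\bi need (.+)", "Why do you need {0}?"),
        (r"\bi am (.+)", "How long have you been {0}?"),
        (r"\bmy (\w+)", "Tell me more about your {0}."),
    ]

    def eliza_reply(utterance: str) -> str:
        # Return the first matching template, filled with the captured text.
        for pattern, template in RULES:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Please, go on."  # catch-all keeps the conversation moving

    print(eliza_reply("I need a vacation"))  # Why do you need a vacation?
    print(eliza_reply("my boss hates me"))   # Tell me more about your boss.

No memory, no model of the world, just surface pattern matching, and people still attributed understanding to it.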

There is a reason so many scams happen with technology. It is too easy to fool people.

reply