The role of speculative fiction isn't to accurately predict future technology, nor does it become obsolete when its predictions miss.
You're kinda missing the entire point of the story.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets rather than a simulation of a human brain, the way we might treat them is no less thought-provoking.
good sci fi is rarely about just the sci part.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" style of system prompt people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
Or the point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion), mirroring LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
that’s one way to look at it I guess
have you pondered that we're riding the very fast statistical machine wave at the moment? however, perhaps at some point this machine will finally help solve the BCI and unlock that pandora's box. from there to fully imaging the brain will be a blink, and from there to running copies on very fast hardware will be another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future
hopefully not like soma :-)
Anyway, I'd give 50:50 odds that your comment itself will feel amusingly anachronistic in five years, after the current bubble pops and LLMs are recognized as a dead end that does not and will never lead to AGI.
And a warning, I guess, in the unlikely case of brain uploading becoming a thing.
E.g.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
Ring a bell?