This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.

The role of speculative fiction isn't to accurately predict future tech, and its value doesn't expire when that tech becomes obsolete.

reply
Yeah, that's like saying Romeo and Juliet by Shakespeare is obsolete because Romeo could have just sent Juliet a snapchat message.

You're kinda missing the entire point of the story.

reply
100% agree, but I relish the works of William Gibson and Burroughs, who pose those questions AND get the future somewhat right.
reply
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.

While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets instead of a simulation of a human brain, the way we might treat them isn't any less thought-provoking.

reply
when you read this and its follow-up "driver" as a commentary on how capitalism removes persons from their humanity, it's as relevant as it was on day one.

good sci fi is rarely about just the sci part.

reply
That is the same categorical argument as what the story is about: scanned brains are not perceived as people so can be “tasked” without affording moral consideration. You are saying because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans in that way since we can just use LLMs instead.

But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.

For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.

reply
And beyond the ethical points it makes (which I agree may or may not be relevant for LLMs - nobody can know for sure at this point), I find some of the details about how brain images are used in the story to have been very prescient of LLMs' uses and limitations.

E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.

There's also the point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion), mirroring LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.

And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.

reply
Lena isn't about uploading. https://qntm.org/uploading
reply
good stuff
reply
“Irrelevant” feels a bit reductive while the practical question of what actually causes qualia remains unresolved.
reply
I actually think it was quite prescient and still raises important topics to consider - irrespective of whether weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to grow an emerging sense of self? It's a fascinating text.
reply
what

that’s one way to look at it I guess

have you pondered that we're riding the very fast statistical machine wave at the moment, and that perhaps at some point this machine will finally help solve BCI and unlock that pandora's box? from there to fully imaging the brain will be a blink, and from there to running copies on very fast hardware another blink. MMMMMMMMMMacevedo is a very cheeky take on the dystopia we will find on our way to our uploaded-mind future

hopefully not like soma :-)

reply
That seems like a crazy position to take. LLMs have changed nothing about the point of "Lena". The point of SF has never ever been about predicting the future. You're trying to criticize the most superficial, point-missing reading of the work.

Anyway, I'd give 50:50 chances that your comment itself will feel amusingly anachronistic in five years, after the popping of the current bubble and recognizing that LLMs are a dead-end that does not and will never lead to AGI.

reply
I have not seen it as a prediction of actual technology, but mostly as a horror story.

And a warning, I guess, in the unlikely case that brain uploading becomes a thing.

reply
You need to be way less "literal", for lack of a better word. With such a narrow reading of what literature is, you are missing out.

https://qntm.org/uploading

E.g.

> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!

Ring a bell?

reply
Not sure how LLMs preclude uploading. You could potentially be able to make an LLM image of a person.
reply
Found the guy who didn't play SOMA ;)
reply