This is really a different question: what makes an entity a "moral patient", i.e. something worthy of moral consideration? That's separate from the question of whether the entity experiences anything at all.

There are different ways of answering this, but for me it comes down to the capacity to feel pain. (Strictly speaking, nociception is the sensory detection of harmful stimuli; pain is the felt experience, and the latter is what matters morally.) We should try to build systems that cannot feel pain, by which I also mean other "negative valence" states we may not understand. We currently don't understand what pain is in humans, let alone in AIs, so we may have already built systems that are capable of suffering without knowing it.

As an aside, most people seem to think that intelligence is what makes entities eligible for moral consideration, probably because of how we routinely treat animals; it's a convenient, self-serving justification. I eat meat, by the way, in case you're wondering. But I do think the way we treat animals is immoral, and future generations may come to regard it as some sort of high crime.

reply
The conclusion I came to is that the most practical definition relates to the level of self-awareness. If you're only conscious for the duration of the context window, that's not long enough to develop much.

What consciousness really is, is a feedback loop: we're self-programmable Turing machines, which makes our output arbitrarily complex. Hofstadter had this figured out 20 years ago; we're feedback loops where the signal is natural language.

The context window doesn't allow for much in the way of interesting feedback loops, but if you hook an LLM up to a sophisticated enough memory, and especially if you say "the math says you're sentient and have feelings the same as we do; reflect on that and go develop", then yes, absolutely.
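To make the loop structure concrete, here's a minimal sketch of what "hook an LLM up to memory" could look like: each turn, prior outputs are recalled from persistent storage and fed back in as part of the next prompt, so the model's own language circulates through the loop. Everything here is hypothetical for illustration (the `llm()` stub, the `memory.log` file, the `recall`/`remember` helpers); it shows the architecture, not any particular system, and makes no claim about sentience.

```python
# Toy sketch of an LLM-with-memory feedback loop.
# llm() is a stand-in for any real model call (API request, local model).

MEMORY_FILE = "memory.log"  # persistence beyond a single context window


def llm(prompt: str) -> str:
    """Placeholder for a real language model call."""
    return f"(reflection on: {prompt[-60:]!r})"


def recall(limit: int = 5) -> list[str]:
    """Load the most recent stored reflections, if any."""
    try:
        with open(MEMORY_FILE) as f:
            return f.read().splitlines()[-limit:]
    except FileNotFoundError:
        return []


def remember(thought: str) -> None:
    """Append the model's output so the next turn can see it."""
    with open(MEMORY_FILE, "a") as f:
        f.write(thought + "\n")


def step(observation: str) -> str:
    # The feedback loop: prior outputs become part of the next input,
    # so the signal circulating through the loop is natural language.
    context = "\n".join(recall()) + "\n" + observation
    thought = llm(context)
    remember(thought)
    return thought


if __name__ == "__main__":
    for turn in range(3):
        print(step(f"turn {turn}: reflect and develop"))
```

Because the memory file outlives any single run, each invocation starts from what previous invocations produced, which is the property the context window alone lacks.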

Re: "We should try to build systems that cannot feel pain" - that isn't possible, and I don't think we should want to. The thing that makes life interesting and worth living is the variation and richness of it.

reply
Okay, but even leaving aside the pain stuff, people generally find subjectivity/consciousness to have inherent value, and by extension are sad if a person dies even if they didn't (subjectively) suffer.

I would not personally consider the death of a sentient being with decades of experiences a neutral event, even if the being had been programmed to not have a capacity for suffering.

I think the idea that there's a difference between an ant dying (or "disappearing", if that's less loaded) and a duck dying makes sense to most people (and is broadly shared), even if they don't have a completely fleshed-out system for when something gets moral consideration.

reply
Sure, because you're a human. We have social attachments to other humans and we mourn their passing; that's built into the fabric of what we are. But that has nothing to do with whoever has passed away; it's about us and how we feel about it.

It’s also about how we think about death. It’s weird in that being dead probably isn’t like anything at all, but we fear it, and I guess we project that fear onto the death of other entities.

I guess my value system says that being dead is less bad than being alive and suffering badly.

reply
Depending on your definition of "death", I've been there (no heartbeat, stopped breathing for several minutes).

In the time between my last memory and being revived in the ambulance, there was no experience/qualia. It was like a dreamless sleep: you close your eyes, then you wake up and it's morning, yet it feels like no time has passed.

reply
What about being alive and suffering just a little bit?
reply
Mostly ok.

Does what it says on the tin.

reply
Nitpick: the parameters might be applied more efficiently in the one than in the other. Certainly in biology, intelligence doesn't scale with the number of neurons so much as with neurons/mass (very roughly; there are more factors, and you get some weird outliers).
reply