It's "see this input signal, send these output signals", which seems consistent with the title.
It seems they grow the neural tissue on a chip the neurons can interface with, sending and receiving electrical impulses. They let the neurons self-assemble and "train" them via reward or punishment signals (unclear to me what those are).
Either way, this makes me nauseous in a way I haven't experienced much with tech. The telling thing for me is that all these people are so excited to explain, but not once in the entire video do they speak of ethics or try to mitigate concerns.
We know this is only 200,000 neurons. Dogs have 500 million. Humans have billions. But where is the line for sentience, awareness? Have we defined it? Can we, if we don't understand it ourselves? What are the plans to scale up?
It's legitimately horrifying to me.
If this concern is genuine, I think the first step is to embrace veganism. Because while we don't know the exact threshold, it's pretty obvious a dog or a pig reaches it.
> What are the plans to scale up?
I don't know, slavery on an unimaginable scale? That's where AI is heading too, by the way. Sooner, rather than later, those two things will be one and the same.
Scaling up these neuron cultures is rather something like "head cheese" from Greg Egan's "Rifters" novels (artificial "brains" trained to do network filtering, anti-malware combat etc.).
By Peter Watts actually.
The past 4 billion years of life for prey animals has been "get born, eat, get eaten by a predator." They have never experienced any other environment. Why do we owe them a different one?
This is a very dark path, and I could not trust the people in charge less.
Not an endorsement or a condemnation, just something I learned of recently and found surprising.
Why is it rhetoric? This goes beyond whatever malignant thing was perceived in this study, but why is it a rhetorical non-answer?
> we, deep down, know is bad
this feels like real rhetoric.
You seem hung-up on my using the word rhetoric. Just so we’re on the same page here:
> rhetoric, n: the art of speaking or writing effectively; b) the study of writing or speaking as a means of communication or persuasion
The business writing class I took in college was called Business Rhetoric. It’s not a bad word.
If you’re crafting arguments to get other people to support specific actions or products or policies or whatever, that is unambiguously rhetoric.
> this feels like real rhetoric.
Sure? Rhetoric that implores people to value their principles over theoretical security concerns or FOMO or greed? I wouldn’t exactly call that rakish.
It’s a non-answer because if you really feel doing something is bad, and consider yourself a consequential actor in the world whose contributions meaningfully advance the projects you work on, then why would you want to help someone be the first to do a bad thing? If you don’t feel it’s bad, then there’s no problem. You’re just living your life. That is clearly not the position expressed by the content I responded to. If there are actual concrete concerns that don’t essentially boil down to “well, they’re going to make that money before I do,” then that would be an actual answer.
When used in the negative sense it is, per https://dictionary.cambridge.org/dictionary/english/rhetoric
"disapproving -> clever language that sounds good but is not sincere or has no real meaning"
Are you implying you mean something other than this sense of the word?
Especially when this demo needs 200k neurons, while organisms with vastly fewer neurons exhibit more complex behaviors.
My favorite concrete example is "unusual" amino acids. Quite a few with remarkably useful properties have been demonstrated in the lab. For example, artificial proteins exhibiting strength on par with cement. But almost certainly no living organism could ever evolve them naturally because doing so would require reworking large portions of the abstract system that underpins DNA, RNA, and protein synthesis. Effectively they appear to lie firmly outside the solution space accessible from the local region that we find ourselves in.
I agree with your second point though that this system is massively more complex than necessary for the behavior demonstrated.
Check out the venerable fruit fly (Drosophila melanogaster) and its known lifecycle and behavioral traits. It's a high-profile neuroscience research target; its connectome being fully mapped made the news a few years ago.
Fruit flies have ~140,000 neurons.
The catch is that these brain-on-a-substrate organoids are nothing like actual structured, developed brains. They're more like randomly wired-together transistors than a proper circuit, to use an analogy.
So even though by the numbers they'd definitely have the potential to be your nightmare fuel, I'd be surprised if they're anywhere close in actuality.
We don't need to be experimenting on people, regardless of how many brain cells they may have.
There was a case a few years back of a parasitic twin, attached to an Egyptian baby, that had to be removed. It had a brain and a semblance of a face, but nothing else. When removing it, they gave it a name, because it was a person.
We do the same thing to plants. Why do you have no qualms about killing plants to eat the food they accumulated for their young?
A grain of wheat and a chicken egg are evolutionarily and nutritionally, maybe even ontologically, indistinguishable from one another.
Even if you accept that plants might be conscious and their suffering has to be reduced, you would still harm way fewer plants by eating them directly instead of eating other animals that consume them.
Peter Singer, among others, has been writing on the topic for decades. What-about-plants needs to fade away.
2) Multiple things can be horrible at the same time. Being upset at this doesn't diminish the atrocities happening elsewhere (like war, genocide, slavery of humans). We can hold multiple things in our heads at the same time.
3) This has nothing to do with the conversation or this domain, but because you're bringing it up, I also have ethical concerns about the experience animals have of their own existence, and reduce or eliminate my consumption when possible.
I also agree, the horrors of the tech domain are usually much more subtle and indirect.
But you're right, these things are all linked and should be considered. I think often about sentience. I see the way animals express deep, complex emotions, and I think humans are a bit naive to think it's a state solely allotted to them.
From the video, my impression was "we have yet to figure out an effective way to reward/punish, this is just a PoC of the interface"
IMO, Integrated Information Theory of consciousness (IIT) is exactly that: everything is conscious; the difference is only in the degree to which it is conscious.
What do you mean? What is this class of people in your mind? There are tons of people who consider and talk about the ethics behind what they are doing, long before most people would think it remotely relevant (leading AI labs being an example, and I know the same to be true of various geneticists startups).
I do agree that the entire presentation in this case is bewildering.
I'm specifically talking about this presentation in this article (the video and release details of CL1 doom). Did you read it / watch it?
Would you feel any differently if a product from this tech used the user's own neurons grown from their stem cells?
I don't think this 200,000 neuron array is sentient. But I also don't think we can define the line where that may happen. I assume this company will scale. How far, and to what extent?
On the contrary, I dislike premature ethics discussion, where you end up wildly speculating what the tech might become and riffing off that, greatly padding whatever relative technical content you had. I don't want every technical paper to turn into that, ethics should be treated as a higher-level overview of concerns in a field, with a study dedicated to the ethical concerns of that field (by domain-specific ethics specialists).
Is your concern weapons automation, or animal rights?
I'm not going to start campaigning against it or changing my life. But it still makes me deeply uncomfortable, and that's allowed.
In what sense, and as opposed to what? Why aren't you allowed to feel irrationally uncomfortable, or baselessly concerned?