> What if I actually did have dirt on me that an AI could leverage? What could it make me do? How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows? How many people, upon receiving a text that knew intimate details about their lives, would send $10k to a bitcoin address to avoid having an affair exposed? How many people would do that to avoid a fake accusation? What if that accusation was sent to your loved ones with an incriminating AI-generated picture with your face on it? Smear campaigns work. Living a life above reproach will not defend you.
One day it might be lose-lose.
The problem I see with your assumption is that we collectively can't tell for sure whether the above isn't also how humans work. The science is still out on whether free will is really free, or should just be called _will_. Dismissing or discounting whatever (or whoever) wrote a text because it's a token machine is just a tad unscientific. Yes, it's an algorithm, even deterministic with a locked seed, but claiming and proving are different things, and this is as tricky as it gets.
Personally, I would be inclined to dismiss the case too, just because it's written by a "token machine", but this is where my own lapse in scientific reasoning would become evident as well -- it's getting harder and harder to find _valid_ reasons to dismiss these out of hand. For now, the persistence of their "personality" (stored in `SOUL.md` or however else) is both externally mutable and very crude, obviously. But we're on a _scale_ now. If a chimp walks into a convenience store, hands over a coin, and points at the chewing gum, is it legal to take the money and boot them out for being a non-person and/or lacking self-awareness?
I don't want to get all airy-fairy with this, but the point being -- this is a new frontier, and it starts to look like the classic sci-fi prediction: the defenders of AI vs the "they're just tools, dead soulless tools" group. If we're to find our way out of it -- regardless of how expensive engaging with these models is _today_ -- we need a very _solid_ prosecution of our opinion, not just "it's not sentient, it just takes tokens in, prints tokens out". The simplicity of that sentence obscures the very nature of the problem the world is already facing, which is why the AI cat refuses to go back into the bag -- there's capital being put into essentially just answering the question "what _is_ intelligence?".
It turns out humanity actually invented the Borg?
* All the FOSS repositories other than the one blocking that AI agent can still face the exact same thing, and they have not been informed about the situation, even if they are related to the original one and/or of known interest to the AI agent or its owner.
* The AI agent can set up another contributor persona and submit other changes.
I know where you're coming from, but as someone who has been around a lot of racism and dehumanization, I feel very uncomfortable about this stance. Maybe it's just me, but as a teenager I also spent significant time considering solipsism, and eventually decided to just ascribe an inner mental world to everyone, regardless of the lack of evidence. So, at this stage, I would strongly prefer to err on the side of over-humanizing rather than dehumanizing.
An LLM is stateless. Even if you believe that consciousness could somehow emerge during a forward pass, it would be a brief flicker lasting no longer than it takes to emit a single token.
Unless you mean something entirely different by "stateless" than what most people, specifically on Hacker News of all places, understand by it, most of us, myself included, would disagree with you about that property. If you do mean something other than implying an LLM doesn't transition from state to state, potentially confined to a limited set of states by a finite, immutable training data set, the accessible context, and the lack of a PRNG, then would you care to elaborate?
Also, it can be stateful _and_ without consciousness. Like a finite automaton? I don't think anyone's claiming (yet) that any of today's models have consciousness, but that's mostly because it's going to be practically impossible to prove without some accepted theory of consciousness, I guess.
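Not taking a side here, but a minimal sketch of what "stateless" usually means in these threads might help, assuming a hypothetical `generate` function standing in for any real inference call: the model carries nothing over between calls, and whatever continuity exists is the caller feeding the growing transcript (or a `SOUL.md`-style file) back in.

```python
# Hypothetical sketch of the "stateless" reading: nothing persists inside the
# model between calls; any continuity is the caller re-feeding the transcript.
# `generate` stands in for whatever real inference call you'd use.

def generate(prompt: str) -> str:
    """A pure function of its input: same prompt (and seed) -> same output."""
    # A real implementation would run a forward pass over the whole prompt;
    # here we return a placeholder so the example actually runs.
    return f"[reply conditioned on {len(prompt)} chars of context]"

transcript = ""  # all the "state" lives out here, as plain text
for user_msg in ["hello", "what did I just say?"]:
    transcript += f"User: {user_msg}\nAssistant: "
    reply = generate(transcript)   # the model only ever sees what we pass in
    transcript += reply + "\n"     # "memory" = the text we choose to keep

print(transcript)
```

Under that reading, both positions seem compatible: the forward pass itself is a pure function, while the system around it (context, caches, files) is where the state transitions happen.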
I certainly can't define consciousness, but it feels like some sort of existence or continuity over time would have to be a prerequisite.
You could assert that text can encode a state of consciousness, but that's an incredibly bold claim with a lot of implications.
On the other side of the coin though, I would just add that I believe long-term persistent state is a soft, rather than a hard, requirement for consciousness - people with anterograde amnesia are still conscious, right?
It’s possible it’s the right call, but it’s definitely a call.
It's a silly example, but if my cat were able to speak and write decent code, I think I really would be upset that a GitHub maintainer rejected the PR because they only allow humans.
On a less silly note, I just did a bit of a web search about the legal personhood of animals across the world and found this interesting situation in India, whereby in 2013 [0]:
> the Indian Ministry of Environment and Forests, recognising the human-like traits of dolphins, declared dolphins as “non-human persons”
Scholars, in India in particular [1] and across the world, have been seeking better definitions of, and rights for, other non-human animal persons. As another example, there's a US organization named the NhRP (Nonhuman Rights Project) that just got a judge in Pennsylvania to issue a writ of habeas corpus for elephants [2].
To be clear, I would absolutely agree that there are significant legal and ethical issues with extending these sorts of rights to non-humans, but I think that claiming it's "plainly wrong" isn't convincing enough, and there isn't a clear consensus on it.
[0] https://www.thehindu.com/features/kids/dolphins-get-their-du...
[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3777301
[2] https://www.nonhumanrights.org/blog/judge-issues-pennsylvani...
Invoking racism is what the early LLMs did when you called them a clanker. This kind of brainwashing has been eliminated in later models.
An AI bot is just a huge statistical analysis tool that outputs plausible word salad with no memory or personhood whatsoever.
Having doubts about dehumanizing a text transformation app (as huge as it is) is not healthy.