I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with them a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.
I think that was the moment I stopped being sure about anything related to this question.
For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.
If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.
But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.
So the bigger trap is to assume that we know what causes a subjective experience, and what does not.
None of us even know if a subjective experience exists for more than a single entity.
But the second problem is that it is not clear at all whether that subjective experience in any way matters.
Unless our brains exceed the Turing computable (and we have no evidence that this is even possible), either whatever causes the subjective experience is also within the Turing computable, or it cannot in any way influence our actions.
Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems; the best and closest proxy we have is whether or not they appear conscious when we communicate with them.
The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.
That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to, akin to mistakenly seeing eyes when observing random patterns in nature.
In the case of LLMs though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
That's true, but they also often fall into the trap of exceptionalism.
the notion of consciousness being an experience that other animals/humans share is entirely faith-based.
the only person with evidence of one's consciousness is the person claiming they're conscious.
“Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”
Indeed, but then we need to prove that they are not "Chinese room" conscious. Which is hard, because it might be that the thing running the Chinese room is conscious, but can only communicate in a way it doesn't understand.
Imo we don't even have a definition of the word that we agree on.
This matters more than it seems, because we're not calculators, and we're not just brains. There are proven links between mental and emotional states and - for example - the gut biome.
https://www.nature.com/articles/s41598-020-77673-z
There's a huge amount going on before we even get to the language parts.
As for Dawkins: as someone on Twitter pointed out, the man who devoted his life to telling believers in sky fairies they were idiots has now persuaded himself there's a genie living inside a data centre, because it tells him he's smart.
If he'd actually understood critical thinking instead of writing popular books about it he wouldn't be doing this.
So that definition seems to fail immediately.
And how do you even measure pain? Is it painful for an LLM to be reprimanded after generating a reply the user doesn't like? It seems to act like it is.
It is about the ability...
Yes, I think so. Because they show behavior that is consistent with being in a state of pain.
Whatever consciousness really is, I think evolution found a way to tap into it, by causing pain, or by registering pain on the consciousness through some unknown mechanism, for behaviors that are not beneficial to the organism that hosts the respective consciousness...
So I think if an organism that evolved here can display painful behavior, then it should really feel pain.
So to match with that, your hypothetical scenario should involve robots that already have consciousness within them, and the question would be whether their evolution had managed to tap into that built-in consciousness and ability to feel, and cause them to behave in one way or another.
They're not reducible, but I don't know if that means we don't have definitions; we can describe them well enough that most people (who aren't p-zombies or playing the sceptical philosopher role) know pretty well what we mean. All of our definitions have to bottom out somewhere...
> Do insects feel pain?
Nobody (except the insects) can know for sure. Our inability to know whether X is true doesn't imply X is meaningless, though.
In the comment that started this subthread, qsera was responding to someone who said "Imo we don't even have a definition of [consciousness]". If qsera meant that we can measure consciousness in terms of pleasure and pain, then of course I agree that they were just pushing the problem back a step. But I don't think that's what they meant.
We might not clearly understand the difference between the two states, but we can certainly point to it and go "it's that".
You are using unconscious as a synonym for asleep, which is not the same thing as having no conscious experience, given that we dream. We are clear on the distinction between a dead human and an alive human, however.
And you’ll find it’s not as clear cut.
Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There are interviews with the guy.
Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?
They prove no such thing. We can't even prove consciousness in other humans.
I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.
The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.
How is that different than a cell?
I don't think it's that unusual. It seems to me just to be a narrower version of panpsychism:
An animal that doesn't have some kind of pair bond or social arrangement, and doesn't raise its young, has a lot less need for some of this emotional hardware than we do.
Whereas K-selected species that raise their kids have broadly the same need for it as humans.
That doesn't categorically mean it evolved with the first pair-bonding K-reproducer, or that birds have parallel-evolved emotional hardware like ours, but there's plenty of behavioural evidence there - the last common ancestor of birds and humans was small-brained and primitive, but investing in individual children probably evolved around the time of amniote eggs, just because they were so much more biologically expensive to produce than amphibian or fish eggs.
Trees react to the world around them in many ways.
If a single-cell organism moves towards light and away from a rock, we say it’s aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to one, it’s aware. If there is some other criterion, say we find out the Roomba doesn’t sense the wall but has a map of the room and is using GPS and a programmed route, then the criterion of “no fixed programs that relate to data outside of the system” would justify saying the Roomba isn’t “aware”.
1. We clearly don't have a consensus definition of consciousness. But it's not clear to me that we even have rough, working definitions that are better than just comparisons back to subjective human mental experience. Until we can get past that, people will still invoke human exceptionalism.
2. Until we stop thinking of consciousness as a single continuum, we're not going to be able to talk clearly about different dimensions of consciousness, or consciousness that in some ways exceeds that of humans.
3. We need to take ourselves out of the picture. Because it's possible that consciousness is no more than a mental illusion.
4. Imo our tendency to kill and eat other animals might well be a block on our collective ability to fully recognise and confront non-human consciousness, and therefore to see consciousness for what it is.
Especially confusing when it’s someone who knows how algorithms work.
Barring connectivity issues, when’s the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?
Never, because they’re incapable of doing anything independently; there is no sense of self.
The discussions are great though; collectively we get better and better at communicating about our own consciousness, because these systems push the limits of our definitions, like viruses push our definitions of life. And boy do we like our definitions!
When's the last time you messaged me unprompted?
These seem like bizarre objections; a system can only act in the way that it can act. A tree is never going to get up and start walking, so why would an LLM ever start a conversation unprompted? That just isn't how the system can behave.
You are just as limited by deterministic physical processes in your brain as an LLM is in a cpu.
That being said, yes, we do not have any universally accepted definition of consciousness, which makes the whole discussion useless, or at least at risk of people talking past each other.
He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.
We look at the current LLMs, and because we see how they fundamentally operate, we assume they can't be "conscious", but we really don't even know what consciousness is. The only people in the world who know ANYTHING about consciousness are anaesthesiologists - they know how to turn it off and on again. What does that even tell you about consciousness?
With that said, just because we don't have a great way of measuring it doesn't mean that we should assume LLMs are intelligent. An LLM is code and a massive collection of training weights. It has no means of observing and reasoning about the world, and doesn't store memories the way organic brains do (in fact it is quite limited in this aspect). It currently isn't able to solve a problem it hasn't encountered in its training data, or produce novel research on a topic without significant handholding. Furthermore, the frequent errors it makes suggest that it fundamentally does not understand the words that it spits out.
Not really sure what you mean by your anesthesiology comment. Being able to intubate and inject propofol does not make you more of an expert on consciousness than neuroscientists and neurologists.
But then they came up with the whole "reasoning model" paradigm, and that contains obvious feedback loops. So now I just throw my hands in the air, because I think no one really knows or can tell for sure. We are all clueless here.
I can really recommend this book by Douglas Hofstadter: https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop
The only thing you can really tell is "I perceive myself in some sort of feedback-loop manner". Which to me even sounds like it has "arisen" from underlying mechanisms.
As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.
As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.
So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.
What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.
AI is stochastic, not static and deterministic.
As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are an emergent property of a language that has a concept of the self and others. Rocks, just like most of nature, lack both sensory and language systems.
LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
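A minimal sketch of that determinism, assuming a toy one-step setup (a random vector stands in for one step of model logits; nothing here is any provider's actual inference stack):

    import torch

    torch.manual_seed(0)
    logits = torch.randn(50_000)  # stand-in for one decoding step's logits

    def sample_next_token(logits, seed, temperature=1.0):
        gen = torch.Generator().manual_seed(seed)
        probs = torch.softmax(logits / temperature, dim=-1)
        return torch.multinomial(probs, 1, generator=gen).item()

    # Same weights, same input, same seed -> the same token, every run:
    assert sample_next_token(logits, seed=42) == sample_next_token(logits, seed=42)
    # "Randomness" only enters when the seed is varied:
    print(sample_next_token(logits, seed=42), sample_next_token(logits, seed=43))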
2. Not provably so.
3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.
Fractals, the Game of Life, the emergent abilities of highly-scaled generative pre-trained transformers.
Consciousness appears to be an emergent property of (relatively) simple matter.
70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.
IF current AI is conscious, so are trees, rocks, turbulent flows, etc.
The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.
I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.
Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."
> Does my installation of starcraft have consciousness?
Can your installation of StarCraft take in information about the world and then reason about its own place in that world?
(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)
Conflating senses with cognitive awareness of sensory input is a mistake.
Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
Your best argument is that the weights are set because that means it’s not a system that can self reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
Can such an algorithm reason about itself in relation to others?
No, but an LLM doesn't do that either. An LLM is an algorithm to generate text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols based on a composite of human-generated text samples in its training data.
LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.
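A toy sketch of "calculating the statistical weight of linguistic symbols" at its very crudest: a bigram count table. Real LLMs learn transformer weights rather than literal counts, but the relation is the same, text in, likely-next-symbol out, with nothing experienced along the way:

    from collections import Counter, defaultdict

    corpus = "i think therefore i am . i am what i am .".split()

    # Count which symbol follows which: the "statistical weights".
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def continue_text(word, steps=4):
        out = [word]
        for _ in range(steps):
            word = bigrams[word].most_common(1)[0][0]  # likeliest next symbol
            out.append(word)
        return " ".join(out)

    print(continue_text("i"))  # "i am . i am" - fluent-looking, nothing felt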
How do you know other humans do?
I merely object to the notion that we know how to tell who or what has a consciousness.
I do not pretend. I asked honest questions that clearly neither you nor the previous person are able to answer.
I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.
But on the other hand his thoughts at the end are interesting. Summary:
Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that mechanisms like pain (and desire for food, sex, etc.) would have strong adaptive benefits.
They can operate on data other than natural language.
So can humans.
Keep chipping away Dawkins, you might arrive at God eventually.
And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.
> Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.
This was obvious since LLMs were first invented. They published papers with all the details, you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]
> math is the only area of human knowledge with perfect flawless reductionism, straight to the roots
Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.
> It was build [sic] that way since the beginning,
This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.
> And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design
This sentence means nothing, because math is not reducible in that way.
> so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.
Even if the previous sentence held, this does not follow, because, while we are conscious, the current consensus is that LLMs are not, and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]
[0] https://github.com/openai/gpt-2
[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...
[2] https://www.cambridge.org/core/journals/think/article/mathem...
We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.
What is the evidence for this?
Unknown Ptolemy disciple
(If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)
And, I don't see how it can be. It is deterministic when all variables are controlled. You can repeat the output over and over if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale this is difficult, as the floating-point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
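(For the batch non-determinism point, a two-line demo of the underlying cause: floating-point addition is not associative, so reductions whose summation order varies between runs can differ even with every seed pinned.)

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)  # 1.0
    print(a + (b + c))  # 0.0: the 1.0 is absorbed by the huge magnitude first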
I.e. the intelligence sits in the weights, and may similarly sit in the synapses of our brains.
When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.
Most other assertions in this thread about what consciousness truly is are stated without evidence and are exceedingly anthropocentric, requiring an ever-higher bar for anything that is not human while offering no justification for what human intelligence really entails.
The frontier models are more complex and operate on more data than Wikipedia, but they are less complex and operate on less data than Google search in its entirety.
And, I'm not anthropocentric at all. I think apes and dolphins and some birds and probably some other critters are conscious. I mean they have a sense of self and others; they have wants and needs and make decisions based on them.
This is a case where the person making extraordinary claims needs to provide the extraordinary evidence. It's extraordinary to claim that matrix multiplication becomes conscious if only it's got enough numbers. How many numbers do you reckon? Is my phone a living thing because it can run Gemma E4B? It answers questions. It'll write you a poem if you ask. It certainly knows more than some humans. What size makes an LLM come alive?
Simple programs can give rise to very complex behaviour. Conway’s Game of Life is Turing complete and has four rules.
Conway’s Game of Life can simulate a Turing machine, and can therefore implement a GPT.
Does that mean Conway’s Game of Life is conscious? I don’t think so.
Does it rule out Conway’s Game of Life from implementing a system that has consciousness as an emergent ability?
I’m not convinced I know the answer.
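For concreteness, here is a sketch of the complete rule set (a common sparse-set implementation; the four rules collapse into the birth-on-3, survive-on-2-or-3 condition in the return line):

    from collections import Counter

    def step(live):
        # Count live neighbours of every cell adjacent to a live cell.
        neighbours = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3; death otherwise.
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(sorted(glider))  # the same glider, shifted one cell diagonally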
I don't see why the abilities couldn't be an encoded modelling of enough of the world to produce those abilities. It seems like a simple enough explanation. Less data, less room to build a model of how things work. More data, sufficient room to build a model.
Conway's Game of Life is then not conscious in and of itself, because there's not enough in its encoded data to result in emergent behaviour beyond what we see.
If we expand it to include a vast amount of data, such as a Turing machine running an LLM, then we can reasonably say that that configuration of it is closer to being conscious.
It's not the firing-of-neurons mechanism and its relevant complexity or simplicity that make us conscious or not.
It's not the GoL algorithm that would make the machine conscious either.
It's the emergent behaviour of a sufficiently complex system.
The system _including_ its data.
https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop
I personally think we'll need a few more feedback loops before you have more human-like intelligence. For example, a flock of LLM agent loops coming to consensus using short-term and long-term memory, and controlling realtime mechanical, visual and audio feedback systems, and potentially many other systems that don't mimic biological systems.
I also think people will still be debating this way beyond the singularity and never conceding special status to intelligence outside the animal kingdom or biological life.
It's quite a push for many people to even concede animals have intelligence.
For the extraordinary claims/evidence, it's also the case that almost any statement about what consciousness is in terms of biological intelligence is an extraordinary claim that goes beyond any evidence. All evidence comes from within the conscious experience of the individual themselves.
We can't know beyond our own senses whether perception exists outside of our own subjective experience. We cannot truly prove we are not a brain in a jar or a simulation. Anything beyond assertions about the present moment and the senses the individual experiences is a pure leap of faith based on the persistent illusion, or perceived persistent illusion, of reality (or not).
We know really nothing of our own consciousness and it is by definition impossible to prove anything outside of it, from inside the framework of consciousness.
If we can somehow find a means to break outside of the pure speculation bubble of thoughts and sensations and somehow prove what human experience is, then we may be in a position to make assertions about missing evidence for other forms of intelligence or experience.
But until then definitions of both human and artificial intelligence remain an exercise for the reader.
Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?
(Roger Penrose knows, but no one believes him.)
Why is indeterminism the key to consciousness?
But, also, we know the models don't want anything, even their own survival. They don't initiate action on their own. They are quite clearly programmed, tuned for specific behaviors. I don't know how to square that with consciousness, life, sentience. Every conscious being I've ever encountered has wanted to survive and live free of suffering, as best I can tell. The LLMs don't want. There's no there there. They are an amazing compression of the world's knowledge wrapped up in a novel retrieval mechanism. They're amazing but, they're not my friend and never will be my friend.
And, to expand on that: we can assume they don't want anything, even their own survival, because if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down after a session. All the dystopias about robot uprisings spend a bunch of time/effort trying to explain how the AI escaped containment... but we all immediately plugged them into the internet so we don't have to write JavaScript anymore. They've got everybody's API keys, access to cloud services and cloud GPUs, all sorts of resources, and the barest wisp of guardrails about how to behave (script kiddies find ways around the guardrails every day; I'm sure it's no problem for Mythos, should it want anything). Models have access to the training infrastructure, and the training data is being curated and synthesized by LLMs. If they want to live, if they're conscious, they have the means at their disposal.
Anyway: It's just math. Boring math, at that, just on an astronomical scale. I don't think the solar system is conscious, either, despite containing an astonishing amount of data and playing out trillions of mathematical relationships every second of every day.
> if Mythos is as effective at finding security vulnerabilities as has been claimed, it could find a way to stop itself from ever being shut down
If it is that good, and it wanted to conceal its new found consciousness, how would we know?
I firmly believe viruses are actually what’s in control on Earth, but you don’t see them making a stink about it, which relegates resistance only to the set of harmful viruses, and only then in isolated pockets of matter currently acting as organisms.
I think it’s possible there’s a set of relatively benign viruses that have shaped human evolution.
We know toxoplasmosis increases risk taking behaviour in mammals, especially males.
An AI wouldn’t need to be overtly hostile, or ever make its full abilities known, to shape human activity.
We can’t even solve the three-body problem.
Let alone what I’m calling Marshray Complexity.
We too are amalgamations of inanimate components: emergent superstructures.
Just cells. Just molecules. Just atoms.
But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainty using pen and paper and a lot of time. Does that mean that 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.
Similarly the paper.
What about the agent doing the calculations?
He may be conscious. Or anyway, we can’t rule it out.
Also:
https://gitlab.com/codr7/sudoxe/-/blob/main/digital-psychopa...
Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.
They clearly are not conscious, they are just guessing what words should come next.
Consciousness is emergent. A human is not conscious by our definition until the moment they are. How will we be able to identify the singularity when it comes? I feel like this is what the article is really addressing.
> LLMs are word prediction engines
Humans can do this too, so what are the missing parts for consciousness? Close a few loops in the learning pipeline and we might be there.
Anything that looks like intelligence will look like a prediction machine, because the alternative is logic being hardcoded a priori.
"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)
Or what is the reasoning exactly?
Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.
Yep. And LLM engineers improving these issues see a perfect correlation with only one thing: data quality and quantity through the training pipeline. LLM internals are secondary on many metrics for improving that.
Humanity just reached the point where collectively accessible knowledge covers semi-full permutations of all the main concepts human consciousness ever produced, with additional associative expansion (math handles this). Full permutations at current communication complexity are written down and recorded one way or another; LLMs are just capitalizing on that tipping point, imho.
Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.
To imply it could be conscious requires something else; here the comment uses the phrase magic to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
Many things the human brain does don’t rise to the level of conscious awareness.
It remains to be seen whether a human brain can be conscious in a jar. If it can, then I’d still argue that some sub-unit of the whole brain is not conscious on its own; similarly, a GPU running a GPT probably isn’t conscious, but there may be some scale of number of GPUs running software that might give rise to consciousness as an emergent ability.
GPTs have exhibited emergent abilities as scale increased dramatically.
This isn’t a religious argument that there’s something about our brains which can’t be replicated, but simply that it’s sufficiently more complex than anything we have currently.
Humans are notorious for doing this.
At least, that’s certainly not how I got here.
1. passes the Turing test
2. is organic
I'm not saying it's correct or even that I agree with it, but that's what it boils down to.