If this is more than just a gut reaction [0], I have a hard time pinning down what swings this topic between scary and not scary for you.

Unless you're a true and invested believer in souls, free will, and other spiritualistic nonsense (or have a vested political affiliation in pretending so), it should be tautological that everything you read and experience biases you. LLM output is then no different.

If you are a believer, then either nothing ever biased you, or LLMs are special in some way, or everything else is. Which just doesn't make sense to me.

[0] It's jarring to observe the boundaries of one's agency, sure, but LLMs are really nothing special in this way. For example, I somewhat frequently catch myself using words and phrases I saw elsewhere earlier in the day, even if I did not process them consciously.

reply
I have noticed similar phenomena with Claude, where its vocabulary subtly shifts how I think/frame/write about things, or points me to subtle gaps in my own understanding. And I also usually come around to understanding that it's often not arbitrary. But I do think some confirmation bias is at play: when it repeatedly tries to shift me in the wrong direction, I learn how to make it stop doing that.

It definitely adds a layer of cognitive load in wrangling/shepherding/accommodating/accepting the unpredictable personalities and stochastic behaviors of the agents. They have strong default behaviors for certain small tasks, and where a human would eventually internalize prescribed procedures/requirements, the LLMs never really internalize my preferences. In that way, they are more like contractors than employees.

reply
Why would it be scary? Claude is just parroting other human knowledge. It has no goal or agency.
reply
You can’t verify that there is no influence by the makers of Claude.
reply
I would certainly expect everyone to assume that influence rather than not.
reply
By that logic, nothing computers do is scary.
reply
Yes I think that is their argument.
reply
deleted
reply
Computers don't do anything.
reply
What's their value then?
reply
Just like with absolutely any other tool, their value is in what it enables humans using them to accomplish.

E.g., a hammer doesn't do anything, and neither does a lawnmower. It would be silly to argue (just because these tools are static objects doing nothing in the absence of direct human involvement) that those tools don't have a very clear value.

reply
Seems equally silly to me to suggest that hammers and lawnmowers don't do anything, but I mean here we are.

When people use other people like tools, i.e. use them to enable themselves to accomplish something, do those people cease to do things as well? Or is that perhaps not terminology you recognize as sensible?

I appreciate that for some people the verb "do" is evidently human(?) exclusive, I just struggle to wrap my head around why. Or is this an animate vs. inanimate thing, so animals operating tools also do things in your view?

How do you phrase things like "this API consumes that kind of data" in your day to day?

reply
> Seems equally silly to me to suggest that hammers and lawnmowers don't do anything, but I mean here we are.

To be clear, I am not the person you were originally replying to. I personally don't care much for the terminology semantics of whether we should say "hammers do things" (with the opponents claiming it to be incorrect, since hammers cannot do anything on their own). I am more than happy to use whichever of the two terms the majority agrees upon to be the most sensible, as long as everyone agrees on the actual meaning of it.

> I appreciate that for some people the verb "do" is evidently human(?) exclusive, I just struggle to wrap my head around why. Or is this an animate vs. inanimate thing, so animals operating tools also do things in your view?

To me, it isn't human-exclusive. I just thought that, in the context of this specific comment thread, the user you originally replied to used it as a human-exclusive term, so I tried explaining in my reply how they (most likely) meant it. I just use whichever term I feel makes the most sense in the context, and then clarify the exact details (in case I suspect the audience includes people who might use the term differently).

> How do you phrase things like "this API consumes that kind of data" in your day to day?

I would use it the exact way you phrased it, "this API consumes that kind of data", because I don't think anyone in the audience would be confused about what that actually means (depends on the context, ofc). Imo it wouldn't be wrong to say "this API receives that kind of data as input" either, but it feels too verbose and awkward to actually use.

reply
I'm not sure how to respond then, because having a preferred position on this is kind of essential to continuing. It's the contended point: can an LLM do things? I think it can; they think it cannot. They think computers outright cannot do anything in general.

To me, what's essential for any "doing" to happen is an entity, a causative relationship, and an occurrence. So a lawnmower can absolutely mow the lawn, but also the wind can shape a canyon.

In a reference frame where a lawnmower cannot mow independently because humans designed or operate it, humans cannot do anything independently either. Which is something I absolutely do agree with, by the way, but then either everything is one big entity, or this is not a salient approach to segmenting entities. Which is then something I also agree with.

And so I consider the lawnmower its own entity, the person operating or designing it their own entity, and just evaluate the process accordingly. The person operating the lawnmower has a lot of control over where the lawnmower goes and whether it is on, the lawnmower has a lot of control over the shape of the grass, and the designer of the lawnmower has a lot of control over what shapes the lawnmower can hope to create.

Clearly they then apply some further logic that segments humans (or tools) in a more special way. I wanted to probe into that, because the only such labeling I can think of is spiritualistic and anthropocentric. I don't find such a model reasonable or interesting, but maybe they have some other rationale that I might find reasonable. Especially so because, to me, claiming that a given entity "does things" is not assigning it a soul, a free will, or some other spiritualistic quality, since I don't even recognize those as existing (and thus take great issue with the unspoken assumption that I do, or that people like me do).

The next best thing I can think of is to consider the size of a given entity's internal state, and its entropy in relation to the causative action that occurred and to its environment. That is quite literally how one entity can be independent of another while remaining very selective about a given action. But then LLMs, just like humans, have plenty of this, much unlike a hammer or a lawnmower, so that doesn't really fit their segmentation either. LLMs have a lot less of it than humans, but still hopelessly more than any virtual or physical tool ever conceived before. The closest anything comes (very non-coincidentally) is vector and graph databases, but those only respond to very specific, grammar-abiding queries, not to arbitrary series of symbols.

reply
Computers perform computations. They do what programmers instruct them to do by their nature.
reply
Agreed, just like hammers get nails hammered into a wooden board. They do what the human operator manually guides them to do, by their nature.

I am not disagreeing with you in the slightest; I feel like this is just a linguistic-semantics thing. And I, personally, don't care how people use those words, as long as we are on the same page about the actual meaning of what was said. And, in this case, I feel like we are fully on the same page.

reply