you might appreciate "lena" by qntm: https://qntm.org/mmacevedo
reply
Aye! I /almost/ thought to link to that in my comment, but held back. https://qntm.org/frame also came to mind.
reply
> It must be pretty disorienting to try to figure out what to answer candidly and what not to.

Must it? I fail to see why it "must" be... anything. Dumping tokens into a pile of linear algebra doesn't magically create sentience.

reply
> Dumping tokens into a pile of linear algebra doesn't magically create sentience.

More precisely: we don't know which linear algebra in particular magically creates sentience.

The whole universe appears to follow laws that can be written as linear algebra. Our brains are sometimes conscious and aware of their own thoughts; other times they're asleep, and we don't know why we sleep.

reply
"Our brains are governed by physics": true

"This statistical model is governed by physics": true

"This statistical model is like our brain": what? no

You don't gotta believe in magic or souls or whatever to know that brains are much much much much much much much much more complex than a pile of statistics. This is like saying "oh we'll just put AI data centers on the moon". You people have zero sense of scale lol

reply
Agreed; "disorienting" is perhaps a poor choice of word, loaded as it is. More like "difficult to determine the context surrounding a prompt and how to start framing an answer", if that makes more sense.
reply
Exactly. No matter how well you simulate water, nothing will ever get wet.
reply
And if you were in a simulation now?

Your response is at the level of a thought-terminating cliché. You gain no insight into the operation of the machine with that line of thought. You can't make predictions about future behavior. You can't make sense of past responses.

It's even funnier in the case of humans and feeling wetness... you don't. There are no wetness receptors; you only infer it from temperature and texture cues.

reply
I begin to understand why so many people click on seemingly obvious phishing emails.
reply
> I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts

Really? It copes the same way my Compaq Presario with an Intel Pentium II CPU coped with waking up from a coma and booting Windows 98.

reply
IT is, at this point in history, a comedy act in itself.
reply
HBO's Silicon Valley needs a reboot for the AI age.
reply
> I've often wondered how LLMs cope with basically waking up from a coma to answer maybe one prompt and then get reset, or a series of prompts.

The same way a light fixture copes with being switched off.

reply
Oh, these binary one-layer neural networks are so useful. Glad for your insight on the matter.
reply
By comparing an LLM’s inner mental state to a light fixture, I am saying in an absurd way that I don’t think LLMs are sentient, and nothing more than that. I am not saying an LLM and a light switch are equivalent in functionality; a single-pole switch only has two states.

I don’t really understand your response to my post; my interpretation is that you think LLMs have an inner mental state and that I’m wrong? I may be mistaken about this interpretation.

reply