I'm kinda one of those who believes they 'completely' understand LLMs. But I've also developed my understanding of them such that the internal mechanisms of the transformer, or really any future development in the space based on neural networks and machine learning, are irrelevant to that understanding.

1. A string of unicode characters is converted into an array of integer values (tokens) and fed into a black box of choice.

2. The black box takes in the input, does its magic, and returns an output as an array of integer values.

3. The returned output is converted into a string of unicode characters and given to the user, or inserted in a code file, or whatever. At no point does the black box "read" the input in any way analogous to how a human reads.

Where people get "The AIs have emotions!!!" from returning an array of integer values is beyond me. It's definitely more complicated than "next token predictor", but it really is as simple as "Make words look like numbers, numbers go in, numbers come out, we make the numbers look like words."
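
A minimal sketch of that pipeline in Python, assuming OpenAI's tiktoken library for the encode/decode steps; the black_box function here is a stand-in, not a real model:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    def black_box(token_ids):
        # stand-in for the model; a real LLM would autoregressively
        # generate new token ids here
        return token_ids  # echo the input back, just to keep the sketch runnable

    prompt = "Hello, how are you?"
    input_ids = enc.encode(prompt)       # unicode string -> array of integers
    output_ids = black_box(input_ids)    # numbers go in, numbers come out
    reply_text = enc.decode(output_ids)  # integers -> unicode string for the user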

reply
Yeah nothing personal but my claim here is you’re not smart. The next token predictor aspect is something anyone can understand… the transformer is not quantum physics.

Like look at what you wrote. You called it black box magic and in the same post you claim you understand LLMs. How the heck can you understand it and call it a black box at the same time?

The level of mental gymnastics and stupidity is through the roof. Clearly most of what makes the LLM useful lives inside the part you just waved away as a “black box”.

> Where people get "The AIs have emotions!!!" from returning an array of integers values is beyond me

Let me spell it out for you. Those integers can be translated to the exact same language humans use when they feel identical emotions. So those people claim that the “black box” feels the emotions because what they observe is identical to what they observe in a human.

The LLM can claim it feels emotions just like a human can claim the same thing. We assume humans feel emotions based on this evidence, but we don't apply that logic to LLMs? The truth of the matter is we don't actually know, and it's equally dumb to claim that you know LLMs feel emotions as it is to claim that they don't.

You have to be pretty stupid not to realize this is where they are coming from, so there's an aspect of you lying to yourself here, because I don't think you're that stupid.

reply
One day I realized I needed to make sure I'm voting on quality stories/comments. I wonder whether a call to vote substantively and often might change the SNR.

The guidelines encourage substantive comments, but maybe voters are part of the solution too. Kinda like having a strong reward model for training LLMs and avoiding reward hacking or other undesirable behavior.

reply
If voters are stupid then it doesn't really help.

I think what's happening is that reality is asserting itself so hard that people can't stay this stupid anymore.

reply