I don’t want to declare outright that machines have emotions, but calling mimicry evidence of falsehood is itself false.
Now that AI labs have all these “Nevermind” texts to train on, maybe this is getting easier to correct? (It would require some postprocessing to classify the AI outputs as successful or not before training.)
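For the curious, here’s roughly what I mean by that postprocessing step, as a minimal Python sketch. The message format, the phrase list, and the `label_turns` helper are all made up for illustration; a real pipeline would presumably use a learned classifier rather than a regex:

```python
import re

# Hypothetical sketch: mark an assistant turn as a "failure" when the
# user's follow-up looks like a brush-off ("nevermind", "nvm", "forget it"),
# so those turns can be excluded or down-weighted before training.
BRUSH_OFF = re.compile(r"\b(never\s*mind|nvm|forget it)\b", re.IGNORECASE)

def label_turns(messages):
    """messages: list of {"role": "user"|"assistant", "content": str}.
    Returns (assistant_reply, ok) pairs; ok=False when the next user
    message reads like the user gave up."""
    labeled = []
    for i, msg in enumerate(messages):
        if msg["role"] != "assistant":
            continue
        nxt = messages[i + 1] if i + 1 < len(messages) else None
        gave_up = (nxt is not None and nxt["role"] == "user"
                   and bool(BRUSH_OFF.search(nxt["content"])))
        labeled.append((msg["content"], not gave_up))
    return labeled

convo = [
    {"role": "user", "content": "Can you fix this regex?"},
    {"role": "assistant", "content": "Sure, try ^a+$"},
    {"role": "user", "content": "nevermind, I'll do it myself"},
]
print(label_turns(convo))  # [('Sure, try ^a+$', False)]
```

Obviously “the user said nevermind” is a noisy signal; sometimes people say it because the problem was solved, not because the model failed. But as a first-pass filter it seems plausible.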
I don’t know if it’s true or not, but it certainly tracks given that LLMs are way more polite than the average post on the internet lol
Philosophically, it's not like you're a detached observer who simply reasons over all possible hypotheses. Ever get stuck in a dead end and find it hard to dig yourself out? If you were a detached observer, it'd be pretty easy to just switch gears. But it isn't easy, at least for humans.
Haha anyone else seen this?
Overall, it saves me a lot of reading time when it focuses on just the details.