It proves that this is not intelligence. This is autocomplete on steroids.
reply
Humans make very similar errors, possibly even the exact same error, from time to time.
reply
We make the model better by training it, and now that this issue has come up we can update the training ;)
reply
It proves LLMs always need context. They have no idea where your car is. Is it already at the car wash, and you're just walking back from the gas station where you briefly went to pay for the wash? Or is the car still at your home?

It proves LLMs are not brains; they don't think. This question will be used to train them, and "magically" they'll get it right next time, creating an illusion of "thinking".

reply
> They have no idea where your car is.

They could either just ask, or state their assumption, before answering.

reply
For me this is just another hint of how careful one should be when deploying agents. They behave very unintuitively.
reply