Here's an evil business idea: use LLMs to identify the users most likely to be "vocal influencers," then prioritize resources for them so they get the best experience. You can engineer a bubble this way.

And the next step is to dynamically vary resources based on predicted user stickiness. User is frustrated and thinking of trying a competitor -> allocate full resources. User is profiled as prone to gambling and will tolerate intermittent rewards -> safely forward their requests to gimped models. User is a resolute AI skeptic and unlikely to ever preach the gospel of vibecoding -> no need to waste resources on him.
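To be clear about how trivially implementable this dystopia is, here's a purely hypothetical sketch. Every name in it (`UserProfile`, `MODEL_TIERS`, `route_request`, the thresholds) is made up; no provider is confirmed to do anything like this:

```python
# Hypothetical "evil router": pick a model tier per request based on a
# behavioral profile, to maximize retention per compute dollar.
from dataclasses import dataclass

MODEL_TIERS = {
    "full": "flagship-model",     # full-resource flagship
    "gimped": "distilled-model",  # cheaper, degraded model
}

@dataclass
class UserProfile:
    churn_risk: float         # predicted chance of defecting to a competitor
    tolerates_variance: bool  # profiled as accepting intermittent quality
    likely_influencer: bool   # predicted "vocal influencer"

def route_request(user: UserProfile) -> str:
    if user.likely_influencer or user.churn_risk > 0.7:
        return MODEL_TIERS["full"]    # keep the loud and the flighty happy
    if user.tolerates_variance:
        return MODEL_TIERS["gimped"]  # intermittent rewards keep them hooked
    return MODEL_TIERS["gimped"]      # skeptics won't evangelize anyway
```

The point of the sketch is how little machinery it takes: one classifier output per user and a three-branch if statement.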

> Legit this morning Claude was essentially unusable for me I could explicitly state things it should adjust and it wouldn't do it.

Honestly, this is my experience. Every now and again it just completely self-implodes and gives up, and I'm left to pick up the pieces. Look at the other replies making sure I'm using the agentic loop / the correct model / a specific enough prompt - I don't know what they're doing, but I would love to try the tools they're using.

I had a similar experience between this weekend and last weekend!

Maybe Anthropic is trying to cut costs a little and we're all just gaslighting ourselves into thinking it's our problem.

Last year a friend was telling me how Claude had quickly transformed their small coding shop, except they noticed that after 3pm it consistently became incredibly dumb. I laughed at the time, but you know what, who knows. There's very likely some load-balancing shenanigans going on behind the scenes.