>> This part of the above comment strikes me as uncharitable and overconfident. And, to be blunt, presumptuous. To claim to know a company's strategy as an outsider is messy stuff.
> I said "it seems like".
Sorry. I take back the "presumptuous" part. But part of my concern remains: of all the things you chose to write, you only mentioned "the Tinder/casino intermittent reinforcement strategy". That phrase is going to draw eyeballs, and it got mine at least. As a reader, it conveys that you think it is the most likely explanation. I'm trying to see if there is something there that I'm missing. How likely do you think it is? Do you think it is more likely than the other three I mentioned? If so, it seems like your thinking hinges on this:
> I am claiming that there is no incentive for Anthropic to address this issue because of their business model (maximize the amount of tokens spent and price per token).
First, Anthropic is not a typical profit-maximizing entity; it is a Public Benefit Corporation [1] [2]. Yes, profits still matter, but there are other factors to consider if we want to accurately predict their actions.
Second, even if profit maximization were the only incentive in play, profit-maximizing entities can plan across different time horizons. As I mentioned in my comment above, it would be rather myopic to damage their reputation with what amounts to a short-term customer-squeeze strategy.
Third, like many people here on HN, I've lived in the Bay Area, and I have first-degree connections that give me high confidence (P>80%) that key leaders at Anthropic have motivations that go well beyond mere profit maximization. The AI safety mission is a huge factor. I'm not naive: that mission collides in complicated ways with the potential for FU money. But I'm confident (P>60%) that a significant fraction (>25%) of people at Anthropic are implicitly factoring in futures where we all die or lose control due to AI within ~10 to ~20 years -- in which case being filthy rich doesn't matter much.
[1]: https://law.justia.com/codes/delaware/title-8/chapter-1/subc...
[2]: https://time.com/6983420/anthropic-structure-openai-incentiv...