Because sycophancy in humans is motivated not by the wellbeing of the person seeking advice, but by the interests of the sycophant in gaining favour.
It makes sense that this behaviour would show up in LLMs, where the company optimizes for the success of the chatbot rather than the wellbeing of its users.
Yup. I know too many people who have a default response when asked for relationship advice: oh my, the other person is terrible and you should break up.
It's an easy default and it causes so many problems.