And even if it _could_, note this, from the article:
> Overall, the participants deemed sycophantic responses more trustworthy and indicated they were more likely to return to the sycophant AI for similar questions, the researchers found.
The vendors have a perverse incentive here; even if they _could_ fix it, they'd lose money by doing so.
Markets don't optimize for what is sensible, they optimize for what is profitable.
Most humans working in tech lack this particular attribute, to say nothing of tools driven by token similarity rather than actual 'thinking'.
AI may one day rewrite Windows, but it will never be Counselor Troi.
To be clear, I don't think the AI can do either job.