This is why most of these AI search visibility tools focus on tracking many possible prompts at once. LLM providers give you zero insight into what users are actually asking, so the only thing you can do is put yourself in the user's shoes and guess what they might prompt.

Disclaimer: I've built a tool in this space (Cartesiano.ai), and this view mostly comes from seeing how noisy product mentions are in practice. Even for market-leading brands, a single prompt can produce different recommendations day to day, which makes me suspect the models themselves are introducing a fair amount of entropy into product recommendations.
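
That drift is at least measurable: re-run one prompt N times and look at how spread out the brand mentions are. A minimal sketch, where query_llm is a hypothetical stub standing in for whatever provider API you actually call (the simulated data is just so it runs end to end):

```python
import math
import random
from collections import Counter

def query_llm(prompt: str) -> list[str]:
    # Hypothetical stand-in for a real API call; it just simulates
    # noisy recommendations so the sketch is self-contained.
    pool = ["BrandA", "BrandB", "BrandC", "BrandD", "BrandE"]
    return random.sample(pool, k=3)

def mention_entropy(prompt: str, runs: int = 50) -> tuple[Counter, float]:
    # Re-ask the same prompt many times and count which brands appear.
    counts = Counter()
    for _ in range(runs):
        counts.update(set(query_llm(prompt)))
    # Shannon entropy over mention frequencies: near 0 means the model
    # names the same brands every run; higher means noisier output.
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return counts, entropy

counts, h = mention_entropy("best project management tool for startups")
print(counts, f"entropy={h:.2f} bits")
```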

I don’t think there’s a clean solution yet, but I’m not convinced brute-force prompt enumeration scales either, given how much randomness is baked in. I guess that’s why I’ve started thinking about this less as prompt tracking and more as signal aggregation over time: looking at repeat fetches, recurring mentions, and which pages and models converge on the same sources. It doesn’t tell you what the user asked, but it can hint at whether your product is becoming a defensible reference versus a lucky mention.
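
To make that concrete, here's roughly the shape of what I mean. A minimal sketch, assuming you log each fetched answer as (date, model, brands mentioned); the field layout and all names are illustrative, not a real schema:

```python
from collections import defaultdict

# Each record is one fetched answer: the day, the model, and the set
# of brands it recommended.
Record = tuple[str, str, frozenset]  # (date, model, brands)

def mention_rate(records: list[Record], brand: str) -> float:
    # Share of all fetches mentioning the brand. A rate that holds up
    # across many days and models reads as a defensible reference;
    # a one-off spike reads as a lucky mention.
    hits = sum(1 for _, _, brands in records if brand in brands)
    return hits / len(records) if records else 0.0

def convergence_by_date(records: list[Record], brand: str) -> dict:
    # For each date, which models mentioned the brand. Dates where
    # most models agree are stronger signal than a single-model blip.
    by_date = defaultdict(set)
    for date, model, brands in records:
        if brand in brands:
            by_date[date].add(model)
    return dict(by_date)

records = [
    ("2024-06-01", "gpt", frozenset({"Acme", "Globex"})),
    ("2024-06-01", "claude", frozenset({"Acme"})),
    ("2024-06-02", "gpt", frozenset({"Globex"})),
    ("2024-06-02", "claude", frozenset({"Acme", "Initech"})),
]
print(mention_rate(records, "Acme"))         # 0.75
print(convergence_by_date(records, "Acme"))  # both models on 06-01, one on 06-02
```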

Coming from someone who's built in this space myself: curious if you’ve seen any patterns that cut through the noise, or if entropy is just something we have to design around?

Disclaimer: I've built a tool in this space as well (llmsignal.app)
