Google Search Console shows the user's query only if the query is popular enough and your website appears in the results. Bing shows all queries, even unpopular ones, as long as your website appears in the results.
But if an AI recommends your website when answering people's questions, you cannot find out what questions were asked, how many times your website was shown, or in what position. The most you can see is a UTM tag in your website analytics (for example, GPT adds a utm_source parameter), and that is the extent of the information available to you. And if a user discussed a question with the AI, came away with only your brand name, and then found your site through a search engine, there is no way to tell that the visit came from an AI recommendation.
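For what it's worth, that UTM signal is easy to pull out of raw logs even without a full analytics setup. A minimal sketch in Python, assuming utm_source values each assistant appends (the exact strings below are assumptions; check your own logs):

```python
# Sketch: classify landing-page hits by the only AI signal we get, the UTM tag.
# The utm_source values below are assumptions about what each assistant appends;
# verify the exact strings in your own analytics before relying on them.
from urllib.parse import urlparse, parse_qs

AI_SOURCES = {"chatgpt.com", "perplexity", "copilot"}  # assumed values

def classify_visit(landing_url: str, referrer: str = "") -> str:
    """Return 'ai', 'search', or 'other' for a single page view."""
    params = parse_qs(urlparse(landing_url).query)
    source = (params.get("utm_source") or [""])[0].lower()
    if source in AI_SOURCES:
        return "ai"        # direct AI-to-site click, the only case we can actually see
    ref_host = urlparse(referrer).netloc.lower()
    if any(s in ref_host for s in ("google.", "bing.", "duckduckgo.")):
        return "search"    # may still be AI-influenced upstream; we can't tell
    return "other"

if __name__ == "__main__":
    print(classify_visit("https://example.com/?utm_source=chatgpt.com"))             # ai
    print(classify_visit("https://example.com/pricing", "https://www.google.com/"))  # search
```

The "search" bucket is exactly the blind spot described above: an AI-influenced visit that arrives via a search engine is indistinguishable from ordinary organic traffic.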
What’s strange is that we’re moving into a world where recommendations matter more than a click, but attribution still assumes a traditional search funnel. By the time someone lands on your site, the most important decision may have already happened upstream and you have no idea.
The UTM case you mentioned is a good example: it only captures direct "AI to site" clicks and misses cases where the AI influences the decision indirectly (brand mention, then a later search, then a visit). From the site's perspective, though, that traffic looks indistinguishable from organic search. It makes me wonder whether we'll need a completely new mental model for attribution here. Perhaps less about "what query drove this visit" and more about "where did trust originate."
Not sure what the right solution is yet, but it feels like we’re flying blind during a pretty major shift in how people discover things.
Disclaimer: I've built a tool in this space (Cartesiano.ai), and this view mostly comes from seeing how noisy product mentions are in practice. Even for market-leading brands, a single prompt can produce different recommendations day to day, which makes me suspect LLMs are also introducing some amount of entropy into product recommendations (?)
As someone who's built a tool in this space, I'm curious whether you've seen any patterns that cut through the noise, or whether entropy is just something we have to design around.
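For anyone who wants to put a number on that day-to-day noise, here's a rough sketch of measuring run-to-run entropy for a single prompt. ask_model() and extract_brands() are placeholders for whatever client and parsing you actually use:

```python
# Sketch: quantify how noisy a single prompt's recommendations are across runs.
# ask_model() is a placeholder for your LLM client; extract_brands() is a naive
# stand-in for however you parse product names out of a response.
import math
from collections import Counter

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def extract_brands(answer: str, known_brands: list[str]) -> set[str]:
    # Naive substring matching; real parsing needs aliases and misspellings.
    return {b for b in known_brands if b.lower() in answer.lower()}

def recommendation_entropy(prompt: str, known_brands: list[str], runs: int = 20) -> float:
    """Shannon entropy (bits) of which brands get mentioned over repeated runs.
    0.0 means perfectly consistent answers; higher means noisier recommendations."""
    counts = Counter()
    for _ in range(runs):
        counts.update(extract_brands(ask_model(prompt), known_brands))
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```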
Disclaimer: I've built a tool in this space as well (llmsignal.app)
What stood out to me is that AI seems far less concerned with domain age than Google is. If there's enough contextual discussion around a product (e.g. Reddit threads, blog posts, docs, comparisons), then AI models seem willing to surface it surprisingly early.
That said, what I’m still trying to understand is consistency. I’ve seen cases where a product gets recommended heavily for a week, then effectively disappears unless that external context keeps getting reinforced.
So it feels less like “rank once and you’re good” (SEO) and more like “stay present in the conversation.” Almost closer to reputation management than classic content marketing.
Curious if you’ve seen the same thing, especially around how long external mentions keep influencing AI recommendations before they decay.
A practical mental model for recommendations is less “ranking” and more confidence:
- Does the model have enough context to map your product to a problem?
- Are there independent mentions (docs, comparisons, forum threads) that look earned rather than manufactured?
- Is there procedural detail that makes it easy to justify recommending you ("here's the workflow / constraints / outcomes")?

For builders, a good AEO baseline is:

- Publish a strong docs/use-case page that answers "when should I use this vs alternatives?"
- Seed real-world context by participating in existing discussions (HN/Reddit/etc.) with genuine problem-solving and specifics.
- Track influence with repeatable prompt tests plus lightweight surveys ("how did you hear about us?"), since last-click won't capture it (see the sketch below).
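A minimal sketch of the repeatable-prompt-test idea, assuming a placeholder ask_model() for your LLM client and a hypothetical product name. Logging the daily mention rate also gives you a way to see the decay the earlier comment asked about:

```python
# Sketch: run a fixed prompt panel on a schedule, record whether your product
# is mentioned, and watch the rate over time. PROMPTS, BRAND, and ask_model()
# are placeholders; adapt them to your own buyer questions and client.
import csv
import datetime

PROMPTS = [
    "What's the best tool for X?",           # replace with your real buyer questions
    "Compare tools for X for a small team",
]
BRAND = "YourProduct"  # hypothetical product name

def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

def run_daily_check(log_path: str = "mention_log.csv") -> float:
    """Append today's mention rate to a CSV so changes (and decay) show up over weeks."""
    hits = sum(BRAND.lower() in ask_model(p).lower() for p in PROMPTS)
    rate = hits / len(PROMPTS)
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), rate])
    return rate
```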
It feels like early SEO again: less perfect instrumentation, more building the clearest and most defensible reference for your category.
SEO has made web search unusable and practitioners are the scum of the earth.
But more practically, as Raymond Chen put it: if every app could figure out how to keep its window always on top, what good would it do? The same goes for SEO.
The Raymond Chen analogy brings up something interesting. If everyone forces their way to the top, the signal collapses. My hope is that AI systems end up rewarding genuinely useful, well-explained things rather than creating another arms race... but I'm not naive about how incentives tend to play out.
A huge concern of mine has been the introduction of ads. Once ads enter LLM responses, it’s hard not to ask whether we’re just rebuilding the same incentive structure that broke search in the first place.