I've done honeypot tests with links in HTML comments, links in JavaScript comments, routes that appear only in robots.txt, and so on. All of them get hit.
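For a sense of what such a trap looks like, here's a minimal sketch (the framework, route name, and logging are my choices for illustration, not the exact setup I used):

    # Hypothetical honeypot: a link hidden in an HTML comment,
    # pointing at a route no human browser should ever request.
    import logging
    from flask import Flask, request

    app = Flask(__name__)
    logging.basicConfig(filename="honeypot.log", level=logging.INFO)

    @app.route("/")
    def index():
        # The trap link exists only in the page source, never rendered.
        return "<html><body>Hello<!-- <a href='/trap-7f3a'></a> --></body></html>"

    @app.route("/trap-7f3a")
    def trap():
        # Any hit here means something parsed the raw HTML, comments included.
        logging.info("honeypot hit: %s %s", request.remote_addr,
                     request.headers.get("User-Agent", "-"))
        return "", 204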
This is what you should imagine when your site is being scraped:
    import re
    import requests

    def crawl(url):
        # No robots.txt check, no rate limiting, no dedup: fetch, store, recurse.
        r = requests.get(url).text
        store(r)  # store() is left undefined; it just persists the page
        for link in re.findall(r'https?://[^\s<>"\']+', r):
            crawl(link)

I assume there are data brokers, or the AI companies themselves, constantly scraping the entire internet with non-AI crawlers and then processing the data in some way for training. But even with all that traffic, there are no significant requests for llms.txt, nothing to suggest that anyone actually uses it.
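If you want to check this on your own traffic, a quick sketch (assuming a combined-format nginx/Apache access log; the log path and field positions are assumptions to adjust for your setup):

    # Count llms.txt requests per User-Agent in a combined-format access log.
    from collections import Counter

    hits = Counter()
    with open("/var/log/nginx/access.log") as f:  # placeholder path
        for line in f:
            if "GET /llms.txt" in line:
                # In combined log format the User-Agent is the last quoted field.
                hits[line.split('"')[-2]] += 1

    for ua, n in hits.most_common(10):
        print(n, ua)

In my logs, the result is what I described above: essentially nothing.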