But do current LLMs solve that, or do they still ultimately depend on calling out to external search indexes? It seems like they could, in theory, be trained to semantically match URLs from their training set, but I suspect the models would have to be specifically architected for that, so I'm curious if anyone knows more about this.
I'd also be interested to hear if there are any small open models working toward that.