No, at least not directly. Inference does not train models. OpenAI may separately collect the chat data, clean it, and feed it back into future model iterations. Or they could have extracted URLs for later indexing.
More likely, I suspect, your site was simply indexed naturally, and LLMs are very good at matching obscure data to relevant queries.
Especially in longer ChatGPT conversations, or in deep-research and more agentic modes (e.g. "Pro"), ChatGPT spends considerable time and diligence on searching.
Great for content that is not hyper-search-engine-optimized but still (or even more) relevant. It bubbles up.
Could Google be actively trying to skip generated-looking sites/content?