You can output it as a memory using a simple prompt. You could probably reuse that prompt for any product with only slight modification. Or you could ask the product itself to generate an import prompt tuned to its own requirements.
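Something like the following works as a starting point (the wording here is just an illustration, not an official prompt from any particular product):

```
Summarize everything you know about me and this project as a portable
"memory" document: my preferences, coding conventions, project context,
and any open threads. Format it as plain markdown so I can paste it into
another assistant as its first message.
```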
This is one of the many reasons I don't think the model companies are going to win the application space in coding.
There's literally zero context lost for me in switching between model providers as a Cursor user at work. For personal stuff I'll use an open-source harness for the same reason.
I think this is more about which model you point your coding harness at. You can also self-host a UI in front of multiple models, so you own the chat history yourself.
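The "own the chat history" part is the key piece: if the message list lives in a file you control, the same conversation can be replayed against any provider that speaks the OpenAI-style chat-completions format. A minimal sketch (the class name and file layout are my own, not from any particular product):

```python
import json
from pathlib import Path

class ChatSession:
    """Keeps chat history in a local JSON file, independent of any provider."""

    def __init__(self, history_path="history.json"):
        self.path = Path(history_path)
        # Reload prior history if it exists, so the session is portable.
        self.messages = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def add(self, role, content):
        """Append a message and persist the full history to disk."""
        self.messages.append({"role": role, "content": content})
        self.path.write_text(json.dumps(self.messages, indent=2))

    def request_body(self, model):
        """Build a chat-completions request body for any compatible endpoint.

        The same message list can be sent to a different provider just by
        changing `model` (and the base URL you POST to) -- the history
        itself never leaves your machine.
        """
        return {"model": model, "messages": self.messages}
```

Swapping providers then amounts to changing the endpoint and model name, with no context lost.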