For an LLM facing this kind of "vague" domain expertise, even if none of its training material spells out certain nuggets of wisdom, we should still expect it to find a decent relationship between problems and solutions, provided the material contains enough examples of problems paired with the solutions domain experts offered. That the LLM has never ingested explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to reach the frontier, because the research underpinning much of the state of the art hasn't appeared in public literature for decades. If you want to learn it, you either have to find a domain expert willing to help you or reinvent it from first principles.
Mastery isn't necessary. Why do Waymos lack drivers? Not because self-driving cars have mastered driving, but because self-driving works well enough that the economics no longer play out in the cab driver's favor.