That is a bad premise and a false dichotomy, because most medical questions are simple and have well-known standard answers. ChatGPT and Gemini answer such questions correctly, and can even catch glaring omissions by doctors, without having to look anything up.
As for the medical questions that are not simple, the ones that require looking up information, the model should in principle be able to respond that it does not know the answer when that is truthfully the case, meaning the answer, or a simple extrapolation of it, was not in its training data.
But Gemma is a "small" model, and may not be expected to answer all questions. Medical questions are particularly sensitive, so it's quite possible they decided to err on the side of caution and plausible deniability. That doesn't rule out the model having other virtues.