That's a common phenomenon in model fitting, depending on the type of model. In both old-school regression and neural networks, the fitted model does not distinguish between specific training examples and other inputs, so particular input-output pairs from the training data get no special privilege. In fact, it's often a
good thing that models don't simply memorize input-output pairs from training, because that lets them smooth over uncaptured sources of variation, such as people all being slightly different, as well as measurement error.
In this case they had to customize the model fitting to try to get the error closer to zero specifically on those attributes.