It depends on how mission-critical his brainstorming is for the company. LLMs can brainstorm too.
That means OP’s job may be _safer_, because they are getting higher leverage on their time.
It’s the colleague who’s ignoring AI that I see as the higher risk.