Probably. According to the paper, 83.82% of automated commits were already handled by non-LLM algorithmic tools. For the remaining 16.18%, a three-phase LLM approach was tried and achieved a 30% success rate, which works out to only about 4.9 percentage points of additional coverage overall. Given those numbers, it probably would have been faster, cheaper, and more efficient to enhance the existing strategy rather than screwing around with text generators.
If you're not seeing the hallucinations, I'd assert you're either not using it enough or (more likely) don't know the subject matter well enough to notice when it's hallucinating.
I'm not interested in getting into some argument about who has "more knowledge in the subject matter". I'm genuinely curious: do you think Opus 4.6 hallucinates just as much as GPT-3.5?