I am not sure those are mutually exclusive. We all know of situations where a person knows of tiny and typically undocumented system quirks. We even have a corporate name for it: institutional knowledge. The issue is that executives think it can ALL somehow be documented, when even cursory real-world project experience will quickly teach one how insane the average gap between documented and undocumented knowledge tends to be. Add to that near-constant changes to APIs, versions, systems, and people, and I can't help but wonder at executives who really do think this way.
I don't think so: the problem is that there are lots of parts of the system that are quite complicated but which one very rarely has to touch, except in the rare (but real) case that something deep in such a part goes wrong or a new requirement for that part pops up.
If you "learned by doing" instead of by reading, you are suddenly confronted with a very subtle and complicated subsystem.
In other words: there mostly exist two kinds of tasks:
- easy, regular adjustments
- deep changes that require a really good understanding of the system
Any time a refactoring moves code around, AI (or my coworkers) remove those comments without thinking twice, and I have to tell them "hey, this is still valid".
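To make it concrete, here is a hypothetical example of such a load-bearing comment (the 100-item limit, the function names, and the whole scenario are invented for illustration); delete it during a refactoring and the next person re-learns the quirk the hard way:

```python
def chunked(seq, size):
    """Yield successive size-sized slices of seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def sync_inventory(items, push_batch):
    # NOTE: the upstream API silently drops batches larger than 100 items.
    # This is undocumented behavior, found the hard way in production.
    # Keep this chunking even if a refactoring makes it look redundant.
    for chunk in chunked(items, 100):
        push_batch(chunk)
```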
In any case, AI is great for traversing a codebase and producing at least a draft of such documentation.
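A minimal sketch of what that could look like, assuming the OpenAI Python SDK with an API key in the environment; the model name, the prompt, and the `src` layout are placeholders, and the output is only a first draft for a human to correct:

```python
# Walk a repo and ask a model for a first-draft summary of each module.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_module_doc(path: Path) -> str:
    source = path.read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model works
        messages=[
            {"role": "system",
             "content": "Summarize this module for an internal wiki: "
                        "purpose, key functions, known quirks."},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content

for py_file in Path("src").rglob("*.py"):
    print(f"## {py_file}\n{draft_module_doc(py_file)}\n")
```

The value is the traversal plus a starting point; the institutional knowledge still has to come from whoever reviews the draft.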