Making sense of new or significantly changed code is very taxing. Writing new code is less so: you're incrementally updating your mental model as you go, at a fairly modest pace.
LLMs can produce code far faster than humans can make sense of it, and assisted coding introduces something akin to cache thrashing: you constantly have to rebuild your mental model of the system to keep up with the changes.
Your bandwidth for comprehending code is as limited as it ever was. Taxing that ability to its limits is unpleasant and, in my experience, comes at the cost of other mental faculties.