The whole point of ARC-AGI 3 is that if models are AGI, they should be able to solve the same tasks humans do given the same information, but they can't. Allowing scripts, harnesses, and whatnot completely defeats the purpose.
But humans aren't just a "reasoning component"; our nervous system (and body in general) provides significant capabilities that would count as a "harness" for our frontal lobe. It just seems silly to me to try to solve all of this in a single leap. But I guess they just feel burned by how relatively quickly ARC-AGI 2 was solved.