the agreement problem is the one that worries me more too. you can catch replacement; it's visible. you can't catch a model that confidently validates your existing blind spots back at you
the deeper version of this: the reps that build judgment used to happen naturally in production. you shipped something wrong, it broke, you learned. that feedback loop is now compressed or gone entirely. the real question isn't "is AI replacing your thinking," it's "where are the reps happening now." if the answer is nowhere, the judgment debt accumulates invisibly, and the agreement problem you're describing is exactly how it stays invisible.