Why should throwing ideas at the wall to optimize code be any different, as long as you can measure and verify the results, accept the added complexity, and keep the code itself from being crap by the end of it?
If an approach is found that improves how well something works, you can even treat the AI slop as a draft and iterate upon it yourself further.
I wouldn't call it Karpathy's loop; I'd call it slop descent. Or descent into slop. Or something like that.
This is in fact less random than how genetic algorithms traditionally worked, which encoded behaviors in some data structure that then got randomly mutated or crossed with other candidates in the pool.
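For comparison, here's a minimal sketch of that traditional loop: bitstring genomes, random point mutation, one-point crossover, and truncation selection. Everything here (genome length, rates, the max-ones fitness) is a toy assumption, just to show how blind the mutation/crossover step is compared to an LLM proposing targeted edits.

```python
import random

random.seed(0)
GENOME_LEN = 8

def mutate(genome, rate=0.1):
    # Flip each bit independently with probability `rate` -- pure noise, no intent.
    return [b ^ 1 if random.random() < rate else b for b in genome]

def crossover(a, b):
    # One-point crossover: splice a prefix of one parent onto a suffix of the other.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def fitness(genome):
    # Toy objective: maximize the number of 1-bits.
    return sum(genome)

# Random initial pool, then a few generations of select / cross / mutate.
pool = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(20)]
for _ in range(30):
    pool.sort(key=fitness, reverse=True)
    parents = pool[:10]  # truncation selection: keep the top half as parents
    pool = [mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(20)]

best = max(pool, key=fitness)
print(fitness(best))
```

The only "direction" in this loop comes from selection; the variation itself is uniformly random, which is the point of the contrast above.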