Back in business school they used to tell the story of how razor-blade makers would put a good blade first and last in the pack. I suspect the LLM services are doing something like that.
I haven't had the time to fully hash out this take, but a big question in the back of my mind has been: is it possible that AI model improvements come partly from finding overhang in tasks that look hard and impressive to humans but are actually trivial consequences of the training data? If so, then the observable performance of any widely deployed model could get worse over time as it "mines out" the work that is easy for it to do.