And yes, you are probably using them wrong if you don’t find them useful or don’t see the rapid improvement.
With every new model release, neckbeards come out of their basements to tell us the singularity will be here in two more weeks.
You’ve once again made up a claim of “two more weeks” to argue against even though it’s not something anybody here has claimed.
If you feel the need to make an argument against claims that exist only in your head, maybe you can keep that argument in your head too?
The logic related to the bug wasn't contained in one file; it was spread across several files.
This was Gemini 2.5 Pro. A whole generation old.
Projects:
https://github.com/alexispurslane/oxen
https://github.com/alexispurslane/org-lsp
(Note that org-lsp has a much-improved version of the same indexer as oxen; the first was purely my design, while for the second I decided to listen to K2.5 more, and it found and fixed a bunch of potential race conditions.)
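For the curious, here is a purely hypothetical sketch of the kind of race condition a reviewer (human or model) might flag in an indexer; this is not code from oxen or org-lsp, just the classic check-then-act pattern over a shared map and the usual fix:

    use std::collections::HashMap;
    use std::sync::{Arc, Mutex};

    fn expensive_parse(key: &str) -> usize {
        key.len() // stand-in for real parsing work
    }

    fn index_document(index: &Arc<Mutex<HashMap<String, usize>>>, key: &str) {
        // Racy version: the lock is released between the check and the insert,
        // so two threads can both see the key as missing and index the same
        // document twice.
        let missing = !index.lock().unwrap().contains_key(key);
        if missing {
            let entry = expensive_parse(key);
            index.lock().unwrap().insert(key.to_string(), entry);
        }
        // Safer version: do the check and the insert under one lock acquisition,
        // e.g. via the Entry API:
        // index.lock().unwrap().entry(key.to_string()).or_insert_with(|| expensive_parse(key));
    }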
shrug
I had a test failing because I introduced a silly comparison bug (> instead of <), and Claude 4.6 Opus figured out that the problem wasn't the test but the code, and fixed the bug (which I had missed).
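Purely for illustration (a hypothetical sketch, not the actual code or test involved), that kind of inverted comparison tends to look like this:

    /// Intended to return the index of the smallest element.
    fn index_of_min(values: &[i32]) -> Option<usize> {
        let mut best: Option<usize> = None;
        for (i, &v) in values.iter().enumerate() {
            match best {
                // Bug: `>` should be `<`, so this actually tracks the maximum.
                Some(j) if v > values[j] => best = Some(i),
                None => best = Some(i),
                _ => {}
            }
        }
        best
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn finds_minimum() {
            // The test expectation is correct; the function above is what's
            // wrong, which is the harder case to spot when a test starts failing.
            assert_eq!(index_of_min(&[3, 1, 2]), Some(1));
        }
    }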
We're back to singularity hype, but let's be real: benchmark gains are meaningless in the real world when the primary focus has shifted to gaming the metrics.
Benchmaxxing exists, but that’s not the only data point. It’s pretty clear that models are improving quickly in many domains in real-world usage.