> It's the difference between raw LLM output vs LLM output that was tweaked, reviewed and validated by a competent developer.

This is one of those areas where you might have been right... 4-6 months ago. But if you're paying attention, the floor has moved up substantially.

For the work I do, last year the models would occasionally produce code with bugs, linter errors, etc.; now the frontier models produce mostly flawless code that I don't need to review. I'll still write tests, or prompt test scenarios for it, but most of the testing is functional.

If the exponential curve continues, I think everyone needs to prepare for a step-function change. Debian may even cease to be relevant because AI will write something better in a couple of hours.

This very much depends on the domain you work in. Small projects in well-trodden domains are incredible for AI; SaaS projects can essentially be one-shot. But large projects, projects with specific standards or idioms, projects pinned to particular language versions, performance concerns, hardware concerns — all things the Debian project has to deal with — aren't 'solved' in the same way.