Of course. I mean, my view is that it needs to be "build the right things right" vs. "build things right and then discover whether they are the right things". The latter is a form of premature optimisation: it focuses on code elegance more than on delivering working software. Code simplicity, good design, and scalability are super important for maintainability, even in the age of AI (maybe even more so).

But considering that AI will, more and more, "build things right" by default, it's up to us humans to decide what the "right things to build" are.

Once AI knows what the "right things to build" are better than humans do, that's AGI in my book, and also the end of classical capitalism as we know it. Yes, there will still be room for a "human generated" market, like we have today (photography didn't kill painting, but it made it much less of a mainstream employment option).

In a way, AI is the great equalizer. In the past the strongest men prevailed; then, when muscle was no longer the main means of asserting force, it was intellect; now it's sheer want. If you want to do something, now you can. You have no excuses: you just need to believe it's possible, and do it.

As someone else said, agency is eating the world. For now.


  >  it needs to be "build the right things right", vs "build things right and then discover if they are the right things"
I still think this is a bad comparison, and I hoped my prior comment would handle this. Frankly, you're always going to end up in the second situation[0] simply because of two hard truths: 1) you're not omniscient, and 2) even if you were, the environment isn't static.

  > But considering that AI will more and more "build things right" by default
And this is something I don't believe. I say a lot more here[1], but you can skip my entire comment and just read what Dijkstra has to say himself. I dislike that we often pigeonhole this LLM coding conversation into one about deterministic vs probabilistic languages. Really, the reason I'm not in favor of LLMs is that I'm not in favor of natural language programming[2]. The reason I'm not in favor of natural language programming has nothing to do with its probabilistic nature and everything to do with its lack of precision[3].

I'm with Dijkstra because, like him, I believe we invented symbolic formalism for a reason. Like him, I believe that abstraction is incredibly useful and powerful, but it is about the right abstraction for the job.

[0] https://news.ycombinator.com/item?id=46911268

[1] https://news.ycombinator.com/item?id=46928421

[2] At the end of the day, that's what they are. Even if they produce code you're still treating it as a transpiler: turning natural language into code.

[3] Okay, technically it does, but that's because probability has to do with this[4], and I'm trying to communicate better: most people aren't going to connect the dots (pun intended) between function mappings and probabilities. The lack of precision is inherently representable in the language of probability, but most people aren't familiar with terms like "image" and "pre-image", nor "push-forward" and "pull-back". The pedantic nature of this note is itself illustrative of my point.

[4] https://www.mathsisfun.com/sets/injective-surjective-bijecti...
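The mapping point in [3] can be sketched more concretely. This is my own notation, not the commenter's: it assumes we model "implements" as a relation between specs and programs, and an LLM as sampling from a conditional distribution over code.

```latex
% A formal language has a denotational semantics that is a *function*:
% every program maps to exactly one meaning.
\[
  \llbracket \cdot \rrbracket : \mathrm{Code} \longrightarrow \mathrm{Meaning}
\]
% A natural-language spec $s$ only determines a *set* of consistent
% programs (a pre-image under "implements"), generally with more than
% one element -- this set-valuedness is the imprecision:
\[
  T(s) = \{\, p \in \mathrm{Code} \mid p \text{ implements } s \,\},
  \qquad |T(s)| > 1
\]
% An LLM resolves the ambiguity by sampling one element, i.e. via a
% distribution over Code conditioned on the spec:
\[
  p \sim P(\,\cdot \mid s\,)
\]
```

On this reading, the probabilistic behaviour of LLM coding is downstream of the set-valued map $T$, which is the "lack of precision" the comment is pointing at.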
