> Boris said that writing code is a solved problem

That's just so dumb to say. I don't think we can trust anything that comes out of the mouths of the authors of these tools. They are conflicted. Conflict of interest, in society today, is such a huge problem.

reply
There are bloggers that can't even acknowledge that they're only invited out to big tech events because they'll glaze them up to high heavens.

Reminds me of that famous exchange, by noted friend of Jeffrey Epstein, Noam Chomsky: "I’m not saying you’re self-censoring. I’m sure you believe everything you say. But what I’m saying is if you believed something different you wouldn’t be sitting where you’re sitting."

reply
It's all basically a sensationalist take to shock you and get attention.
reply
> That's just so dumb to say

Depends. It's true of dumb code and dumb coders. Another reason why, yes, smart people should not trust it.

reply
He is likely working on a very clean codebase where all the context is already reachable or indexed. There are probably strong feedback loops via tests. Some areas I contribute to have these characteristics, and the experience is very similar to his. But in areas where they don’t exist, writing code isn’t a solved problem until you can restructure the codebase to be more friendly to agents.

Even with full context, writing CSS in a project where vanilla CSS is scattered around and wasn’t well thought out originally is challenging. Coding agents struggle there too, just not as much as humans, even with feedback loops through browser automation.

reply
It's funny that "restructure the codebase to be more friendly to agents" aligns really well with what we were "supposed" to have been doing already, but many teams slack on: quality tests that are easy to run, and great documentation. Context and verifiability.

The easier your codebase is to hack on for a human, the easier it is for an LLM generally.

reply
Turns out the single-point-of-failure, irreplaceable type of employees who intentionally obfuscated the project's code for the last 10+ years were ahead of their time.
reply
I had this epiphany a few weeks ago, and I'm glad to see others agreeing. Eventually most models will handle context windows large enough that this sadly won't matter as much, but it would be nice for the industry to still do everything it can to produce better-looking code that humans can see and appreciate.
reply
It’s really interesting. It suggests that intelligence is intelligence, and the electronic kind also needs the same kinds of organization that humans do to quickly make sense of code and modify it without breaking something else.
reply
Truth. I've had a much easier time grappling with codebases I keep clean and compartmentalized with AI; over-stuffing context is one of the main killers of its quality.
reply
Having picked up a few long-neglected projects in the past year, I've found AI tremendous for rapidly shipping quality-of-dev-life stuff like much-improved test suites, documenting the existing behavior, handling upgrades to newer framework versions, etc.

I've really found it's a flywheel once you get going.

reply
All those people who thought clean well architected code wasn’t important…now with LLMs modifying code it’s even more important.
reply
> He is likely working on

... a laundry list phone app.

reply
I think you mean software engineering, not computer science. And no, I don’t think there is reason for software engineering (and certainly not for computer science) to be plateauing. Unless we let it plateau, which I don’t think we will. Also, writing code isn’t a solved problem, whatever that’s supposed to mean. Furthermore, since the patterns we use often aren’t orthogonal, it’s certainly not a linear combination.
reply
I assume that new business scenarios will drive new workflows, which will require new software engineering work. In the meantime, I assume that computer science will drive paradigm shifts, which will drive truly different software engineering practice. If we don't have advances in algorithms, systems, etc., I'd assume that people can slowly abstract away all the hard parts, enabling AI to do most of our jobs.
reply
Or does the field become plateaued because engineers treat "writing code" as a "solved problem?"

We could argue that writing poetry is a solved problem in much the same way, and while I don't think we especially need 50,000 people writing poems at Google, we do still need poets.

reply
> we especially need 50,000 people writing poems at Google, we do still need poets.

I'd assume that an implied concern of most engineers is how many software engineers the world will need in the future. If it's a situation like the world needing poets, then the field is only for the lucky few; most people would be out of a job.

reply
I saw Boris give a live demo today. He had a swarm of Claude agents one-shot the most upvoted open issue on Excalidraw while he explained Claude Code for about 20 minutes.

No lines of code written by him at all. The agent used Claude for Chrome to test the fix in front of us all, and it worked. I think he may be right, or close to it.

reply
Did he pick Excalidraw as the project to work on, or did the audience?

It's easy to be conned if you're not looking for the sleight of hand. You need to start channelling your inner Randi whenever AI demos are done, there's a lot of money at stake and a lot of money to prep a polished show.

To be honest, even if the audience "picked" that project, it could have been a plant shouting out the project.

I'm not saying they prepped the answer, I'm saying they prepped picking a project it could definitely work on. An AI solvable problem.

reply
>writing code is a solved problem

sure is news to the models tripping over my thousands-of-LOC jQuery legacy app...

reply
Could the LLM rewrite it from scratch?
reply
boss, the models can't even get all the API endpoints from a single file, and you want to rewrite everything?!

not to mention that maybe the stakeholders don't want a rewrite; they just want to modernize the app and add some new features

reply
My prediction: soon (e.g., within a few years) agents will be the ones doing the exploration and building better ways to write code, building frameworks, etc., replacing open source. That said, software engineers will still be in the loop, but there will be far fewer of them.

Just to add: this is only the prediction of someone with a decent amount of information, not an expert or insider.

reply
I really doubt it. So far these things are good at remixing old ideas, not coming up with new ones.
reply
Generally we humans come up with new things by remixing old ideas. Where else would they come from? We are synthesizing priors into something novel. If you break the problem space apart enough, I don't see why some LLM can't do the same.
reply
LLMs cannot synthesize text; they can only concatenate or mix statistically. Synthesis requires logical reasoning. That's not how LLMs work.
reply
Yes it is; LLMs perform logical multi-step reasoning all the time: see math proofs, coding, etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you make it out to be.
reply
I don't want to speak too much out of my depth here, as I'm still learning how these things work on a mechanical level, but my understanding of how they "reason" is that they're more or less having a conversation with themselves, i.e., burning a lot of tokens in the hope that the follow-up questions and answers they generate lead to a better continuation of the conversation overall. But just like talking to a human, you're likely to come up with better ideas when you're talking to someone else, not just yourself, so the human in the loop seems pretty important for getting the AI to remix things into something genuinely new and useful.
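
To make that concrete, here's a toy sketch of the loop as I understand it. The generate function is a made-up stand-in for a single model completion, not any real API:

    # Toy sketch of "reasoning as self-conversation": keep appending the
    # model's own thoughts to the transcript so later completions condition
    # on them. `generate` is a hypothetical stub, not a real model call.
    def generate(transcript: str) -> str:
        if "Thought: 2" in transcript:
            return "Answer: use a priority queue."
        if "Thought: 1" in transcript:
            return "Thought: 2) a heap gives O(log n) inserts."
        return "Thought: 1) we need fast retrieval of the max element."

    def reason(question: str, max_steps: int = 8) -> str:
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            step = generate(transcript)
            transcript += step + "\n"       # the model talks to itself
            if step.startswith("Answer:"):  # stop once it commits
                return step
        return "Answer: (no conclusion reached)"

    print(reason("What data structure for a task scheduler?"))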
reply
They do not. The "reasoning" is just adding more text in multiple steps and then summarizing it. An LLM does not apply logic at any point; the "reasoning" features only use clever prompting to make these chains more likely to resemble logical reasoning.

This is still only possible if the prompts given by the user resemble what's in the corpus. And the same applies to the reasoning chain: for it to resemble actual logical reasoning, the same or extremely similar reasoning has to exist in the corpus.

This is not "just" semantics if your whole claim is that they are "synthesizing" new facts. This is your choice of misleading terminology which does not apply in the slightest.

reply
There are so many timeless books on how to write software: design patterns, lessons learned from production issues. I don't think AI will stop being used for open source. In fact, with the increasing number of projects adjusting their contributor policies to account for AI, I would argue that what we'll see is both people who love to hand-craft their own code and people who use AI to build their own open source tooling and solutions.

We will also see an explosion in the need for specs. If you give a model a well-defined spec, it will follow it. I get better results the more specific I get about how I want things built and which libraries I want used.
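
For instance, something at this level of specificity works much better for me than "add a health check". This is an invented example; the file and endpoint names are made up:

    # An invented example spec (hypothetical file/endpoint names), just to
    # show the level of detail that gets better results out of a model.
    SPEC = """
    Add a /healthz endpoint to the existing Flask app in app.py.
    - Return JSON {"status": "ok", "version": <git short sha>}.
    - Read the sha from the GIT_SHA env var; fall back to "unknown".
    - Use only the standard library plus Flask (no new dependencies).
    - Add a pytest test in tests/test_healthz.py asserting a 200 response
      and the exact JSON shape.
    """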
reply
> is the field of computer science plateaued to the point that most of what we do is linear combination of well established patterns?

Computer science is different from writing business software to solve business problems. I think Boris was talking about the second, not the first. And I personally think he is mostly correct, at least for my organization. It is very rare for us to write any code by hand anymore. Once you have a solid testing harness and a peer-review system run by multiple different LLMs, you are in pretty good shape for agentic software development. Not everybody's got these bits figured out; they stumble around and then blame the tools for their failures.
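
To sketch what I mean (ask_model is a hypothetical placeholder rather than any vendor's API, and applying the patch to the working tree is elided):

    # Sketch of an agentic dev loop: one model writes, several review,
    # and a real test suite is the final gate. Placeholder functions only.
    import subprocess

    def ask_model(model: str, prompt: str) -> str:
        raise NotImplementedError("wire up your LLM provider here")

    def tests_pass() -> bool:
        # The feedback loop: fast, reliable tests are what make this work.
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def develop(task: str, reviewers=("model-a", "model-b"), max_rounds=3):
        patch = ask_model("coder-model", f"Write a patch for: {task}")
        for _ in range(max_rounds):
            # (Applying the patch before running the tests is elided.)
            reviews = [ask_model(r, f"Review this patch:\n{patch}")
                       for r in reviewers]
            if all("LGTM" in rev for rev in reviews) and tests_pass():
                return patch  # accepted by the peer models and the tests
            patch = ask_model("coder-model",
                              f"Revise for: {task}\nFeedback: {reviews}\n{patch}")
        return None  # out of rounds; a human takes over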

reply
> Not everybody's got these bits figured out; they stumble around and then blame the tools for their failures.

Possible. Yet that's a pretty broad brush. It could also be that some businesses are more heavily represented in the training set. Or some combo of all the above.

reply
"Writing code is a solved problem" disagree.

Yes, there are common parts to everything we do. At the same time, I've been doing this for 25 years, and most of the projects have some new part to them.

reply
Novel problems are usually a composite of simpler and/or older problems that have been solved before. Decomposition means you can rip most novel problems apart and solve the chunks. LLMs do just fine with that.
reply
The creator of the hammer says driving nails into wood planks is a solved problem. Carpenters are now obsolete.
reply
Prediction: open source will stop.

Sure, people did it for the fun and the credits, but the fun quickly goes out of it when the credits go to the IP laundromat and the fun is had by the people ripping off your code. Why would anybody contribute their work for free in an environment like that?

reply
I believe the exact opposite. We will see open source contributions skyrocket now. There are a ton of people who want to help and share their work, but technical ability was a major filter. If the barrier to entry is now lowered, expect to see many more people sharing stuff.
reply
Yes, more people will be sharing stuff, and none of it will have long-term staying power. Or do you honestly believe that a project like GCC or Linux would have been created and maintained for as long as they have been through the use of AI tools in the hands of noobs?

Technical ability is an absolute requirement for the production of quality work. If the signal drowns in the noise then we are much worse off than where we started.

reply
I’m sure you know the majority of GCC and Linux contributors aren’t volunteers, but employees who are paid to contribute. I’m struggling to name a popular project for which that isn’t the case. Can you?

If AI is powerful enough to flood open source projects with low-quality code, it will be powerful enough to be used as a gatekeeper. Major players who benefit from OSS, say, Google, will make sure of that. We don’t know how it will play out. It’s shortsighted to dismiss it altogether.

reply
> I’m struggling to name a popular project for which that isn’t the case. Can you?

There’s Emacs, Vim, and popular extensions of the two; OpenBSD; lots of distros (some do develop their own software); SDL; …

reply
Ok but now you have raised the bar from "open source" to "quality work" :)

Even then, I am not sure that changes the argument. If Linus Torvalds had access to LLMs back then, why would that discourage him from building Linux? And we now have the capability of building something like Linux with fewer man-hours, which again speaks in favor of more open source projects.

reply
Many did it for liberty - a philosophical position on freedom in software. They're supercharged with AI.
reply
Even as the field evolves, the phoning home telemetry of closed models creates a centralized intelligence monopoly. If open source atrophies, we lose the public square of architectural and design reasoning, the decision graph that is often just as important as the code. The labs won't just pick up new patterns; they will define them, effectively becoming the high priests of a new closed-loop ecosystem.

However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

Humans will likely continue to drive consensus building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world.

reply
> It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns for the future. For instance, an implementation of low-level networking code can be a combination of patterns from ZeroMQ. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just have them command the AI instead.

reply
> My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns for the future.

Even if we assume that's true, what will prevent atrophy of the skillset among the elites with such a small pool of practitioners?

reply
I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI. I also have to agree: I find myself laughing more and more lately about just how many resources we waste creating exactly the same things over and over in software. I don’t mean generally, like languages; I mean specifically. How many trillions of times has a form with username and password fields been designed, developed, had meetings held over it, been tested, debugged, transmitted, and processed, only to ultimately be rewritten months later?

I wonder what all we might build instead, if all that time could be saved.

reply
> I don’t believe people who have dedicated their lives to open source will simply want to stop working on it, no matter how much is or is not written by AI.

Yeah, hence my question can only be hypothetical.

> I wonder what all we might build instead, if all that time could be saved

If we subscribe to the broken-window theory from economics, then the investment in such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing about a new chapter of the tech revolution. Or so I hope.

reply
> If we subscribe to the broken-window theory from economics, then the investment in such repetitive work is not investment but waste. Once we stop such investment, we will have a lot more resources to work on something else, bringing about a new chapter of the tech revolution. Or so I hope.

I'm not sure I agree with the application of the broken-window theory here. That's a metaphor intended to counter arguments in favor of make-work projects for economic stimulus: the idea is that breaking a window always has a net negative effect on the economy, since even though it creates demand for a replacement window, the resources necessary to replace a window that already existed are just being allocated to restore the status quo ante, and the opportunity cost of that is everything else the same resources might have been used for instead, if the window hadn't been broken.

I think that's quite distinct from manufacturing new windows for new installations, which is net positive production, and where newer use cases for windows create opportunities for producers to iterate on new window designs, and incrementally refine and improve the product, which wouldn't happen if you were simply producing replacements for pre-existing windows.

Even in this example, lots of people writing lots of different variations of login pages has produced incremental improvements -- in fact, as an industry, we haven't been writing the same exact login page over and over again, but have been gradually refining them in ways that have evolved their appearance, performance, security, UI intuitiveness, and other variables considerably over time. Relying on AI to design, not just implement, login pages will likely be the thing that causes this process to halt, and perpetuate the status quo indefinitely.

reply
> Boris said that writing code is a solved problem.

No way, the person selling a tool that writes code says said tool can now write code? Color me shocked at this revelation.

Let's check in on Claude Code's open issues for a sec here and see how "solved" all of its issues are. Or my favorite: how their shitty React TUI that pegs modern CPUs and consumes all the memory on the system is apparently harder to get right than video games! Truly the masters of software engineering, these Anthropic folks.

reply
deleted
reply
That is the same team that built an app using React for a TUI, that used gigabytes to hold a scrollback buffer, and that had text scrolling so slow you could get a coffee in between.

And that then had the gall to claim writing a TUI is as hard as a video game. (It clearly must be harder, given that most dev consoles or text interfaces in video games consistently use less than ~5% CPU, which at that point was completely out of reach for CC.)

He works for a company that crowed about an AI-generated C compiler so overfitted it couldn't compile "hello world".

So if he tells me that "software engineering is solved", I take that with rather large grains of salt. It is far from solved. I say that as somebody who's extremely positive on AI usefulness. I see massive acceleration for the things I do with AI. But I also know where I need to override/steer/step in.

The constant hypefest is just vomit inducing.

reply
I wanted to write the same comment. These people are fucking hucksters. Don’t listen to their words; look at their software. It says all you need to know.
reply
Even if you like them, I don't think there's any reason to believe what people from these companies say. They have every reason to exaggerate or outright lie, and the hype cycle moves so quickly that there are zero consequences for doing so.
reply