I was working on LLM-assisted optimization and algorithm discovery some time ago, and this does not look like a novel idea.

AlphaEvolve from Google is an evolutionary algorithm that uses LLMs for idea generation, following a very similar loop:

- https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...

- Open source implementation of the algorithm: https://github.com/algorithmicsuperintelligence/openevolve
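For context, the core of such an LLM-in-the-loop evolutionary algorithm is small. Here is a minimal sketch, with the LLM call and the fitness function stubbed out as random placeholders (both are assumptions for illustration, not AlphaEvolve's actual components):

```python
import random

def llm_propose_variant(program: str) -> str:
    """Placeholder for an LLM call that rewrites the candidate program.
    Here it just appends a random tweak so the sketch runs offline."""
    return program + f"  # tweak {random.randint(0, 999)}"

def evaluate(program: str) -> float:
    """Placeholder fitness function; a real system would run the code
    against a benchmark and return a score."""
    return random.random()

def evolve(seed_program: str, generations: int = 20, pool_size: int = 4):
    # Keep a small pool of (score, program) candidates; each generation
    # mutates a sampled candidate via the LLM and keeps the fittest.
    pool = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        _, parent = random.choice(pool)
        child = llm_propose_variant(parent)
        pool.append((evaluate(child), child))
        pool = sorted(pool, reverse=True)[:pool_size]  # truncation selection
    return pool[0]

best_score, best_program = evolve("def solve(x): return x")
```

The open-source OpenEvolve repo linked above implements a much more elaborate version of this loop (prompt sampling, program databases, cascading evaluation).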

reply
It is not novel - but with the new models it is just becoming practical.
reply
I mean, this is such low hanging fruit, you have to be careful not to step on it.
reply
Just because it is a nice meme, I want to throw in Schmidhuber's work (do not take this comment seriously unless you are Schmidhuber himself):

* Gödel Machine (2006-2007) [1]

* Optimal Ordered Problem Solver (2002) [2]

* Meta-Learning and Artificial Curiosity (1990s onward) [3]

[1] https://arxiv.org/html/2505.22954v3

[2] https://arxiv.org/abs/cs/0207097

[3] https://evolution.ml/pdf/schmidhuber.pdf

Edit: markdown formatting

reply
A genetic algorithm keeps a population, and there is a "crossover" operation.

I don't see either ingredient in Karpathy's proposed scheme.
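For reference, a textbook genetic algorithm showing both ingredients, a population and a crossover operation, fits in a few lines. This is a toy one-max example (maximize the number of 1-bits); all names and parameters are my own:

```python
import random

def crossover(a, b):
    """Single-point crossover: splice two parent bitstrings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in genome]

def fitness(genome):
    return sum(genome)  # toy objective: count of 1-bits

def genetic_algorithm(pop_size=20, length=16, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]          # selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children               # next generation
    return max(population, key=fitness)
```

In the scheme under discussion, the LLM plays the role of the mutation operator; the population and the crossover step are what's missing.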

reply
i actually do it differently

> (1) Let the LLM randomly perturbate the system.

instead of this i ask the LLM what's least likely to improve performance, and then measure it.

sometimes big gains come from places you thought were least likely.
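a minimal sketch of that inverted loop, with the LLM ranking stubbed out by a shuffle (a real harness would prompt the model and benchmark the actual patched program; every name here is a placeholder):

```python
import random
import time

def llm_rank_unlikely(changes):
    """Placeholder for prompting an LLM: 'rank these edits from least
    to most likely to improve performance'. Here we just shuffle."""
    ranked = list(changes)
    random.shuffle(ranked)
    return ranked

def benchmark(run) -> float:
    """Placeholder benchmark: wall-clock one run of the candidate."""
    start = time.perf_counter()
    run()
    return time.perf_counter() - start

def probe_unlikely(changes, k=2):
    # Measure only the k edits the model rated LEAST promising;
    # the occasional surprise win is exactly the point.
    ranked = llm_rank_unlikely(changes)
    return {name: benchmark(run) for name, run in ranked[:k]}

timings = probe_unlikely([
    ("inline hot loop", lambda: sum(range(10_000))),
    ("precompute table", lambda: [i * i for i in range(10_000)]),
    ("swap data structure", lambda: sorted(range(10_000), reverse=True)),
])
```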

reply
This is like Idiocracy for software devs at this point
reply
Is it? Evolution also seems to be the result of semi-random crap over the span of millennia, and nobody critiques it like that.

Why should throwing ideas at the wall when it comes to optimizing code be any different, as long as you can measure and verify it, are okay with the added complexity, and are capable of making the code itself not be crap by the end of it?

If an approach is found that improves how well something works, you can even treat the AI slop as a draft and iterate upon it yourself further.

reply
It's basically saying to randomly slop something and see if it gets better. Evolution has physical principles and guard rails backing it. Here there are no principles whatsoever, just slopping the slopper to see if it's somehow less sloppy than writing a gist with a slop machine.

I wouldn't call it Karpathy's loop, I'd call it slop descent. Or descent into slop. Or something like that.

reply
Evolution very much involves random mutations that turn out useless or harmful and thus don't spread.

This is in fact less random than how genetic algorithms traditionally worked, which encoded behaviors in some data structure that then got randomly mutated or crossed with other candidates in the pool.

reply
I am aware of what biological evolution is. This isn't analogous. I love my software friends, I'm a software person now too, but the degree to which people take algorithms that involve any level of biomimicry as a model for actual biology is frustrating.
reply
It does burn holes in one's brain, doesn't it... At least with the silly sorting algorithms we know they are supposed to be silly...
reply
Lol, I respect Karpathy a lot, but this is such an obvious, in-your-face idea that it is laughable to put someone's name on it.

What’s next “karpathy investing” where ai in a loop builds a portfolio?

reply
I'd go a step further and say that sort of loop is probably the first thing most people who play around with agent harnesses try, pretty much the first "Hmm, what should I do now?" thing that pops into people's heads.
reply
thanks, I thought as a researcher Karpathy would include and cite relevant papers. I quickly became disappointed. I already knew OpenEvolve and the ACE framework paper. This is the first time I've learned about genetic algorithms, and I now have a clear roadmap for studying.
reply
Wtf, this has a name now? I thought of this exact idea literally months ago but never had the time to do any experiments on it.

At the time I dismissed it as potentially being incredibly expensive for the improvement you get, and as running into the typical pitfalls of evolutionary algorithms (in the same way evolution doesn't let an organism grow a wheel, your LLM evolution algorithm will never come up with something that requires a far bigger leap than what you allow the LLM to perturb in a single step. Also, the genetic algorithm will probably result in a vibecoded mess of short-sighted decisions, just like evolution creates a spaghetti genome in real life.)

I'll definitely need to look into how people have improved the idea and whether it is practical now.

reply
This is not a new idea at all; many, many people have had it, and no one can really claim it.
reply
Wikipedia has humor:

> The same observation had previously also been made by many others.

reply
Don’t worry, Twitter bros already coined it.
reply
Genetic algorithms have existed since the 60s/70s, e.g. computers learning to play a game. LLMs aren't particularly good at it.

I think hyperparameter tuning may actually be a kind of genetic algorithm.
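for what it's worth, a genetic-algorithm-style tuner over hyperparameters is easy to sketch. The validation-loss function below is a stand-in quadratic with its optimum at lr=0.1, dropout=0.3, not a real model; all names are illustrative:

```python
import random

def val_loss(params):
    """Stand-in for training a model and returning validation loss."""
    return (params["lr"] - 0.1) ** 2 + (params["dropout"] - 0.3) ** 2

def perturb(params):
    # Mutation: jitter each hyperparameter with small Gaussian noise,
    # clamped to stay non-negative.
    return {k: max(0.0, v + random.gauss(0, 0.02)) for k, v in params.items()}

def tune(generations=100, pop_size=10):
    pop = [{"lr": random.uniform(0, 1), "dropout": random.uniform(0, 1)}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=val_loss)
        survivors = pop[:pop_size // 2]           # elitist selection
        pop = survivors + [perturb(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=val_loss)

best = tune()
```

Note there is no crossover here, which is part of why calling this a genetic algorithm (rather than just an evolutionary/mutation-based search) is a stretch.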

reply
Hyperparameter tuning could be done by a genetic algorithm, but I think it's a bit of a category error to say that it is one.

In practice, hyperparameter tuning is usually done by Bayesian optimization.

reply
Yeah that’s correct, it could use it, but there are better alternatives for this particular problem.
reply
You know this doesn’t work most of the time…
reply