I want to point back to my remark about everyday people.

If you don't limit yourself to "advancing the state of the art at the far frontiers of human knowledge" but allow for ordinary people making everyday contributions in their daily lives, you get even more of it.

This isn't a throwaway comment. I do this myself all the time, at every place I've worked: I challenge assumptions and try to make things better. It's not a rare thing at all; it's just not revolutionary.

Revolutions are rare. Perhaps only a handful have ever happened in any one field. But you will simply never get from Aristotelian physics to Newtonian physics to General Relativity by merely "synthesizing the data they were trained on", as the previous comment supposed.

Edit: I should also say something about experimentation. You can't do it from an armchair, which is all an LLM has access to (at present). Real people learn things all the time by conducting experiments in the world and observing the results, without necessarily working as formal scientists; babies, for example, learn a great deal by experimenting. This is one avenue to new knowledge that is entirely separate from experience, education, memories, and so on, because an experiment always has the potential to contradict all of them.

Experimentation leads to experience, so I feel like this was included by the parent comment. And in the case of writing software, agents are able to experiment today: they run tests, check log output, search DBs, and so on. Sure, they can't have apples fall on their heads like Newton did, but they can totally observe an apple falling on someone's head in a video.
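To make "experiment" concrete for a coding agent, here is a minimal sketch of that loop in Python: state a hypothesis, run a command, and observe what actually happened. The `run_experiment` helper and the chosen hypothesis are illustrative assumptions, not any particular agent framework's API.

```python
import subprocess
import sys

def run_experiment(cmd):
    """Run a command and capture its output -- the agent's 'observation'."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout

# Hypothesis: Python's sort is stable (equal keys keep their original order).
# The agent can't settle this from the armchair; it runs the code and looks.
code, out = run_experiment([
    sys.executable, "-c",
    "pairs=[(1,'a'),(1,'b'),(0,'c')]; print(sorted(pairs, key=lambda p: p[0]))",
])
print(code, out.strip())
```

The point is that the observation can contradict the hypothesis: if the sorted output had reordered the equal keys, the agent would have learned something its training data couldn't guarantee.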
Experimentation leads to experience

Of course it does, but only after the fact. You don't have any experience of the result of the experiment before you perform it.

Sure, they can't have apples fall on their heads like Newton did, but they can totally observe an apple falling on someone's head in a video

I have strong doubts that LLMs have any understanding whatsoever of what's happening in images (let alone videos). The claim I've sometimes heard, that they possess a world model and interpret an image according to that model, is an extremely strong one, and it's contradicted by two facts: a) they continue to hallucinate in pretty glaring ways, and b) they continue to misidentify doctored (adversarial) images that no human would misidentify, because the alterations don't drastically change the subject.
