I feel like people tend to forget that among the many things LLMs can do these days, “using a search engine” is among them. In fact, they use them better than the majority of people do!
The conversation people think they’re having here and the conversation that actually needs to be had are two entirely different conversations.
> I don’t know about you, but I wasn’t allowed to use calculators in my calculus classes precisely to learn the concepts properly. “Calculators are for those who know how to do it by hand” was something I heard a lot from my professors.
Suppose I never learned how to derive a function. I don’t even know what a function is. I have no idea how to make one, write one, or what it even does. So I start gathering knowledge:
- A function is some math that allows you to draw a picture of how a number develops if you do that math on it.
- A derivative is a function that you feed a function and a number into, and then it tells you something about what that function is doing to that number at that number.
- “What it’s doing” specifically means not the result of the math for that particular number, but the results for the immediate other numbers behind and in front of it.
- This can tell us about how the function works.
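The intuition in that list - look at the numbers just behind and just in front, and see how the function’s output changes between them - can be sketched in a few lines of Python. This is a hypothetical helper written for illustration (a central-difference estimate), not anything an LLM actually runs under the hood:

```python
def numerical_derivative(f, x, h=1e-6):
    # Evaluate f just behind (x - h) and just in front (x + h) of x,
    # and see how much the output changes per unit of input.
    # This is the "immediate other numbers" idea as arithmetic.
    return (f(x + h) - f(x - h)) / (2 * h)

def f(x):
    return x ** 2  # an example function: square the number

# At x = 5, f is growing at a rate of roughly 10 (the exact answer is 2x = 10).
print(numerical_derivative(f, 5))
```

Note that this only gives you an estimate at one point - it’s the intuitive picture, not the symbolic machinery a calculus class would teach you.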
Now I go tell ClaudeGPTimini “hey, can you derive f(x) at 5 so that we can figure out where it came from and where it goes from there?”, and it gives me a result.
I’ve now ostensibly understood what a derivative does and what it’s used for, yet I have zero idea how to mathematically do it. Does that make any results I gain from this intuitive understanding any less valuable?
What I’ll give you is this: if I knew exactly how the math worked, it would be far easier for me to instantly spot any errors ClaudeGPTimini produced. And the understanding of functions and derivatives outlined above may be simplistic in places (intentionally so), in ways that may break it in certain edge cases. But that only matters if I take its output at face value.

If I get a general understanding of something and run a test with it, I’ll generally have some sort of hypothesis about what kind of result to expect, given that my understanding is correct. If I know that a lot of unknown unknowns exist around a thing I’m working with, then I also know that unexpected results, as well as expected ones, require more thorough verification. Science is what happens when you expect something, test something, get a result - expected OR unexpected - and then systematically rule out that anything other than the thing you’re testing had an effect on that result.
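That "expect something, then test it" loop is mechanical enough to sketch. Suppose the model hands me a derivative; even without knowing the symbolic rules, I can spot-check its claim against the behind-and-in-front estimate at a few points. The function names here are hypothetical, made up for this sketch:

```python
def check_claimed_derivative(f, claimed_df, points, tol=1e-3):
    """Spot-check a claimed derivative function against a numerical
    estimate at several sample points. Returns True if they agree
    everywhere within tol - my hypothesis survived the test."""
    h = 1e-6
    for x in points:
        estimate = (f(x + h) - f(x - h)) / (2 * h)
        if abs(estimate - claimed_df(x)) > tol:
            return False  # unexpected result: dig deeper before trusting it
    return True

# Say ClaudeGPTimini claims the derivative of x**3 is 3*x**2.
# I expect agreement at every point I probe:
print(check_claimed_derivative(lambda x: x ** 3,
                               lambda x: 3 * x ** 2,
                               [0.5, 1.0, 5.0]))
```

A passing check doesn’t prove the claim (it could still break on inputs I didn’t probe - those unknown unknowns again), but a failing one tells me immediately that either the model or my understanding is wrong.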
This is not a problem with LLMs. It’s a thing we should’ve started teaching in schools decades ago: how to recognize that there are things you don’t understand. In my view, the vast majority of problems plaguing us as a species lie in this fundamental skill that far too many people are simply never taught.