People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.

reply
I think that's too easy an analogy, though.

Calculators are deterministically correct given the right input; using one doesn't require expert judgement about whether an answer is reasonable or not.

As someone who uses LLMs all day for coding, and who regularly bumps against the boundaries of what they're capable of, that's very much not the case. The only reason I can use them effectively is because I know what good software looks like and when to drop down to more explicit instructions.

reply
> Calculators are deterministically correct

Calculators are deterministic, but they are not necessarily correct. Consider 32-bit integer arithmetic:

  30000000 * 1000 / 1000
  30000000 / 1000 * 1000
Mathematically, they are identical, and both compute deterministically, yet the computer produces different results: in the first expression, 30000000 * 1000 overflows a 32-bit int before the division ever happens. There are many other cases where the expected result differs from what a computer calculates.
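
For anyone who wants to see it fail, here's a minimal C sketch (strictly speaking, signed overflow is undefined behavior in C; on typical two's-complement hardware it wraps):

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
      int32_t x = 30000000, k = 1000;
      int32_t a = x * k / k;  /* x * k = 30000000000 overflows int32 and wraps */
      int32_t b = x / k * k;  /* stays in range: 30000 * 1000 = 30000000       */
      printf("%d\n%d\n", (int)a, (int)b);  /* typically -64771 vs 30000000     */
      return 0;
  }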
reply
A good calculator will, however, do this correctly (as in: the way anyone would expect). Small cheap calculators resort to confusing syntax, but if you pay $30 for a decent handheld calculator, or use something decent like wolframalpha on your phone/laptop/desktop, you won't run into precision issues for reasonable numbers.
reply
He’s not talking about order of operations, he’s talking about floating point error, which will accumulate in different ways in each case, because floating point is an imperfect representation of real numbers.
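
A concrete illustration (my own example, not the one above): IEEE-754 addition isn't associative, because every intermediate sum is rounded, so mathematically equivalent groupings can drift apart:

  #include <stdio.h>

  int main(void) {
      double a = (0.1 + 0.2) + 0.3;   /* each intermediate sum is rounded */
      double b = 0.1 + (0.2 + 0.3);   /* same numbers, different grouping */
      printf("%.17f\n%.17f\n", a, b); /* 0.60000000000000009 vs 0.59999999999999998 */
      return 0;
  }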
reply
Yeap, the specific example wasn't important. I chose an example involving the order of operations and an integer overflow simply because it would be easy to discuss. (I have been out of the field for nearly 20 years now.) Your example of floating point errors is another. I also encountered artifacts from approximations for transcendental functions.

Choosing a "better" language was not always an option, at least at the time. I was working with grad students who were managing huge datasets, sometimes for large simulations and sometimes from large surveys. They were using C. Some of the faculty may have used Fortran. C exposes you to the vagaries of the hardware, and I'm fairly certain Fortran does as well. They weren't going to use a calculator for those tasks, nor an interpreted language. Even if they wanted to choose another language, the choice of languages was limited by the machines they used. I've long since forgotten what the high performance cluster was running, but it wasn't Linux and it wasn't on Intel. They may have been able to license something like Mathematica for it, but that wasn't the type of computation they were doing.

reply
I didn't consider it an order of operations issue. Order of operations doesn't matter in the above example unless you have bad precision. What I was trying to say is that good calculators have plenty of precision.
reply
But floating point errors manifest in different ways. Most people only care about 2 to 4 decimals, which even the cheapest calculators can handle well across a good number of consecutive ordinary computations. Anyone who cares about better precision will choose a better calculator. So floating point error is remediable.
reply
Good languages with proper number towers will handle both cases equally well.
reply
Determinism just means you don't have to use statistics to approach the right answer. It's not some silver bullet that magically makes things understandable and it's not true that if it's missing from a system you can't possibly understand it.
reply
That's not what I mean.

If I use a calculator to find a logarithm, and I know what a logarithm is, then the answer the calculator gives me is perfectly useful and 100% substitutable for what I would have found if I'd calculated the logarithm myself.

If I use Claude to "build a login page", it will definitely build me a login page. But there's a very real chance that what it generated contains a security issue. If I'm an experienced engineer I can take a quick look and validate whether it does or whether it doesn't, but if I'm not, I've introduced real risk to my application.

reply
Those two tasks are just very different. In one world you have provided a complete specification, such as 1 + 1, for which the calculator responds with some answer, and both you and the machine have a decidable procedure for judging answers. In another world you have engaged in a declaration for which there are many right and wrong answers, and thus even the boundaries of error are in question.

It's equivalent to asking your friend to pick you up, and they arrive in a big vs small car. Maybe you needed a big car because you were going to move furniture, or maybe you don't care, oops either way.

reply
Yes. That is the point I was making.

Calculators provide a deterministic solution to a well-defined task. LLMs don't.

reply
Furthermore, it is possible to build a precise mathematical formula to produce a desired solution.

It is not possible to be nearly as precise when describing a desired solution to an LLM, because natural languages are simply not capable of that level of precision... which is the entire reason coding languages exist in the first place.

reply
If you hand a broken calculator to someone who knows how to do math, and they enter 123 + 765 and get 6789, they should instantly know something is wrong (the answer should be 888). Hand that calculator to someone who never understood what the tool actually did but just accepted whatever answer appeared, and they would likely think the answer was totally reasonable.

Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.

reply
One time when I was a kid I was playing with my older sister's graphing calculator. I had accidentally pressed the base button and was now in hex mode. I did some benign calculation like 10+10 and got 14. I believed it!

I went to school the next day and told my teacher that the calculator says that 10+10 is 14, so why does she say it's 20?

So she showed me on her calculator. She pressed the hex button and explained why it was 14.

I think a major problem with people's usage of LLMs is that they stop at 10+10=14. They don't question it or ask someone (even the LLM) to explain the answer.
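
For what it's worth, both readings of the story are easy to reproduce; a quick C sketch (my reconstruction, since the exact keystrokes are admittedly misremembered):

  #include <stdio.h>

  int main(void) {
      printf("%d\n", 0x10 + 0x10);  /* "10 + 10" with digits read as hex: 32 */
      printf("%x\n", 10 + 10);      /* decimal sum displayed in hex: 14      */
      return 0;
  }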

reply
Totally on a tangent here, but what kind of calculator would have a hex mode where the inputs are still decimal and only the output is hex..?
reply
I probably got the actual numbers wrong in telling the story. But I do remember seeing a shift key on her calculator that would let you input abcde.
reply
> Catching an LLM hallucinating often takes a basic understanding of what the answer should look like before asking the question.

We had the same problem in the early days of calculators. Using a slide rule, you had to track the order of magnitude in your head; this habit let you spot a large class of errors (things that weren't even close to correct).

When calculators came on the scene, people who had never used a slide rule would confidently accept answers that were wildly incorrect (example: a mole of ideal gas at STP occupies 22.4 liters. If you're converting a volume to moles and typo the constant as 2204, you get an answer that's off by roughly two orders of magnitude, say 0.0454 when it should be 4.46. Easy to spot if you know roughly what the answer should look like, but easy to miss if you don't).
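
Spelling out the arithmetic (the 100 L sample volume is my assumption, back-solved from the numbers above):

  #include <stdio.h>

  int main(void) {
      double liters = 100.0;                  /* assumed sample volume        */
      printf("%.4f mol\n", liters / 22.4);    /* 4.4643: correct molar volume */
      printf("%.4f mol\n", liters / 2204.0);  /* 0.0454: the typo'd constant  */
      return 0;
  }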

reply
The calculator analogy is wrong for the same reason. Knowing and internalizing arithmetic, algebra, the shape of curves, and so on are the mathematical rungs you climb to reach higher mathematics and become a mathematician or physicist. You can't plug-and-chug your way there with a calculator and no understanding.

The people who make the calculator analogy are already victims of the missing rung problem and they aren't even able to comprehend what they're lacking. That's where the future of LLM overuse will take us.

reply
> People would have said the same about graphing calculators or calculators before that.

As it happens, we generally don't let people use calculators while learning arithmetic. We make children spend years using pencil and paper to do what a calculator could in seconds.

reply
This is why I don’t understand the calculator analogy. Letting beginners use LLMs is like giving kids calculators in 1st grade and telling Timmy he never needs to learn 2 + 2. That’s not how education works today.
reply
I think this is exactly why calculators are a great analogy, and a hint toward how we should probably treat LLMs.
reply
Unfortunately there are many posters here who believe we should, in fact, let children use calculators and not bother with learning arithmetic. It's foolishness, but that argument does get made. So I wouldn't be surprised if people also think we should let students use LLMs.
reply
> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

Well, we still make people calculate manually for many years, and we still make people listen to lectures instead of just reading.

But will we still have people go through years of manual coding? I guess in the future we will force them to, at least if we want to keep people competent, just like with the other things you mentioned. Currently you do that on the job; in the future people won't do it on the job, so they will be expected to do it as part of their education.

reply
What do people mean exactly when they bring up “Socrates saying things about writing”? Phaedrus?

> “Most ingenious Theuth, one man has the ability to beget arts, but the ability to judge of their usefulness or harmfulness to their users belongs to another; [275a] and now you, who are the father of letters, have been led by your affection to ascribe to them a power the opposite of that which they really possess.

> “For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

Sounds to me like he was spot on.

reply
But did this grind humanity to a halt?

Yes - specific faculties atrophied - I wouldn't dispute it. But the (most) relevant faculties for human flourishing change as a function of our tools and institutions.

reply
Someone brought up Socrates upthread:

> People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

If the conclusion now becomes “actually, Socrates was correct but it wasn’t that bad”, then why bring up Socrates in the first place?

reply
We notably teach people how to do arithmetic by hand before we hand them calculators.
reply
> The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

In a sense, I think you are right. We are currently going through a period of transition that values some skills and devalues others. The people who see huge productivity gains because they don't have to do the meaningless grunt work are enthusiastic about that. The people who did not come up with the tool are quick to point out pitfalls.

The thing is, the naysayers aren't wrong since the path we choose to follow will determine the outcome of using the technology. Using it to sift through papers to figure out what is worth reading in depth is useful. Using it to help us understand difficult points in a paper is useful. On the other hand, using it as a replacement for reading the papers is counterproductive. It is replacing what the author said with what a machine "thinks" an author said. That may get rid of unnecessary verbosity, but it is almost certainly stripping away necessary details as well.

My university days were spent studying astrophysics. It was long ago, but the struggles with technology handling data were similar. There were debates between older faculty, who were fine with computers as long as researchers were there to supervise the analysis every step of the way, and new faculty, who needed computers to take raw data to reduced results without human intervention. The reason was, as always, productivity. People could not handle the massive amounts of data being generated by the new generation of sensors or systematic large scale surveys if they had to intervene at every step of the way. At a basic level, you couldn't figure out whether it was a garbage-in, garbage-out type scenario because no one had the time to look at the inputs. (I mean no time in an absolute sense. There was too much data.) At a deeper level, you couldn't even tell if the data processing steps were valid unless there was something obviously wrong with the data. Sure, the code looked fine. If the code did what we expected of it, mathematically, it would be fine. But there were occasions where I had to point out that the computer wasn't working the way they thought it was.

It was a debate in which both sides were right. You couldn't make scientific progress at a useful pace without sticking computers in the middle and without computers taking over the grunt work. On the other hand, the machine cannot be used as a replacement for the grunt work of understanding, whether that involves reading papers or analyzing the code from the perspective of a computer scientist (rather than a mathematician).

reply
We still expect high school students to learn to use graph paper before they use their TI-83, grade school students to do arithmetic by hand before using a calculator. This is essentially the post's point, that LLMs are a useful tool only after you have learned to do the work without them.
reply
In college, we could only start using those tools once we understood the principles behind them.
reply
Socrates does not say this about the written word. Plato has Socrates say it about writing in the beginning sections of the Phaedrus, but it is neither Socrates' opinion nor the final conclusion he arrives at.

And yes, yes, you can pull up the quote or ask your AI, but they will be wrong. The quote is from Socrates reciting a "myth", as is pretty typical of a middle-to-late dialogue like this.

But here, alas, we can recognize the utter absurdity: this just points out why writing can be bad, as Socrates does pose, because you get guys 2000 years in the future quoting and misquoting you for their dumb cause! No more logos, only endless stochastic doxa. Truly a future of sophists!

reply
But AI might actually get you there in terms of superior pedagogy: personal Q&A that most individuals couldn't have afforded before.
reply
There are a lot of people in academia who are great at thinking about complex algorithms but can't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go through the junior developer route. Same with debugging and profiling skills.

But we might see a lot more specialization as a result.

reply
Do they need to write maintainable code? I think probably not; it's the research and discovering the new method that is important.
reply
They can’t write maintainable code because they don't have real-world experience of getting their hands dirty in a company. The only way to get startup experience is to build a startup or work for one.
reply
Duh, the only way to get startup experience is indeed to get startup experience.

My point is that getting into the weeds of writing CRUD software is not the only way to gain the ability to write complex algorithms, or to debug complex issues, or do performance optimization. It's only common because the stuff you make on the journey used to be economically valuable.

reply
> write complex algorithms, or to debug complex issues, or do performance optimization

That’s the stuff that AI is eating. The stuff I’m talking about (scaling orgs, maintaining a project long term, deciding what features to build or not build, etc.) is very hard for AI.

reply
AI is only eating some of that, though. For instance, everyone who does performance work knows that perhaps the most important part of optimization is constructing the right benchmark. This is already the thing that makes intractable problems tractable. That effect is now exacerbated (AI can optimize anything given a benchmark), but AI isn’t making great progress at constructing the benchmark itself.
reply
I don't know if I'd call it "hard for AI" so much as "untrodden ground".

Agents might be better at it than people are, given the right structure.

reply
What. Are you saying maintainable code is specifically related to startups? I can accept companies as an answer (although there are other places to cut your teeth), but startups are a weird carveout.
reply
Writing maintainable code is learned by writing large codebases. Working in an existing codebase doesn't teach you it, so most people working at large companies do not build the skill since they don't build many large new projects. Some do but most don't. But at startups you basically have to build a big new codebase.
reply
That’s a good analogy, but I think we’ve already gone from 0 to 10 rungs over the last couple of years. If we assume that the models or harnesses will improve, more and more rungs will be removed. The vast majority of programmers aren't doing novel, groundbreaking work.
reply