> Since the formulas did depend on each other the order of (re)calculation made a difference. The first idea was to follow the dependency chains but this would have involved keeping pointers and that would take up memory. We realized that normal spreadsheets were simple and could be calculated in either row or column order and errors would usually become obvious right away. Later spreadsheets touted "natural order" as a major feature but for the Apple ][ I think we made the right tradeoff.

It would seem that the creators of VisiCalc regarded this as a choice that made sense given the limitations of the Apple ][, but would agree that a dependency graph would have been better.

https://www.landley.net/history/mirror/apple2/implementingvi...

Edit: It's also interesting that the tradeoff here is put in terms of correctness, not performance as in the posted article. And that makes sense: Consider a spreadsheet with =B2 in A1 and =B1 in B2. Now change the value of B1. If you recalc the sheet in row-column OR column-row order, B2 will update to match B1, but A1 will now be incorrect! You need to evaluate twice to fully resolve the dependency graph.
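To make the example concrete, here's a minimal sketch (hypothetical cell names and evaluation loop, not VisiCalc's actual code) showing that one fixed-order pass leaves A1 stale and a second pass fixes it:

```python
# Cells hold either a constant or a formula reading another cell by name.
# B1 was just changed to 5; A1 and B2 still hold stale values.
values = {"A1": 0, "B1": 5, "B2": 0}
formulas = {"A1": lambda v: v["B2"], "B2": lambda v: v["B1"]}

def recalc(values, formulas, order):
    # One full pass in a fixed sheet order, like a single VisiCalc recalc.
    for cell in order:
        if cell in formulas:
            values[cell] = formulas[cell](values)

row_major = ["A1", "B1", "B2"]  # A1 is visited before B2
recalc(values, formulas, row_major)
# After one pass: B2 = 5 (correct), but A1 = 0 (stale, read old B2)
recalc(values, formulas, row_major)
# After the second pass: A1 = 5, the chain is fully resolved
print(values["A1"], values["B2"])
```

In general a chain of N dependent cells laid out against the scan order can take N passes to stabilize, which is exactly the correctness tradeoff the quote describes.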

reply
Even LaTeX just brute-forces dependencies such as building a table of contents, index, and footnote references by running it a few times until everything stabilizes.
reply
VisiCalc didn't do this, though. It just recalculated once, and if there were errors you had to notice them and manually trigger another recalc.
reply
Is anyone using VisiCalc today? I'm not sure how its past success, however fantastic, translates into "a dependency graph is often overkill for a spreadsheet".
reply
The clause "it's absolutely necessary for all but the simplest toy examples" is what I was disagreeing with. But I wouldn't be surprised to hear that VisiCalc adopted one as soon as it was technically feasible in later versions.
reply
VisiCalc is not the benchmark you think it is; it's decades old. This day and age, any real-world use case will definitely need a dependency graph. It helps no one to suggest otherwise, and it actually makes light of a specific engineering task that will for a fact be required of anyone looking to build a spreadsheet engine into a product.
reply
I'm not suggesting otherwise. I'm saying that your "toy example" comment is very dismissive of something that was an extraordinary accomplishment of its day. They invented spreadsheets without it. Dependency graphs are excellent and widely useful things we should all be happy to adopt and reach for, far beyond spreadsheets. We should be grateful that they're available to all of us to build into software products so readily. I've used them repeatedly and I'm sure I will many times in the future.

What I'm trying to communicate is this: this product _invented_ spreadsheets, but you dismiss the implementation with a sneer.

reply
A still-very-common use case for spreadsheets is just to manage lists of things. For these, there are no formulas or dependencies at all. Another is simple totals of columns of numbers.

There are many common spreadsheet use cases that don't involve complicated dependency trees.

reply
It's a common CPU vs RAM decision to make. A dependency graph consumes memory, while recalculating everything for a number of iterations can happen on the stack, one formula at a time in a loop. On a 6502 that mattered. On modern CPUs, even with memory at a premium, I'm sure that for 99.9% of spreadsheets either option is good enough. Say you have 10K rows and 100 columns: that's only 1M calculations per pass.
reply
Keeping a dependency tree is not complicated
reply
It's more complicated than not keeping one, at least.
reply