It's not about doing more or less work; it's about doing the work faster. For instance, it's relatively common to discover that some recomputation can be faster than caching or lookup tables. Similarly, fetching more from memory can also be faster if it means you make fewer round trips.
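A classic illustration of the recompute-vs-cache tradeoff (my own example, not from this thread) is reversing the bits of a byte: a 256-entry lookup table costs a memory access, while the arithmetic version costs a handful of ALU ops and touches no memory. Which one wins depends on the workload and cache pressure; the point is only that both are legitimate choices.

```cpp
#include <cstdint>

// Lookup-table version: precomputed 256-entry table, one load per call.
uint8_t reverse_table(uint8_t b) {
    static uint8_t table[256];
    static bool init = false;
    if (!init) {
        for (int i = 0; i < 256; ++i) {
            uint8_t r = 0;
            for (int j = 0; j < 8; ++j)
                if (i & (1 << j)) r |= 1 << (7 - j);
            table[i] = r;
        }
        init = true;
    }
    return table[b];
}

// Recomputed version: pure bit arithmetic, no memory traffic.
// Swap nibbles, then pairs, then adjacent bits.
uint8_t reverse_compute(uint8_t b) {
    b = (b & 0xF0) >> 4 | (b & 0x0F) << 4;
    b = (b & 0xCC) >> 2 | (b & 0x33) << 2;
    b = (b & 0xAA) >> 1 | (b & 0x55) << 1;
    return b;
}
```

Under heavy cache contention the table version can lose its theoretical advantage, which is exactly the "recomputation beats the lookup table" situation described above.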
Well, that's where I thought this link was going to go before it went down the SIMD path. We already have a way to beat binary search: it's called a B-tree, and it rests on the same basic insight. Take 64 evenly spaced elements from your data set and compare against all of them rapidly; instead of bisecting your search space once, you cut it by a factor of 64, the same as six bisections. And because those 64 elements are stored contiguously in an array, they cost a single fetch and you get cache locality. As the data set grows you have to repeat this lookup three, four, or five times, and the index levels cost a bit of extra space. So what if we make them cost no extra space by just storing the data itself in these lookup tables? That's a B-tree.
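The narrowing step described above can be sketched as a k-ary search over a plain sorted array: each round compares the key against B evenly spaced pivots and shrinks the range by roughly a factor of B+1. This is my own minimal sketch of the idea, not the article's SIMD version; a real implementation would store each node's pivots contiguously and compare them with vector instructions rather than a strided scalar scan.

```cpp
#include <cstddef>
#include <vector>

// Sketch of a k-ary lower bound: returns the index of the first element
// >= key in the sorted vector `a`. Each round inspects B evenly spaced
// pivots, narrowing the range by ~B+1x instead of the 2x of binary search.
std::size_t kary_lower_bound(const std::vector<int>& a, int key,
                             std::size_t B = 64) {
    std::size_t lo = 0, hi = a.size();
    while (hi - lo > B) {
        std::size_t step = (hi - lo) / (B + 1);
        std::size_t next_lo = lo, next_hi = hi;
        // Scan the B pivots. In the cache-friendly version these would sit
        // in one contiguous node and be compared with SIMD in parallel.
        for (std::size_t i = 1; i <= B; ++i) {
            std::size_t p = lo + i * step;
            if (a[p] < key) {
                next_lo = p + 1;      // answer lies strictly after p
            } else {
                next_hi = p;          // answer is at or before p
                break;
            }
        }
        lo = next_lo;
        hi = next_hi;
    }
    // Finish the last short range with a linear scan.
    while (lo < hi && a[lo] < key) ++lo;
    return lo;
}
```

With B = 64, each round does the work of six binary-search bisections but can be served by a single cache-line-friendly fetch when the pivots are laid out contiguously.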
A B-tree is not a search algorithm, though; it is a data structure. While it would be nice to be able to somehow instantly materialize a B-tree from a linear array, CPUs aren't quite there yet. It would also be nice not to have to deal with linear arrays where B-trees would be a better fit in the first place, but we are not quite there yet either.