If we draw the analogy to Bertrand Russell's Principia Mathematica: he derived fully expanded expressions, i.e. trees whose leaves may only refer to the same few literals, and everyone claimed this madness underscored how formal verification of natural mathematics was a fool's errand. Nevertheless we see successful projects like metamath (us.metamath.org) where this exponential blow-up does not occur. It is easy to see why: instead of representing proofs as full trees, the proofs are represented as DAGs. The same optimization would be required for EML to prevent exponential blow-up.
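To make the DAG point concrete, here is a minimal hash-consing sketch (my own illustration, not anything from the EML paper): repeated subexpressions are stored only once, so an expression whose fully expanded tree has ~2^40 nodes needs only ~40 DAG nodes.

```python
# Toy hash-consing table, just to illustrate the tree-vs-DAG size gap.
_pool = {}  # (op, child_ids...) -> node id

def node(op, *children):
    key = (op,) + children
    if key not in _pool:
        _pool[key] = len(_pool)
    return _pool[key]

def doubled(depth):
    """Nested doubling: the fully expanded tree has ~2**depth leaves."""
    e = node("1")
    for _ in range(depth):
        e = node("add", e, e)   # both children are the *same* DAG node
    return e

doubled(40)
print(len(_pool))  # 41 unique nodes instead of ~2**40 tree nodes
```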
Put differently: suppose we allow extra buttons besides {1, EML}. For example, to capture unary functions the authors mentally add an 'x' button, so the RPN calculator now has {1, EML, x}; but wait, if you want multivariate functions it becomes an RPN calculator with extra buttons, e.g. {1, EML, x, y, z}.
But why stop there? In metamath, proofs are compressed: if an expression or wff was proven before within the same proof, its first subproof is given a number, and any subsequent invocation of that N-th subproof refers back to that number. So why only recall the input parameters x, y, z, but not recall earlier computed values/functions?
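A rough sketch of what that would look like, in RPN terms rather than metamath's actual compressed-proof encoding; the program format and op table below are made up for illustration:

```python
# Besides pushing inputs and applying ops, a step may be ("recall", n),
# which reuses the n-th value computed so far (history records the top of
# the stack after every step). Purely illustrative, not metamath's format.
def run(program, ops, inputs):
    stack, history = [], []
    for step in program:
        kind = step[0]
        if kind == "push":
            stack.append(inputs[step[1]])
        elif kind == "recall":
            stack.append(history[step[1]])       # reuse an earlier result by index
        else:                                    # apply a named operation
            f, arity = ops[kind]
            args = [stack.pop() for _ in range(arity)][::-1]
            stack.append(f(*args))
        history.append(stack[-1])
    return stack[-1]

# Example: compute (x + y) * (x + y) while computing x + y only once.
ops = {"add": (lambda a, b: a + b, 2), "mul": (lambda a, b: a * b, 2)}
prog = [("push", "x"), ("push", "y"), ("add",), ("recall", 2), ("mul",)]
print(run(prog, ops, {"x": 3, "y": 4}))  # 49
```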
In fact, every proof in metamath's set.mm that uses this DAG compressibility could be split up: the repeatedly used substatements could be automatically converted into explicitly separate lemma proofs alongside the main proof, in which case metamath could dispose of single-proof DAG compression (though it would force proofs to be split into lemmas plus a main proof).
None of the proven theorems in metamath's set.mm displays the feared exponential blowup.
Or you could accept that there's already a large collection of known useful special functions, and work with shallower trees of those instead, e.g. https://arxiv.org/abs/1905.11481
That's not really necessary. Imagine a leaf ("1" or "x" or ...) somewhere near the top of the binary tree: with the current brute-force method, that's a whole binary subtree, parameters and all, going unused.
One could just as well use that whole culled binary subtree as a DAG node.
It does require more complex routing, but selecting input nodes is a sparse task, so those routing parameters can be sparse virtual parameters (say, inner products of dense vectors in some vector space), so the routing doesn't need to take up much memory.
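Something like the following toy routing scheme is what I have in mind (names, dimensions, and the argmax selection rule are all invented for illustration): each candidate input gets a dense key vector, each consumer gets a query vector, and the selected input is the one with the largest inner product, so memory scales with the embedding dimension rather than with a full node-by-node routing table.

```python
# Illustrative "virtual routing parameters" sketch, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d = 16                        # embedding dimension, much smaller than #candidates
num_candidates = 10_000

candidate_keys = rng.normal(size=(num_candidates, d))  # one dense key per candidate node
node_query = rng.normal(size=d)                        # one dense query per consuming node

scores = candidate_keys @ node_query      # one inner product per candidate
chosen = int(np.argmax(scores))           # sparse selection: a single input node
print(chosen, scores[chosen])
```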
That is sort of comparable to how NAND gates simplify scaling.
Division is hell on gates.
The single component was the reason scaling went the way it did.
There was only one gate structure that had to improve to make chips smaller; if a chip had used 3 different kinds, scaling would have required more than one innovation to land in parallel (sort of like how LED lighting had to wait for blue).
If you need two or more components, then you have to keep switching tools instead of hammer, hammer, hammer.
Efficient numerical libraries likewise contain lots of redundancy. For example, sqrt(x) is mathematically equivalent to pow(x, 0.5), but sqrt(x) is still typically provided separately and is faster. Anyone who thinks that the eml() function is supposed to lead directly to more efficient computation has missed the point of this (interesting) work.
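For what it's worth, this is easy to check in Python (the timings are machine- and version-dependent, so treat the numbers as illustrative):

```python
# math.sqrt(x) and math.pow(x, 0.5) agree up to floating point, yet sqrt is
# still shipped as a separate primitive and is usually faster.
import math, timeit

x = 2.0
print(math.sqrt(x), math.pow(x, 0.5))                          # same value
print(timeit.timeit("math.sqrt(2.0)", "import math", number=1_000_000))
print(timeit.timeit("math.pow(2.0, 0.5)", "import math", number=1_000_000))
```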
I'm guessing that's what they're really talking about, which is not about NAND gates.
As an example, searching for sqrt(x) would require a tree of depth ~40, which is in the trillion-parameter regime.
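Back-of-the-envelope for that number (assuming a complete binary tree with roughly one parameter per node):

```python
# A complete binary tree of depth d has 2**d leaves and 2**(d+1) - 1 nodes,
# so depth ~40 lands at ~10^12 parameters.
d = 40
print(2**d)            # 1_099_511_627_776 leaves
print(2**(d + 1) - 1)  # ~2.2e12 nodes
```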
For extreme regularization, one can even go down to 10 arbitrary precision numbers: if we have a single vector of 10 dimensions, we can re-order the components 10! different ways.
10! = 3 628 800
so we can retrieve ~3.6M distinct vectors from it, and we can form about 10T inner products between pairs of them.
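Checking the arithmetic:

```python
# 10! orderings of a 10-dimensional vector, and ordered pairs of those
# permuted vectors as candidate inner products.
import math

perms = math.factorial(10)
print(perms)            # 3_628_800, i.e. ~3.6M distinct re-orderings
print(perms * perms)    # ~1.3e13 ordered pairs, on the order of 10T inner products
```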
I think you are right: each node in some other expression tree, formed of primitives from some set Prim, would be expanded by at most max(nodes(expr) for expr in Prim).
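In other words, the rewritten tree grows at most linearly, by a factor of the largest single-primitive expansion. A toy sketch of that bound, with invented expansion sizes (leaves are counted as size 1):

```python
# Expansion sizes per primitive are made up for illustration only.
expansion_size = {"add": 1, "mul": 4, "log": 3, "sqrt": 5}   # nodes(expr) per primitive

def rewritten_size(tree):
    """tree = (op, child, child, ...) or a leaf string."""
    if isinstance(tree, str):
        return 1
    op, *children = tree
    return expansion_size[op] + sum(rewritten_size(c) for c in children)

t = ("mul", ("log", "x"), ("add", "x", "1"))   # 6 nodes in the original tree
bound = 6 * max(expansion_size.values())        # 6 nodes, worst-case factor 5
print(rewritten_size(t), "<=", bound)           # 11 <= 30
```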
That's essentially what the EML compiler is doing from what I understand.
It's a way to make mathematical formulas completely unreadable. It's a way to spend more time computing functions like log (3 ems required) while using more precision. It's a way to blow the minds of muggles reading Hacker News.