write x#y for 1/(x-y).
x#0 = 1/(x-0) = 1/x, so you get reciprocals. Then (x#y)#0 = 1/((1/(x-y)) - 0) = x-y, so subtraction.
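The derivation above, as a minimal Python sketch (`s` is my own hypothetical name for the `#` operation):

```python
# Single binary operation s(x, y) = 1/(x - y) recovers
# reciprocal and subtraction, as derived above.
def s(x, y):
    return 1 / (x - y)

def reciprocal(x):
    return s(x, 0)        # 1/(x - 0) = 1/x

def subtract(x, y):
    return s(s(x, y), 0)  # 1/(1/(x - y)) = x - y
```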
It's a common exercise to show that, in (insert algebraic structure here), inverse and subtraction give all 4 elementary ops.
I haven't checked this carefully, but this note seems to give a short proof (modulo knowing some other items...) https://dmg.tuwien.ac.at/goldstern/www/papers/notes/singlebi...
> This includes constants such as e, pi, and i; arithmetic operations including addition, subtraction, multiplication, division, and exponentiation as well as the usual transcendental and algebraic functions.
It cites a paper from 1935: https://www.pnas.org/doi/10.1073/pnas.21.5.252
Here is a bit more: https://mathoverflow.net/questions/57465/can-we-unify-additi...
Could that be used to derive trigonometric functions with single distinct expressions?
I'm still reading this, but if this checks out, this is one of the most significant discoveries in years.
Why use splines or polynomials or haphazardly chosen basis functions if you can just fit (gradient descent) your data or wave functions to the proper computational EML tree?
Got a multidimensional and multivariate function to model (with random samples or a full map)? Just do gradient descent and convert it to approximant EML trees.
Perform gradient descent on EML function tree "phi" so that the derivatives in the Schroedinger equation match.
But as I said, still reading, this sounds too good to be true, but I have witnessed such things before :)
It's interesting to see his EML approach whereas mine was more on generating a context sensitive homoiconic grammar.
I've had lots of success combining spectral neural nets (GNNs, FNOs, Neural Tangent Kernels) with symbolic regression and using Operad Theory and Category Theory as my guiding mathematical machinery
If we make the analogy to Bertrand Russell's Principia Mathematica: he derived fully expanded expressions, i.e. trees whose leaves may only refer to the same literals. Everyone claimed this madness underscored how formal verification of natural mathematics was a fool's errand, but nevertheless we see successful projects like Metamath (us.metamath.org) where this exponential blow-up does not occur. It is easy to see why: instead of representing proofs as full trees, the proofs are represented as DAGs. The same optimization would be required for EML to prevent exponential blow-up.
Put differently: if we allow extra buttons besides {1, EML}, for example to capture unary functions, the authors mentally add an 'x' button, so now the RPN calculator has {1, EML, x}; but if you want multivariate functions, it becomes an RPN calculator with extra buttons, e.g. {1, EML, x, y, z}.
But why stop there? In Metamath, proofs are compressed: if an expression or wff was proven before in the same proof, its first subproof is given a number, and any subsequent invocation of this N'th subproof refers to that number. Why recall only the input parameters x, y, z but not earlier computed values/functions?
In fact, every proof in Metamath's set.mm that uses this DAG compressibility could be split into a main proof plus the repeatedly used substatements automatically converted into explicitly separate lemma proofs, in which case Metamath could dispose of the single-proof DAG compression (but it would force proofs to be split up into lemmas + a main proof).
None of the proven theorems in metamath's set.mm displays the feared exponential blowup.
Or you could accept that there's already a large collection of known useful special functions, and work with shallower trees of those instead, e.g. https://arxiv.org/abs/1905.11481
That is sort of comparable to how NAND simplifies scaling.
Division is hell on gates.
The single component was the reason scaling went like it did.
There was only one gate structure which had to improve to make chips smaller - if a chip used 3 different kinds, then the scaling would've required more than one parallel innovation to go (sort of like how LED lighting had to wait for blue).
If you need two or more components, then you have to keep switching tools instead of hammer, hammer, hammer.
Efficient numerical libraries likewise contain lots of redundancy. For example, sqrt(x) is mathematically equivalent to pow(x, 0.5), but sqrt(x) is still typically provided separately and faster. Anyone who thinks that eml() function is supposed to lead directly to more efficient computation has missed the point of this (interesting) work.
I'm guessing that's what they're really talking about. Which is not about NAND gates.
As an example, searching for sqrt(x) would require a tree of depth ~40 which is in the trillion-parameter regime.
I think you are right: each node in some other expression tree formed of primitives in some set Prim would be increased by at most max(nodes(expr) for expr in Prim).
That's essentially what the EML compiler is doing from what I understand.
It's a way to make mathematical formulas completely unreadable. It's a way to spend more time computing functions like log (3 emls required) while using more precision. It's a way to blow the minds of muggles reading Hacker News.
I might have misunderstood, but from the two "Why do X when you can do just Y with EML" sentences, I think you are describing symbolic regression, which has been around for quite some time and is a serious grown-up technique these days. But even the best symbolic regression tools do not typically "replace" other regression approaches.
Same reason all boolean logic isn't performed with combinations of NAND – it's computationally inefficient. Polynomials are (for their expressivity) very quick to compute.
And are a much less arbitrary choice than NAND, vs. NOR, XOR, etc.
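To make the "quick to compute" point concrete: a degree-n polynomial needs only n multiplies and n adds via Horner's rule. A minimal sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial given coefficients from highest to lowest degree."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c  # one multiply and one add per coefficient
    return acc

# 2x^2 - 3x + 1 at x = 5
horner([2, -3, 1], 5)  # -> 36.0
```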
Using transistors as conceptual digital logic primitives, where power dissipation isn't a thing, Pass Logic is "The Way".
Moreover, the amplifying function must exist at least in some gates, to restore the logic levels, but there is no need for it to exist in all gates.
For instance, any logic circuit can be implemented using AND gates and/or OR gates made with diodes, which have no gain, i.e. no amplifying function, together with 1-transistor inverters, which provide both the NOT function and the gain needed to restore the logic levels.
The logic functions such as NAND can be expressed in several ways using simpler components, which correspond directly to the possible hardware implementations.
Nowadays, the most frequent method of logic implementation is by using parallel and series connections of switches, for which the MOS transistors are well suited.
Another way to express the logic functions is by using the minimum and maximum functions, which correspond directly with diode-based circuits.
All the logic functions can also be expressed using the 2 operations of the binary finite field, addition (a.k.a. XOR) and multiplication (a.k.a. AND).
This does not lead to simpler hardware implementations, but it can simplify some theoretical work, by using algebraic results.

Actually, this kind of expressing logic is the one that should have been properly named Boolean logic, as the contribution of George Boole was precisely replacing "false" and "true" with "0" and "1" and reinterpreting the classic logic operations as arithmetic operations.

It is very weird to see in some programming languages a data type named Boolean, whose possible values are "false" and "true", instead of the historically correct Boolean values "0" and "1", which can also be much more useful in programming than "false" and "true", by simplifying expressions, especially in array operations (which is why array-oriented languages like APL use "0" and "1", not "false" and "true").
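For example, over the binary field the other gates fall out of XOR (addition mod 2) and AND (multiplication mod 2); a minimal sketch:

```python
# Boolean functions expressed in GF(2) arithmetic: ^ is addition, & is multiplication.
def NOT(a):
    return a ^ 1            # 1 + a  (mod 2)

def OR(a, b):
    return a ^ b ^ (a & b)  # a + b + ab  (mod 2)

def NAND(a, b):
    return (a & b) ^ 1      # 1 + ab  (mod 2)
```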
Pass logic. Digital. [0]
This is extremely basic digital circuit design. You can create digital circuits as compositions of gates. But you can often implement the same logic, with fewer transistors, using pass logic.
Pass logic is also great for asynchronous digital design.
But transistors can be N or P-channel, so it’s not a single logical primitive, like e.g. NAND-gates.
NAND gates also use both N- and P-channel transistors, and have to use at least the same number of transistors, but frequently many more, for anything non-trivial.
But as you point out, as a computational basis, pass transistors have two variants.
Because the EML basis makes simple functions (like +) hard to express.
Not to diminish this very cool discovery!
This also re-opens a lot of "party pooper" results in mathematics: impossibility of representing solutions to general quintic (fine print: if we restrict ourselves to arithmetic and roots/radicals). In mathematics and physics there have been a lot of "party pooper" results which later found more profound and interesting positive results by properly rephrasing the question. A negative result for a myopic question isn't very informative on its own.
Solving the quintic in terms of transcendental functions has already been done
https://en.wikipedia.org/wiki/Bring_radical#The_Hermite%E2%8...
It seems like a neat parlour trick, indeed. But significant discovery?
All this really says is that the Taylor expansions of e^x and ln x are sufficient to express trig functions, which is trivially true from Euler's formula as long as you're in the complex domain.
Arithmetic operations follow from the fact that e^x and ln x are inverses, in particular that e^ln(x) = x.
Taylor series seem a bit like magic when you first see them, but then you get to Real Analysis and find out there are whole classes of functions that they can't express.
This paper is interesting but it's not revolutionary.
> Submitted on 23 Mar 2026 (v1), last revised 4 Apr 2026 (this version, v2)
A stack of zeros and ones can be encoded in a single number by bit-shifting and incrementing.
Pushing a 0 onto the stack is equivalent to doubling the number.
Pushing a 1 is equivalent to doubling and adding 1.
Popping is equivalent to dividing by 2, where the remainder is the popped bit.
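The push/pop arithmetic above, as a minimal Python sketch:

```python
def push(n, bit):
    # Shift the stack left; the new bit lands in the low position.
    return 2 * n + bit

def pop(n):
    # Shift right; the remainder is the bit that was on top.
    return n // 2, n % 2
```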
I use something not too far off for my daily programming, based on a similar idea: Rejoice is a concatenative programming language in which data is encoded as multisets that compose by multiplication. Think Fractran without the rule-searching, or Forth without a stack.
https://paste.sr.ht/~rabbits/cd2369cc7c72bfad0fcd83e27682095...
``` look at this paper: https://arxiv.org/pdf/2603.21852
now please produce 2x+y as a composition on EMLs ```
Opus (paid) - claimed that "2" is circular. Once I told it that ChatGPT had already done this, it finished successfully.
ChatGPT (free) - did it on the first try.
Grok - produced an estimate of the depth of the formula.
Gemini - success.
Deepseek - assumed some pre-existing knowledge of what EML is. Unable to fetch the PDF from the link, unable to consume the PDF from "Attach file".
Kimi - produced a long output, then stopped and asked me to upgrade.
GLM - looks ok.
TIL you can taunt LLMs. I guess they exhibit more competitive spirit than I thought.
It got a result.
""" Consider a mathematical function EML defined as `eml(x,y)=exp(x)−ln(y)`
Please produce `sin(x)/x` as a composition on EMLs and constant number 1 (one). """
``` 2x + y = \operatorname{eml}\Big(1,\; \operatorname{eml}\big(\operatorname{eml}(1,\; \operatorname{eml}(\operatorname{eml}(1,\; \operatorname{eml}(\operatorname{eml}(L_2 + L_x, 1), 1) \cdot \operatorname{eml}(y,1)),1)\big),1\big)\Big) ```
for me Gemini hallucinated EML to mean something else despite the paper link being provided: "elementary mathematical layers"
Reminds me of the Iota combinator, one of the smallest formal systems that can be combined to produce a universal Turing machine, meaning it can express all of computation.
eml(z, eml(x, 1))
= e^z - ln(eml(x, 1))
= e^z - ln(e^x)
= e^z - x
and the claim is that, after it's expanded, z will be such that this whole thing is equal to -x. But with some algebra, that happens only if e^z = 0, and there is no complex number z that satisfies this equation. Indeed, if we laboriously expand the given formula for z (the left branch of the tree), we see that it goes through ln(0) and compound expressions. x^-1 has the same problem.

Both formulae work ...sort of... if we allow ln(0) = -Infinity and some other moxie, such as x / Infinity = 0 for all finite x.
looks like it computes ln(1)=0, then computes e-ln(0)=+inf, then computes e-ln(+inf)=-inf
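A minimal sketch of the extended-real convention this sub-thread relies on (ln(0) = -inf, exp(-inf) = 0); the `eml`/`ln` helpers here are my own toys, not the paper's code:

```python
import math

def ln(y):
    # Extended-real convention: ln(0) = -inf; math.log(inf) is already +inf.
    if y == 0:
        return -math.inf
    return math.log(y)

def eml(x, y):
    # exp(-inf) = 0.0 falls out of math.exp automatically.
    return math.exp(x) - ln(y)
```

With this, the steps traced above evaluate as expected: eml(1, 0) = e - ln(0) = +inf and eml(1, +inf) = e - ln(+inf) = -inf.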
> EML-compiled formulas work flawlessly in symbolic Mathematica and IEEE754 floating-point… This is because some formulas internally might rely on the following properties of extended reals: ln 0 = −∞, e^(−∞) = 0.
And then follows with:
> But EML expressions in general do not work ‘out of the box’ in pure Python/Julia or numerical Mathematica.
Thus, the paper's completeness claim depends on a non-standard arithmetic convention (ln(0) = -∞), not just the complex numbers it primarily advertises. While the paper is transparent about this, it is, however, buried on page 11 rather than foregrounded as a core caveat. Your comment deserves credit for flagging it.
This is the standard convention when doing operations in the extended real number line, i.e. in the set of the real numbers completed with positive and negative infinities.
When the overflow exception is disabled, any modern CPU implements the operations with floating-point numbers as operations in the extended real number line.
So in computing this convention has been standard for more than 40 years, while in mathematics it has been standard for a couple of centuries or so.
As always in mathematics, when computing expressions, i.e. when computing any kind of function, you must be aware very well which are the sets within which you operate.
If you work with real numbers (i.e. in a computer you enable the FP overflow exception), then ln(0) is undefined. However, if you work with the extended real number line, which is actually the default setting in most current programming languages, then ln(0) is well defined and it is -∞.
>>> import math
>>> math.log(0.0)
Traceback (most recent call last):
File "<python-input-2>", line 1, in <module>
math.log(0.0)
~~~~~~~~^^^^^
ValueError: expected a positive input, got 0.0
though if you use numpy floats you only get a warning:

>>> import numpy as np
>>> np.log(np.float64(0))
<python-input-1>:1: RuntimeWarning: divide by zero encountered in log
np.float64(-inf)
JavaScript works as expected:

> Math.log(0.0)
-Infinity
> Math.log(-0.0)
-Infinity
> Math.log(-0.1)
NaN

−z = 1 − (e − ((e − 1) − z))
A few ideas that come to my mind when reading this:
1. One should also add absolute value (as sqrt(x*x)?) as a desired function, and from that min, max, and signum among the available functions. Since the domain is complex, some of them will be a bit weird; I am not sure.
2. I think, for any bijective function f(x) which, together with its inverse, is expressible using eml(), we can obtain another universal basis eml(f(x),f(y)) with the added constant f^-1(1). Interesting special case is when f=exp or f=ln. (This might also explain the EDL variant.)
3. The eml basis uses natural logarithm and exponent. It would be interesting to see if we could have a basis with function 2^x - log_2(y) and constants 1 and e (to create standard mathematical functions like exp,ln,sin...). This could be computationally more feasible to implement. As a number representation, it kinda reminds me of https://en.wikipedia.org/wiki/Elias_omega_coding.
4. I would like to see an algorithm for how to find derivatives of the eml() trees. This could yield a rather clear proof of why some functions do not have indefinite integrals in symbolic form.
5. For some reason, extending the domain to complex numbers made me think about fuzzy logics with complex truth values. What would be the logarithm and exponential there? It could unify the Lukasiewicz and product logics.
That's awesome. I always wondered if there is some way to do this.
A quick Google search brings up https://github.com/pr701/nor_vm_core, which has a basic idea
https://gist.github.com/CGamesPlay/9d1fd0a9a3bd432e77c075fb8...
They're primitive in the sense that you can't compute exp(x) or log(x) using a finite combination of other elementary functions for any x. If you allow infinitely many operations, then you can easily find infinite sums or products of powers, or more complicated expressions, to represent exp and log and other elementary functions.
> Or do we assume that we have an infinite lookup table for all possible inputs?
Essentially yes; you don't necessarily need an "implementation" to talk about a function. More generally, you don't need to explicitly construct an object from simpler pieces: you can just prove that it satisfies some properties and that it has to exist.
For exp(x), you could define the function as the solution to the differential equation df/dx = f(x) with initial condition f(0) = 1. Then you would establish that the solution exists and is unique (it follows from the properties of the differential equation), call exp = f, and there you have it. You don't necessarily know how to compute it for any x, but you can assume exp(x) exists and is a real number.
In other words, this result does not aim to improve computability or bound the complexity of calculating the numerical value. Rather, it aims to exhibit this uniform, finite tree structure for the entire family of elementary expressions.
To clarify my earlier point: the author isn't trying to build a practical calculator or generate human-readable algebra. Using exp and ln isn't a cheat code because the goal is purely topological. The paper just proves that this massive, diverse family of continuous math can be mapped perfectly onto a uniform binary tree, without secretly burying a state machine inside the operator.
They use the complex version of logarithm, that has a lot of branching problems.
[1] https://github.com/tromp/AIT/blob/master/ait/minbase.lam
I read the paper. Is there a table covering all other math operations translated to eml(x,y) form?
You can find this link on the right side of the arxiv page:
https://arxiv.org/src/2603.21852v2/anc/SupplementaryInformat...
e^ix = cos x + i sin x

which means: e^-ix = cos(-x) + i sin(-x) = cos x - i sin x

so adding them together: e^ix + e^-ix = 2 cos x, hence

cos x = (e^ix + e^-ix) / 2

So I guess the real part of that. Multiplication, division, addition and subtraction are all straightforward. So are the hyperbolic trig functions. All other trig functions can be derived as per the above.
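The identity cos x = (e^{ix} + e^{-ix})/2 checks out numerically with the standard library's complex exponential:

```python
import cmath

def cos_via_exp(x):
    # cos x = (e^{ix} + e^{-ix}) / 2, from Euler's formula; imaginary parts cancel.
    return ((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2).real
```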
One thing I wonder now: NAND is symmetric while this isn't, could something similar be found where function(x, y) = function(y, x)?
So, what happens if you take say the EML expression for addition, and invert the binary tree?
https://th.if.uj.edu.pl/~odrzywolek/homepage/personal/pc/pc_...
Although you also need to encode where to put the input.
The real question is what emoji to use for eml when written out.
Some Emil or another, I suppose. Maybe the one from Ratatouille, or maybe this one: https://en.wikipedia.org/wiki/Emil_i_L%C3%B6nneberga
I'm kidding, of course. You can encode anything in bits this way.
Simply because Boolean algebra doesn't have that many functions, and all of them are very simple to implement. A complex Boolean function made out of NANDs (or the like) is only a little more complex than the same function made out of the other operators.

Implementing even simple real functions out of eml() seems to me to add a lot of computational complexity, even with both exp() and ln() implemented in hardware in O(1). I think about stuff like sum(), div() and mod().
Of course, I might be badly wrong as I am not a mathematician (not even by far).
But I don't see, at the moment, the big win on this.
> Everyone learns many mathematical operations in school: fractions, roots, logarithms, and trigonometric functions (+, −, ×, /, sqrt, sin, cos, log, …), each with its own rules and a dedicated button on a scientific calculator. Higher mathematics reveals that many of these are redundant: for example, trigonometric ones reduce to the complex exponential. How far can this reduction go? We show that it goes all the way: a single operation, eml(x, y), replaces every one of them. A calculator with just two buttons, EML and the digit 1, can compute everything a full scientific calculator does. This is not a mere mathematical trick. Because one repeatable element suffices, mathematical expressions become uniform circuits, much like electronics built from identical transistors, opening new ways to encoding, evaluating, and discovering formulas across scientific computing.
It gets a bit more difficult for the complex domain because you need rotation.
I think the bigger question is whether it will be more energy-optimal or silicon density-optimal than math libraries that are currently baked into these processors (FPUs).
There are also some edge cases "exp(exp(x))" and infinities that seem to result in something akin to "division by zero" where you need more than standard floating-point representations to compute - but these edge cases seem like compiler workarounds vs silicon issues.
I'm curious how this would compare to the dedicated sse or xmx instructions currently inside most processor's instruction sets.
Lastly, you could also create 5-depth or 6-depth EML tree in hardware (fpga most likely) and use it in lieu of the rust implementation to discover weight-optimal eml formulas for input functions much quicker, those could then feed into a "compiler" that would allow it to run on a similar-scale interpreter on the same silicon.
In simple terms: you can imagine an EML co-processor sitting alongside a CPUs standard math coprocessor(s): XMX, SSE, AMX would do the multiplication/tile math they're optimized for, and would then call the EML coprocessor to do exp,sin,log calls which are processed by reconfiguring the EML trees internally to process those at single-cycle speed instead of relaying them back to the main CPU to do that math in generalized instructions - likely something that takes many cycles to achieve.
Derivatives: No. Exercise: write the derivative of f(x) = eml(x,x).
Integrals: No. No. No. Integrals of compositions are a nightmare, and here they use long composition chains like g(x) = eml(1, eml(eml(1,x),1)).
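For what it's worth, the derivative exercise is easy on the real branch: eml(x, x) = e^x - ln x, so f'(x) = e^x - 1/x. A quick numerical sanity check (my own sketch, not from the paper):

```python
import math

def f(x):
    # f(x) = eml(x, x) = e^x - ln x  (real branch, x > 0)
    return math.exp(x) - math.log(x)

def f_prime(x):
    # d/dx (e^x - ln x) = e^x - 1/x
    return math.exp(x) - 1 / x
```

A central finite difference on f agrees with f_prime to several digits.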
Posts like these are the reason i check HN every day
Like when the Apollo guidance computer was made: the bottleneck was making integrated chips, so they only made one kind, the NOR gate, plus a whackton of routing to build out an entire CPU. Horribly complex routing, very simplified integrated-circuit construction.
E.g. ln is a rather complicated construct; it's not even a function. That's because, for complex numbers, e^x is not bijective, and thus its inverse isn't a function.
So using that complicated construct to define something simpler like addition invites extra complexity.
It’s a derivation of the Y combinator from ruby lambdas
It's one of those facts that tends to blow minds when it's first encountered, I can see why one would name a company after it.
More on topic:
> No comparable primitive has been known for continuous mathematics: computing elementary functions such as sin, cos, sqrt, and log has always required multiple distinct operations.
I was taught that these were all hypergeometric functions. What distinction is being drawn here?
When you have a function with many parameters it becomes rather trivial to express simpler functions with it.
You could find a lot of functions with 4 parameters that can express all elementary functions.
Finding a binary operation that can do this, like in TFA, is far more difficult, which is why it has not been done before.
A function with 4 parameters can actually express not only any elementary function, but an infinity of functions with 3 parameters, e.g. by using the 4th parameter to encode an identifier for the function that must be computed.
Granted, but the claim in the abstract says:
>> computing elementary functions such as sin, cos, sqrt, and log has always required multiple distinct operations
And I don't see how this is true as to hypergeometric functions in a way that isn't shared by the approach in the paper.
> Finding a binary operation that can do this, like in TFA, is far more difficult, which is why it has not been done before.
> A function with 4 parameters can actually express not only any elementary function, but an infinity of functions with 3 parameters, e.g. by using the 4th parameter to encode an identifier for the function that must be computed.
These statements seem to be in direct conflict with each other; you can use the second parameter of a binary function to identify a unary function just as you can use the fourth parameter of a quaternary function to identify a trinary one.
With binary functions you can compose them using a very complex composition graph.
With unary functions you can compose only linearly, so in general it is impossible to make a binary function from unary functions.
You can make binary functions from unary functions only by using at least one other binary function. For instance, you can make multiplication from squaring, but only with the help of binary addition/subtraction.
So the one function that can be used to generate the others by composition must be at least binary, in order to be able to generate functions with an arbitrary number of parameters.
This is why in mathematics there are many domains where the only required primitives are a small number of binary functions, but there is none where strictly unary functions are sufficient. (However, it may be possible to restrict the binary functions to very simple functions, e.g. making a tuple from components, for instance the CONS function of LISP I.)
I have replied to your last statement:
> "you can use the second parameter of a binary function to identify a unary function just as you can use the fourth parameter of a quaternary function to identify a trinary one."
As I have explained above, what you propose does not work. It works in functions with 3 or more parameters, but it does not work in binary functions, because you cannot make binary functions from unary functions (without using some auxiliary binary functions).
I have no idea what you're trying to say. If you can use one parameter to identify a desired function, then obviously you can use a function of arity n+1 to define as many functions of arity n as you want, and it doesn't matter what the value of n is.
For example:
selector(3, "sin") = sin 3
selector(3, "log2") = log₂ 3
This works going from arity 4 to arity 3, and it also works going from arity 2 to arity 1. Your "response" talks about going from arity 1 to arity 2, a non sequitur.
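The parent's point as a two-line sketch (`selector` and its table are illustrative, not from the paper):

```python
import math

# The second argument picks which unary function to apply,
# so one binary function defines arbitrarily many unary ones.
TABLE = {"sin": math.sin, "log2": math.log2}

def selector(x, name):
    return TABLE[name](x)
```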
This requires expressing binary functions, like addition and multiplication.
You cannot do this by using only the set of unary functions, which can indeed be generated by a function with 2 parameters, one of which selects a unary function.
Of course the redundant primitives aren't free, since they add code size or die area. In choosing how many primitives to provide, the designer of a numerical library aims to make a reasonable tradeoff between that size cost and the speed benefit.
This paper takes that tradeoff to the least redundant extreme because that's an interesting theoretical question, at the cost of transforming commonly-used operations with simple hardware implementations (e.g. addition, multiplication) into computational nightmares. I don't think anyone has found a practical application for their result yet, but that's not the point of the work.
Traditional processors, even highly dedicated ones like TMUs in gpus, still require being preconfigured substantially in order to switch between sin/cos/exp2/log2 function calls, whereas a silicon implementation of an 8-layer EML machine could do that by passing a single config byte along with the inputs. If you had a 512-wide pipeline of EML logic blocks in modern silicon (say 5nm), you could get around 1 trillion elementary function evaluations per second on 2.5ghz chip. Compare this with a 96 core zen5 server CPU with AVX-512 which can do about 50-100 billion scalar-equivalent evaluations per second across all cores only for one specific unchanging function.
Take the fastest current math processors: TMUs on a modern gpu: it can calculate sin OR cos OR exp2 OR log2 in 1 cycle per shader unit... but that is ONLY for those elementary functions and ONLY if they don't change - changing the function being called incurs a huge cycle hit, and chaining the calculations also incurs latency hits. An EML coprocessor could do arcsinh(x² + ln(y)) in the same hardware block, with the same latency as a modern cpu can do a single FMA instruction.
The fact that addition, subtraction, and multiplication run quickly on typical processors isn't arbitrary--those operations map well onto hardware, for roughly the same reasons that elementary school students can easily hand-calculate them. General transcendental functions are fundamentally more expensive in time, die area, and/or power, for the same reasons that elementary school students can't easily hand-calculate them. A primitive where all arithmetic (including addition, subtraction, or negation) involves multiple transcendental function evaluations is not computationally faster, lower-power, lower-area, or otherwise better in any other practical way.
The comments here are filled with people who seem to be unaware of this, and it's pretty weird. Do CS programs not teach computer arithmetic anymore?
Practical terms: Jacobian (heavily used in weather and combustion simulation): The transcendental calls, mostly exp(-E_a/RT), are the actual clock-cycle bottleneck. The GPU's SFU computes one exp2 at a time per SM. The ALU then has to convert it (exp(x) = exp2(x × log2(e))), multiply by the pre-exponential factor, and accumulate partial derivatives. It's a long serial chain for each reaction rate.
The core of this is the Arrhenius rate, (A × T^n × exp(-E_a/(R×T))), which involves an exponentiation, a division, a multiplication, and an exponential. On a GPU, that's multiple SFU calls chained with ALU ops. In an EML tree, the whole expression compiles to a single tree that flows through the pipeline in one pass.
GPU (PreJacGPU) is currently the state of the art for speed on these simulations - a moderate width 8-depth EML machine could process a very complex Jacobian as fast as the gpu can evaluate one exp(). Even on a sub-optimal 250mhz FPGA, an entire 50x50 Jacobian would be about 3.5 microseconds vs 50 microseconds PER Jacobian on an A100.
If you put that same logic path into an ASIC, you'd be about 20x the fPGA's speed - in the nanoseconds per round. And this is not like you're building one function into an ASIC it's general purpose. You just feed it a compiled tree configuration and run your data through it.
For anything like linear algebra math, which is also used here, you'd delegate that to the dedicated math functions on the processor - it wouldn't make sense to do those in this.
I think you're missing the reason why the GPU kicks you out of the fast path when you need that special function. The special function evaluation is fundamentally more expensive in energy, whether that cost is paid in area or time. Evaluation of the special functions with throughput similar to the arithmetic throughput would require much more area for the special functions, which for most computation isn't a good tradeoff. That's why the GPU's designers chose to make your exp2 slow.
Replacing everything with dozens of cascaded special functions makes everything uniform, but it's uniformly much worse. I feel like you're assuming that by parallelizing your "EML tree" in dedicated hardware that problem goes away; but area isn't free in either dollars or power, so it doesn't.
[0] https://github.com/francisrstokes/githublog/blob/main/2024/5...
Exp and ln, isn't the operation its own inverse depending on the parameter? What a neat find.
This is a function from ℝ² to ℝ. It can't be its own inverse; what would that mean?
and
eml(eml(1,x),1) = e^e / x
eml(1,eml(x,1)) = e - x
Which then if you iterate gives x (ie is inverse of itself).
eml(1,eml(eml(1,eml(x,1)),1)) = x
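These identities are easy to check numerically. A quick sketch (plain Python, real-valued, so every argument passed to ln must stay positive):

```python
import math

def eml(x, y):
    # eml(x, y) = exp(x) - ln(y); real-valued, so y must be > 0
    return math.exp(x) - math.log(y)

x, e = 0.7, math.e

# eml(1, eml(x, 1)) = e - ln(exp(x)) = e - x
assert math.isclose(eml(1, eml(x, 1)), e - x)

# t -> e - t is an involution, so composing it twice recovers x:
assert math.isclose(eml(1, eml(eml(1, eml(x, 1)), 1)), x)

# the other nesting order gives exp(e - ln x) = e^e / x
assert math.isclose(eml(eml(1, x), 1), math.exp(e) / x)
```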
This work is about continuous systems, even if the reduction of many kinds of functions to compositions of a single kind of function is analogous to the reductions of logic functions to composing a few kinds or a single kind of logic functions.
I actually do not value the fact that the logic functions can be expressed using only NAND. For understanding logic functions it is much more important to understand that they can be expressed using either only AND and NOT, or only OR and NOT, or only XOR and AND (i.e. addition and multiplication modulo 2).
Using just NAND or just NOR is a trick that does not provide any useful extra information. There are many things, including classes of mathematical functions or instruction sets for computers, which can be implemented using a small number of really independent primitives.
In most or all such cases, after you arrive at the small set of maximally simple and independent primitives, you can reduce them to only one primitive.
However that one primitive is not a simpler primitive, but it is a more complex one, which can do everything that the maximally simple primitives can do, and it can recreate those primitives by composition with itself.
Because of its higher complexity, it does not actually simplify anything. Moreover, generating the simpler primitives by composing the more complex primitive with itself leads to redundancies, if implemented thus in hardware.
This is a nice trick, but like I have said, it does not improve in any way the understanding of that domain or its practical implementations in comparison with thinking in terms of the multiple simpler primitives.
For instance, one could take a CMOS NAND gate as the basis for implementing a digital circuit, but you understand better how CMOS logic actually works when you understand that the AND function and the NOT function are localized in distinct parts of that CMOS NAND gate. This understanding is necessary when you have to design other gates for a gate library, because even if using a single kind of gate is possible, the performance of that approach is quite sub-optimal, so you almost always have to design separately, e.g., a XOR gate, instead of making it from NAND gates or NOR gates.
In CMOS logic, NAND gates and NOR gates happen to be the simplest gates that can restore at output the same logic levels that are used at input. This confuses some people to think that they are the simplest CMOS gates, but they are not the simplest gates when you remove the constraint of restoring the logic levels. This is why you can make more complex logic gates, e.g. XOR gates or AND-OR-INVERT gates, which are simpler than they would be if you made them from distinct NAND gates or NOR gates.
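The redundancy point can be made concrete: built from NAND alone, NOT costs 1 gate, AND and OR cost 2 or 3, and the classic XOR costs 4, all more than their direct CMOS implementations. A quick truth-table check (illustrative Python, modeling gates as 0/1 integers):

```python
# Truth-table check that NAND alone generates NOT, AND, OR, XOR.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)                      # 1 gate

def and_(a, b):
    return nand(nand(a, b), nand(a, b))    # 2 gates (NAND then invert)

def or_(a, b):
    return nand(nand(a, a), nand(b, b))    # 3 gates (De Morgan)

def xor_(a, b):
    t = nand(a, b)                         # the classic 4-gate XOR
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```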
What comes to my mind as an alternative, which I would subjectively find finer, is "axe". Think axiom or axiology.
Anyone with other suggestions? Or even remarks on this one?
On my side I like direct semantic connections, but find convoluted indirections conflated through lazy acronyms strongly repulsive. I can appreciate an acronym that makes both a direct connection and a playful indirect reference to the expanded terms.
nix run github:pveierland/eml-eval
EML Evaluator — eml(x, y) = exp(x) - ln(y)
Based on arXiv:2603.21852v2 by A. Odrzywołek
Constants
------------------------------------------------------------------------------
1 K=1 d=0 got 1 expected 1 sym=ok num=ok [simplify]
e K=3 d=1 got 2.718281828 expected 2.718281828 sym=ok num=ok [simplify]
0 K=7 d=3 got 0 expected 0 sym=ok num=ok [simplify]
-1 K=17 d=7 got -1 expected -1 sym=ok num=ok [simplify]
2 K=27 d=9 got 2 expected 2 sym=ok num=ok [simplify]
-2 K=43 d=11 got -2 expected -2 sym=ok num=ok [simplify]
1/2 K=51 d=15 got 0.5 expected 0.5 sym=ok num=ok [simplify]
-1/2 K=67 d=17 got -0.5 expected -0.5 sym=ok num=ok [simplify]
2/3 K=103 d=19 got 0.6666666667 expected 0.6666666667 sym=ok num=ok [simplify]
-2/3 K=119 d=21 got -0.6666666667 expected -0.6666666667 sym=ok num=ok [simplify]
sqrt2 K=85 d=21 got 1.414213562 expected 1.414213562 sym=ok num=ok [simplify]
i K=75 d=19 got i expected i sym=ok num=ok [i²=-1, simplify]
pi K=153 d=29 got 3.141592654 expected 3.141592654 sym=ok num=ok [simplify]
Unary functions (x = 7/3)
------------------------------------------------------------------------------
exp(x) K=3 d=1 got 10.3122585 expected 10.3122585 sym=ok num=ok [simplify]
ln(x) K=7 d=3 got 0.8472978604 expected 0.8472978604 sym=ok num=ok [simplify]
-x K=17 d=7 got -2.333333333 expected -2.333333333 sym=ok num=ok [simplify]
1/x K=25 d=8 got 0.4285714286 expected 0.4285714286 sym=ok num=ok [simplify]
x - 1 K=11 d=4 got 1.333333333 expected 1.333333333 sym=ok num=ok [simplify]
x + 1 K=27 d=9 got 3.333333333 expected 3.333333333 sym=ok num=ok [simplify]
2x K=67 d=17 got 4.666666667 expected 4.666666667 sym=ok num=ok [simplify]
x/2 K=51 d=15 got 1.166666667 expected 1.166666667 sym=ok num=ok [simplify]
x^2 K=41 d=10 got 5.444444444 expected 5.444444444 sym=ok num=ok [simplify]
sqrt(x) K=59 d=16 got 1.527525232 expected 1.527525232 sym=ok num=ok [simplify]
Binary operations (x = 7/3, y = 5/2)
------------------------------------------------------------------------------
x + y K=27 d=9 got 4.833333333 expected 4.833333333 sym=ok num=ok [simplify]
x - y K=11 d=4 got -0.1666666667 expected -0.1666666667 sym=ok num=ok [simplify]
x * y K=41 d=10 got 5.833333333 expected 5.833333333 sym=ok num=ok [simplify]
x / y K=25 d=8 got 0.9333333333 expected 0.9333333333 sym=ok num=ok [simplify]
x ^ y K=49 d=12 got 8.316526261 expected 8.316526261 sym=ok num=ok [simplify]
And i is obviously `sqrt(-1)`
Here is my attempt. I think they should be optimal up to around 15 eml nodes; the later ones might not be:
# 0
1=1
# 1
exp(x)=eml(x,1)
e-ln(x)=eml(1,x)
e=exp(1)
# 2
e-x=e-ln(exp(x))
# 3
0=e-e
ln(x)=e-(e-ln(x))
exp(x)-exp(y)=eml(x,exp(exp(y)))
# 4
id(x)=e-(e-x)
inf=e-ln(0)
x-ln(y)=eml(ln(x),y)
# 5
x-y=x-ln(exp(y))
-inf=e-ln(inf)
# 6
-ln(x)=eml(-inf,x)
ln(ln(x))=ln(ln(x))
# 7
-x=-ln(exp(x))
-1=-1
x^-1=exp(-ln(x))
ln(x)+ln(y)=e-((e-ln(x))-ln(y))
ln(x)-ln(y)=ln(x)-ln(y) # using x - ln(y)
# 8
xy=exp(ln(x)+ln(y))
x/y=exp(ln(x)-ln(y))
# 9
x + y = ln(exp(x))+ln(exp(y))
2 = 1+1
# 10
ipi = ln(-1)
# 13
-ipi=-ln(-1)
x^y = exp(ln(x)y)
# 16
1/2 = 2^-1
# 17
x/2 = x/2
x2 = x2
# 20
ln(sqrt(x)) = ln(x)/2
# 21
sqrt(x) = exp(ln(sqrt(x)))
# 25
sqrt(xy) = exp((ln(x)+ln(y))/2)
# 27
ln(i)=ln(sqrt(-1))
# 28
i = sqrt(-1)
-pi^2 = (ipi)(ipi)
# 31
pi^2 = (ipi)(-ipi)
# 37
exp(xi)=exp(xi)
# 44
exp(-xi)=exp(-(xi))
# 46
pi = (ipi)/i
# 90+x?
2cos(x)=exp(xi)+exp(-xi)
# 107+x?
cos(x) = (2cos(x))/2
# 118+x?
2sin(x)=(exp(x*i)-exp(-xi))/i # using exp(x)-exp(y)
# 145+x?
sin(x) = (2sin(x))/2
# 217+3x?
tan(x) = 2sin(x)/(2cos(x))
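A few of the complex-valued rungs above can be spot-checked with a complex eml. A sketch using Python's cmath (note cmath.log is the principal branch, which is what the ipi = ln(-1) step relies on):

```python
import cmath, math

def eml(x, y):
    # eml(x, y) = exp(x) - ln(y), extended to complex arguments
    return cmath.exp(x) - cmath.log(y)

# subtraction via eml(ln x, exp y) = x - y
# (y small enough that ln(exp(y)) = y on the principal branch)
x, y = 0.6, 0.3
assert cmath.isclose(eml(cmath.log(x), cmath.exp(y)), x - y)

# ipi = ln(-1) on the principal branch
assert cmath.isclose(cmath.log(-1), 1j * math.pi)

# Euler's formula underlies the trig constructions at the bottom:
assert cmath.isclose((cmath.exp(1j*x) + cmath.exp(-1j*x)) / 2, math.cos(x))
assert cmath.isclose((cmath.exp(1j*x) - cmath.exp(-1j*x)) / (2j), math.sin(x))
```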
My dearest congrats to the author in case s/he shows around this site ^^.
Table 4 shows the "size" of the operators when fully expanded to "eml" applications, which is quite large for +, -, ×, and /.
Here's one approach which agrees with the minimum sizes they present:
eml(x, y ) = exp(x) − ln(y) # 1 + x + y
eml(x, 1 ) = exp(x) # 2 + x
eml(1, y ) = e - ln(y) # 2 + y
eml(1, exp(e - ln(y))) = ln(y) # 6 + y; construction from eq (5)
ln(1) = 0 # 7
After you have ln and exp, you can invert their applications in the eml function eml(ln x, exp y) = x - y # 9 + x + y
Using a subtraction-of-subtraction to get addition leads to the cost of "27" in Table 4; I'm not sure what formula leads to 19 but I'm guessing it avoids the expensive construction of 0 by using something simpler that cancels: x - (0 - y) = x + y # 25 + {x} + {y}
xy = eml(eml(1, eml(eml(eml(eml(1, eml(eml(1, eml(1, x)), 1)), eml(1, eml(eml(1, eml(y, 1)), 1))), 1), 1)), 1)
From Table 4, I think addition is slightly more complicated?
const eml = (x,y) => Math.exp(x) - Math.log(y);
const mul = (x,y) => eml(eml(1,eml(eml(eml(1,eml(eml(1,eml(1,x)),1)),eml(1,eml(eml(1,eml(y,1)),1))),1)),1);
console.log(mul(5,7));
> 35.00000000000001
For larger or negative inputs you get a NaN because ECMAScript has limited precision and doesn't handle imaginary numbers.
This also shows why EML is not practical for computation.
exp(a) = eml(a, 1)
ln(a) = eml(1, eml(eml(1, a), 1))
Plugging those in is left as an exercise for the reader.
Because of how exp and log turn addition into multiplication and vice versa, once you have the one, you get the other easily.
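That duality is quick to check with exp and ln built purely from eml compositions (the ones given above). A sketch in plain Python, real-valued, so all ln arguments must be positive:

```python
import math

def eml(x, y):
    # eml(x, y) = exp(x) - ln(y)
    return math.exp(x) - math.log(y)

# exp and ln as eml compositions
exp_ = lambda a: eml(a, 1)
ln_  = lambda a: eml(1, eml(eml(1, a), 1))

x, y = 2.0, 3.0

# log turns multiplication into addition, and exp turns it back:
assert math.isclose(exp_(ln_(x) + ln_(y)), x * y)
assert math.isclose(ln_(exp_(x) * exp_(y)), x + y)
```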
"All" is a tall claim. Have a look at https://perso.ens-lyon.fr/jean-michel.muller/FP5.pdf for example. Jump to slide 18:
> Forget about Taylor series
> Taylor series are local best approximations: they cannot compete on a whole interval.
There is no need to worry about "sh-tt-ng" on their result when there is so much to learn about other approximation techniques.
I personally mostly do my everyday work using Taylor expansion (mostly explicit numerical methods in comp. EM, because they're cheaper these days and simpler to write down), so it's what first comes to mind.
So it really helps when people explain (1) their context** and (2) their reasoning. Communicating well is harder than people think. Many comments are read by hundreds or more (thousands?) of people, most of whom probably have no idea who we are, what we know, or what we do with our brains on a regular basis. It is generous and considerate to other people to slow down and really explain where we're coming from.
So, when I read "most people use Taylor approximations"...
1. my first question is "on what basis can someone say this?"
2. for what domains might this somewhat true? False?
3. but the bigger problem is that claims like the above don't teach. i.e. When do Taylor series methods fall short? Why? When are the other approaches more useful?
Here's my quick take... Taylor expansions tend to work well when you are close to the expansion point and the function is analytic. They work less well when these assumptions don't hold, and more broadly they don't tend to give uniform accuracy across a range. So Taylor approximations are usually only local. Other methods (Padé, minimax, etc.) are worth reaching for when other constraints matter.
* I think this is a huge area we're going to need to work on in the age where anyone can sound like an expert.
** In the case above, does "comp. EM" mean "computational electromagnetics" or something else? The paper talks about "EML" so it makes me wonder if "EM" is a typo. All of these ambiguities add up and make it hard for people to understand each other.
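For a concrete version of the local-vs-uniform point: compare a degree-4 Taylor polynomial of exp about 0 against a degree-4 fit at Chebyshev nodes (near-minimax) over [-1, 1]. A sketch with numpy:

```python
import math
import numpy as np

xs = np.linspace(-1.0, 1.0, 2001)
f = np.exp(xs)

# Degree-4 Taylor polynomial of exp about 0: accurate near 0, worst at the ends
taylor = sum(xs**k / math.factorial(k) for k in range(5))

# Degree-4 interpolant at Chebyshev nodes: near-minimax on the whole interval
nodes = np.cos((2 * np.arange(5) + 1) * np.pi / 10)
cheb = np.polyval(np.polyfit(nodes, np.exp(nodes), 4), xs)

err_taylor = np.max(np.abs(f - taylor))   # ~1e-2, concentrated at x = 1
err_cheb = np.max(np.abs(f - cheb))       # roughly an order of magnitude smaller
assert err_cheb < err_taylor
```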
Replying to some of your questions (1 and 2), this is from the perspective of a computational scientist, and a less theoretical type who works closely with experimentalists. Thus I am closer to a user of codes to model experiments than to someone who does a lot of analytic or fundamental theory, although my experience and perspective are probably close to others who are computational-ish in other domains like engineering, for the reasons I'll explain below.
For 3, most physically useful simulations that are not merely theoretical exercises (that is, simulations that are predictive or explanatory of actual experiments scientists want to do) will not consist of analytic functions you can write down. First, suppose the initial conditions in a problem have an analytic aspect (me setting my laser profile as a Gaussian pulse); once the interaction with a plasma target occurs, the result you obtain (and thus the predictions a simulation will make that can be compared to experiment) will not be Gaussian, but will evolve due to the complex physics modeled in the simulation. And a Gaussian as an initial condition is already an approximation to an actual experiment. An "easy sim" for me is doing a best fit to the waist from a profile they'll read from a power meter and using a Gaussian that closely matches it, while a more realistic simulation would be me directly taking the data they have in an Excel sheet and feeding it into the simulation as an initial condition. In most real-world scenarios, the ICs already aren't analytic and must be computed numerically. By the way, this isn't that different from how engineers use computational codes. Not many airplane wings are spheres or cylinders; you'd likely have to import the design for a wing from a CAD file into, say, an aerodynamics fluid code.
So in all these cases, the bottleneck isn't really approximating analytic functions you can write down, either in closed form or in series form to the nth degree. Many people in the computational domain do not need accuracy beyond two or three terms of a Taylor series. This is because it is usually easier to just cut down dx and do more steps in total, rather than using a large dx and requiring more terms, and this is before using any more sophisticated approximations. No code I know of uses Padé approximations; I just know that some libraries for special functions (which may account for one or two function calls in a code I use) use them.
Also, just a quick example you can try. Let's look at exp for small argument (this only really works for small argument; you obviously can't do a Taylor expansion well for large argument). Consider the following:
>>> np.exp(0.4231)
np.float64(1.5266869570289792)
I will see how many terms I need to get 4 digits of accuracy. (Note that I had four digits in my input, so even ignoring sig figs, I probably shouldn't expect better than 4 digits in the result; numpy itself is numerical too, so it shouldn't be considered exact, although I'll trust its first four digits here as well.)
>>> x = 0.4231
>>> 1
1
>>> 1 + x
1.4231
>>> 1 + x + x**2/2
1.512606805
>>> 1 + x + x**2/2 + x**3/(3*2)
1.5252302480651667
>>> 1 + x + x**2/2 + x**3/(3*2) + x**4/24
1.5265654927553847
Note that by 3 terms (x**3) I'm already off in the last digit by 1; by 4 terms, it's converged enough. Given an fp register, you can reuse powers of x from the last term, so this is already dangerously cheap. Why do you even need better than this in a practical sense? Most results I see in the wild do not even require this level of precision in a single simulation. I am a scientist, however; it's possible engineers need more precision.
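The "reuse powers of x" point, made explicit: each Taylor term of exp comes from the previous one with a single multiply and divide. A sketch reproducing the session above:

```python
import math

x = 0.4231

# Incremental evaluation: term k = term_{k-1} * x / k = x**k / k!,
# so each extra term costs one multiply and one divide.
total, term = 1.0, 1.0
for k in range(1, 5):
    term *= x / k
    total += term

# Matches the 5-term sum from the session above...
assert abs(total - 1.5265654927553847) < 1e-12
# ...and is within ~4 digits of the true value
assert abs(total - math.exp(x)) < 2e-4
```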
For us, more sophisticated methods at the "calculating transcendental functions" level are not really required, which is why they don't appear in the codes I usually see. What we need more are things that make the actual elemental operations, like fma and the like, faster. Things like AVX-512 are far more interesting to me, for example.
But no...
This is about continuous math, not ones and zeroes. Assuming peer review proves it out, this is outstanding.
> Elementary functions such as exponentiation, logarithms and trigonometric functions are the standard vocabulary of STEM education. Each comes with its own rules and a dedicated button on a scientific calculator;
What?
and
> No comparable primitive has been known for continuous mathematics: computing elementary functions such as sin, cos, √, and log has always required multiple distinct operations. Here we show that a single binary operator
Yeah, this is done by using tables and series. His method does not actually facilitate the computation of these functions.
There is no such thing as "continuous mathematics". Maybe he meant to say continuous functions?
Looking at page 14, it looks like he reinvented the concept of the vector valued function or something. The whole thing is rediscovering something that already exists.
The point of this paper is not to revolutionize how a scientific calculator functions overnight; it's to establish a single binary operation that can reproduce the rest of the typical continuous elementary operations via repeated application, analogous to how a NAND or NOR gate creates all of the discrete logic gates. Hence "continuous mathematics," as opposed to discrete mathematics. It seems to me you're being overly negative without solid reasoning.
He didn't show this, though. I skimmed the paper many times. He creates multiple branches of these trees on the last page, so it's not truly a single nested operation.
Some of us had the wondrous epiphany as children that we could build any digital device we could dream of (yes, up to and including a full computer, CPU, RAM, peripherals, and all) out of SN7400 NAND gates that we could buy down at the local Radio Shack, if only we could scrape together enough change to buy the requisite number of parts and sockets and Tefzel wire.
Obviously, I can't speak for all of us who had that epiphany, but I strongly suspect that most of us who had that epiphany would find this result joyous.
The plan is to use this new "structurally flawless mathematical primitive" EML (this is all beyond me, I was just having some fun trying to make it cook things together) in TPUs made out of logarithmic number system circuits. EML would use DAGs to help with the exponential-bloat problem. CERN's tiny, fast "hardcoded models" are an inspiration. All of this would be bounded by the deductive causality of Pedro Domingos's Tensor Logic, and all of it would einsum like a mf. I hope it does.
Behold, The Weather Dominator!
https://notebooklm.google.com/notebook/e0a54a54-c644-4c89-9d...