They're primitive in the sense that you can't compute exp(x) or log(x) for arbitrary x using a finite combination of the other elementary functions. If you allow infinitely many operations, then you can easily find infinite sums or products of powers, or more complicated expressions, that represent exp, log, and the other elementary functions.
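For instance, the familiar power series for exp gives such an "infinitely many operations" representation; truncating it to finitely many terms is a sketch of the idea (function name is mine, not from any particular source):

```python
import math

def exp_partial_sum(x, n_terms):
    """Approximate exp(x) with the first n_terms of the power series
    sum_{k>=0} x**k / k! -- an infinite expression built only from
    powers, truncated to finitely many terms."""
    total = 0.0
    term = 1.0  # x**0 / 0! = 1
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)  # next term: x**(k+1) / (k+1)!
    return total

print(exp_partial_sum(1.0, 20))  # close to math.e
```

Any finite truncation is only an approximation; the exact value of exp(x) needs the full infinite sum, which is the point being made above.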
> Or do we assume that we have an infinite lookup table for all possible inputs?
Essentially yes: you don't necessarily need an "implementation" to talk about a function, and more generally you don't need to explicitly construct an object from simpler pieces. You can just prove that it satisfies some properties and that it has to exist.
For exp(x), you could define the function as the solution to the differential equation df/dx = f(x) with initial condition f(0) = 1. Then you would establish that the solution exists and is unique (this follows from the properties of the differential equation), call it exp = f, and there you have it. You don't necessarily know how to compute exp(x) for any given x, but you can assume exp(x) exists and is a real number.
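The ODE definition does still suggest a way to compute approximations, even though that's beside the point. A minimal sketch using the forward Euler method (my own illustration, not anything from the paper):

```python
import math

def exp_via_ode(x, steps=100_000):
    """Numerically integrate df/dt = f, f(0) = 1 from t = 0 to t = x
    with the forward Euler method; the exact solution is exp(x)."""
    f = 1.0
    h = x / steps
    for _ in range(steps):
        f += h * f  # Euler step: f(t + h) ~ f(t) + h * f'(t) = f(t) + h * f(t)
    return f

print(exp_via_ode(1.0))  # close to math.e; error shrinks as steps grows
```

For x = 1 this is just (1 + 1/n)^n, another classical limit definition of e, which shows how the "defined by properties" and "computed by approximation" views connect.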
In other words, this result does not aim to improve computability or bound the complexity of calculating the numerical value. Rather, it aims to exhibit this uniform, finite tree structure for the entire family of elementary expressions.
To clarify my earlier point: the author isn't trying to build a practical calculator or generate human-readable algebra. Using exp and ln isn't a cheat code because the goal is purely topological. The paper just proves that this massive, diverse family of continuous math can be mapped perfectly onto a uniform binary tree, without secretly burying a state machine inside the operator.
They use the complex version of the logarithm, which has a lot of branching problems.
So it isn't exploiting the branching for computation.
For example, IIRC ln( -inf.0 + y * i ) = +inf.0 + (pi * sign(y)) * i
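You can check this with Python's cmath, which follows the usual C99-style conventions for these special values (the real part blows up to +inf, and the imaginary part lands on ±pi depending on the sign of y):

```python
import cmath
import math

# log(z) = log|z| + i * atan2(y, x). For x = -inf and finite y:
# |z| = inf, so the real part is +inf, and atan2(y, -inf) = pi * sign(y).
for y in (1.0, -1.0):
    z = cmath.log(complex(-math.inf, y))
    print(z)  # real part inf, imaginary part pi * sign(y)
```

So the branch cut along the negative real axis is visible even "at infinity": approaching from above gives +pi, from below gives -pi.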