Not in this case because the dependencies are the same:

Naive: https://godbolt.org/z/Gzf1KM9Tc

Horner's: https://godbolt.org/z/jhvGqcxj1

reply
Still, it's no worse than the naïve formula, which has exactly the same data dependencies plus additional operations on top.

_Can_ you even make a reasonable high-ILP scheme for a polynomial, unless it's of extremely high degree?

reply
For throughput-dominated contexts, evaluation via Horner's rule does very well because it minimizes register pressure and the number of operations required. But the latency can be relatively high, as you note.

There are a few good general options to extract more ILP for latency-dominated contexts, though all of them cost additional register pressure and usually some extra operations; Estrin's scheme is the most commonly used. Factoring medium-order polynomials into quadratics is sometimes a good option (not all such factorizations are well behaved wrt numerical stability, but it can also give you the ability to synthesize selected extra-precise coefficients naturally without doing head-tail arithmetic). Quadratic factorizations are a favorite of mine because (when they work) they yield good performance in _both_ latency- and throughput-dominated contexts, which makes it easier to deliver identical results for scalar and vectorized functions.

There's no single "best" option for optimizing latency in general; when I wrote math library functions day-to-day, we just built a table of the optimal evaluation sequence for each polynomial order up to 8 or so on each microarchitecture, and grabbed the one we needed unless special constraints required a different choice.

reply
Ah, I didn't know of either scheme. Still, am I right that this mainly makes sense for degrees above five or six or so?
reply