There are applications already doing this without hardware support, accepting performance losses far worse than 95% to do so.

This hardware won’t make the technique attractive for ALL computation. But, it could dramatically increase the range of applications.

reply
Agreed. When I was working on TEEs/confidential computing, just about everyone agreed that FHE was conceptually attractive (trust the math instead of trusting a hardware vendor), but the overhead was prohibitive. Think 1000x slowdowns: your hour-long batch job now takes over a month to run.
reply
Current FHE on general CPUs is typically 10,000x to 100,000x slower than plaintext, depending on the scheme and operation. So even with a 5,000x ASIC speedup you are still looking at roughly 20-100x overhead vs unencrypted compute.

That rules out anything latency-sensitive, but for batch workloads like aggregating encrypted medical records or running simple ML inference on private data it starts to become practical. The real unlock is not raw speed parity but getting FHE fast enough that you can justify the privacy tradeoff for specific regulated workloads.

reply
10,000x to 100,000x / 5,000x = 2 to 20x, not 20 to 100x.
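To make the arithmetic explicit, here is a minimal sketch using the rough figures quoted above (these are illustrative ranges from the thread, not benchmarks):

```python
# Residual overhead after an ASIC speedup: the plaintext-relative
# slowdown divided by the accelerator's speedup factor.
slowdown_low, slowdown_high = 10_000, 100_000  # FHE on CPU vs plaintext
asic_speedup = 5_000

low = slowdown_low / asic_speedup
high = slowdown_high / asic_speedup
print(f"{low:.0f}x to {high:.0f}x overhead vs unencrypted compute")
# prints "2x to 20x overhead vs unencrypted compute"
```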
reply
Now we know why Intel more or less abandoned SEAL and rejected GPU requests.
reply