I have a business analytics friend who knows SQL because it's part of his workflows.
But Excel, Notion, Power BI, and other low/no-code tools all have their own data filtering and transformation languages (or dialects). He'd rather spend his time learning more about his line of business than an aspect of yet another cloud tool that gets forced on him.
No Doom running on CEL.
I recently wanted to expose some basic user auto-tagging/labeling based on JSON data.
I chose CEL over Python and SQL because I could just import the runtime in C++, or in any language that implements it (Python, JS, etc.).
Safely running a sandboxed Python execution engine is significantly more effort, with lower performance.
At this, CEL excels.
Where it fell short was user familiarity, and cases where the JSON data itself was complex.
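To sketch the use case: each tag is driven by a small predicate evaluated against a user's JSON record. In the real setup those predicates would be CEL expressions compiled once by the host service; here, hypothetical Python lambdas stand in for the compiled CEL programs, and the rule names and fields are invented for illustration.

```python
import json

# Hypothetical tagging rules. In the real system each predicate would be a
# CEL expression (e.g. `user.purchases > 10`) compiled once and evaluated
# against the user's JSON record; Python lambdas stand in for those here.
RULES = [
    ("power_user", lambda u: u.get("purchases", 0) > 10),
    ("eu",         lambda u: u.get("country") in {"DE", "FR", "ES"}),
]

def tags_for(record: str) -> list:
    """Return the tags whose predicates match the given JSON record."""
    user = json.loads(record)
    return [tag for tag, pred in RULES if pred(user)]
```

Because the predicates are plain expressions over the record, they stay data-driven without shipping a full scripting sandbox to every host that needs to evaluate them.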
"Guaranteed to terminate" actually means "guaranteed to terminate in finite but possibly arbitrarily large time", which on its own is not a useful property.
There's no practical difference between a filter that might take 1 billion years to run and one that might take more than a billion years.
https://github.com/google/cel-spec/blob/master/doc/langdef.m...
And if your service puts an upper bound on input size and CEL expression size (true for all practical applications), you can actually get a guarantee that you can't construct a billion-year expression, and even guarantee that all expressions will evaluate in, say, 60 seconds.
Non-Turing-completeness by itself does not guarantee this, but it is a necessary prerequisite for these guarantees.
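A minimal sketch of such static admission checks, using Python's `ast` module as a stand-in for a CEL parser (the byte and node limits here are illustrative, not taken from the spec):

```python
import ast

MAX_EXPR_BYTES = 1024   # illustrative cap on expression size
MAX_AST_NODES = 200     # illustrative cap on expression complexity

def admit(expr: str) -> bool:
    """Reject expressions that exceed static size limits before evaluating.

    Uses Python's own grammar as a stand-in for a CEL parser; a real
    deployment would apply the same caps to the compiled CEL AST.
    """
    if len(expr.encode("utf-8")) > MAX_EXPR_BYTES:
        return False
    try:
        tree = ast.parse(expr, mode="eval")
    except SyntaxError:
        return False
    return sum(1 for _ in ast.walk(tree)) <= MAX_AST_NODES
```

Combined with a cap on input size, bounds like these turn "terminates eventually" into a concrete worst-case evaluation time.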
There is a practical solution to this called “metering”, like the gas mechanism in Ethereum's EVM or cost calculation for complex GraphQL queries.
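A toy illustration of metering: a recursive evaluator over a tiny expression tree that charges one unit of gas per node and aborts when the budget runs out, the same shape as EVM gas or GraphQL query-cost accounting (the tuple-based node format is invented for this sketch):

```python
class OutOfGas(Exception):
    pass

def eval_metered(node, env, gas):
    """Evaluate a tuple-based expression tree, charging 1 gas per node.

    Nodes: ('lit', value), ('var', name), ('add', l, r), ('and', l, r).
    `gas` is a one-element list so the remaining budget is shared across
    recursive calls.
    """
    gas[0] -= 1
    if gas[0] < 0:
        raise OutOfGas("evaluation budget exhausted")
    op = node[0]
    if op == "lit":
        return node[1]
    if op == "var":
        return env[node[1]]
    if op == "add":
        return eval_metered(node[1], env, gas) + eval_metered(node[2], env, gas)
    if op == "and":
        return eval_metered(node[1], env, gas) and eval_metered(node[2], env, gas)
    raise ValueError(f"unknown op: {op}")
```

With a budget of 10, evaluating `('add', ('lit', 1), ('var', 'x'))` against `{'x': 2}` returns 3 and leaves 7 gas; with a budget of 1 the same tree raises OutOfGas instead of running to completion.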
In the common use-cases for CEL that I've seen, you don't want to skip evaluation and fail open or closed arbitrarily. That can mean things like "abusive user gets access to data they should not be allowed to access because rule evaluation was skipped".
You also may have tons of rules and be evaluating them very often, so speed is important.