For example, the lambda calculus is the basis for many useful programming languages. But the simply typed lambda calculus also corresponds to a cartesian closed category. And many, many other interesting things in math can be modeled by cartesian closed categories.
So now you can ask, "What if affine or linear logic were a programming language?" And the answer is, "You'd get a language with safe resource management, like Rust." Or you might ask, "What if probability were a programming language?" — and you'd get a probabilistic programming language.
Or on a smaller scale, a parameterized collection type with a "map" operation is a functor. Add a single-element constructor and a "flatten" operation, and you have a monad. Polymorphic functions between such parameterized types are often natural transformations. And so on. These constructs can then be related directly to their counterparts in other areas of math, which might occasionally produce an interesting idea or two.
So category theory isn't always used to add something new and profound. Sometimes it's just a handy way to see something already there.
Without reading too much into what this framework does, I'd say category theory could be useful for some ML problems (e.g. layer composition, gradient propagation) — but I suspect it would be more useful as an analytical tool than as actual library/code structure.