"Formulas that update backwards" isn't really the main idea behind neural networks. It's an efficient way of computing gradients, but there are other ways. For example forward propagation would compute a jacobian-matrix product of input wrt output with an identity matrix. Backpropagation is similar to bidi-calc to the same extent as it is similar to many other algorithms which traverse some graph backward.

I think you should be able to use bidi-calc to train a neural net, although I haven't tried. You'd define a neural net, then change its random output to whatever you want it to output. As I understand it, though, it won't find a good solution: it might find a least-squares solution for the last layer, but then you'd want the previous layer to output something that reduces the last layer's error, and at that point bidi-calc will no longer consider the last layer at all.
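
I haven't actually run this through bidi-calc, but here's a rough numpy sketch of the failure mode I mean; the two-layer net and the least-squares step are my guesses at what a constraint solver would do, not anything from bidi-calc itself:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-layer net: hidden = tanh(W1 @ x), output = W2 @ hidden.
    W1 = rng.normal(size=(4, 3))
    W2 = rng.normal(size=(2, 4))
    x = rng.normal(size=(3, 10))         # 10 input examples
    y_target = rng.normal(size=(2, 10))  # what we *wish* the net output

    hidden = np.tanh(W1 @ x)

    # A solver could pick the least-squares W2 with W2 @ hidden ~ y_target,
    # touching only the last layer:
    W2 = np.linalg.lstsq(hidden.T, y_target.T, rcond=None)[0].T

    # W1 never changes: nothing pushes the leftover error back through
    # tanh, so training would stall right here.
    print(np.linalg.norm(W2 @ hidden - y_target))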

All those words and you forgot to give people a breadcrumb to learn more for themselves.

The term of interest is "backpropagation".

Won’t another breadcrumb be Prolog and “declarative programming” [1]?

Wasn’t Prolog invented to formalise exactly these kinds of problems, where you make the inputs match a desired output?

[1] https://en.wikipedia.org/wiki/Declarative_programming

Yes, I'm glad to see a comment on Prolog. I think of it as _the_ foundational programming language for solving such problems. It isn't so much that it's a backpropagation language; it's that, based on which variables are bound at a given point, it will go forwards deductively or backwards inductively.
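
A toy sketch of that in Python (not real Prolog, just the mode idea): the relation picks its direction from which arguments happen to be bound.

    def plus(a=None, b=None, c=None):
        # Relational a + b = c: solve for whichever argument is unbound
        # (None), the way Prolog picks a direction from the bound variables.
        if a is not None and b is not None:
            return a, b, a + b      # forward: compute c
        if a is not None and c is not None:
            return a, c - a, c      # backward: recover b
        if b is not None and c is not None:
            return c - b, b, c      # backward: recover a
        raise ValueError("need at least two bound arguments")

    print(plus(a=2, b=3))  # (2, 3, 5) -- forward
    print(plus(a=2, c=5))  # (2, 3, 5) -- backward, like Prolog's plus/3
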
Prolog has basically nothing to do with calculus.