https://www.proquest.com/openview/2a5f2e00e8df7ea3f1fd3e8619...
A few of my own experiments in this time with unification over the binders themselves, treating the binders as variables, show there's almost always a post-HM inference sitting there, but likely not one that works in total generality.
To me, that spot of doing binder unification in higher-order logic constraint equations is the most challenging and interesting problem: it's almost always decidable, or decidably undecidable, in specific instances, but provably undecidable in general.
So what gives? Where is this boundary, and does it give a clue to bigger gains in higher-order unification? Is a more topological approach sitting just behind the veil for a much wider class of higher-order inference?
And what of optimal sharing in the presence of backtracking? Lamping's algorithm, when the unification variable is in the binder, has to carry purely binding-attached path contexts, like closures. How does that get shared?
Fun to poke at, maybe just enough modern interest in logic programming to get there too…
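For concreteness, one well-known face of that decidability boundary is Miller's "pattern" fragment: when a free variable is applied only to distinct bound variables, unification is decidable with most general unifiers, while stepping just outside the fragment loses unique solutions. A sketch in λProlog-style syntax (the constants f and g are made up for illustration):

```prolog
% Inside Miller's pattern fragment: F is applied to distinct bound variables.
% Decidable, with a most general unifier.
pi x\ pi y\ (F x y = f x (g y))    % mgu: F = u\ v\ f u (g v)

% Just outside the fragment: G is applied to a repeated bound variable.
% Two incomparable solutions, so no single mgu.
pi x\ (G x x = f x)                % G = u\ v\ f u   or   G = u\ v\ f v
```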
(Caveat that I don't claim to be a λProlog or Prolog expert.)
All the examples showcase the typing discipline that is novel relative to Prolog, and towards day 10, use of lambda binders, hereditary Harrop formulas, and higher-order niceness shows up.
[1]: https://www.lix.polytechnique.fr/~dale/lProlog/proghol/extra...
https://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/fe...
It might sound weird and crazy, but it quite literally blew my mind at the time!
I personally found it by asking for a specific language recommendation from ChatGPT, and one of the suggestions was Prolog.
Second, you really need to understand and fine-tune cuts and other search-optimization primitives.
Finally, where game AI is concerned, it is a mixture of algorithms and heuristics; a single-paradigm language like Prolog (first-order logic) can't be a tool for all nails.
In the Classic AI course we had to implement game AI algorithms (A*, alpha-beta pruning, etc.), and in Prolog for one specific assignment. After trying for a while, I got frustrated and asked the teacher if I could do it in Ruby instead. He agreed: he was the kind of person who just couldn't say no, he was too nice for his own good. I still feel bad about it.
Rest In Peace, Alexandre.
I know you likely mean regular Prolog, but that's actually fairly easy and intuitive to reason with (depending on the code). Lambda Prolog is much, much harder to reason about IMO, and there's a certain intractability to it because of just how complex the language is.
Implementing other programming languages and proving theorems are the low-hanging fruit, since you get variable binding without name management, but I genuinely think it has profound implications for expert systems, since it essentially removes a massive amount of complexity from contextual reasoning. Being able to account for patient history when providing a diagnosis, for example.
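One way to read that claim: λProlog's embedded implication lets you assume context, like a patient's history, locally for the duration of a query, instead of asserting and retracting global facts as in classic Prolog. A toy sketch, with all predicate and constant names invented for illustration:

```prolog
kind  finding, disease   type.
type  fever, cough, prior_asthma   finding.
type  flu, asthma_flare            disease.
type  observed   finding -> o.
type  diagnosis  disease -> o.

diagnosis flu          :- observed fever, observed cough.
diagnosis asthma_flare :- observed cough, observed prior_asthma.

% Query with the patient's history assumed only within this goal:
%   ?- observed cough => observed prior_asthma => diagnosis D.
% succeeds with D = asthma_flare, leaving the database unchanged.
```

The point is that the hypothetical facts scope over the goal and vanish afterwards, so context never leaks between consultations.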
One possible disadvantage of static types is that they can make code more verbose, but agents really don't care, quite the opposite.