Next on my TODO list is to hook up the various OCaml/OxCaml memory profiling tools: we have statmemprof, which does statistical sampling, then the runtime events buffer, and (hopefully) stack-allocation activity reported by the compiler in OxCaml's case.
This provides a pretty good automation loop for a performance-optimising coding agent: it can choose between heap vs local allocation, copy vs reference, or fixed layout (for SIMD) vs fragmentation (for multicore NUMA) depending on the task at hand.
Some references:
- Statmemprof in OCaml : https://tarides.com/blog/2025-03-06-feature-parity-series-st...
- "The saga of multicore OCaml" by Ron Minsky about how Jane Street viewed performance optimisation from the launch of OCaml 5.0 to where they are today with OxCaml https://www.youtube.com/watch?v=XGGSPpk1IB0
That's exactly what substructural logics/type systems allow you to do. Affine and linear types are one example of substructural type systems, but you can go further and also restrict structural rules like exchange (swapping), which helps model scenarios where allocation and deallocation must be made explicit.
Haskell's linear types are still experimental, and of course one can debate whether Haskell is mainstream, but I figured they merit a mention.
https://www.reddit.com/r/rust/comments/vo31dw/comment/ieao7v...
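To make the idea concrete, here's a minimal Rust sketch of affine resource handling (all names here are hypothetical, chosen for illustration): a handle type that implements neither `Clone` nor `Copy` can be used at most once, so the compiler itself rules out double-release and use-after-release.

```rust
// A handle with no Clone/Copy impl: values are affine, i.e. usable at most once.
struct Buffer {
    data: Vec<u8>,
}

impl Buffer {
    fn alloc(n: usize) -> Buffer {
        Buffer { data: vec![0; n] }
    }

    // Consumes the handle by taking `self` by value; after this call the
    // compiler rejects any further use of the buffer.
    fn release(self) -> usize {
        self.data.len() // report the size that was freed, for illustration
    }
}

fn main() {
    let buf = Buffer::alloc(16);
    let freed = buf.release();
    println!("freed {} bytes", freed);
    // buf.release();   // would not compile: `buf` was moved
    // let b2 = buf;    // likewise rejected: value already consumed
}
```

Note this is "at most once" (affine), not "exactly once" (linear): Rust will happily let you drop `buf` without calling `release`, which is where a genuinely linear system goes further.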
- it will use O(n) space, where n is some measure of one of the parameters (more generally, some function of measures of several parameters)
- the same, but for time instead of space
- the same, but for the number of copies made
- the size of an output will be at most the size of an input
- the memory allocated after the function runs is no more than the memory allocated before it ran
- given the body of a function, and given that every function used in that body has a well-defined complexity, the complexity of the function being defined is known, or at least has a provably correct upper bound
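As a small, hedged illustration of the "output no larger than input" style of guarantee: in today's Rust you can get a weak version of it by construction rather than by a type-level proof. The function below (a hypothetical example, not a library API) consumes its argument and filters it in place, so the returned vector reuses the input's allocation and can only shrink; a substructural or quantitative type system would let you state this bound in the signature and have it checked.

```rust
// By consuming the input Vec and filtering in place, the output is the same
// allocation with elements removed: its length (and capacity) never exceeds
// the input's. No new allocation occurs in the body.
fn keep_even(mut v: Vec<u32>) -> Vec<u32> {
    v.retain(|x| x % 2 == 0);
    v
}

fn main() {
    let out = keep_even(vec![1, 2, 3, 4, 5]);
    assert_eq!(out, vec![2, 4]); // 2 of the original 5 elements survive
}
```

The guarantee here lives in the implementation, not the type; the point of the proposals above is to move properties like this into the type so callers can rely on them without reading the body.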