If you are into this, read ahead: https://mlajtos.mu/posts/new-kind-of-paper
The thing is that APL people are generally very academic. They can absolutely perform engineering tasks very fast and with concise code, but in some hypothetical average software shop, if you start talking about function ranking and Naperian functors, your coworkers are going to suspect you might need medical attention. The product manager will quietly pull out their notes about you and start thinking about the cost of replacing you.
This is for several reasons, but the most important one is that the bulk of software development is about inventing a technical, somewhat formal language that represents how the customer-users talk and think, and you can't really do that in the Iverson languages. It's easy in Java, which for a long time forced you to tell every method exactly which business words can go into it and come out of it. The exampleMethod combines CustomerConceptNo127 from org.customer.marketing and CustomerConceptNo211 from org.customer.financial and results in a CustomerConceptNo3 that the CEO wants to look at regularly.
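To make that concrete, here is a minimal sketch of the Java style being described. All the names (CampaignReach, QuarterlyRevenue, BoardSummary, ExecutiveReportService) are invented for illustration; the point is only that the nominal types force the customer's vocabulary into every signature:

```java
// Hypothetical domain types standing in for CustomerConceptNo127 etc.
record CampaignReach(long customersReached) {}
record QuarterlyRevenue(double total) {}
record BoardSummary(long customersReached, double revenue) {}

class ExecutiveReportService {
    // The signature itself documents exactly which business concepts
    // go in and which come out -- the "invented formal language".
    BoardSummary buildBoardSummary(CampaignReach reach, QuarterlyRevenue revenue) {
        return new BoardSummary(reach.customersReached(), revenue.total());
    }
}
```

Whether this is a feature or ceremony is exactly what the rest of the thread argues about.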
Can't really do that as easily in APL. You can name data and functions, sure, but once you introduce long-winded names and namespaced structure to map a foreign organisation into your Iverson code, you lose the terseness and elegance. Even in exceptionally sophisticated type systems in the ML family you'll find that developers struggle to make such direct connections between an invented quasilinguistic ontology and an organisation and its processes, and more regularly opt for mathematical or otherwise academic concepts.
It can work in some settings, but you'll need people that can do both the theoretical stuff and keep in mind how it translates to the customer's world, and usually it's good enough to have people that can only do the latter part.
This doesn't match my experience at all. I present you part of a formal language over an AST, no cover functions in sight:
p⍪←i ⋄ t k n pos end(⊣⍪I)←⊂i ⍝ node insertion
i←i[⍋p[i←⍸(t[p]=Z)∧p≠⍳≢p]] ⍝ select sibling groups
msk←~t[p]∊F G T ⋄ rz←p I@{msk[⍵]}⍣≡⍳≢p ⍝ associate lexical boundaries
(n∊-sym⍳,¨'⎕⍞')∧(≠p)<{⍵∨⍵[p]}⍣≡(t∊E B) ⍝ find expressions tainted by user input
These are all cribbed from the Co-dfns[0] compiler and related musings. The key insight here is that what would be API functions or DSL words are just APL expressions on carefully designed data. To pull this off, all the design work that would go into creating an API goes into designing said data to make such expressions possible. In fact, when you see the above in real code, they are all variations on a theme, tailored to the specific needs of the immediate sub-problem. Cast as library functions instead, those shifting needs tend to accrete extra functions and function parameters over time, making the library harder to understand and visually noisier in the code.
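For readers who don't speak APL, here is a rough sketch of the "carefully designed data" idea in Java terms, mirroring the parent-vector representation the snippets above operate on. The node types and arrays are invented; the point is that a query which would otherwise be an AST-walking API method is just an expression over flat arrays:

```java
import java.util.stream.IntStream;

class AstArrays {
    // Invented node-type codes: 0=Root, 1=Fn, 2=Expr, 3=Lit
    static int[] t = {0, 1, 2, 3, 3};
    // Parent vector: p[i] is the parent index of node i (root points to itself)
    static int[] p = {0, 0, 1, 2, 2};

    // "All children of function nodes" as a plain expression over the data,
    // analogous to an APL boolean-mask selection -- no tree-walking API needed.
    static int[] childrenOfFns() {
        return IntStream.range(0, p.length)
                        .filter(i -> t[p[i]] == 1)   // keep nodes whose parent is a Fn
                        .toArray();
    }
}
```

Each new question about the tree becomes another one-liner over the same arrays, rather than another method on an AST class.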
To my eyes, the crux is that our formal language is _discovered_ not handed down from God. As I'm sure you're excruciatingly aware, that discovery process means we benefit from the flexibility to quickly iterate on the _entire architecture_ of our code, otherwise we end up with baked-in obsolete assumptions and the corresponding piles of workarounds.
In my experience, the Iversonian languages provide architectural expressability and iterability _par excellence_.
With the code snippets, I tried to show how expressing customer-facing concepts doesn't require more code than detailed, internal concepts. Notice how the semantics captured by each example get progressively "larger" and closer to the frontend.
The phenomenon that developers no longer understand what their code is for after a few months is well known, and it usually concerns code that was written quickly with almost exclusively programming language primitives and very few symbols carrying domain meaning. This is much easier to achieve with highly abstract primitives like those in the Iverson languages.
There's a related phenomenon where developers too slavishly apply 'do not repeat yourself'/DRY, and interconnectedness grows too quickly in a code base that will inevitably become quite large. It is hard to resist this impulse, and in my amateur experience it is even harder in Iverson languages. Maybe I'm wrong and this doesn't happen in practice for some reason, but I've never come across articles about how to avoid it when working in e.g. APL or J, so either the problem doesn't manifest or the professionals in these languages don't have solutions. Or I just didn't read the right material, which I'm sure you'll correct if that's the case.
I agree that solutions can come out very elegant and that exploratory programming can be very interesting, at least in J, the flavour I know best (in part thanks to the APK), and judging from recorded live programming and lectures. However, what I've heard from people with experience from TakeCare informs my conclusions above. It will be interesting to see whether CGM manages to recruit enough developers to keep it going. They're already advertising in terms of 'do you have some years of experience? have you ever been interested in Python, R or Haskell? come experience real magic in APL with us', so it seems their current employees don't manage to attract enough developers by word of mouth.
The data rarely changes, but you have to put a name on it, and those names depend on policies. That's the issue with most standard programming languages. In functional languages and APL, you don't name your data, you just document its shape[0]. Then when your policies are known, you just write them using the functions that can act on each data type (lists, sets, hashes, primitives, functions, ...). Policy changes just mean a little bit of reshuffling.
[0]: In the parent example, CustomerConceptNo{127,211,3} are the same data, but with various transformations applied and with different methods to use. In functional languages, you would only have a customer data blob (probably coming from some DB), then a chain of functions that pipes out CustomerConceptNo{127,211,3} when they are actually needed (generally in the interface). But these would be composed of the same data structures as the original blob, so your base functions do not automatically become obsolete.
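A minimal sketch of the footnote's point, with all names and numbers invented: one plain customer blob (here just a Map, as it might come from a DB), and each "concept" derived on demand by a small function. The underlying structure never changes, so the base functions survive a policy reshuffle:

```java
import java.util.Map;
import java.util.function.Function;

class CustomerViews {
    // The raw blob, as it might arrive from a DB row (values are invented).
    static Map<String, Object> blob =
        Map.<String, Object>of("name", "ACME", "spend", 1000.0, "clicks", 42L);

    // Each "CustomerConcept" is just a function over the same structure.
    static Function<Map<String, Object>, Double> marketingView =
        c -> (Long) c.get("clicks") * 0.1;      // e.g. an engagement score
    static Function<Map<String, Object>, Double> financialView =
        c -> (Double) c.get("spend") * 1.2;     // e.g. projected revenue

    // Views compose; a policy change is a reshuffle of functions,
    // not a new class hierarchy over renamed data.
    static double ceoView(Map<String, Object> c) {
        return marketingView.apply(c) + financialView.apply(c);
    }
}
```

Nothing here names the data after an org-chart concept; the names live on the transformations instead.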
Funny you mention Common 'do what the hell you want lol' Lisp in the same breath as Clojure and APL.
If I ran a one or two person shop and didn't expect to have to grow and shrink the team with consultants at short notice I might use CL or Pharo.
APL is a symbolic language that is very unlike any other language anyone learns during their normal education. I think that really limits adoption compared to spreadsheets.
Spreadsheets were unstable and cumbersome to debug once they grew longer than a sheet, converged very slowly on iterative calculations, and encouraged sloppy, unwieldy coding, but of course they excelled at the presentation level. This resulted in an endless number of "FORTRAN/C++ REPL" tools emerging to fill the gap.
To appreciate the revolutionary design of APL & its descendants, notice that most of the industrial tools that emerged in the 90s & 2000s emulated it under the hood: MATLAB/Sage, Mathematica, STATA/R/SAS, Tableau, and even CERN ROOT/Cling. In trading & quant finance, Q/kdb+ is still SOTA.