It's especially noticeable when teaching functional programming to people trained in OO: Some people's model just breaks, while others quickly see the similarities, and how one can translate from a world of vars to a world of monads with relative ease. The bones of how computation works aren't changing, just how one puts together the pieces.
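The vars-to-monads translation can be sketched even outside Haskell. Here's a minimal Python illustration (all names invented) of the same running total computed by mutating a variable and by threading state through pure steps, which is the pattern the State monad packages up:

```python
# A minimal sketch, in Python for illustration, of translating a "world of
# vars" into explicit state threading. The bones of the computation don't
# change; only how the pieces fit together does.

def total_mutable(xs):
    acc = 0            # a "var", mutated in place
    for x in xs:
        acc += x
    return acc

def total_threaded(xs):
    # Each step is a pure function from old state to new state;
    # nothing is mutated from the step's point of view.
    def step(acc, x):
        return acc + x
    state = 0
    for x in xs:
        state = step(state, x)
    return state

assert total_mutable([1, 2, 3]) == total_threaded([1, 2, 3]) == 6
```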
Between that latter group and the bottom portion of the middle, it sparked a big culture war, eventually leading to leadership declaring that FP was arcane wizardry and should be eradicated.
However, I've seen developers who had been in this field for decades and still just followed recipes without understanding them.
So I'm not entirely sure that the distinction is this clear. But of course, it depends on how we define "senior". A senior can be a developer who tries to understand the underlying reasons and has been coding for a while. But companies seem to disagree.
Btw, regarding functional programming: when I first coded in Haskell, I remember that I coded in it like in a standard imperative language. Funnily, nowadays it's the opposite: when I code in imperative languages, it looks like functional programming. I don't know when my mental model switched. But one thing is for sure: when I refactor something, my first todo is to make the data flow as "functional" as possible, then do the real refactoring. It helps a lot to prevent bugs.
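As a sketch of that refactoring habit, with a made-up example: pull the pure transformations into one explicit pipeline and push the side effects to the edge, and only then start the real restructuring.

```python
# Hypothetical example: scoring data arrives as messy strings. The pure
# transformation (parse, filter, clamp) lives in one function; the side
# effect (writing output) lives at the edge where it is easy to see.

def clean_scores(raw):
    # Pure: parse, filter, and normalize in one explicit pipeline.
    parsed = (int(s) for s in raw if s.strip().isdigit())
    return [min(n, 100) for n in parsed]

def report(raw, out):
    # Side effects only here, at the boundary.
    for n in clean_scores(raw):
        out.write(f"{n}\n")

assert clean_scores(["10", "abc", "250"]) == [10, 100]
```

Once the data flow looks like this, moving or rewriting pieces is much less likely to introduce bugs, because the pure core can be tested in isolation.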
What really broke my mind was Prolog. It took me a long time to be able to do anything beyond simple Hello World level things, at least compared to Haskell, for example.
There's no real value in this comment; I'm just happy to share a moment over the brain-fuck that is Prolog (ironically, Brainfuck made a whole lot more sense).
The problem is, as is evident by this article and thread, it's difficult to measure (and thus communicate) expertise, but it's really easy to measure years of experience.
Then I met software and computer science abstractions, and they all seemed so arbitrary to me; I often didn't even understand what the recipe was supposed to cook. And though I have gotten better over time (and can now write good solutions in certain domains), to this day I have not developed a "physics"-level understanding of software or computer science.
It feels really strange and messes with your sense of intelligence. Wondering if anyone here has a similar experience and was able to resolve it.
Math and logic are closer to a basis for software abstraction, but they were scary to business people, so a "fake language" was invented atop them: you have "objects" that don't actually exist as objects (they are just a type-based dispatch/selection mechanism for functions), and "classes" that are firstly "producers of things and holders of common implementation" and only secondarily also work to "group together classes of objects".
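The "objects are just type-based dispatch" claim can be made concrete in a few lines of Python (the class here is invented for illustration): the method call and an explicit type-based lookup of a function do exactly the same thing.

```python
# A sketch of the dispatch claim above. Cat is a hypothetical class.

class Cat:
    def speak(self):
        return "meow"

c = Cat()

# The "object-oriented" call is the same as the explicit function call:
assert c.speak() == Cat.speak(c)

# The same dispatch, spelled out as an explicit type-based lookup table:
def speak(animal):
    table = {Cat: Cat.speak}
    return table[type(animal)](animal)

assert speak(c) == "meow"
```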
I do not think OOP ever really worked out well, as evidenced by it no longer being as popular and by people having almost entirely abandoned "Cat > Animal > Object" inheritance hierarchies.
I've always had trouble internalizing the "physics" of physics or chemistry, as if it were all super arbitrary and there was no order to it.
Computation and maths on the other hand just click with me. Philosophy as well btw.
I guess I deal better with completely abstract information and processes, and when they clash with the real world I have a harder time reconciling them.
Besides OO -> Functional this applies everywhere else in Computer Science. If you understood the fundamentals no new framework, language or paradigm can shock you. The similarities are clear once you have a fitting world model.
Read about why programming languages have the structures they have. Challenge them. They are full of mistakes. One infamous example is the "final" keyword in Java; another is Python's list comprehension. There are better solutions to these. Be annoyed by them, and search for solutions. Read also about why these mistakes were made. Figure out your own version which doesn't have any of the known mistakes and problems.
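The comment doesn't say which list comprehension problem it has in mind; one commonly cited complaint is that the nested form puts the expression first while the `for` clauses nest left to right, an ordering many readers find surprising compared to the equivalent loop:

```python
# One commonly raised readability complaint about nested comprehensions:
# the clause order. This flattens a matrix both ways.

matrix = [[1, 2], [3, 4]]

# Expression first, then the for-clauses in outer-to-inner order...
flat = [x for row in matrix for x in row]

# ...which corresponds exactly to this loop, clauses copied in order:
flat_loop = []
for row in matrix:
    for x in row:
        flat_loop.append(x)

assert flat == flat_loop == [1, 2, 3, 4]
```

Comparing a language's shorthand with its own desugared form, as above, is one cheap way to practice the "challenge the structures" habit the comment recommends.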
The same goes for "principles" or rules of thumb. Read about the reasons, and break them when the reasons don't apply.
And use a ton of programming languages and frameworks. And not just at Hello World level: really dig deep into them for months. Reach their limits, and ask why those limits are there. As you encounter more and more, you will be able to reach those limits quicker and quicker.
One very good language for this, I think, is TypeScript. Compared to most other languages, its type inference is magic. Ask why. The good thing is that its documentation explains why other languages cannot do the same. Its inference routinely breaks on edge cases, and they are well documented.
Effective C++ and Effective Modern C++ were also eye-openers for me, more than a decade ago. I can recommend them for these purposes. They definitely helped me lose my "junior" flavor. They explain the reasons quite well, as far as I remember.
I've been doing this for coming up on thirty years now, mostly at one large company, and I spend a significant number of hours every week fielding questions from people who are newer at it and having trouble with one thing or another. Often I can tell immediately from the question that the root of the problem is that their world model (Naur would call it their Theory) is incomplete or distorted in some way that makes it difficult for them to reason about fixing the problem. Often they will complain that documentation is inadequate or missing, or that we don't do it the way everyone else does, or whatever, and there's almost always some truth to that.
The challenge then is to find a way to translate your own theory of whatever the thing is into some kind of symbolic representation, usually some combination of text and diagrams which, shown to a person of reasonable experience and intelligence, would conjure up a mental model in the reader similar to your own. In other words, you want to install your theory into the mind of another person.
A theory of the type Naur describes can't be transplanted directly, but I think my job as a senior developer is to draw upon my experience, whether it was in the lecture hall or on the job, to figure out a way of reproducing those theories. That's one of the reasons why communication skills are so critical, but it's not just that; a person also needs to experience this process of receiving a theory of operation from another person many times over to develop instincts about how to do it effectively. Then we have to refine those intuitions into repeatable processes, whether it's writing documents, holding classes, etc.
This has become the most rewarding part of my work, and a large part of why I'm not eager to retire yet, as long as I feel I'm performing this function in a meaningful way. I still have a great deal to learn about it, but I think that Naur's conception of what is actually going on here makes much clearer the role that senior engineers can play in the long-term functioning of software companies, if it's something they enjoy doing.
I agree so much with this. It's why I feel so stifled when an e.g. product manager tries to insulate and isolate me from the people who I'm trying to serve -- you (or a collective of yous) need to have access to both expertise in the domain you're serving, and expertise in the method of service, in order to develop an appropriate and satisfactory solution. Unnecessary games of telephone make it much harder for anyone to build an internal theory of the domain, which is absolutely essential for applying your engineering skills appropriately.
Another facet of this is my annoyance at other developers when they are persistently incurious about the domain. (Thankfully, this has not been too common.)
I don't just mean when there are tight deadlines, or there's a customer-from-heck who insists they always know best, but as their default mode of operation. I imagine it's like a gardener who cares only about the catalogue of tools, and just wants the bare-minimum knowledge to deal with any particular set of green thingies in the dirt.
Edit: The main role of a PM is to decide which features to build, not how those features should be built or how they should work. Someone has to decide what to build, and that is the PM, but most PMs are not very good at figuring out the best way for those features to work, so it's better if the programmers can talk to users directly there. Of course, a PM could do that work if they are skilled at it, but most PMs won't be.
So that we're on the same page, what I think should be PM responsibilities:
Take a user story: "As a customer I want to purchase a product so that I can receive it at my address." The PM defines this user story, as they have the insight to decide whether such a feature is needed.
The PM should then define acceptance criteria: "Given customer is logged in, When they view Product page, Then 'Add product to basket' button should appear"; "Given 'Add product to basket' button, When customers click on it, Then Product information modal should appear"; etc. The PM should know what users actually want, i.e. whether modals should appear or not, and whether this feature should be available to logged-in users only.
How this will be implemented shouldn't matter to the PM; these are the acceptance criteria they've defined.
Of course, the process of defining acceptance criteria should involve developers (and QA), because the AC should be exhaustive enough to deliver the given feature.
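One way to make such criteria concrete, as a sketch: each Given/When/Then maps onto a setup, an action, and an assertion. `Session` and `ProductPage` below are hypothetical names, not from any real codebase.

```python
# A minimal sketch of turning Given/When/Then acceptance criteria into
# executable checks. All names here are invented for illustration.

class Session:
    def __init__(self, logged_in):
        self.logged_in = logged_in

class ProductPage:
    def __init__(self, session):
        self.session = session

    def visible_buttons(self):
        # Given the customer is logged in, When they view the product
        # page, Then the 'Add product to basket' button should appear.
        return ["add_to_basket"] if self.session.logged_in else []

assert "add_to_basket" in ProductPage(Session(logged_in=True)).visible_buttons()
assert ProductPage(Session(logged_in=False)).visible_buttons() == []
```

Written this way, ambiguities in the AC (what happens to anonymous users?) surface as soon as someone tries to write the second assertion, which is exactly why developers and QA belong in the AC conversation.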
Even the most verbose specifications too often have glaring ambiguities that are only found during implementation (or worse, interoperability testing!)
In practice, it isn't.
Product designers have to intuit the entire world model of the customer. Product managers have to intuit the business model that bridges both. And on and on.
Why do engineers constantly have these laughably mind-blowing moments where they think they are the center of the universe?
Software people do what they do better than anyone else. I mean obviously! Just listening to a non-software person discuss software is embarrassing. As it should be.
There's something close to mathematics that SWEs do, and yet it's so much more useful and economically relevant than mathematics, and I believe that's the bulk of how the "center of the universe" mindset develops. But they don't care that they're outclassed by mathematicians in matters of abstract reasoning, because they're doers and builders, and they don't care that they're outclassed by people in effective but less intellectual careers, because they're decoding the fundamental invariants of the universe.
I don't know. I guess I care so much because I can feel myself infected by the same arrogance when I finally succeed in getting my silicon golems to carry out my whims. It's exhilarating.
Similarly, by siloing the world model in one or two heads, you prevent team dynamics from contributing to a better solution: e.g. a product manager/designer might think the right solution for a privacy need is an "offline mode" without communicating the need itself; engineering might then decide to build it with an eventual-consistency model (sync-when-reconnected) because that's easier in the incumbent architecture, and the whole privacy angle goes out the window. As with everything, assuming non-perfection from everyone leads to better outcomes.
Finally, many software engineers are the creative type who like solving customer problems in innovative ways, and taking that away in a very specialized org actually demotivates them. Many have worked in environments where this was not just accepted but appreciated, and I've seen it lead to better products built _faster_.
If the programmer got to intimately understand the user's experience, software would be easier to use. That's why I support the idea of engineers taking support calls on rotation to understand the user.
Both can be true at the same time, a product manager who retains the big picture of the business and product, and engineers who understand tiny but important details of how the product is being used.
If there were indeed perfect product managers, there would be no need for product support.
Thesis A is something like: the value of the programmer comes from their practical ability to keep developing the codebase. This ability is specific to the codebase. It can only be obtained through practice with that codebase, and can't be transferred through artefacts, for the same reason you can't learn to play tennis by reading about it (a "Mary's Room" argument).
This ability is what Naur calls "theory". I think the term is a bit confusing (to me, the word is associated with "theoretical" and therefore to things that can be written down). I feel like in modern discourse we would usually refer to this as a "mental model", a "capability", or "tacit knowledge".
Then there's Thesis B, which comes more from a DDD lineage, and which is something like: the development of a codebase requires accumulation of specific insights, specific clarifying perspectives about problem-domain knowledge. The ability for programmers to build understanding is tied to how well these insights are expressed as artefacts (codebase structure, documentation, communication documents).
I feel like some disagreements in SWE discourse come from not balancing these two perspectives. They're actually not contradictory at all and the result of them is pretty common-sensical. Thesis A explains the actual mechanism for Thesis B, which is that providing scaffolding for someone learning the codebase obviously helps, and vice-versa, because the learned mental model is an internally structured representation that can, with work, be externalised (this work is what "communication skills" are).
Or maybe I'm just a little bit insane. Or both.
Everyone should subscribe to the Future of Coding (recently renamed the Feeling of Computing) podcast if they haven't already: https://feelingof.com/
(Second, albeit more theoretical, would be A Critique of Cybernetics by Jonas)
Of course the model is incomplete compared to reality; that's in the definition of a model, isn't it? And what is deemed a problem in one perspective might be conceived as a non-problem in another, and be unrepresentable in yet another.
I try to train and mentor those that are junior to me. I try to show them what is possible, and patterns that result in failure. This training is often piecemeal and incomplete. As much as I can, I communicate why I do the things I do, but there are very few things I tell them not to do.
I am often surprised at the way people I have trained solve problems, and frequently I learn things myself.
Training is less successful for those who aren’t interested in their own contributions, and who view the job only as a means to get paid. I am not saying those people are wrong to think that way, but building a world view of work based on disinterest isn’t going to let people internalize training.
I think it becomes difficult to train the next layer up though, which is a sum-total of life experience. And I think this is what the parent poster was referring to.
For example, I read a lot of Agatha Christie growing up. At school I participated in problem-solving groups, focusing on ways to "think" about problems. And I read Mark Clifton's "Eight keys to Eden".
All of that means I approach bug-fixing in a specific mental way. I approach it less as "where is the bug" and more like "how would I get this effect if I was wanting to do it". It's part detective novel, part change in perspective, part logical progression.
So yes, training is good, and I agree it needs to be done. But I cannot really teach "the way I think". That's the product of a misspent youth, life experience, and ingrained mental patterns.
"Transmissionism" is a term I've seen used to describe this.
"Seeing the work reveals what matters. Even if the master were a good teacher, apprenticeship in the context of on-going work is the most effective way to learn. People are not aware of everything they do. Each step of doing a task reminds them of the next step; each action taken reminds them of the last time they had to take such an action and what happened then. Some actions are the result of years of experience and have subtle reasons; other actions are habit and no longer have a good justification. Nobody can talk better about what they do and why they do it than they can while in the middle of doing it."
complexity is
not what you believe it is
please try listening
Very cool
that can only be moved around,
not eliminated.
To keep it all in a clump
Than spread it about
Who wrote emails in haiku
It got old quickly
....
Sorry, I couldn't resist!!
Most smart juniors have no problem with learning. Perceptual exposure and deliberate practice works almost mechanically. However, if someone can't tell you what examples you should be exposed to, you'll learn crap.
My guy LeCun believes in deterministic systems describing reality even more than he believes in LLMs. He is literally a symbolic-logic diehard.
I'd love to talk more live. I think I have some ideas you'd be interested in. Find me in my profile.
This is also why average people with little time to commit find it hard to grasp the importance and depth of AI. It takes a full-on university education to explore those ideas.
Long before the discussion of the morality of AI went mainstream, I ran into a problem with making what appeared to be ethical choices in automation, and then went on a journey of trying to figure this whole ethics thing out (took courses in university, read some books...).
I made an unexpected discovery reading Jonathan Haidt's... either The Righteous Mind or The Happiness Hypothesis. He claimed that practicing ethics, as is common in religious societies, is an integral and important part of being a good person. Meanwhile, secular societies often disregard this aspect and imagine ethics to be something you learn exclusively by reading books or engaging in similar activity that has only the descriptive side, with no practice whatsoever.
I believe this is the same with expertise. Part of it is gained through practice, and that is an unskippable part. Practice will also usually require more time than the meta-discussion of the subject.
To oversimplify: a novice programmer who listened to every story told by a senior, memorized and internalized them all, but still can't touch-type will be worse at the everyday tasks of their occupation. It's not enough to know touch-typing exists; one must practice it and become good at it in order to benefit from it. There are, of course, more but less obvious skills that need practice, where meta-knowledge simply can't be used as a substitute. For example, there are cues we learn to pick up from product documentation that tell us whether the product will work as advertised, whether the manufacturer will be honest and fair with us, whether the company will go out of business soon, and whether they will try a bait-and-switch.
When children learn to do addition, it's not enough to describe the method to them (start counting from the first summand, count on as many times as the second summand; the last count is the result); they actually must go through dozens of examples before they can reliably put the method to use. And this same property carries over to a lot of other activities, even though we like to think of ourselves as being able to perform a task as soon as we understand the mechanism.
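The counting-on method described above can be spelled out step by step; knowing this sketch is not the same as being fluent at it, which is exactly the point.

```python
# The "counting on" addition method from the comment above, written out
# the way a child would perform it (for non-negative second summands).

def count_on(a, b):
    total = a              # start from the first summand
    for _ in range(b):     # count on, once per unit of the second summand
        total += 1         # each count advances by one
    return total           # the last count is the result

assert count_on(3, 4) == 7
```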
Great thread.
The best educators I had had exactly that approach: you sometimes start with theory, but other times with challenges which make you feel the difficulty, and understand the value of the theory you are co-developing with the educator (they just have the benefit of knowing exactly where we'll end up, but when time allows, they do let you take a wrong turn too). Even if you start with theory, diving into a challenge where you are allowed not to apply the learnings should quickly tell you why the theoretical side makes sense.
As with everything in life, great educators are few but once you have them, you can apply the same approach yourself even if the educator is unable to steer you the right way.
If you never received this type of education, then what you received could arguably be called a waste of time.
Yea, but, I have a search engine that contains all the original uncompressed training data, so I'm back on top. How we collectively forgot this is amazing to me.
> and they need to have the right project that provides the opportunity to learn what needs to be learnt.
It takes _time_. I solve problems the way I do because I've had my fair share of 2am emergency calls, unexpected cost blowups, and rewrite failures in my career. The weariness is in my bones at this point.
Agree about expertise being inseparable from the 'world model'. When someone tells us something, they're assuming that we know a certain amount of background knowledge but, in reality, we never have exactly the missing pieces that the speaker is assuming we have because our world model is different. It can lead to distortions and misunderstandings.
Even if someone repeats back to us variants of what we've told them at a later time, it doesn't mean that they've internalized the exact same knowledge. The interpretation can be different in subtle and surprising ways. You only figure out the discrepancies once you have a thorough debate. But unfortunately, a lot of our society is built around avoiding confrontation; there is a lot of self-censorship, so people tend to maintain very different world models even though the surface-level ideas they communicate appear to be similar.
Individuals in modern society have almost complete consensus over certain ideas which we communicate and highly divergent views concerning just about everything else which we don't talk about... And as our views diverge more, it narrows down the set of topics which can be discussed openly.
Actually, maybe even worse (not directed at parent) - I think some "seniors" have a stick so far up their err keyboard, and think they are so wise beyond words that they refuse to share their "all knowing expertise" with anyone else as a form of gatekeeping or perhaps fear of being "found out" (that they are not actually keyboard "Gods").
Really though, just write shit down, even if the first draft isn't great. Write it down, check it into the codebase.