> This has never happened and never will. You simply are not omniscient. Even if you're smart enough to figure everything out the requirements will change underneath you.
I am one of those "battle-scarred twenty-year+ vets" mentioned in the article, currently working on a large project for a multinational company that requires everything to be specified up-front, planned on JIRA, estimates provided and Gantt charts setup before they even sign the contract for the next milestone.
I've worked on this project for 18 months, and I can count on zero hands the times a milestone hasn't gone off the rails due to unforeseen problems, last-minute changes and incomplete specifications. It has been a growing headache for the engineers who have to deliver within these rigid structures, and it's now got to the point that management itself has noticed and is trying to convince the big bosses we need a more agile and iterative approach.
Anyone who claims upfront specs are the solution to all the complexity of software either has no real world experience, or is so far removed from actual engineering they just don't know what they're talking about.
Nothing will get you to hit every milestone. However, you can make progress if you have years of experience in that project and the company is willing to invest the time needed to make things better (they rarely are).
My approach, especially for a project with a lot of unknowns, is usually to jump in right away and try to build a prototype. Then iterate a few times. If it's a small enough thing, a few iterations is enough to have a good result.
If it's something bigger, this is the point where it's worth doing some planning, as many of the problems have already been surfaced, and the problem is much better understood.
And things like race conditions or lack of scalability due to an improper threading architecture aren't especially easy to fix after the fact.
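As a minimal illustration of why such problems are architectural rather than cosmetic: a shared read-modify-write is only safe if synchronization was designed in from the start. The sketch below (hypothetical names, not from any project in the thread) shows the locked version; dropping the lock makes the increment a classic lost-update race.

```python
import threading

class Counter:
    """Shared counter. Without the lock, concurrent increments can
    interleave (read, modify, write) and silently lose updates."""

    def __init__(self) -> None:
        self.value = 0
        self._lock = threading.Lock()

    def increment(self) -> None:
        # The read-modify-write must be atomic; the lock guarantees it.
        # Remove the `with` block and the final count becomes nondeterministic.
        with self._lock:
            self.value += 1

def run(counter: Counter, n_threads: int = 8, per_thread: int = 10_000) -> int:
    def work() -> None:
        for _ in range(per_thread):
            counter.increment()

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value
```

Retrofitting this kind of locking onto a codebase that already shares mutable state everywhere is exactly the "not especially easy to fix" situation described above.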
Also, there's a certain point where you can't avoid management sabotaging things.
Of course, it requires some discipline to not just yolo the prototype into production when that’s not appropriate.
It's sort of the old General Eisenhower quote: "In preparing for battle I have always found that plans are useless, but planning is indispensable."
I discussed some of this in https://www.ebiester.com/agile/2023/04/22/what-agile-alterna... and it gives a little bit of history of the methods.
We are nearly 70 years into this discussion at this point. I'm sure Grace Hopper and John Mauchly were having discussions about this around UNIVAC programs.
> But I do still think there's a lot of value into coming up with a good plan before jumping in.
Definitely, with emphasis on a _good_ plan. Most "plans" are bad and don't deserve that name.
> be specified up-front, planned on JIRA
Making a plan up-front is a good approach. A specification should be part of that plan. One should be ready to adapt it when needed during execution, but one should also strive to make the spec good enough to avoid changing.
HOWEVER, the "up-front specification" you mentioned was likely written _before_ making a plan, which is a bad approach. It was probably written as part of something that was called "planning" and has nothing to do with actual planning. In that case, the spec is pure fiction.
> estimates provided
Unless this project is exceptional, the estimates are probably fiction too.
> and Gantt charts setup
Gantt charts are a model, not a plan. Modeling is good; it gives you insight into the project. But a model should not be confused with a plan. It is just one tiny fragment you need to build a plan, and Gantt charts are just one of many many many types of models needed to build a plan.
> before they even sign the contract for the next milestone
That's a good thing. Signing a contract is an irreversible decision. The only contract that should be signed before planning is done is the contract that employs the planners.
> Anyone who claims upfront specs are the solution
See above. A rigid upfront spec is usually not a plan, but pure fiction.
> My approach, especially for a project with a lot of unknowns, is usually to jump in right away and try to build a prototype.
Whether this is called planning or "jumping in" is a difference in terminology, not in the approach. The relevant clue is that you are experimenting with the problem to understand it, but you are NOT making irreversible decisions. By the terminology used in that book, you are _planning_, not _executing_.
> after the 2000 pages specification document was written, and passed down from the architects to the devs
If the 2000 page spec has never been passed to the devs while writing it, it's not part of a plan, it's pure fiction. Trying to develop software against that spec is part of planning.
You need smaller documents: this is the core technology we are using; this is how one subsystem is designed (often this should live on a whiteboard, because once you get into the implementation details you need to change the plan, even though the planning was useful); this is how to use the core parts of the system so newcomers can start working quickly.
You need discipline to accept that sometimes libfoo is the best way to solve a problem in isolation, but since libbar is used elsewhere and can also solve the problem, your local problem will use libbar despite it making your local solution uglier. Having a small set of core technologies that everyone knows and uses is sometimes more valuable than using the best tool for the job, but only sometimes.
My best project to date was a largely waterfall one: there were somewhere around 50-60 pages of A4 specs, a lot of which I helped the clients engineer. As with all plans, a lot of it changed during implementation; for example, I figured out a way of implementing the same functionality while automating it to a degree where about 15 of those pages could be cut out.
Furthermore, it was immensely useful because by the time I actually started writing code, most of the questions that needed answers and would alter how it should be developed had already come up and could be resolved, in addition to me already knowing about some edge cases (at least when it came to how the domain translates into technology) and how the overall thing should work and look.
Contrast that with cases where you're just asked to join a project and help out, and you jump into the middle of ongoing development, not knowing that much about any given system or the various things the team has been focusing on in the past few weeks or months.
> It’s not hard to see that if they had a few really big systems, then a great number of their problems would disappear. The inconsistencies between data, security, operations, quality, and access were huge across all of those disconnected projects. Some systems were up-to-date, some were ancient. Some worked well, some were barely functional. With way fewer systems, a lot of these self-inflicted problems would just go away.
Also this reminds me of https://calpaterson.com/bank-python.html
In particular, this bit:
> Barbara has multiple "rings", or namespaces, but the default ring is more or less a single, global, object database for the entire bank. From the default ring you can pull out trade data, instrument data (as above), market data and so on. A huge fraction, the majority, of data used day-to-day comes out of Barbara.
> Applications also commonly store their internal state in Barbara - writing dataclasses straight in and out with only very simple locking and transactions (if any). There is no filesystem available to Minerva scripts and the little bits of data that scripts pick up has to be put into Barbara.
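The pattern the quote describes, applications writing dataclasses straight into one shared global store with only very simple locking, could be sketched in-process like this (a hypothetical stand-in; Barbara's real API is proprietary and not public):

```python
import threading
from dataclasses import dataclass, asdict

class ObjectStore:
    """A toy single-namespace object store: one global dict guarded by
    one coarse lock, roughly the 'very simple locking' the quote mentions."""

    def __init__(self) -> None:
        self._data: dict = {}
        self._lock = threading.Lock()

    def write(self, key: str, obj) -> None:
        # Dataclasses are flattened to plain dicts on the way in.
        with self._lock:
            self._data[key] = asdict(obj)

    def read(self, key: str):
        with self._lock:
            return self._data.get(key)

@dataclass
class Trade:
    instrument: str
    quantity: int

store = ObjectStore()
store.write("/trades/001", Trade("AAPL", 100))
```

Everything, trades, market data, even application-internal state, going through one such namespace is what makes the "one big system" described in the article possible, for better and worse.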
I know that we might normally think that fewer systems might mean something along the lines of fewer microservices and more monoliths, but it was so very interesting to read about a case of it being taken to the max - "Oh yeah, this system is our distributed database, file storage, source code manager, CI/CD environment, as well as web server. Oh, and there's also a proprietary IDE."
But no matter the project or system, I think being able to fit all of it in your head (at least on a conceptual level) is immensely helpful, the same way that having a more complete plan ahead of time, with a wide variety of assumptions made explicit, beats "we'll decide in the next sprint".
And by doing this sort of exercise, you can avoid wasting time on dead ends, bad design, and directionless implementation. It's okay if requirements change or you discover something later on that requires rethinking. The point is to make your thinking more robust. You can always amend a design document and fill in relevant details later.
Furthermore, a mature design begins with the assumption that requirements (whether actual requirements or knowledge of them) may change. That will inform a design where you don't paint yourself into a corner, that is flexible enough to be adapted (naturally, if requirements change too dramatically, then we're not really talking about adaptation of a product, but a whole new product).
How much upfront design work you should do will depend on the project, of course. So there's a middle way between the caricature of waterfall and the caricature of agile.