> you lay out a huge specification that would fully work through all of the complexity in advance, then build it.
This has never happened and never will. You simply are not omniscient. Even if you're smart enough to figure everything out, the requirements will change underneath you.

But I do still think there's a lot of value in coming up with a good plan before jumping in. A lot of software people like to jump in, and I see them portray the planning people as trying to figure everything out first. (I wonder if we reinforce the jump-in-head-first mentality because people figure out you can't plan everything.) A good plan helps you prevent changing specs and prepares you for hiccups. It helps to have others involved, but basically all you do is try to think of all the things that could go wrong. Write them down. Triage. If needed, elevate questions to the decision makers. Try a few small-scale tests. Then build out.

But building out, you're always going to find things you didn't see. You can't plan forever, because you'll never solve the unknown unknowns until you build, but good prep makes for smoother processes. It's the reason engineers do the math before they build a bridge. Not because the math is a perfect representation and things won't change (despite common belief, a bridge is not static), but because the plan is cheaper than the build, and having a plan allows you to better track changes and helps you determine how far off the rails you've gone.
It is also perplexing to me that people think they can just plan everything out and hand it to an LLM. Do you really believe your manager knows everything that needs to be done when they assign jobs to you? Of course not; they couldn't. Half the job is figuring out what the actual requirements are.
>> you lay out a huge specification that would fully work through all of the complexity in advance, then build it.
> This has never happened and never will. You simply are not omniscient. Even if you're smart enough to figure everything out the requirements will change underneath you.
I am one of those "battle-scarred twenty-year+ vets" mentioned in the article, currently working on a large project for a multinational company that requires everything to be specified up-front, planned on JIRA, estimates provided and Gantt charts setup before they even sign the contract for the next milestone.
I've worked on this project for 18 months, and I can count on zero hands the times a milestone hasn't gone off the rails due to unforeseen problems, last-minute changes and incomplete specifications. It has been a growing headache for the engineers who have to deliver within these rigid structures, and it has now got to the point where management itself has noticed and is trying to convince the big bosses we need a more agile and iterative approach.
Anyone who claims upfront specs are the solution to all the complexity of software either has no real world experience, or is so far removed from actual engineering they just don't know what they're talking about.
pretty crazy that shit like this is still happening in 2026
Agreed. Since this blog has posts going back to 2007, I can only think that the author is in the "so far removed" group.
Working on a project for 18 months doesn't give you enough insight into it to know what is good or not about it. You need several more years before you can usefully figure out what changes will help you make milestones (other than the trivially obvious things, which might be the low-hanging fruit - though sometimes those are genuinely the better way to do things, and it's the real problem that makes them stand out).
Nothing will get you to hit every milestone. However, you can make progress if you have years of experience in that project and the company is willing to invest the needed time to make things better (they rarely are).
> A lot of software people like to jump in and I see them portray the planning people as trying to figure everything out first.
My approach, especially for a project with a lot of unknowns, is usually to jump in right away and try to build a prototype. Then iterate a few times. If it's a small enough thing, a few iterations is enough to have a good result.
If it's something bigger, this is the point where it's worth doing some planning, as many of the problems have already been surfaced, and the problem is much better understood.
One issue I've seen with this approach is that management will want to sell the prototype, bypassing the "rewrite from the lessons learned" step, and then every shortcut taken in the prototype will bite you, a lot.
And things like race conditions or lack of scalability due to improper threading architecture aren't especially easy to fix!
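To make that concrete, here is a minimal, hypothetical sketch (my own illustration, not anything from the project above) of the classic lost-update race: two versions of a shared counter, one guarded by a lock and one not. The point is that the unsynchronized version's bug is structural, so the fix has to be designed in rather than patched on.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Read-modify-write without synchronization: updates can be lost."""
    global counter
    for _ in range(n):
        counter += 1  # not atomic: load, add, store can interleave

def safe_increment(n):
    """The same loop, with the read-modify-write serialized by a lock."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=4, n=100_000):
    """Reset the counter, run `worker` in n_threads threads, return the total."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# run(safe_increment)   -> always n_threads * n
# run(unsafe_increment) -> may come up short: lost increments, nondeterministically
```

The locked version is deterministic; the unlocked one may or may not lose updates on any given run (in CPython the interpreter lock makes the race intermittent, which is exactly why such bugs survive prototypes and surface later under load).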
The Anna Karenina principle looms large in software engineering projects. Basically, there are infinitely many failure modes that can occur due to small actions or wrong thinking by one or more influential people, but there is only one way to make large projects successful: the team has to have sufficient expertise to cover the surface area, and those individuals need enough trust from leadership to navigate the million known and unknown pitfalls that await.
You don't ever make the prototype public.
Also, there's a certain point where you can't avoid management sabotaging things.
Sometimes you don't know what needs to be built until you build it. These end-to-end prototypes are how to enhance your understanding and develop deeper intuition about possibilities, where risks lie, etc.
I'd like you to go look at PRINCE2 and SSADM. Or read the original Royce paper - https://www.praxisframework.org/files/royce1970.pdf - which describes the antipattern that was later termed "Waterfall." (Note that Royce himself marks it as an antipattern.)
I discussed some of this in https://www.ebiester.com/agile/2023/04/22/what-agile-alterna... and it gives a little bit of history of the methods.
We are nearly 70 years into this discussion at this point. I'm sure Grace Hopper and John Mauchly were having discussions about this around UNIVAC programs.
The book "How Big Things Get Done" by Bent Flyvbjerg nicely answers all the concerns mentioned in this thread. I'll answer here to avoid littering replies everywhere.
> But I do still think there's a lot of value into coming up with a good plan before jumping in.
Definitely, with emphasis on a _good_ plan. Most "plans" are bad and don't deserve that name.
> be specified up-front, planned on JIRA
Making a plan up-front is a good approach. A specification should be part of that plan. One should be ready to adapt it when needed during execution, but one should also strive to make the spec good enough to avoid changing.
HOWEVER, the "up-front specification" you mentioned was likely written _before_ making a plan, which is a bad approach. It was probably written as part of something that was called "planning" and has nothing to do with actual planning. In that case, the spec is pure fiction.
> estimates provided
Unless this project is exceptional, the estimates are probably fiction too.
> and Gantt charts setup
Gantt charts are a model, not a plan. Modeling is good; it gives you insight into the project. But a model should not be confused with a plan. It is just one tiny fragment you need to build a plan, and Gantt charts are just one of many many many types of models needed to build a plan.
> before they even sign the contract for the next milestone
That's a good thing. Signing a contract is an irreversible decision. The only contract that should be signed before planning is done is the contract that employs the planners.
> Anyone who claims upfront specs are the solution
See above. A rigid upfront spec is usually not a plan, but pure fiction.
> My approach, especially for a project with a lot of unknowns, is usually to jump in right away and try to build a prototype.
Whether this is called planning or "jumping in" is a difference in terminology, not in the approach. The relevant clue is that you are experimenting with the problem to understand it, but you are NOT making irreversible decisions. By the terminology used in that book, you are _planning_, not _executing_.
> after the 2000 pages specification document was written, and passed down from the architects to the devs
If the 2000-page spec was never passed to the devs while it was being written, it's not part of a plan; it's pure fiction. Trying to develop software against that spec is part of planning.
Yes it did; however, it never works in practice when it comes to integration testing two years later, after the 2000-page specification document was written and passed down from the architects to the devs.
2000-page specification documents are rarely useful (if ever).
You need smaller documents - this is the core technology we are using. This is how one subsystem is designed - often this should be on a whiteboard, because once you get into the implementation details you need to change the plan, but the planning was still useful. This is how to use the core parts of the system, so newcomers can start working quickly.
You need discipline to accept that sometimes libfoo is the best way to solve a problem in isolation, but since libbar is used elsewhere and can also solve the problem, your local problem should use libbar despite it making your local solution uglier. Having a small set of core technologies that everyone knows and uses is sometimes more valuable than using the best tool for the job - but only sometimes.
Indeed. And writing out a design is actually a good method for thinking through the design. It helps uncover assumptions, including those that are flawed. It allows you to weigh various design options explicitly. It provides a place for identifying and resolving ambiguity and lack of clarity in the requirements. Contracts can be distilled in the process. Such design docs can also focus and direct implementation; you have a clearer picture of the parts and contours of your system. In a way, it is like programming, but at a conceptually higher, architectural level, where you work through and chew on the thing to flesh out and validate it in the very act of specifying.
And by doing this sort of exercise, you can avoid wasting time on dead ends, bad design, and directionless implementation. It's okay if requirements change or you discover something later on that requires rethinking. The point is to make your thinking more robust. You can always amend a design document and fill in relevant details later.
Furthermore, a mature design begins with the assumption that requirements (whether actual requirements or knowledge of them) may change. That will inform a design where you don't paint yourself into a corner, that is flexible enough to be adapted (naturally, if requirements change too dramatically, then we're not really talking about adaptation of a product, but a whole new product).
How much upfront design work you should do will depend on the project, of course. So there's a middle way between the caricature of waterfall and the caricature of agile.
> This has never happened and never will. You simply are not omniscient. Even if you're smart enough to figure everything out the requirements will change underneath you.
My best project to date was a largely waterfall one - there were somewhere around 50-60 pages of A4 specs, a lot of which I helped the clients engineer. As with all plans, a lot of it changed during implementation; for example, I figured out a way of implementing the same functionality but automating it to a degree where about 15 of those pages could be cut out.
Furthermore, it was immensely useful because by the time I actually started writing code, most of the questions that needed answers and would alter how it should be developed had already come up and could be resolved, in addition to me already knowing about some edge cases (at least when it came to how the domain translates into technology) and how the overall thing should work and look.
Contrast that with the cases where you're asked to join a project and help out, and you jump into the middle of ongoing development, not knowing that much about any given system or the various things the team has been focusing on in the past few weeks or months.
> It’s not hard to see that if they had a few really big systems, then a great number of their problems would disappear. The inconsistencies between data, security, operations, quality, and access were huge across all of those disconnected projects. Some systems were up-to-date, some were ancient. Some worked well, some were barely functional. With way fewer systems, a lot of these self-inflicted problems would just go away.
Also this reminds me of https://calpaterson.com/bank-python.html
In particular, this bit:
> Barbara has multiple "rings", or namespaces, but the default ring is more or less a single, global, object database for the entire bank. From the default ring you can pull out trade data, instrument data (as above), market data and so on. A huge fraction, the majority, of data used day-to-day comes out of Barbara.
> Applications also commonly store their internal state in Barbara - writing dataclasses straight in and out with only very simple locking and transactions (if any). There is no filesystem available to Minerva scripts and the little bits of data that scripts pick up has to be put into Barbara.
I know that we might normally think that fewer systems might mean something along the lines of fewer microservices and more monoliths, but it was so very interesting to read about a case of it being taken to the max - "Oh yeah, this system is our distributed database, file storage, source code manager, CI/CD environment, as well as web server. Oh, and there's also a proprietary IDE."
But no matter the project or system, I think being able to fit all of it in your head (at least on a conceptual level) is immensely helpful, the same way that having a more complete plan ahead of time, with a wide variety of assumptions settled, is more helpful than "we'll decide in the next sprint".