My problem with all methodologies is the lack of coherent stakeholders on the business side. Product owners are a poor proxy at best, but at least they're a definitive throat to choke when things go awry.
Being told for the 20th engagement in a row that the right people for a conversation about a feature can't spare the time to be in the room to workshop it or provide meaningful feedback (so why are you paying $300/hr for my time?) is demotivating.
As a rule, I find most human beings exceptionally poor at thinking structurally and critically about why they do anything at all, and about strategies or behaviors that would improve those things, whether in outcome or in experience.
They just go on shaking their fists in the dark while running away from any daylight that might make them change.
After a certain point in their careers, it really does feel like a large percentage of people stop asking "why?" when faced with new requirements, standards, etc.
Maybe it's just a natural part of getting older, as habits and perspectives become more ingrained over the course of time.
I'm not sure it's about getting older, but it does seem somehow related to age.
In my experience, the people who won't ask "why" aren't simply "older." They're the in-between. Young people ask "why." So do older people, maybe 50+.
My experience has been that it's the project managers and their overlords in the 25-49 range that seem to think they know everything and everyone else is unclued.
The young ask "why" because they're curious and want to learn. The old ask "why" because they want to understand what's behind decisions. And managers hate explaining themselves.
> My problem with all methodologies are the lack of coherent stakeholders on the business side.
For internal or contracted apps, there is usually a well-defined customer. There may be other stakeholders, too, but that first one is the key.
If the problem is getting them (or their representative, and not a dev-side proxy) engaged, well, that's the problem you need to fix.
“Agile” methodologies that try to sidestep this, e.g., by designating an IT org proxy, are broken in the same way as “Marxist” methodologies that try to sidestep the requirement for mature capitalist development and proletarian class consciousness, e.g., via a privileged revolutionary vanguard.
> Product owners are a poor proxy at best but at least they're a definitive throat to choke when things go awry.
That's pretty much exactly what is not needed.
> Being told for the 20th engagement in a row that the right people for a conversation about a feature can't spare the time to be in the room to workshop it or provide meaningful feedback (so why are you paying $300/hr for my time?) is demotivating.
Sure, and if that is happening often then prioritization isn't fitting business priorities, even if it is fitting what business says are their priorities.
Thanks! It's a familiar read. These days I prefer the pragmatic approach.
- Test features, not units (for unit tests, the time spent vs. time saved is meh). Then when something breaks, you can take it from there and start digging.
- Build features
- Fix bugs by first writing a failing test, and then getting it green (see the sketch below)
- Never go over-fancy with inheritance and whatnot. Readable longer functions are preferred over smaller ones that break up the logic. Keep in mind the poor soul who has to go through your code in 3 years' time. That could be you yourself ;-)
- Comment the "why"
That's it really. For web applications. Seems to work perfectly well for a code base that's over 10 years old and has seen many a developer come and go. Yes there's a bunch of legacy, but not a single part of it that's hard to reverse engineer with some effort.
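For the bug-fix bullet above, here's a minimal sketch of that flow (the cart and discount names are invented for illustration, not from any real codebase): write the failing regression test first, then make it green.

```python
# Hypothetical bug report: "a discount code applied twice drives the
# cart total negative". The test is written first (it fails against the
# buggy code), then the fix below makes it green.

class Cart:
    def __init__(self, total: float = 0.0) -> None:
        self.total = total
        self.applied_codes: set[str] = set()

def apply_discount(cart: Cart, code: str, amount: float = 10.0) -> None:
    # The fix: ignore a code that has already been applied, and never
    # let the total go below zero.
    if code in cart.applied_codes:
        return
    cart.applied_codes.add(code)
    cart.total = max(0.0, cart.total - amount)

def test_discount_not_applied_twice():
    cart = Cart(total=5.0)
    apply_discount(cart, "WELCOME10")
    apply_discount(cart, "WELCOME10")  # the reported double-apply
    assert cart.total == 0.0  # was -15.0 before the fix

test_discount_not_applied_twice()
```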
Closely related to tests is monitoring the production environment. Every time there's an incident that wasn't visible at the source (whether it was found due to customers complaining or conversion rates dropping), no matter how unlikely it seems that it will happen again (we made a unit test so we should be safe now, right?), always add monitoring for it.
This might be my ops side speaking, but often undesired behaviors aren't outright bugs (a value not being set may be expected; that empty value leading to an empty template, which leads to the frontend framework not firing an event, might not be), yet the same class of issues can be monitored for. This is a combination of logging and monitoring, of both events (where the application checks for unexpected results) and of state (where something external to the application reacts to, for example, things like active users without login sessions).
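As a sketch of the "state" kind of check, something external to the application could run a scheduled query like the one below. The table names, threshold, and alert channel are all hypothetical; it assumes a PostgreSQL database and the psycopg2 driver.

```python
# Hypothetical external state check, run on a schedule (e.g., cron):
# flag active users that have no login session, the kind of "not quite
# a bug" inconsistency described above.
import psycopg2

ALERT_THRESHOLD = 0

def check_active_users_without_sessions(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("""
            SELECT count(*)
            FROM users u
            LEFT JOIN sessions s ON s.user_id = u.id
            WHERE u.status = 'active' AND s.user_id IS NULL
        """)
        orphaned = cur.fetchone()[0]
    if orphaned > ALERT_THRESHOLD:
        # Wire this up to your alerting channel of choice.
        print(f"ALERT: {orphaned} active users without login sessions")
```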
> Test features, not units (the time spent vs time saved is meh).
Might I add that testing for security should be a priority. Some agile shops move so fast because they aren't worried about small bugs. All security issues are bugs, and often they are ones that don't lie along normal feature-usage paths.
> Never go over-fancy with inheritance and whatnot.
So much this.
Personally, I've grown to prefer object composition for bucketing and sharing reusable functionality across classes over the years. Testing with mocks becomes far easier, and it really seems to simplify the mental model.
Nowadays, I only use inheritance when working with generics and generalizable interfaces.
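A minimal sketch of what that composition looks like (names invented): collaborators are injected rather than inherited from, so a test can hand in a Mock without touching any class hierarchy.

```python
from unittest.mock import Mock


class CsvExporter:
    def export(self, rows):
        return "\n".join(",".join(map(str, r)) for r in rows)


class Report:
    def __init__(self, source, exporter):
        self.source = source      # anything with .fetch()
        self.exporter = exporter  # anything with .export(rows)

    def run(self):
        return self.exporter.export(self.source.fetch())


# In a test, the collaborators are trivially mockable:
source = Mock()
source.fetch.return_value = [(1, "a"), (2, "b")]
report = Report(source, CsvExporter())
assert report.run() == "1,a\n2,b"
```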
I haven't practiced structured agile (in my case Scrum) in years. Mostly because every single sprint becomes so derailed by "urgent" outside requirements that any sort of cohesion and planning goes out the window.
Good luck telling the client that their P0 ticket will need to wait until next sprint. Or more accurately: tell the product owner or account manager to tell the client. Which in my experience often results in them caving to the client demands.
Then you have clients who won't pay for the planning and retrospective time.
I think my record is 4 consecutive Sprints before things went completely off process. I just gave up.
This is always the issue, no institutional buy-in.
One thing I learned a long time ago is that a "bad" process that is followed universally by everyone (dev, pjm, pm) will always out perform a "great" process with only token buy-in and constant exceptions.
Consistency allows even the worst process to become familiar enough to be improved on, and it reduces the overhead of having to context-switch unexpectedly. It's amazing how much time can be wasted by people not knowing what process to use and having to either waste someone else's time finding out or go with their best guess, probably wasting time much later on.
If you've ever seen someone operating a terrible piece of old data entry software they've been using for decades it's amazing how fast someone can be even at jumping through pointless hoops to get the job done - often a problem when introducing a newer process that might seem more efficient from the outside!
It's easier to take the path of least resistance, and usually you won't get shot for it. Trying to fix things often makes you look like a troublemaker, especially if something doesn't work as well as it could (which often happens when iterating through change).
> Good luck telling the client that their P0 ticket will need to wait until next sprint
A key cause of that being a problem is stacking the entire team with a full sprint of vital work. Leave some time for issues. If no issues come up, pick up some nice-to-have tickets in the now spare time at the end of the sprint, or get started on work for next sprint.
If you absolutely must give everyone a full sprint of work every time, make sure some of it is from the nice-to-have pile so it can be put off when the surprise P0s come in (and in the absence of surprises, you get to clear off some of what are now nice-to-have bits of work but later might otherwise get surrounded by code/infrastructure that assumes they'll never happen so morph into nasty technical debt and/or their very own surprise P0). But if you try this, good luck explaining upwards why nice-to-have tickets are on the board while there are more important ones in the pile…
Even if you do have room in the Sprint, the formal Scrum methodology says adding a ticket requires tossing the Sprint. I suppose if you are removing an equal number of story points you can get away with it if you look the other way.
But then you have another issue: the sprint gets oversubscribed or falls behind, and instead of stuff moving to the next Sprint, the team works overtime. Now all of a sudden the velocity calculations say you have one velocity, but really it's because you added extra capacity that sprint.
Won't take long until "extra capacity" becomes "the normal way."
Isn't this a capacity planning issue? I thought if you are 100% complete on every sprint, then you are planning incorrectly. It's not supposed to be a report card every 2 weeks but an approach to review your work to build for the long term, right?
You're supposed to use your velocity to figure out how many story points you can reasonably complete in a Sprint. But if the velocity is padded (by adding invisible capacity... extra hours unaccounted for in the velocity calculation) then you end up with unrealistic expectations.
If your velocity is correct, you should end up with 100% completion and your employees having a work-life balance. But I rarely see that.
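To make the velocity arithmetic concrete, a toy sketch with invented numbers:

```python
# Velocity is the average of story points *completed* in recent
# sprints, and the next sprint is planned against that number.
completed_points = [21, 26, 25]  # last 3 sprints, overtime excluded

velocity = sum(completed_points) / len(completed_points)  # 24.0

# Padding completion with unrecorded overtime inflates this number and
# quietly raises the expectation for every future sprint.
print(f"Plan the next sprint around ~{velocity:.0f} points")
```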
If a developer finishes all their tickets they should help out other developers. And if by some stretch you overestimate... which I think may be a worlds first for software ;)... you're supposed to grab the next highest priority ticket from the backlog.
But I've also seen an approach where the sprint purposely has more than the velocity in it as kind of a "stretch goal", but everyone on the team knows anything below the velocity line is unlikely to get done.
I've also seen places where the stretch goal becomes the goal the client is told will be completed... those places are toxic; if you find yourself working at one, run.
Edit: If you are 100% complete on your first Sprint then yes, you probably incorrectly planned capacity. It takes 3 sprints to get a velocity. But once you have a velocity, it should be pretty stable.
Edit 2: This is why you prioritize your whole backlog, not just the current sprint. For the chance you end up with 100% completion. No use scheduling > 100%. IMO that just lends to the problem I mentioned of the stretch goal becoming the real goal.
Does it make sense to expect 100% completion? Maybe for something new our estimates are off by X% and future ones get more accurate. But then there's unplanned tasks/work from infrastructure issues or a high priority issue from upper management, or some other group.
So then the idea is - well, "fix" the other unplanned work by adding more process to those other groups...
BUT then overall the business changes anyway.
conclusion - the point of all this is:
1 - we create an illusory sense of control so the business can feel good about it
2 - so at least we don't have to be overwhelmed by an infinite todo list and can end each day with some sense of work/life balance
If you're a good Scrum Master, the goal is to get as close to 100% plus or minus BUT never as a punitive thing. If you miss the target that shouldn't mean your team failed, that should mean that the velocity was wrong and will be reflected in the velocity for the next Sprint.
I personally think forcing the team to 100% or punishing them for not hitting 100% is extremely toxic.
For me, bugs are 0 story points, and misestimated tickets are never re-estimated. If you re-estimate or adjust a ticket you are messing up velocity. Velocity when done right has a built in allowance for bugs and misestimates. It takes discipline.
Eventually, since story points per Sprint is based on a measured velocity, I've found you end up in a natural flow where you hit 100% without trying.
The one part where it gets tricky is the burn down/up chart when used as a way to tell the team to speed up vs just an estimation tool.
Also, that gets to one of the issues with agile: what the velocity says can be done may be less than what the stakeholder wants, and it takes a very strong account manager to explain that rather than agreeing to the unrealistic.
If it is following a structure specified outside the working organization, it is not Agile, but instead the exact thing that Agile was explicitly a reaction against.
> Mostly because every single sprint becomes so derailed by "urgent" outside requirements that any sort of cohesion and planning goes out the window.
An approach that puts following a plan over responding to change is...not Agile.
> Then you have clients who won't pay for the planning and retrospective time.
Why does your contract not define what is a billable hour and charge an appropriate rate for the time?
> I think my record is 4 consecutive Sprints before things went completely off process.
If you aren't adapting your process continuously (or, rather, evaluating it continuously and adapting as needed), you are giving the process priority over the concrete team (in the broad sense, including the customer) and the set of challenges it faces, which is exactly what the first value statement in the Agile Manifesto says not to do.
From my experience, it depends on the company culture, the leadership, and the way agile is implemented and introduced in a company.
Big companies with very old and complex architectures, coupled with old-school separation of business and IT and a bag of bureaucracy, will probably need a PMO or a SAFe-like implementation. Sometimes they are in luck and are able to start with a department or a business unit and move faster. But even then, the wider organisation will lag heavily. Most, if not all, can't become agile in an instant by implementing the perfect agile way of working. Even transforming into an agile organisation is best done in an agile way.
Of course, the agility here happens by accident, because in the first week of the program increment you realize that something is much harder than anticipated, and hundreds of person-hours of planning are thrown overboard.
I don't like SAFe for many reasons, but I have seen it used successfully as an agile stepping stone. Some of the more original ideas within SAFe (there aren't many) can even be useful. Anyway, I don't think an ATO is a bad idea or anti-agile. Even within an agile environment, projects can exist and can be done in an agile way.
Due to badly run PRINCE2/PMI projects, people are allergic to anything remotely sounding like a project. So I get the author when he thinks ATO == PMO == PMI == waterfall == nightmare. But a larger organisation will probably need something like an ATO to be successful. Like anything else, a badly run, top-down-driven ATO will fail and make life difficult for anyone involved (this is also made clear in the original McKinsey article).
I like your balanced thinking, in writing the article I was trying to be balanced as well.
Unfortunately, when I've seen ATOs in practice, they were a band-aid over the actual problems (usually leadership, or leadership needing to make structural and large cultural changes).
Dear "Hackers", it doesn't matter what McKinsey does.
Consulting's business is inventing processes and then selling them. If they pitch something that is not agile, it only means they believe they can make more money with something else.
> Dear "Hackers", it doesn't matter what McKinsey does.
It absolutely matters what McKinsey does. Practically every business in America has a significant number of shallow-thinking MBA types who have next to zero understanding of the business they work in, and few practical skills. These folks survive and thrive by aggressively promoting whatever idiotic thing McKinsey and the fine folks at Harvard Business Review roll out next.
If there are buyers of anything, even useless and harmful, someone will sell it to them. McKinsey, and others, are reading the market appetite of buyers and figuring out how to choose and package information in a way that is pleasing and meets the immediate usefulness measure of individuals in their markets. It's basically the same as the narcotics trade. And as both are hugely profitable, no amount of reasonable criticism will gain any traction with the participants.
In my experience, the term “Agile” itself has been so widely applied that it lost any useful meaning it might have had a long time ago. The ideas it prompted are the interesting part.
I’ve never fully agreed with the original Agile Manifesto anyway, mostly because I think it tries to separate concepts that aren’t independent in practice. For example, if you don’t have a comprehensive requirements spec, how can you possibly know whether you have working software?
I do think the early Agile advocacy served a useful purpose in highlighting how slow and heavy a lot of software development processes were at the time. I think a lot of good ideas have been popularised as a result of that awareness and the various alternative processes that have been tried as a result. I also think some not so good ideas have been popularised, and some ideas that work well in the right context but not universally.
IMHO, it is much more interesting to discuss those specific ideas. How can we build software more incrementally, so we can see real progress and identify any potential problems earlier when they might be easier to fix? Is there a risk of becoming too short-sighted, so we create an illusion of progress, yet overall it takes longer to build a product we can ship? How can we organise our code and our teams to be more adaptable as requirements evolve over time? How does our situation change depending on whether we’re building a line-of-business web application, a game for a next generation console, or the management system for an aircraft carrier? There are lots of interesting ideas and useful discussions to be had, but they can be had just as well without applying the loaded term “Agile” anywhere.
> For example, if you don’t have a comprehensive requirements spec, how can you possibly know whether you have working software?
Short answer: iterative prototypes.
Long answer: I have yet to see a single project in all my decades of experience in this industry where the requirements didn't change during the project. A "comprehensive requirements spec" is and always was a myth. Agile wasn't even the first to say this bit out loud. Software is always developed by a series of iterative prototypes, only sometimes does the process acknowledge this.
The waterfall method (which is what is being invoked when "comprehensive requirements spec" is stated) was, and is, always a fictional story told by people both during and after the project. The entire difficulty of a software project lies in figuring out what needs to be done. Well that and fighting with the tools and modules ;)
I’ve said it before, and I’ll probably say it again: it’s not a fiction. I’ve watched millions of dollars get wasted by that approach. It’s a great way to deliver nothing on time unless you know the domain well and your customer really knows what they want. But it is, sadly, quite real.
Oh it's real, I agree. What I meant by "fiction" is that the success of waterfall, and the specifics of how waterfall are implemented, is fictional. As in during the "delivery phase", it is found that requirements were missed and so people will retroactively act as if those requirements were actually in the "requirements phase" all along. I did a lot of government contract work and saw the insanity in action.
Lots of retroactive changing of gantt charts, too :)
To be clear, having a comprehensive requirements spec has nothing to do with using a waterfall model, and I absolutely was not invoking that model when I used the term. The point is that at any given point in time, either you know what you are currently trying to build or you don’t, and you can only meaningfully define whether your software is working to the extent that you do.
Also, you know, specs can and do have bugs. You can have the most comprehensive requirements and specification and a 100% correct implementation of the spec and still end up with software that just doesn’t work, because the spec and the requirements had incorrect assumptions, or worse, had internal contradictions baked into them. Happens all the time. Best to just accept it as inevitable and design the process so that it can cope with it.
OK, but then how do you know what each prototype is required to do?
It’s true that you might be lucky enough to work on a project where your employer/client/customer is willing to let you make some of the decisions. In that case, you are effectively defining some of the requirements yourself and hoping that your employer/client/customer will be happy with your decisions. That’s a risk you can mutually agree to take, and sometimes it will pay off royally, and sometimes it will end in disaster, and probably most of the time it will end somewhere in between.
However, in my experience, the more important the software you’re building is, the less likely it is that you will have that kind of flexibility for the developers. At the extreme end of the industry, if you’re building something like the control system for medical equipment that will kill the patient if it goes wrong or a billion dollar satellite that you get one shot at launching, you probably aren’t going to be doing it through informal discussions with a “product owner” and relying on TDD to make sure everything is working (where “working” in that case is really a euphemism for “whatever test conditions an individual developer felt might matter”).
> I have yet to see a single project in all my decades of experience in this industry where the requirements didn't change during the project.
Of course, but every time the requirements change, you still need to identify what the change is so you can change your implementation accordingly. There are many ways you can address that problem, some of which involve one big requirements spec document and some of which don’t, but if you don’t end up with some clear and well-maintained spec each time, you don’t know what you’re trying to build and you have no meaningful definition of “working software”.
In everything except the most exceptional circumstances knowing what to build will be a blend of written requirements and shared knowledge within the team. You can advocate for comprehensive requirements which pushes the effort towards the written spec side, or you can advocate for "working software over comprehensive documentation" which pushes the effort towards using more shared knowledge. Depending on the nature of your project, one or the other might be preferable.
For most projects, both "knowing what to build" and "working software" will never be 100% clear.
> OK, but then how do you know what each prototype is required to do?
You define an area of exploration for each prototype. Like "we think that the hard part of this problem is getting the encryption right, so we're going to explore that in this prototype".
And I thought SpaceX disproved this whole waterfall approach to rocket science? They have used iterative prototypes to build better rockets considerably faster than the waterfall approach used by the rest of the industry.
I think perhaps you are reading a meaning into my comments that is not intended. At no point have I argued, nor would I ever argue, for some waterfall-based approach and magical everything-fully-known-up-front documentation.
My argument is that whatever stage you’re at in your development process, if you don’t have clear requirements, you don’t know what you’re trying to build and you have no way to define whether your software is working or not. The comparison from the Agile Manifesto makes little sense, because having comprehensive documentation of your requirements and acceptance criteria is a prerequisite for even being able to define the working software that the Manifesto says is more valuable.
Of course you might do all kinds of prototypes and proofs-of-concept to help you explore the problem space and clarify the requirements in the early stages of your project, and maybe again later if there are significant changes. But this work isn’t about building working software, because you don’t know what that means yet; it’s about doing experiments to help you find out.
Yeah, but if your "comprehensive spec" is neither comprehensive nor accurate, how is your software working any better? If "working" means "what the spec says" instead of "what the customer/stakeholder wanted", then your "working" software isn't working.
Providing the customer with a working prototype to criticise often (always) allows them to refine their vision of what they wanted and helps them communicate what they need. It's always easier to criticise something than create - saying "the interaction around X doesn't feel right, can we make it cornflower blue?" is a lot easier than "the X interaction must be three shades of grey off cornflower blue" in a spec document.
That's been my experience, anyway. Insisting on a comprehensive spec at the start has always been more effort than it's worth, because no one knows what they actually want at the start.
This has become a frustrating discussion, because it seems like several people read the words “comprehensive spec” and inferred some sort of waterfall process with everything implausibly known up-front. That is not at all what I’m arguing for. In fact, I’m not sure I’ve ever seen anyone argue for it in a professional context. It just seems to be the hypothetical bogeyman in these discussions.
In the case you’re talking about, creating a prototype and then soliciting feedback from the client might be an excellent way to go. More generally, building a project in stages with regular interaction with and feedback from the customer or other stakeholders is often helpful. My point is that you still need to decide what you’re putting in that prototype or any of those earlier stages, because ultimately some developer has to write that code and management need to tell them what to create.
So you might well have a spec, in whatever form, that initially has gaps or ambiguities relative to the final product you’ll want at the end of the whole process. That’s fine, and as you say, it is very common. But it should still reflect everything you know you want at the current time.
If and when you gain new information, such as feedback from your client after showing them a prototype, you can update your requirements to fill in the gaps or clarify the ambiguities in light of the additional knowledge you now have. You still have a clear record of what is needed, taking into account all the actual decisions that have been made and all the actual feedback you’ve received, and that is (to the best of your knowledge at that time) what defines whether or not you currently have working software.
Put another way, if you don’t maintain a good record of that information, you’re just building your software based on hearsay. Some of that information about what the customer/stakeholder actually wants will inevitably be lost or distorted with the passage of time and perhaps the changing make-up of your team. And then you definitely don’t have working software, whether or not you realise it.
"Working software over comprehensive documentation"
"That is, while there is value in the items on
the right, we value the items on the left more"
I have had many discussions with both agilists and, let's say, traditionalists about these statements. Funnily enough, people from both "sides" sometimes misinterpret (or at least that is my view) the statement as: agile means no documentation. To me it is about just enough documentation. You should know what needs to be built, what you expect from it, etc.
But in the end it is better to have working software than to have piles of documentation. So there is value in having documentation, but the end goal is working software.
If you are old enough, you remember months and sometimes years spent on documentation, quibbling over details, etc. At the end of the documentation phase they started building, then came a building phase that could take months or years. After the building phase they found out the things that seemed handy at the time were not working in real life or were obsolete. So an analyst would create an RFC, which again would take lots of time before the change was finally made.
The idea of having working software over comprehensive documentation is that it is better to start building the most important feature(s) and release them as soon as possible. This makes sure you add value as soon as possible. Documentation by itself doesn't add value, but working software does. This doesn't mean that the important feature should not be documented, but it should be documented just enough to add value as quickly as possible. That way you get feedback from the users and change what is needed, instead of talking for days about what something should do and what problems might arise from some theoretical edge case. It also makes prioritisation much easier: maybe an edge case can be tackled by a change of process instead of a difficult software solution, leaving time to work on a feature that adds more value.
I would be the first to agree that a lot of documentation of little or no value has traditionally been created during software development. I saw an interesting report once about some research on where software developers actually look for information while doing their work. Sure enough, it turns out that some types of documentation are very useful and some are mostly useless, and which is which is very much as you probably expect if you’ve been developing for a while.
Still, when it comes to things like defining the requirements and acceptance criteria for software, “just enough documentation” and “comprehensive documentation” are often the same. Otherwise, you literally don’t know what you’re trying to build. That is not to say that you have to have one big requirements spec that never changes (or one big set of requirements specs including numerous cross-references, if it’s that kind of project) or that every change must go through an extensive change management process involving 27 people and a month-long delay to fix a spelling mistake in the original spec. But you need something that clearly defines what the software is supposed to be.
As I noted in another comment, if you are making software without that, you don’t really have a useful definition of “working software” to prefer. Obviously you can still go ahead and make something. It’s just that both sides are then taking on the risk that the development team’s assumed requirements when they fill in the gaps will be satisfactory to whoever is paying the bill. That could end anywhere from spectacularly successful to a dismal failure, which doesn’t seem like an ideal basis for a software development process to me. Perhaps a more common scenario, particularly for software developed in-house or outsourced on a T&M charging basis, is that when differences arise you go back and correct them afterwards, so you still get what the end customer needs but it takes longer and costs more than was really necessary to get there. This is a more insidious danger, because if things do work out in the end, you might not have any quantifiable way to assess what you lost or maybe any awareness that you lost out at all. But you still lost out all the same.
> The idea of having working software over comprehensive documentation is that it is better to start building the most important feature(s) and release them as soon as possible. This makes sure you add value as soon as possible.
Sometimes. It depends entirely on your situation. Shipping a line-of-business application with 75% of the desired functionality that you can then incrementally develop further in production might offer a lot more than 75% of the value. Shipping 50% of the software that controls the mechanics of a modern car might offer 0% of the value, and if shipping that 50% early means the 100% point when the car is actually useable is delayed then doing so actually has negative value, and the only early feedback you’re going to get is that no-one has bought the car yet.
So the idea of agile is to work closely with the business and other stakeholders. That is why people do refinements: to create shared understanding. Specification by example is also a good way to get shared understanding. If you have shared understanding, that is to say everyone understands the end goal, it is easier to have less documentation.
If you work in an old-school environment where you get work packages and just have to do the work, you will need more documentation, simply because you can't ask the requestor any questions.
Value is not only measured at the end of a total delivery. Take your car example, for instance: it could be really valuable to deliver the ABS part of the software early to test it along with the other safety and control functions. All these functions can get tested before the whole is delivered. If some less important features won't make it in time for the car launch, you could decide to bring the car to market with the most important features and add the farting feature with an OTA update.
> So the idea of agile is to work closely with the business and other stakeholders.
Sure, though can we please not pretend that Agile introduced the concepts of talking to other people between the start of a project and delivery or of trying to get everyone involved on the same page?
> If you have shared understanding, that is to say everyone understands the end goal, it is easier to have less documentation.
But how do you know that everyone really does understand the end goal, and crucially understand it the same way, if you haven’t fully defined what that goal is? If you have fully defined it, great, you have a spec.
> If you work in an old-school environment where you get work packages and just have to do the work, you will need more documentation, simply because you can't ask the requestor any questions.
Has any such environment actually existed since before most of us were born, though? This seems like the big waterfall straw man again.
> Value is not only measured at the end of a total delivery.
Agreed.
> Take your car example, for instance: it could be really valuable to deliver the ABS part of the software early to test it along with the other safety and control functions.
Yes, it could. Then again, modern car control software integrates numerous effects that determine the actual response required from different mechanical components to driver inputs and environmental factors, of which ABS is just one case. What ultimately matters is that the overall system is working properly by the time the car is ready to drive, and everything else is a stepping stone towards that end goal.
If building incrementally and working on the ABS in isolation allows for early testing and feedback and that in turn allows for more efficient correction of any defects in the original implementation, yes, absolutely, that has value.
On the other hand, if that kind of isolation and early testing is unlikely to find important defects in the finished system, perhaps because it turns out that the way you build individual systems is fairly reliable and the defects mostly arise when integrating the different systems, and if building everything in isolation and then integrating later delays your overall progress and the time when you can start the integration testing, then the more isolated approach has actually cost time and money rather than saving it.
I want to emphasize that my point here isn’t about this specific example. It is that every software project has its own context and it is important to use a development process appropriate in each context. Sometimes, it’s OK to guess at what your software should do and see if it works out. The kinds of startups we talk about all the time on HN often do this, obviously with varying degrees of success. But I think we should recognise that the whole business model in that case is essentially built around permanent experimentation and high variability outcomes. It’s not a way to build better software as most of us might define the term. It’s a strategy of throwing stuff at the wall to see what sticks, which happens to be a commercially viable strategy for some businesses in the current economic environment, where the definition of “working software” is basically “software we can run that will make us enough money to continue”. Somehow, I doubt that is what the authors of the Agile Manifesto had in mind when they wrote that line.
Again, the manifesto doesn't state that you should not spec. Nor did I claim that before agile nobody talked to each other.
Sadly, I have seen work packages delivered to software teams without any context, just specs. Sadly, I have worked on and seen projects that went down the documentation rabbit hole and spent years trying to come up with the ideal spec before anything had been built (and I'm not even that old). You would see that kind of stuff within government, but also at large enterprises and even at mid-sized companies. And it still exists today.
I have also seen successful waterfall projects, and I have seen failed agile projects (although those tend to fail faster and thus less costly).
The agile way is not the only way, there are instances that other ways of working might be better.
But for me personally, the best work experiences were all at companies that worked agile or were transitioning to agile. I worked this way (and helped in the transition) at companies ranging from scale-ups to big enterprises.
Many a time I have had these same discussions with people at the beginning of a transition, mostly people who never worked this way or had a bad experience.
Yes, if a huge organisation tries to manage a huge software project top-down in waterfall fashion, it’s easy to imagine how things might not end well. I’m not sure anyone here is advocating that, though; certainly I am not.
My original comment on the Agile Manifesto, which seems to have attracted most of the responses I’ve read here today, was only intended to make the point that without knowing what you’re trying to build, you have no way to define whether or not you have succeeded. I therefore don’t think it makes much sense to talk about valuing working software over comprehensive documentation in general terms as the Manifesto does. You can only know your software is working to the extent that you know how it’s supposed to work in the first place.
> My original comment on the Agile Manifesto, which seems to have attracted most of the responses I’ve read here today, was only intended to make the point that without knowing what you’re trying to build, you have no way to define whether or not you have succeeded. I therefore don’t think it makes much sense to talk about valuing working software over comprehensive documentation in general terms as the Manifesto does. You can only know your software is working to the extent that you know how it’s supposed to work in the first place.
The Manifesto is pretty explicit that “over” in the “over” statements means exactly what it says, and isn't a misspelling of “instead of”; the Manifesto very much does not claim that there isn't some degree of documentation that serves an essential role in defining and achieving correctly-functioning software and that that isn't instrumentally important.
I do wish, though, that rather than “over” statements with an explanation that “over” means “over” and not something else the manifesto had used something like “is/are served by” in place of “over” (with some light editing of the items on the sides to fit that structure):
Individuals and interactions *are served by* processes and tools.
Working software *is served by* appropriate documentation.
Customer collaboration *is served by* contract structure.
Ability to respond to change *is served by* planning.
They are about subordination, not exclusion, of the things on the right in favor of those on the left.
FWIW, I like your characterisation better. Saying that you value something as a means to achieve something else that you also value makes logical sense.
The kind of comprehensive documentation they were referring to was the garbage 1000+ page documents that projects often created listing all requirements, specs, and design elements. In Waterfall and related BDUF approaches the intent was to produce this before any code. A months- or years-long effort before even starting…
The point in the manifesto is to get to just enough (which is usually not 1000s of pages) and then start coding, not to skip it entirely. The other issue with those documents, from my experience, was that unless it was a replacement of an existing system, the documents were wrong every time. And you’d lose more months fixing them, or just continue to diverge and let them lie.
Re: your last paragraph. We’ve done that for aircraft. Ship a 50-80% solution that gets you into flight test instead of waiting for 100%. Then finish it over the next months or years. It’s very effective. You just have to be smart enough to identify what is the MVP.
> The kind of comprehensive documentation they were referring to was the garbage 1000+ page documents…
Perhaps, but I don’t believe that is how the documentation point in the Manifesto has always been interpreted since then, and I suspect no-one else here does either.
As I said in my original comment, I think the ideas around Agile can be an interesting and useful area for discussion, but I’ve never fully agreed with the Agile Manifesto as-written. The danger with making these kinds of short, profound statements about a field as nuanced as software development is always that you immediately have to start backtracking and clarifying that what you really meant was… And as soon as you’re doing that, probably the original statement has lost most of any value it might have had anyway and it’s more interesting and productive to discuss the nuanced issues that you brought up afterwards instead.
> In Waterfall and related BDUF approaches the intent was to produce this before any code.
OK, but it’s not as if everyone was trying to write software that way before the Agile Manifesto came along. I’ve been writing software for money since well before the Manifesto was published, I’d say I’ve worked on a fairly broad range of applications over the years, and I don’t think I’ve ever seen that kind of process in actual use either before or since the publication. Waterfall just seems to be the preferred straw man for Agile advocacy.
> You just have to be smart enough to identify what is the MVP.
Or, more generally, you have to be smart enough to identify which milestones in terms of functionality and performance add real business value. That is almost always some kind of step function. Reaching an MVP that you can deploy to start collecting feedback from end users on a realistic product and ideally bringing in some revenue as well is obviously one of the more important steps when developing certain kinds of software, but really there is nothing unique about it and even for those kinds of software the same principles apply both before and after shipping the MVP.
That then invites the question, how do we identify what we need to have ready before we take the next step, MVP or otherwise? And just like before, the options are essentially to check our requirements or to wing it and hope that we’ve guessed well.
Without wanting to make this unnecessarily personal, I’m genuinely curious to know what sort of work you were doing at the time and how long ago that was. Anecdotally, I wrote my first code for money about 30 years ago and have been a developer by profession for well over 20 at this point, and I’ve never encountered anything like it. Even working on firmware or device control software, where you tend to know more of your requirements early on because they are dictated by the physical equipment involved, the process has always been incremental and changes in requirements have always happened along the way and been incorporated accordingly. Of course that’s just my own personal experience; I’m not disputing that yours might have been very different, just a little surprised.
About 15 years ago, aircraft safety systems at that point. One area where Waterfall is almost justifiable, but most of the other projects (in the same office) did not use Waterfall because (as that project demonstrated) it was crap. V-Model or Iterative & Incremental (slow-motion Scrum) were used with much greater success on all other projects. Success as in, they were rarely late by more than a rounding error (days, maybe a couple weeks). When they were late by larger margins the issues causing the delays were discovered early and were almost always requirements issues or hardware issues (that is, the former was partly on us, the latter never was).
I've seen Waterfall used on other projects since then including a major information system that resulted in about $1 billion of waste, fortunately I've only been adjacent to them not on them. The billion dollar waste relied on (as Waterfall often does) late integration, none of the pieces worked together even though everyone built everything per the spec they'd been given.
If you've not encountered it, that's great. Doesn't mean it doesn't exist.
"Individuals and interactions over processes and tools"
The issue I have always had here is that in order to go fast and have short cycle times, you need quite a bit of automation, and the more you rely on automation and keep it working, the easier it is to maintain iterative agile development. That requires processes and tools to be maintained; the more you rely on individual personal preferences and nuances, the longer those cycle times will be.
Once you get down to continuous delivery, you don't have much choice but to have automated tests written for everything. The basis of those tests might have been a fantastic interaction with a stakeholder, with a good back and forth on the right tests, but it's far too reductive to always value people over process when solid engineering is critical to the low cycle times that software agility requires.
It seems the manifesto was written from a contract-interaction perspective, for consultants. With the product mindset these scenarios are a lot less common, but the engineering doesn't go away. It's just one aspect that has annoyed me. Extreme Programming got further into defining what a good project actually ought to be able to do and the sort of processes necessary to make it work; even if some of it was flawed, it hung together better than a set of values.
"Agile ... emphasizes customer collaboration over contract negotiation, individuals over processes, responsiveness to change over following a plan and results over documents."
I think someone needs to tell this to my company's agile consultant. That actually sounds useful.
Did your agile consultant go over outcome over output?
It basically drives home that doing work for the sake of work isn't useful. It's only useful if it's something that produces a desired outcome where that outcome is usually a service or features that customers actually want, but it also bleeds into doing technical things that indirectly let you meet customer requirements faster like having good test coverage, a good release process, etc..
The Agile Manifesto itself is actually a fantastic set of tenets. Unfortunately most people I've talked to who are trying to push 'agile' have neither read it, or have enough experience in software development or projects to understand why it's so insightful.
I long ago returned to waterfall; it just happens to take two weeks at a time.
Pure agile never worked in non-software industries, due to the way all parties expect to deal with requirements, demos, documentation, long-term roadmaps, budget allocation for next year's projects, ...
Totally agree that non-software benefits from traditional project planning. In a project management class it was taught that the project framework should be adapted to the project.
It sounds really formal but planning how you plan the project is key for some things. For example, I would guess building a nuclear power plant isn't an iterative type of project. Lots of risks and lots of sequential activities. A team could probably spend weeks to months just planning the plan.
Even more basic examples: when is the software ready with 100% feature capability, so that everyone can take the training?
Schedule, travel accommodations, teaching materials, trainers, ... all need to be accounted for and planned in due time, in sync with go-live for the new infrastructure.
Shocker that organisations that could never work out what agile was, and had no intention of adopting its principles, are still redefining what it means. Nothing has changed; they have been doing that for two decades, ever since it took off. Most large organisations never want to empower their staff, so they fail at step 0: they don't want what agile is selling. They are finding it awfully hard to hire staff if they don't list it, however!
> Title is click bait, if McKinsey killed it then so did every other org that misuses “agile”.
(...)
> I leave it to everyone else to make up their own opinion about that.
Yeah, the article is pretty bad. Indeed, it all boils down to the run-of-the-mill regurgitated and recycled complaint that organizations abuse trendy keywords without any intention of actually following through.
Agile is an idea; over the past ten to fifteen years it has been turned into a process, complete with certifications. That difference captures the essence of the difference between working in a corporate environment and working in a startup environment - and I've worked in both. The "suits" like processes, methodologies, and certifications. Startups, especially self-funded startups, have to get results, otherwise they can't make money. As negative as Corporate Agile may sound, it's a far cry better than what they were doing before.
Bottom line? "McKinsey Agile" isn't intended for startups and small organizations or large organizations whose product is tech.
> As negative as Corporate Agile may sound, it's a far cry better than what they were doing before.
> Bottom line? "McKinsey Agile" isn't intended for startups and small organizations or large organizations whose product is tech.
Two decades later and the movement has succeeded. Agile’s underlying principles and values are now table stakes for any organisation. The degree to which an organisation is applying agile is now a matter of small increments of productivity rather than a revolutionary game-changer.
So it's not dead, it has become standard operating procedure.
It does seem a bit self-contradictory when, a few paragraphs back, the author said the term has been scrubbed of all meaning and things like self-organising teams have been abandoned in favour of selling new wrapping on old ideas.
They certainly didn’t kill it, they just added to the long list of people who say they are agile but continue to operate in a command and control world because top leadership can’t accept reality.
Even with something that gets it so close to correct like Scaled Agile, they’ll tell you it only works if senior leadership is on board. Otherwise everyone will be paddling upstream. I’ve seen it work beautifully and I’ve seen it completely screwed up.
At one company within a few hours from me, about 2/3 of the company were actively campaigning for it after knowledge spread through the group. CEO couldn’t be bothered to show up for an overview presentation.
Eventually, the board removed the CEO and everybody is doing much better.
How are people working within Red Hat? Are kernel patches and systemd features developed using agile concepts such as user stories, story points, retrospectives and sprint planning? Or is it more a group of hackers scratching itches?
I'm really curious about this. I can imagine very "managed" methodologies being attractive for gigantic teams working on an enterprise client project where the complexity stems from the endless stream of feature requests.
I have trouble imagining how it helps a team working on, say, a kernel, a compiler, network infra, a browser, scientific code, or a HPC project (because it seems to me that complexity there comes more often from raw algorithmic difficulty). But maybe it does help.
Stories from anyone with such first-hand experience would be great!
In my career so far, I've most enjoyed working on kanban-esque teams. Just a bunch of tickets, maybe sorted by priority, for anybody to jump on. A wireframe to reference and quick slack updates once a day. Ironically, this was my least favorite job, but the working style has turned out to be my favorite.
Since then I've worked on agile teams and found it to be less useful. I'm sure there are reasons for why, but strictly from a personal point of view, it's much more fun and relaxing to just do kanban.
Exact same experience. A kanban board with clear priorities was my best experience so far in my nearly 25-year career in this field. This was thanks to the big bosses, who were pretty chill about when things would be ready; as long as it was "as soon as possible" they were happy.
All the Agile and Scrum bullshit gets introduced the moment the higher-ups want exact estimates and deadlines and just shipping features is the only priority. Despite all the stupid effort to predict task sizes and times, in the end they're always wrong; things take more time (or less), and we've wasted a ton of time and mental energy on the shared taboo lie that we're estimating stuff.
I am by no means a fan of agile as a 'religion', and in my role on the edge of the dev world (Technical Delivery), I have seen many teams try to take the book of agile and implement it end-to-end, which focuses them more on the mechanisms than the 'doing'.
Agile has some good common-sense strategies for (to use other phrases) common ways of working, problem triage, workload assessment, resourcing, progress tracking and measuring success, but it's often deployed 'big bang' when (like ITIL) an *overseen* progressive change-management exercise would be less of a shock to all concerned.
I stress 'overseen', because when you just dump the processes and tools on a team you often get situations where, for example, the sprint backlog just becomes a dumping ground for everything in progress and everything yet to be scheduled - with the icing on the cake (in one recent encounter) being that the backlog is exported into a spreadsheet for prioritisation and then collaborative updates to this are used to edit comments in the user stories!
I don't mind if agile is 'killed', because a lot of the common sense it conveys is either already being used (in perhaps a less-than-ideal, joined-up way) or can be used under a different brand name. Mind you, some organisations that have made their bread and butter from the concept might have something to say about this!
The overall aim should be for any team to work in a known and consistent way to meet their deliverables. Call it agile, call it foo; whatever!
To me it was never alive in the first place. I develop products; some I own, some are built for customers. From my lengthy experience, the process is unique enough for every case. I couldn't care less what each particular case is called.
>"Now almost every job applicant goes to great lengths to explain how agile they are"
I do not remember ever talking about agile with my customers.
This line from the article is applicable to a wide range of Agile add-ons and recipes: "An unnecessary rebrand of an existing concept... that goes against the fundamental principles behind agile."
SAFe might be the pinnacle of Agile practitioners rebranding everything as part of their particular extensions of Agile.
Scaling up Agile in big enterprises is important. But rebranding things senior management already does and knows how to do is counterproductive and cult-like. Not every part of a business benefits from Agile.
I had my hopes up that McKinsey had started to preach against agile. But it seems instead that the agile micromanaging dystopia is about to defile fields other than programming.
Agile won't work if middle management isn't behind it 100%. And they won't be, because Agile done right eliminates the need for middle managers. Think traditional big bank vs. credit union: at the big bank, the bank is in control, whereas at a credit union the account holders are in control. I've been a scrum master for 2 years, and the biggest fight I had was stopping and preventing sabotage. Ultimately it wears on you and you give up.
Agile needs a small team, high trust and cooperation, and shared common motivation. You won't find that in a sprawling, politics-ridden corporation plastered with slogans promoting teamwork, communication, and keeping it real, but sorely lacking those virtues in reality.
It'd be nice if there were some way to filter out job listings where some heavyweight development methodology is in effect. Some make it obvious in the description text, dropping SCRUM/Agile-related buzzwords. Others have sounded awesome, promising an enlightened developer-centric focus and autonomy - only for me to show up and find I'll be a mechanistic story-point generator, wasting everyone's time.
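As a crude illustration (purely my own sketch -- the buzzword list and sample listing text are made-up assumptions, not any real job-board API), a first pass at that filter is just keyword matching over the description text:

    # Hypothetical sketch: flag job listings that drop heavyweight-process buzzwords.
    # The buzzword list and sample text below are illustrative assumptions.
    BUZZWORDS = ["scrum", "safe", "story point", "velocity", "sprint ceremony"]

    def looks_heavyweight(listing: str) -> bool:
        """Return True if the listing text mentions any process buzzword."""
        text = listing.lower()
        return any(word in text for word in BUZZWORDS)

    sample = "We run two-week sprints and track velocity in story points."
    print(looks_heavyweight(sample))  # True

Naive substring matching like this has obvious false positives ("safe" matches "safety"), and it only catches the honest listings; the ones that sound enlightened and turn out to be story-point factories are exactly the ones no filter will catch.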
This is an area I've had to exercise acceptance on. Even if SCRUM dies, its selling points (control, measurement, and interchangeability) are too powerful for something just like it not to come in and replace it. However, I would like to know when a team thinks that way, so we can steer clear of each other.
I worked on a project heavily involved with McKinsey a few years ago. Maybe it was just that particular team, but it was an ... interesting way of working.
Over the course of my career, I've watched the evolution of agile as either a direct or an indirect participant. My first experience with it was at a large telecom, where I worked on a team that operated using agile development practices without realizing it, while simultaneously watching two separate development teams formally implement it[0], purchasing/adjusting things to include any Agile-hyphenated idea[1].
Mine was a team of six, responsible for about 11 critical internal applications. We didn't consciously decide on an agile approach, nor did we ever use that term[2]. It was simply the only way we could operate and manage to get anything done. Since we were serving internal teams with fully in-house-developed applications, we didn't have contract negotiations or anyone on the other side concerned with anything other than "when can we have it?"[3]
We were a geographically distributed team and did stand-ups virtually via chat-rooms, but outside of that, the "Agile Values" were all practiced, accidentally, nearly perfectly.
Shortly after that job, I joined a software development shop that was undergoing a large transformation. They'd been a two-developer operation started in order to help land customers for the larger consulting business -- now, when the product we were helping organizations deploy had some limitation, we had a team of folks who could write something to fill the gap. And because of the product, every customer had a gap. We quickly grew to seven and brought someone in to help manage the chaos with "agile". It worked about as well as I expect it can; this time we more closely followed a scrum process: daily stand-ups, sprints, retrospectives, etc.
I moved on to another shop a couple of years later. They were a larger organization doing bleeding-edge development -- my role involved writing software for large products that a third-party wanted to create but didn't have the in-house talent to do so. At this job, however, agile worked a lot like I had been hearing from other developers -- it didn't.
Here's the thing -- if you look at the three jobs, the one where I would have expected agile to be most successful is the last shop. They're mature, reasonably sized, not over- or under-managed, and the kind of work we do has a high degree of "unknowns" to it, making a "complete project plan" impossible until very near the late stages. I would have expected the first to be a complete failure given the size of the organization (15,000 employees, three official "company languages", very globally distributed, very difficult to hire high-quality candidates[4]).
All that to say: my current crop of co-workers and management are excellent -- the best I've worked with -- but agile doesn't work at all here! There are probably a million little "we don't do this exactly right or that exactly right" issues, but the biggest -- and in hindsight, most obvious -- difference isn't us, it's the customers.
At the first company, the only concern was "I need this as soon as possible, I can't pay for more people, and have very little for licenses or hardware." In recognition of how little they were often giving us to work with, and of our history of delivering, they left us alone[5]. If we took too long, an explanation was enough -- it didn't cost the company more than the salaries they were already paying us.
At the second company, we were developing software related to a third-party product (OCS/Lync/Skype for Business) that all of about four organizations, globally, specialized in. We charged more than double the hourly rate I'm billed out at now, and our customers waited months for project starts. We didn't give them a firm "here's what it's going to cost, out the door", and they had to work with us the way we work; that wasn't negotiable. We turned down work if they couldn't accept the terms.
At the third company, though we are doing innovative work, there is enough competition that our customers generally focus heavily on the price tag. Since our market tends to be non-tech companies, the idea of not having some form of firm estimate in place is a non-starter. Estimates require plans, plans require more knowns than unknowns. The problem is that when you have a commitment that you're going to complete something for a certain price, that becomes the driving factor behind everything else. Change is allowed ... provided a change agreement is negotiated ... so adjusting to changing requirements means "we don't do it", "we eat the cost" or "we send them a new estimate, which is usually much larger than the customer is expecting". For the software we're building, there's always someone else out there that will do that for them. We don't have the money to eat the cost, and change orders can often be relationship-souring experiences.
I've thought about this problem a lot as I make a transition to another company (one that is already operating in an accidentally-agile manner). There's room to implement some of the things I liked about agile here, and I'm inclined to believe that the nature of the work/customers will lend itself well to this approach. It makes me believe that a lot of the talk about agile being "broken" or "poorly implemented" has a lot more to do with the details. Maybe software development a la "the third company" simply can't be done any better. If you can't find a customer who will accept -- up front -- ambiguity in the final cost, you can't really implement agile in that context.
[0] Well, Agile/Scrum if memory serves.
[1] Agile Workspaces, which went over like a lead balloon when the developers with large offices were told they were moving out to tiny, identical, unassigned desks.
[2] As I'm writing this, I recall the term "RAD", "Rapid Application Development". I know it was a thing but nobody on the team could tell you what it was. It was used entirely because management liked how it sounded.
[3] This is similar, but not exactly the same as, "How long is it going to take?" which is often hiding a financial question: "How many of the hours that I'm paying you to build it is it going to take you to build?"
[4] Nobody wanted to work for us. We were both a "boring telecom" and a company that tended to want to hire developers at its big HQ, located in a region of the country with 1% unemployment for that job category (during the worst of times). And, of course, they struggled during the 2000s dot-bomb and "surprise fired" half the IT/dev staff one Friday (replacing them with overseas contractors), which gave them a terrible reputation among candidates.
[5] My team was the only development team reporting up through the Infrastructure/Support organization. They tried to move us under development every time someone up the reporting chain (anywhere in IT) noticed the out-of-place department, nearly every year (this was the second most common org-related meeting; the other was "why do we write this in-house at all?"). We loved these meetings; it was fun to get a large room of people to agree to something they thought was a "terrible idea" when they walked through the door.
The agile environments I've worked in have been preferable, and I'm for whatever gets things done. The manifesto and resulting method showed a clear understanding of the problem they were solving; what I don't think they understood is why waterfall "worked" and what "worked" meant.
Not sure if there's a word for it now, but I call it The Fold: above it, the organization is not agile and still operates on a dynamic narrative of representations, impressions, commitments, and dates without anything concrete - and to the people above The Fold, that is a feature.
The concreteness of agile becomes a set of forcing functions on people whose job is to sustain options ("optionality") - people whose job is not to be subject to forcing functions, but rather to issue them.
I'm all for the agile anti-governance when it comes to making things, but what an enterprise business really is, is a governance layer extracting value from capital in as abstracted and generic a way as possible, where products, people, and infrastructure are interchangeable black boxes in balancing that capital-management equation. Making "things" doesn't matter in an enterprise, as the whole business is just a narrative over abstractions of those things.
The flow of the factors and the yield on that capital is directed by the attention and narrative at the executive level between organizations - not by making stuff. What agile practitioners miss is that sometimes, a general sends a platoon on a (career) suicide mission because that's just what's necessary to sustain this yielding narrative that steers the organization. Agile was designed to preclude those missions, which are a key tool in an executive playbook.
That's just in enterprise. I have stumped a couple of agile coach friends with the startup puzzle, "for the company to survive, we need to get another round of investment. To get that, we need prestige customer X in our pipeline because the investors say they are in if this customer is. Customer X wants this feature because their contract with our competitor is coming up, and we have this short window to close this and replace them. It means we need an un-agile date commitment with some tech debt to demonstrate we can execute, or we run out of money. Agile me this."
Most of the answers are counterfactual: "well, you shouldn't have got into that situation." I even agree; personally, I wouldn't have got into that disadvantaged relationship with a single customer in the first place, because I know how it plays out. But I've watched a few execs die on that hill, and this is a common pattern when sales gets ahead of PMF - you're going to get snookered in front of a date. I'd argue that any method that depends on its own purity is really just a model or an ideology, not a solution.
What McKinsey will need to do is create the connecting glue between agile production and the narrative business atmosphere, so I'd say agile is not dead, it's being integrated.