Software estimation is hard – do it anyway (jacobian.org)
399 points by vortex_ape on June 30, 2021 | 230 comments



Here's my beef with estimates.

I can give you a really really accurate estimate, but in order to do so we're going to have to spend a lot of time going through the request, building and verifying actual requirements, designing the solution and then validating it.

The process will require dev resources, business resources and probably people from the support team and will take a lot of time.

I'm happy to do it. It's actually my favorite part of the job. But the business invariably doesn't want to spend the time and money to do that.

They'd generally much rather start with a fairly vague description of what they need and let the devs keep throwing stuff against the wall and see what sticks.

Good and accurate estimation is not just a dev function. It requires buy-in and input from the entire business stack.


> Good and accurate estimation is not just a dev function. It requires buy-in and input from the entire business stack.

And in my experience, when people don't want to buy in to doing the whole process up front but they still demand some kind of commitment, the easy way to handle it is:

"We can commit to a date and we'll finish whatever we finish by then, or we can commit to a scope and it will take as long as it takes. But we won't commit to a date and a scope unless we spend the up front time to first figure out every detail of what we need to build."

Stating it like that usually makes people realize how ridiculous it is to commit to something, but you don't know what, but you'll still do it by a certain date. And it makes them feel like you're willing to work with them and gives them some decision-making power.


> commit to something, but you don't know what...

The problem comes in when people think they do know 'what' it is, and they're just... adamant that you 'computer people' don't 'get it'.

I can't speak to all my clients - some are great - but I've had some in the past that just insisted I was being obstinate or obtuse or difficult by asking clarifying questions. Then they'll take hours/days obsessing over shades of blue for a screen, then... the morning of 'feature launch' they'll question why there are no notification emails for feature X, when... that morning is the first time those words have ever been spoken.

But... fortunately, I've not had project work like that in a while :)


We don't even need to pretend it is an outside-of-technical-people problem. Developers (and just people in general) are just as guilty of misestimating their own capacity for work.

By this I mean we forget we have other things - we don't accurately account for meetings and side-tasks. We underestimate the complexity of even simple tasks. We don't account for the flames we fight habitually without much consideration. We don't even recognise the amount of time we spend just relaying and receiving information. Those intriguing and important (and still work related just not explicitly about the task we have estimated) slack messages and water cooler moments aren't accounted for in our estimates.

Most estimates are inherently given on a "if I am in a perfect working environment with no interruptions" basis and we don't even acknowledge _that_.

This is all before we even begin to appreciate that even perfect world estimates are hard because, as Ron Jeffries said:

    Even with clear requirements — and it seems that they never are — it is still almost impossible to know how long something will take, because we’ve never done it before. If we had done it before, we’d just give it to you.


>Most estimates are inherently given on a "if I am in a perfect working environment with no interruptions" basis and we don't even acknowledge _that_.

I had the experience of working at a company that had the practice of rigorously tracking engineer-hours through a time-card system (this was for billing our clients). This way we always had a paper trail of how long we spent on a given task or project, and it was generally "against the rules" to bill hours to a project you weren't directly working on.

This led to having an awareness of that imperfect working environment, and was a powerful enabler of making good estimates.

On the other hand: that documentation effort wasn't free either.


Then you have the companies that take every estimate and cut it down to 1 month without changing scope. Doesn’t matter if the estimates are 3 months worth of work or 6. And now as an eng you have to either lowball your own estimates and burn yourself out to seem like a ‘team player’, or push back and burn yourself out from a losing battle.


I don't think the point of lowercased's comment was that devs don't underestimate tasks to the same degree that "outside-of-technical-people" might. They are saying that "outside-of-technical-people" don't have the experience to understand how difficult it is to give an accurate estimate, or how much pressure is put on devs to agree to a deadline and take responsibility for making something happen by that date. This is compounded by the fact that stakeholders are unwilling to define or commit to a detailed set of features or acceptance criteria. The bigger the project, the more painful and difficult this becomes. Stakeholders say "Make it faster!!!" and engineers reply "We agree!!! Any ideas on how?".


That's fair - my intention was to say that non-engineers aren't any worse than engineers. I've seen the same puzzled look and demands from development teams that are requesting changes from other development teams.

Open source projects are rife with developers demanding the near-impossible from contributors/maintainers, etc. (but there are plenty of examples of people not being dicks as well).

Additionally, they can often be worse (toxic) about it precisely because they are developers themselves, and so think they have that understanding and start acting the alpha.


Yeah, it's definitely different if you're working on a contract basis. At least with internal stakeholders, whether you're building product, doing enterprise integrations, or building internal tooling, you just need to make sure that you have exec backup (or you are the exec). If you have buy-in then you can basically just set your team's policy and be done with it.

For contract work you have to do the process over and over. Frankly, if I was doing contract dev, I'd state it as an upfront policy and move to quickly fire any customers that didn't buy in.


This is why I prefer to work for people who are smarter than I am, rather than the reverse.


Yeah, it's frustrating to get that push-back when it's like, those clarifying questions are just the tip of the iceberg and are required simply to get started -- basically the whole rest of the project is going to be resolving "nitpicky" details like that until it's done.


It sounds very much like the constraints of the Project management triangle:

https://en.m.wikipedia.org/wiki/Project_management_triangle

It's probably the first time I have seen estimation described so clearly as a choice between scope and date.

I'm still trying to figure out how something like SAFe works with the above; the gut feeling is "not great".


> "We can commit to a date and we'll finish whatever we finish by then, or we can commit to a scope and it will take as long as it takes. But we won't commit to a date and a scope unless we spend the up front time to first figure out every detail of what we need to build."

This.

I often find myself saying “you can be feature-driven, or you can be date-driven, but not both.”


> But we won't commit to a date and a scope unless we spend the up front time to first figure out every detail of what we need to build

Great, we'll be expecting you to complete that by the end of next week.


A common misunderstanding about software creation is that code is the desired result. Hence, estimates more often than not try to predict how long it'll take to write the code that produces a desired outcome.

However, in the end, code is just a very detailed specification of the design that produces a desired outcome. There's a reason why production is called production, after all:

https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...

Therefore, equating code written by a software developer with the final result is a little like equating an architect's blueprint with a building that was built according to that blueprint. The fundamental difference between those disciplines, of course, is that with software development most of the manual (as in: non-automated) work is done once the code has been written, whereas with construction the by far largest part of the manual work involved happens after the architect has created the blueprint.

On one hand, this is a problem of perception. On the other hand, though, it's quite understandable that customers don't want to invest the better part of the budget upfront, not knowing if the design will meet the requirements.

This is where agile management methods come into play. Those can be misused or, indeed, abused, too, but the idea of eliminating waste and adapting early is a sound one.


> However, in the end, code is just a very detailed specification of the design that produces a desired outcome.

Yes, exactly. The code is a technical specification that is so clear that a very dumb uncreative machine can follow it perfectly!


"A common misunderstanding about software creation is that code is the desired result." Here here! If I have my product owner hat on I don't give a flip about the code itself; I'm interested in the business value a properly coded application unlocks.


What a fine cut of beef this was! The very last sentence is what really gets me:

  > It requires buy in and input from the entire business stack.

In my experience, this makes or breaks everything. Unless you have external buy-in and external understanding of all the points steverb enumerated:

  - Inaccurate estimates don't matter: there is so much whack-a-mole going on that by the time you're done it just doesn't matter, as everyone is 3+ focus hijacks removed from when you started.

  - Accurate estimates cause heartburn: they wanted it weeks ago and now you're showing up with a number far larger than anyone wants to hear, but they can't tell you where to cut scope.
Additionally, the point of estimates is planning, and planning really only matters when it aligns priorities for multiple teams... the more alignment needed the more important this becomes.

Everyone has to do the dance or it's just no fun.


Well said. My current rule based on experience is that estimates should only be hours/days/weeks/months/quarters/years with NO numbers. It is to give a sense of scale and effort so it could be prioritized and/or modified. If they want exact dates, then it is like you said, several days/weeks to go get an accurate date. I only wish that the sales team had to commit to closing dates the same way software teams do.


That's actually a good hack. I tend to take a similar approach, but using approximate days in a fibonacci sequence.

Start at the high end. Will it take... years? No. Quarters? No. Months? Erm... no. Weeks? Maybe. Days? Unlikely. Hours? Ha. No.

So you end up with a range (days?-weeks-months). That's too broad, so: what could go wrong to turn this into a months-long project, and how do we avoid it (well, we could investigate X and Y, and watch for Z)? What needs to go perfectly for it to be days? (Well, we could... wait, days is unlikely.)

Those discussions about the high and the low to get "reasonable" confidence are super important.
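A toy sketch of that walk down the scale (Python; the answers are hypothetical, just to make the bracketing concrete):

    # Walk the scale from big to small; keep anything that isn't a flat "no".
    scales = ["years", "quarters", "months", "weeks", "days", "hours"]
    answers = {"years": "no", "quarters": "no", "months": "no",
               "weeks": "maybe", "days": "unlikely", "hours": "no"}

    plausible = [s for s in scales if answers[s] != "no"]
    print(plausible)  # ['weeks', 'days'] -> a days-to-weeks range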


Yeah, I always think about my structure as "some small number of h/d/w/m/q/y". I count 1 week the same as 3 weeks in terms of scale. ymmv


I’m a big fan of avoiding numbers in estimations. The moment a number is included, people start adding them together to create metrics that don’t show any useful information.


I really like your suggestion to use hours/days/weeks/etc without numbers. A similar suggestion I read for estimating (originally outside the context of software projects) was to use numbers with just one significant digit, so your estimate options would go 8, 9, 10, 20, 30, ...
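If it helps, here's a tiny sketch of that rounding rule (Python; the helper name is my own):

    import math

    def one_sig_fig(x):
        # Round a positive estimate to one significant digit,
        # e.g. 8 -> 8, 13 -> 10, 27 -> 30, 140 -> 100.
        magnitude = 10 ** math.floor(math.log10(x))
        return round(x / magnitude) * magnitude

    print([one_sig_fig(n) for n in [8, 9, 13, 27, 86, 140]])
    # [8, 9, 10, 30, 90, 100]

The allowed values get coarser as the estimate grows, which matches how the uncertainty grows.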

The estimation method I dislike the most is "t-shirt size". There is no clear relationship between S/M/L/XL. At least story points let you compare two tasks. If you try to give t-shirt sizes points (e.g. "M = 2*S"), then you might as well skip the t-shirt abstraction and just use story points.


The argument I've heard for T-shirt sizes is that if you go to numbers people try to add them together when that's just not how it works. I do agree that T-shirt sizes don't work that well though.


I see what you mean about trying to think abstractly instead of about numbers. But once you have t-shirt sizes for some tasks, what do you do with them? You can't compare them. You can't convert them to date forecasts. You can't use them for sprint capacity planning like you can with story points.


> I can give you a really really accurate estimate, but in order to do so we're going to have to spend a lot of time going through the request, building and verifying actual requirements, designing the solution and then validating it.

This is almost surely wrong for most developers, or else rewrites wouldn't fail to deliver within the estimated time so often. Rewrites by definition already have a perfect specification in the old code: just write something working the same way using a new architecture. But that is still really hard to deliver, apparently.

Of course it could be true in your case, but you can't blame the inability of software engineers in general to estimate tasks on that.


> Rewrites by definition already have a perfect specification in the old code

If the old code is a perfect specification, there is no need to rewrite it, because you already have a code base that performs to specifications.

Less glibly: random code is a terrible format for specifications, because it contains lots of things that aren't actually requirements of the specification, but implementation details. And a specification that contains lots of specific things that aren't actually part of the specification is not a good one.


In other words, a rewrite is never actually that. There should be a better word we use for this, like 'replacement'.


> This is almost surely wrong for most developers, or else rewrites wouldn't fail to deliver within the estimated time so often. Rewrites by definition already have a perfect specification in the old code: just write something working the same way using a new architecture. But that is still really hard to deliver, apparently.

This is precisely why rewrites fail!

I've never seen a rewrite where the devs had a perfect understanding of what the old code was doing. They understand the happy path, probably. Not the millions of edge cases through the years.

They only learn the requirements after knocking out the easy stuff and then getting into the gritty bits of bringing over all the edge cases that didn't fit their new mental model easily.


On the flip side, a full rewrite is really the only way to surface and understand all of those edge cases. People seem to harp on the idea that rewrites are bad, but I find them to be a natural part of the SDLC. It's a way to refresh the mental model for the devs currently working on it, since the original dev(s) probably moved on long ago. Updating the tech or architecture itself is just a byproduct.


That's an interesting take, and getting that context is valuable, but it seems like there really should be a way to do that that's less disruptive and destructive to "actually being able to deliver new features" than a full rewrite that stops the world for months or longer...


As someone who has argued against a rewrite, lost the argument, and then proceeded to do the rewrite, I would push back strongly on the notion that we have a perfect specification, which is just "do what the old thing did". This specification is woefully incomplete, of course, just as a vague requirements document for a brand new service or product is incomplete.

When someone proposes a rewrite for software, I ask him or her to think critically about the following questions:

1) What is the purpose of the rewrite? What do you hope to accomplish by it? What business objectives are furthered by the rewrite?

2) Explain in detail what is wrong with the existing code base, and why it is untenable to fix those problems piecemeal.

3) Explain in detail how the rewrite will avoid, overcome, or improve significantly on all the problems mentioned in 2).

In my most recent case, and as I expect in many others, I couldn't convince anyone to engage on any of these questions.

For 1), we were told that the org planned to build significant new features on the product and that the rewrite would help. However, the company's priorities changed significantly even as the rewrite was just getting started. By the time I left the company, I was not aware of any short- or long-term plans to continue adding functionality to the now-rewritten product.

For 2), the level of detail was along the lines of "the code base is awful. I hate it!" And that's about it. Question 3) is, of course, impossible to answer if you failed to answer 2).

Failure to answer these types of questions is also, in my eyes, a strong indicator that you don't understand the existing product very well. And why would we? The existing team that built the thing had all left by that point, which is, in my experience, the norm, not the outlier. It's normal for devs to build something for a few years and then peace out, either via an internal transfer to another team or a new job opportunity.

I believe that much of software is knowledge acquisition, and much of the cost of software maintenance is in dealing with the failure to transfer and maintain acquired knowledge over time. Rewrites can be spurred by ignorance, and that same ignorance can lead to the rewrite taking much longer than expected.


I think of it this way: all problems that become sticky problems got that way by being sticky. Rewrites encounter the sticky problem one way. When the spec is real, the spec is vast. "Do everything GMail does" is a lot of things.

In internal business app teams, the sticky issue is that no one actually understands the business problems well enough to articulate sufficiently. There's usually very little incentive to be the spec person, on either the technical or business side.

It might often be the same underlying issue. The difference with rewrites is that the "conversation" happens within tech teams, with no outside player.


Rewrites only really have a perfect specification if the original team does the rewrite. Otherwise there are likely all sorts of behaviours that the rewriting team is not aware of.


The business might not know enough for that estimate to be possible either. It reminds me of this quote from Andrew Wiles about mathematics:

"Perhaps I could best describe my experience of doing mathematics in terms of entering a dark mansion. One goes into the first room, and it’s dark, completely dark. One stumbles around bumping into the furniture, and gradually, you learn where each piece of furniture is, and finally, after six months or so, you find the light switch. You turn it on, and suddenly, it’s all illuminated. You can see exactly where you were." [1]

[1] Source: https://micromath.wordpress.com/2011/11/06/andrew-wiles-on-d...


He can ... really really accurate estimate, ... a lot of time .. building .. verifying .. actual requirements.. designing ... solution .. validating .... process will require dev resources.. resources... people.. support team.. a lot of time.. very expensive.. scary.. and I'll be nakedly responsible for my own mistakes.. </inside head>

"How about we do it that other way. You mentioned devs could keep throwing stuff against the wall. I like how that sounds."


The way I express it is that estimation is fundamentally a design process. You can't get an estimate without doing design work, to some appropriate level of detail. That design work is not going to spring forth from the void unbidden, so it needs to be paid for.

This does not mean that you need a big design up front, but it does mean that you need to be happy with a level of precision in the estimates commensurate with the funding that has been given to the design process.


IMO the best use of "agile"-style planning is to replace the estimation process with development. After 6-12 weeks you'll have some amount of working (if minimal) software, and a decent (by software planning standards) idea of how long at least the first major set of features will take. If you like what you see so far, and the estimate doesn't seem like it'll wreck your budget or timeline, you keep going. If not, you reconsider things or stop.

...Or you can spend that 6-12 weeks just estimating, involving more time by more people, have only a somewhat-better idea of how long the first set of features will take, and no working software to show for it.

In practice, however, I find few businesses willing to either have a 1.5-3 month estimation window, or to start development with none but a wildly vague estimate, amounting to a guess, waiting 1.5-3 months to find out what a somewhat-accurate estimate may look like.


The unfortunate truth is that as soon as someone in the process wants an estimate, rather than wanting to see delivered value, you've already got a situation in which "agile"-style planning is on the back foot. Aside from any business value there may be in the estimate itself, insisting on being given one is a political move designed to put software delivery organisations on the defensive: the framing is that IT is a cost centre, not a value source. That's so common that it's easy to forget that it's not the only choice.


My own personal anecdote:

I worked for a company that produced reports for insurance adjusters. Some of the reports were small enough to take an hour, and some large enough to take a week to produce.

For some reason the company was obsessed with the "month-end" cycle: people on the last day of the month would work overtime until midnight and occasionally skip usual quality control checks to get things out the door. (And then take the next day off or come in at noon or whatever.)

For reasons I will never understand, with three days left in the month a certain director would spend the whole day running around with a spreadsheet of all the reports that were open and ask people for a red/yellow/green estimate of whether they would be done. The next day and the next he'd repeat the process to get his most accurate estimate of the monthly revenue.

Then two days later, the controller would just hand him the actual revenue numbers for the month ended.


Perhaps an insider trader trying to outpace all the other insider traders?


This is generally why I don’t see a ton of value in trying to do accurate estimation. You’ll get better velocity and delivery by not wasting your time on all of the faux story point estimating that scrum and other systems do.

The most proven way to do accurate estimation is to base new estimates off of previous delivered work (i.e. we’ve built this suspension bridge design before and it took us this long, so we believe a similar bridge under similar conditions would take similar amount of time). This is not what most places do (and most places don’t spend a lot of time after the fact really seeing how long each phase and part of the process took).

I’d argue that estimates should be removed from day-to-day engineers and placed with program managers or others whose job is to see how long work has taken and make schedules and estimates off of previous known work.

The other way to get more accurate estimates is to build in systems and processes that require less and less custom work over time. So many software shops and tech companies never invest in this and every new major feature or project is heavily custom work.


I've seen counting tickets work fairly well. If the team is well-practised at breaking work down into tickets below a certain size, chances are good that the number of tickets completed per month will be fairly stable. That reduces the problem to "can you break this piece of work down into tickets, please", which isn't framed as an estimation problem with any attendant "no, that estimate's too big, try again" pressure. The bonus is that you can tell when a team has a stable enough process for this to work by looking at their ticket history over a few months, and the "estimate" (projection, really) is provided by the team themselves, so you don't get into a toxic situation where a team feels they're being held to an estimate someone else gave on their behalf.
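At that point the projection really is just arithmetic on the team's history. A minimal sketch (Python, with made-up numbers):

    # Tickets the team actually closed per month, from the ticket history.
    history = [22, 19, 24, 21, 20, 23]   # made-up numbers
    backlog = 104                        # tickets after the breakdown

    rate = sum(history) / len(history)   # ~21.5 tickets/month
    low, high = backlog / max(history), backlog / min(history)
    print(f"~{backlog / rate:.1f} months (range {low:.1f}-{high:.1f})")
    # ~4.8 months (range 4.3-5.5)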


Depends on what you mean by accuracy. There is no bullseye with estimates. It's a means to set expectations and to show that you've thoughtfully analyzed the work.

Part of your argument rests on reducing novelty. However, that's already covered by the myriad of manufacturing process improvement books. Programming is unique because every project is novel. Any migration, any integration, is going to be heavily dependent on company culture and environment.

Like it or not, estimates are necessary to weight A against B, to scope, to plan marketing releases, to compete, to sell, to budget, etc. You may not get value from it as a coder, but that doesn't mean that there is no value in it.


This is 100% accurate and it's the reason that I prefer the SAFe (Scaled Agile) approach to estimation and planning over the common Scrum approaches.

Doing the bulk of your planning during a 2-3 day PI Planning event lets a lot of people dive into a few things, preparing an estimate for the coming quarter, mapping dependencies across teams, lining them up with other planned work and outlining risks to the plan. Then the developers get to explain the plan to upper management, discuss any potential revisions and get moving...with everyone on the same page.

This also keeps any estimation beyond the current quarter firmly in the realm of subject-to-change.

That's the most critical part of it. Tech people and business people being out of alignment on expectations is where everything goes sideways and all of the friction comes from.

Out of all of the methodologies I've come across in my career, this is the only approach I've seen that really balances development realities with a level of "enough" future planning to help business people make informed decisions.


In practice, devs are still writing code on PI days.


How so?


With computers and dev tools, trying to get the current PI complete.


If that happens you’re just supposed to include what is still to be done in the next PI while you stop everything for PI planning.

There are too many people involved to just skip it to keep working.


And yet!


It's also very useful for personal projects, where you are the entire stack. Strangely, it's still difficult to get that buy in...


That's why I try to give a ballpark first, not an estimate, if I can get away with it. I would say "this can cost anywhere from $x to $y units (time/money)" where $y-$x is usually a big range.

Then we start breaking it down further if there is interest and for that we need everyone involved like you said. Not always easy but sometimes it works.


At my last job, I think 15-20% of Engineering Team's time was allocated for estimations (including multiple back and forth to clarify specs with product etc.)


Having done some apartment renovations recently, I find the same thing with architecture and architects.

Without a detailed plan, a lot of space is wasted, the apartment ends up less nice, and you often end up reconstructing stuff (if it is small and non-standard).

The process takes 10-15% of the total cost of the project. Do people want to spend that? No.


> But the business invariably doesn't want to spend the time and money to do that.

Most engineers can actually estimate things fairly well when they take the time to iterate on a PoC and gather all sorts of details. Estimation fails when management has unrealistic expectations, e.g. asking for an estimate immediately after a proposal, or some set of initial documents are written.


Estimating is hard. I like doing it too, but it takes me several days to get an accurate estimate. It takes a decent chunk of meetings, figuring out every task that needs to be done, plus major risks and blockers.


Estimation that isn't based on previous data - I think the article that follows this one refers to it as "Evidence-Based Scheduling" - is almost entirely a waste of time.

We analyzed our five+ year history of estimates vs actual time, and our standard deviation was larger than our mean. It was ridiculous how wrong our estimates were. The problem was in what was being estimated - coding time. Developers would get asked how long a task would take, and only think about the time spent sitting in front of a computer, typing code, and not the other time - waiting for other resources or people to finish tasks that you depend on, sick days, software and hardware issues, etc.

The only way to accurately take those unforeseen factors into account is by analyzing previously completed tasks that have similar scope. Even then you can only get close.

A couple people have mentioned weather forecasting as a similar endeavor - but meteorologists don't just guess, they analyze previous data.

Estimation that isn't based on concrete data is a fool's game.


Years ago, in a former life, I built a project boilerplate that included all the non-development tasks required to build and ship a new version of a product, and I used it to build out all my project plans.

There's all this (important) "guff" that people in development often don't think about and don't care about that you absolutely have to take into account if you want to get even close to a sensible ship date. Some examples: updating license agreements; creating new records or updating them in your licensing system; providing various kinds of information and training to sales and marketing and coordinating with them on launch plans and materials; ensuring your support team is trained on the new product or version; budgeting time for support tasks on existing products; running your early access program, including gathering feedback and implementing changes based on it; and on and on.

The other thing I'd do is front-load all the riskiest work: that way if something goes wrong or is more complex than expected you know early, can communicate early, and there are no nasty surprises late on that might have a negative impact on other parts of the business or customers. You also have plenty of time to come up with contingencies to rescue the situation if it is somewhat time critical.

Even then I'd offer up a "hurricane model", where I'd have an earliest ship date, latest ship date, and most likely ship date, and that window would gradually narrow as the project progressed, the same way certainty about a hurricane's near term track increases as time goes on. Obviously that might not hold true if there's a significant shift in requirements. With our projects what it meant was that by the time we were at the point where we needed to start coordinating across teams around launch activities (generally about three quarters of the way through), there was enough certainty to actually pick a release date that everyone else in the business could work to.

And what did I base all this on? Well, past experience: actual data, even if it was fuzzy or there were too few points for any kind of statistical significance. The key point is that all the work required to ship the product, whether inside or outside of our team, was included in the plan.

Estimates and (increasing) certainty are often quite important to other areas of the business so I would say you can't ignore them, certainly not if you want your voice(s) to be taken seriously in the wider business.


> front-load all the riskiest work

This is important.

I'm in the middle of a dev cycle where I'm doing the riskiest work, and other people depend on it.

Unfortunately, I think I allowed myself to get pulled into the design process too much, when I should have been prototyping like months before I started doing so in actuality.

I allowed myself to get blocked by a bunch of design decisions I could have easily adapted my implementation to conform with, and in turn blocked a few people downstream of my (risky) work.

I'm lucky that what I did is pretty "flashy," because I think management is just happy to have anything at all for the feature I was working on.


You are the kind of PM that I will gladly give estimates to.

The others, not really.


Sounds like you re-invented PERT.
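For anyone who hasn't run into it: classic PERT collapses a three-point estimate into an expected duration and a spread using a beta-distribution approximation, roughly like this (made-up numbers):

    # PERT three-point estimate.
    o, m, p = 4, 7, 16              # optimistic, most likely, pessimistic (weeks)
    expected = (o + 4 * m + p) / 6  # -> 8.0 weeks
    stdev = (p - o) / 6             # -> 2.0 weeks of spread
    print(expected, stdev)          # 8.0 2.0

The narrowing "hurricane" window is essentially re-estimating o and p as the project progresses.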


Kanban lends itself well to this.

Breaking down work into similarly sized tickets/units can, over time, be used to predict delivery/capacity (which one can calibrate using cycle time).

Even neater, with enough data it becomes possible to use Monte Carlo simulations to give you confidence intervals on how much you can do, or how long you will take to do X amount of work.

https://kanbanize.com/kanban-resources/kanban-analytics/mont...

I find this approach a lot less time-consuming, and more predictable and reliable.
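The simulation itself is only a few lines. A minimal sketch (Python), assuming you have a history of tickets closed per week:

    import random

    weekly_throughput = [3, 5, 4, 2, 6, 4, 5, 3]   # made-up history
    remaining = 40                                 # tickets left to deliver

    results = []
    for _ in range(10_000):
        done, weeks = 0, 0
        while done < remaining:
            done += random.choice(weekly_throughput)  # resample a past week
            weeks += 1
        results.append(weeks)

    results.sort()
    print("50% confidence:", results[5_000], "weeks")
    print("85% confidence:", results[8_500], "weeks")

You get a distribution of completion times, so you can quote percentiles instead of a single date.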


    Breaking down work into similarly sized tickets/units can,
    over time, be used to predict delivery/capacity
IMO "can break up work into similarly-sized units" is equivalent to "can estimate accurately".

Re: that article - I can't imagine many things LESS accurate than "we have 104 tasks on the board and each team member's cycle time is 2 days so we can finish all the tasks with 10 people working for 20.8 days". Yeah, it makes for a nice graph - but it omits important details like dependencies...


That's not how it works, though - you never deal in terms of individual team members. The team is the unit of delivery. Otherwise you end up with people who should know better putting more people onto teams to "help".

Teams above a certain maturity level do often settle on a certain number of delivered tickets per month, and when you're looking at that sort of resolution, dependency problems and other factors like those you mention are represented in the data. It's not so much a measure of how productive the team is, it's a measure of how much work the team can get done embedded in the organisation they're in, which covers off their ability to resolve blockers and communicate with other teams.

There's a very different cognitive framing if you count tickets, too: you're not saying to the team "come up with a number, you're going to get shouted at if it's wrong, and you've only got 10% of the relevant information to hand", you're saying "do your usual design process, and we'll use the output to make a projection based on the history." Functionally it might be equivalent to "can estimate accurately" but it doesn't work like that when you're the one in the hot-seat.


True, I do find it easier and less time-consuming to break a large piece of work into roughly 2-day chunks (for example) than using different sizes or trying to decide if something is a 3 or a 5.

In the end, regardless of whatever scoring strategy you use it should always be team centric, rather than individual.

It is possible to say: over the past 6 months, across X tickets, our team has had a median/average cycle time of around 2 days. If the team breaks future work into similarly sized chunks, it can fairly confidently predict how long it will take to do X more tickets, assuming similar conditions.

The added benefit of using small chunks of time is that one does not need to be super accurate (in most scenarios); a chunk can be 1 or 4 days. All that matters is that it's possible to give a window of estimation (based on actual data, not guesses) with a certain degree of confidence (which will naturally become even more consistent over time).


> IMO "can break up work into similarly-sized units" is equivalent to "can estimate accurately".

Yeah, agreed. What I've always seen is "break up work into logical units, ideally as small as possible" which always ends up with a mixture of tickets of different sizes.


Do you still have access to the data? Can you do me a favour? Can you plot (actual time / estimate) and see if the result is a lognormal distribution with a median very close to 1? In my experience, it always is.
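In case it saves you a step, the check I mean is roughly this (Python, with made-up pairs; plug in your own data):

    import math, statistics

    estimates = [5, 8, 3, 13, 2, 8]   # made-up; use your history
    actuals   = [7, 6, 9, 15, 2, 20]

    ratios = [a / e for a, e in zip(actuals, estimates)]
    logs = [math.log(r) for r in ratios]
    print("median ratio:", statistics.median(ratios))    # ~1 if my hunch holds
    print("mean of log-ratios:", statistics.mean(logs))  # ~0 for that lognormal

If the log-ratios look normal and centered near zero, then actual/estimate is lognormal with median near 1.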


How good are you at judging scope?


Paid overtime will fix scheduling. Scheduling is bad because the cost falls on the employees, not the employer. If crunches resulted in time and a half, double time, and triple time, scheduling would get fixed.

As I've pointed out before, film scheduling is an established discipline. Making a movie is much more complex than a software project. There are a lot of moving parts. Things get changed. There are people problems, weather problems, and transportation problems. Most importantly, if a film project goes into crunch mode, everybody starts getting paid overtime. This reduces the tendency to underestimate.

There are also third party estimates. Hollywood has something called "completion bonds". A completion bond is an insurance policy for the investors. Either they get a showable movie into theaters, or the completion bond company has to pay the investors. A completion bond costs about 5% of the cost of the film.

Completion bond companies do their own estimates. Estimation inputs are "script, budget, shooting schedule, (and) résumés of key crew." To survive, they need a net error near zero - they must overestimate and underestimate about equally. Consistent underestimation would put them out of business.

Since they do a lot of this, they have scripts and financial data from previous movies. They can look up "car chase, metropolitan area, 2 minutes screen time" for how much that cost the last 50 times someone did it. They also have director info, like "director X averages 2.5 takes per scene". All this info is collected across multiple film companies.

The completion bond company has the right to intervene if the project starts to go over budget. Worst case, they can fire the director and take over the production. This is rare, but it happens. "Malcolm X" by Spike Lee (1992) and "Bad Girls" are examples. "Malcolm X" was an epic movie, a bit too epic - it runs 3 hours and 22 minutes - and somebody had to say no to Spike Lee. "Bad Girls" (1994) was just a botched production, and the bond company put in their own director to try to salvage something. It still lost money, but did reach theaters.

That a completion bond company can fire the director puts teeth in this system.


> Making a movie is much more complex than a software project.

Software design and development is more like writing the books that the script was based on.

Think Song of Ice and Fire, but 100,000 pages long and written simultaneously by a hundred authors.


Software Dev is The Wheel of Time, got it


This is the truth. Change doesn't happen until the decision-makers feel the pain of their decisions.


One major “secret” to advancing in a technical career is learning how to give accurate estimates. It certainly has been for me...

Seems like there's a hint of survivorship bias. The author will eventually give a bad estimate. There's no secret or trick; otherwise it would be widely known by now.

Even the video games industry has been coming around on this in the last decade. This is a sector of the software industry famous for setting aggressive, impossible deadlines for itself. It has ruined countless lives trying to hold to them. The smart ones talk about milestones and road maps. They don't announce release dates until they're basically done and ready to cut the release.

This is the conclusion you come to after you churn staff year after year and people leave in droves and never come back.

An aggressive sales team can ruin a small company. If what they sell is a deadline and promises they can't keep, your team has no control or autonomy. People feel good when they have autonomy over their work and feel in control. They get burned out when their company/career is on the line because an estimate they were forced to make blows past due to forces outside their control.

I always recommend selling on what you can control. Promise only what you can deliver: your skills, experience, and knowledge. You can try to estimate how long it will take you but you will be wrong 66% of the time. The people in those studies were also as smart, or smarter, than you. There is no secret.


There are ways to control estimates - just have good controls on time, scope and cost and make sure every stakeholder is aware of and accountable for the impact of their actions in changing expectations.

Someone (sales team, developer or anyone else) wants to radically change the scope halfway through? They have to cut the scope or adjust the schedule to compensate. Software is infinitely malleable so it's tempting to just accept any change that comes along, but with unmanaged changes and complexity come missed schedules and blown deadlines - it's all very predictable and avoidable, and usually caused by a dysfunctional organisation without proper communication or accountability.

This is not rocket science, and while there are no secrets or perfect estimates, there are certainly ways to break down most work until estimation is trivial. Sure, there are exceptions (research, very difficult new problems), but for the majority of business/consumer software I've encountered, a proper schedule is possible and software can be delivered on time and on budget, as long as the scope is properly controlled, the work is properly subdivided early on, and someone is managing the entire process and keeping communication open with stakeholders, so that when things change or go wrong the appropriate action is taken and everyone is aware of why.


Nothing about what you've said wasn't known and employed by people at companies who've made budgets and estimates that went over. This is why estimates and budgets go over. Everyone thinks they have it figured out.

The Taylorist principles of management don't apply to knowledge work in my experience. Refine your requirements gathering and estimation processes all you want. Your estimates will still be wrong most of the time. They're educated guesses and we don't have the foundation to make accurate ones.

One thing I think you nail though is that communication is key. A lot of good comes from being honest, transparent, forthcoming, and supportive.

I never recommend software teams and companies make estimates. I say break down tasks, set milestones, set learning goals, and get to work. Communicate progress frequently and keep feedback coming back to the team. Working software talks. When you can see the goal in sight, that's the time to start talking about release dates. Once you have that first couple of releases, then you can start developing a cadence. It's all based on evidence and what you know, and making promises you can keep.


> The Taylorist principles of management don't apply to knowledge work in my experience.

The simple principles above do apply in my experience; presumably there is some difference in practice.

Maybe everyone thinks they have it figured out but some people demonstrably do, because they deliver software on time which meets requirements. I agree with delivering software early and often, and that is a solid basis for delivering reliable estimates and promises you can keep.

Where estimates fail IME it's down to lack of accountability and communication among stakeholders, which leads to constantly shifting and unclear priorities and requirements.


A good developer also has to know how to spot when a project will fail.


A big problem is that estimates given by developers are never treated as estimates but rather as quotes. If you miss your estimate then your employer may expect you to work extra hours to make up for the gap. The best strategy is to under-promise and over-deliver.


I learned early in my career to never give an executive a completion date or time estimate I hadn't thought long and hard about and had full confidence in. They won't remember any of the contingencies you tacked onto your estimate or the additional features they insisted on adding to the project after you talked, but they'll never forget when you said it would be done. It's far better to annoy them in the short term by saying what you'll need to provide an accurate time frame than it is to guestimate something that will bite you later.


Even estimates that are requested explicitly as estimates and not quotes have a tendency to be used for planning other dependencies.

Once other dependencies are scheduled around an estimate, missing that deadline incurs rescheduling costs, so no one wants to see it missed.


Some of my biggest arguments as a lead were when I'd sit in one meeting and my team would be promised that these estimates weren't going to be held against them and it's just for a rough understanding. Then I'd go to the next meeting where those 'project managers' would be using the estimates to try and plan months into the future, as if those estimates were 100% accurate.

Then when it turns out estimates are out, I'm stuck in another meeting where they melt down about how they're going to explain overruns to their bosses. Utterly predictable madness.

Then they had the nerve to get arsey when we started refusing to estimate.


This is my experience, over and over - to the point where I often get to the point the article states as "just … give up".

Then there's the "let's split it up into pieces first". This is where, instead of rolling one 100-sided die, we flip 100 coins to get a better estimate.

But my absolute favorite is when the Project Manager asks for an estimate and you give a number and if they think it's too high or too low, they will keep asking until you give them the number they were looking for in the first place. Why even ask? Because now it's your fault if it's wrong!

(side note: these things are definitely not the right way to operate and are signs of a toxic environment - but there are solutions, so there is hope!)


This is part of the reason I refuse to give estimates.

As TFA says, you do get better conversations with the rest of the business if you refuse to give an estimate.


And yet, in my experience, even people who recognize this focus hard on improving estimates and "accountability", but rarely seriously try to reduce dependencies.

I tend to operate on the opposite principle, to the point that I believe it is worth doing substantial amounts of seemingly redundant or throw-away[1] work to turn hard dependencies into soft dependencies. But you have to have both a business organization and a software architecture that can support this.

[1]: Really, "throw-away" just means "temporary", and all our work is temporary—the question is just how temporary.


80/20 rule: “Nothing – and I mean nothing – in IT takes less than 80 hours, and whatever you think it’ll actually take, multiply it by 20, and tell management that. You see, 80/20.”


Nothing would get built if 100% accurate estimates were given. Finance would say it's too expensive and that would be that.


Awesome.

I just found another competitive advantage in my startups.


That's why scrum changed the wording from "estimate" to "forecast". It's more like a weather forecast: you'll often get close, but sometimes it's completely off.


You can give a range, e.g. 3 to 5 months.


I tried that, and it was almost (but not quite) invariably held to be the lowest of the range, and still construed as a deadline. "3-5 months" becomes "3 months or we have to reschedule other stuff".


Don't do it if you can avoid it, especially if the company culture is to treat rough estimates as promised deadlines.

If you can't avoid giving an estimate, try to pad it as much as possible, add every single uncertainty to the task list, and estimate very conservatively. Add enough time for testing, communication, and work on change requests.

And even after the original estimate is approved / published be sure to communicate updates to the estimation after every single change request, question, bug, new insight.

And if something by chance takes less time than estimated, make sure not to decrease the estimate, but to use the difference as a buffer.


Good estimates are critical to plan dependent activities and setting customer expectations. If we don’t estimate then we are saying that software engineering is not an engineering discipline. You can have that view, but in my experience it does not lead to good outcomes.

If I can assume you’re an SDE for a moment, I actually agree with the part about not providing estimates. Recently I’ve done the initial estimate for projects exclusively between SDM and PMT with no engineers involved. It has been a lot more reliable, and it seems that SDEs are very happy to be absolved of this responsibility. This does require SDMs and PMTs with a good amount of experience.


>If we don’t estimate then we are saying that software engineering is not an engineering discipline.

Not a mature engineering discipline.

If you can budget a planning phase in development that allows you to quickly explore the unknown unknowns and known unknowns to investigate critical bottlenecks and uncertainty before estimating and you're able to lock that down with a set of features, then I think you can create decent estimates.

That's rarely how any development environment in current existence operates though, at least from my anecdata. Most are 'agile' shops that can drastically shift direction, feature/scope creep is a continuous problem, and there's constant time pressure exerted by management on development teams in the hope of optimizing a bit more productivity out of their high price tags, which leaves no slack space for them to dig into these issues (except maybe some personal time).

The entire modern development culture in most business environments is designed in a way that makes any sort of good quality estimation nearly impossible. In the best of conditions it can be hard but manageable, most environments are the worst of conditions.


> Not a mature engineering discipline.

How many projects in mature engineering disciplines are accurately estimated? I get the sense that this is a general problem, even outside of software.


> Not a mature engineering discipline.

The concept of standardized parts and assembly lines is less than a century old. How accurate do you think their estimates were before they figured out the basic principles of repeatability?

The "mature" engineering disciplines literally just punted on the problem for several centuries, only giving birth to systems engineering [1] in the mid 20th century because they were so bad at it and everyone's back was against the wall in WWII. Before it became its own recognized field, project management in engineering was worse than it is in software now.

Not coincidentally, Bell Labs - the company that basically kick-started the computing industry - was also the biggest player in the formalization of systems engineering. Since then it's been adopted as the methodology for managing engineering projects by everyone from civil engineers to NASA [2]. Any estimate you see for a nontrivial project from the past half century isn't the result of mechanical, civil, or electrical engineers but the product of systems engineers.

[1] https://en.wikipedia.org/wiki/Systems_engineering

[2] https://www.nasa.gov/connect/ebooks/nasa-systems-engineering...


I don't think "real" engineering is necessarily any better at estimating. Take a look at any large construction project and the norm is to be over time and over budget.

There are lots of reasons for that, which can fall outside of the scope of engineering, but the same is true for software.


I want to second this, I worked as a mechanical engineer and never had accurate time estimates there either. Estimates of how long the work will take will be wrong whenever there are new problems to be solved, which is all engineering worth the name.


> customer expectations

My favorite moments from events like WWDC are when you are introduced to some really cool feature or app for the first time, and then the speaker goes "available today". The fans love it, the news sites love it, whenever you can immediately try out something the hype for that product goes up 10x.

When you only show the product when it's finished, you no longer need to estimate anything.

> dependent activities

This is where I think the real problem is. If you have a feature that no one else depends on, if you have a story that nothing else depends on, don't estimate it. It doesn't matter. That is a nice to have. It'll arrive in some sprint eventually.

If you have a feature that other things depend on, before estimating it, you should ask if it is possible to create that feature without any dependencies. Could the other team you are working with code against your current product, so that when they update and you update, the new feature turns on? Can you do the same for their product? Great, we don't need to depend on each other's updates.

If we had a mature engineering system, I don't think we would ever have any dependencies.

Perhaps you aren't perfect, and there is no way to get rid of the dependency. Go ahead, estimate it. Then double that estimate. Then convert it into the next larger unit: 1 day becomes 2 weeks; 2 weeks becomes 4 months. There, now you can build schedules around it.


Regarding the WWDC comment: of course an estimate was needed in the first place, because WWDC day was also the project's deadline.


Here is the secret: If it misses the deadline, it's going to be announced at the next event.


True. And the pressure not to do that would be extreme. So it doesn’t help the ‘I prefer not to give estimates, like Apple’ argument at all.


My point was that if there is a hard deadline that doesn't depend on the team, estimates are simply not relevant.

If the team provides an estimate that falls one month after the deadline, the pressure for them to change their estimate will be extreme as well. In reality, if management has already decided when the product should be released, they don't give a fuck about an estimate. What they're interested in is for the team to take ownership of a decision they didn't make, and that's what the estimate is for.


> Good estimates are critical to plan dependent activities and setting customer expectations.

One of the things I dislike when I hear this is that it says nothing about the difficulty or cost of getting the estimates. Yes, good estimates are extremely valuable, but solving the halting problem would also be very valuable. That doesn't mean it's going to happen.

A big issue is that to get good estimates, often we need to solve most of the hard parts of the problem. How do we account for the time needed to get the estimates?

> If we don’t estimate then we are saying that software engineering is not an engineering discipline.

I'm not sure I believe this. There are plenty of non-software engineering projects that are late and go over budget. It wouldn't surprise me if that was the norm. Certainly with construction projects it happens all the time.

I'm actually curious about which engineering disciplines actually come up with good estimates. When developing a new type of airplane, or a new engine, are estimates typically accurate? It seems unlikely to me.


> but solving the halting problem would also be very valuable.

The halting problem is mostly a non-problem in settings that really need a proof. We have non-Turing-complete languages that let us produce programs that provably halt.

That they are not mainstream tends to show that we don't really need that proof very often actually.


I find estimation a useful process (at the user story level). Sometimes team members will have wildly different estimates, and then there is discussion about why, and often new information is learned. I know this is basic agile stuff, but I find in practice it works well.


The discussion can be had independently of the estimation.


To estimate accurately, developers and product management must have collectively discussed the requirements to a level everyone clearly understands. Without this, an estimate is as accurate as a weather report for 90 days out.

A good estimate may also require "spiked" to test concepts to get to a reasonable estimate.

I'm currently on a project that is terrible: the development team provides estimates without even reviewing the requirements, and it is now "18 months late" with many pissed-off stakeholders. It is a caustic situation. People are quitting the company due to the politics, frustration, and pressure.


For most saas out there, once all the requirements are that well understood the product is already 70% built.


This is very true.

A great product manager is as priceless as a great developer.


It’s just “a spike” or “spike”.


> It’s just “a spike” or “spike”.

I imagine the OP meant "spikes" and either accidentally hit the 'd' instead of the 's' or simply fell victim to autocorrupt.


Yes, thanks!

You'd make a great developer, diagnosing the cause of the error before confirmation ;)


Yea sorry, autocorrect strikes.


Look, it's just a requirement to know the weather 3 months in advance. A lot of money is riding on this: agricultural impacts, shipping, the effect on consumer behavior and power generation requirements. Doing without accurate weather reports 3 months out is just not acceptable. Sure, it's hard, but you have to just do it anyway, because it's so important.

...

Except weather reports 3 months out are not reliable, unless they are so vague as to be meaningless. I have frequently encountered people who claim to be able to give accurate software estimates. Inevitably, this means that they simply know how to cut requirements as the promised ship date arrives. Which is a useful skill, but not the same as accurate estimates. I have stopped arguing the point, because it doesn't matter what the reality is, business concerns mean that an estimate is required sometimes. But it doesn't mean anything more than the weather estimate for 3 months from now. If you got it right, you were mostly lucky.


It's so weird. I know it'll be cold in February in Minnesota. I know this because I have experience, and that experience translates into my ability to give guidance. If you want to be taken seriously, you need experience to be able to estimate.


4F or 12F? Come on, which is it? If it's above 8F I can get away with a cheaper antifreeze next year, but I need to put the order in next week.

"Cold" is not an adequate analogy for the sorts of estimates that non-delivery parts of businesses seem to think they are entitled to, and it's not reasonable to say people can't be taken seriously for rejecting that trap.


This is the "it never rains" strategy of weather reporting. On most days, it's not raining, so you'll be right most of the time.

Some may claim they have personally experienced rain, so your model must have some faults, but just ignore those plebeians.


I absolutely loathe software estimates. They put engineers in a damned if you do, damned if you don't scenario that isn't even in their full control anyway.

I think this is one of the things that Shape Up got the most absolutely correct - inverting the relationship between an estimate and time.

We've been asking the wrong question all along: Instead of asking "how long will X take?" you should be asking "How long do I want to spend on X?". It changes the entire dynamic of the situation to one that allows management to see the trade-offs in a given set of work, and lets the engineers tune scope to match expectations.

Any approach that still asks "How long will X take?" is dead in the water.


I'm the opposite. I find putting an estimate together makes me think through the design and ensure that the scope is limited enough to be viable within the schedule. By having a rough idea of all the tasks that have to be done, progress against those tasks can be measured. Knowing how far along the project is then gives feedback into development to know when and where pressure needs to be applied.

Granted, this whole thing works for me as I happen to thrive under pressure.


> thrive under pressure.

I actually would say I do too, but I also appreciate deadlines for the time compression effect that they instill that you can't get from anything else.

> I find putting an estimate together makes me think through the design

I agree, and I'm not advocating for ignoring all of those details for a ticket or project. I'm just saying that the important thing is to flip the conversation. All of those things should be known regardless of the time question.


I kind of agree with the premise of the article. Having "enough experience" can lead to more accurate estimates in some cases, because the thing that makes estimates inaccurate is the unknown aspects of the story. More knowledge about the domain, the codebase, the expectations of the client, and so on does make your estimates better, because that knowledge simply means there are fewer unknown aspects.

For stories that have significant unknowns you'll still be wrong though.

However, even then it's still worthwhile providing estimates. The benefit comes from knowing how wrong you are. If you look at a story and have a gut feeling that it's quite simple, but it actually takes far longer than the estimate, that's useful data. It tells you that there's some aspect of the story you weren't expecting, which can point to where the unknowns lie; or it means you thought the code was simpler than it really is, so maybe there's some technical debt to refactor; or it means you failed to fully understand how far-reaching the story was, so you should have done more upfront research. All those things can inform the next estimates you provide.


On our team, not only is estimation worthwhile, but the process of coming to a collective team consensus on an estimate is very worthwhile.

We have our product owner in the estimation session (we use planning poker), we discuss requirements, assumptions, sometimes even a potential approach or two to building a solution. That process frequently leads to discovering unknown unknowns, new requirements, and sometimes even reevaluating whether we need the change.

Estimation discussions involving a big chunk of the team can be truly useful.


Can you just keep the process but skip the estimation part?


I'd like to see the author's quantified track record of giving accurate estimates before I take their advice.

In my experience, software project estimation is the thing that everyone thinks someone else must be able to do well, and that they could do it if only they had more discipline. Then we get the tired old advice about breaking the project up into smaller chunks. But all the studies I've read of actual estimation methodologies show something like 300-600% error rate.

People think this must be doable because they want it to work, and they want an estimate. It's the same way that people thought one witch doctor was better at curing disease than another, when in reality none of it really does what's claimed.

I'm convinced that elaborating the spec in enough detail is the work of software engineering, and once you've done that fully you've done the whole project.


A problem is that people have different things in mind when they ask for estimates.

For some, an accurate estimate would mean that 50% of the time you're over and 50% of the time you're under. But management too often takes this type of 50/50 estimate and then makes all kinds of promises and contracts based on it. If you want an estimate that we're going to hit 99% of the time, it is going to be much, much higher. Many places I've worked, management would balk at any discussion of percentages like this when making estimates.

As the saying goes: what you say is "there's a 50% chance we'll be done in 6 months, if there are no distractions". What they hear is "I promise we'll be done in 6 months".
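To make the gap concrete, a rough sketch, assuming task durations are lognormally distributed (a common modeling assumption, mine rather than anything claimed above):

    # If a task's duration is lognormal with a median of 6 months, the
    # "50% estimate" and the "99% estimate" are very different numbers.
    import math, random

    random.seed(0)
    samples = sorted(random.lognormvariate(math.log(6), 0.6)
                     for _ in range(100_000))
    p50 = samples[len(samples) // 2]
    p99 = samples[int(len(samples) * 0.99)]
    print(f"P50 ~ {p50:.1f} months, P99 ~ {p99:.1f} months")
    # With sigma = 0.6, the P99 lands around 4x the median.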


IME they hear "there's a 95% chance it'll be done between 5-7 months", when the real 95% interval is more like 2-15 months.


One strategy I've found useful when asked for an estimate is to ask back: "What do you need the estimate for?" Often, this leads to a useful discussion, and we can discover things like:

- They don't actually need to estimate, because the task can very obviously be completed within the previously agreed window

- They're simply trying to prioritize two different features, so the estimate doesn't need to account for who will be working on the project, known vacations, meetings, etc.

- The business is trying to use the estimate for strategic planning, so a high-confidence estimate, or multiple estimates (optimistic/normal/conservative), is actually needed.

It's similar to when someone comes to engineering and asks "Please build this button for me" - it's always crucial to ask "Why?" and understand the problem they're trying to solve, since often what they've asked for is not what they need.


I used to do real estimates and was borderline prophetic with them. It didn't matter. I stopped, and now I just use a rule of thumb plus two weeks, two months, or two years depending on the project. Professionally it changed nothing. For me it made my life much better: now projects come in "early" and make customers happy, instead of people frothing at the mouth because something was a "day late".

Why do we as an industry put up with this? Lawyers Don't, Doctors Don't. Pretty much any Degree based industry doesn't. If it goes over budget/time, they all just shrug and say that's how it is: if you want it, you have to pay more. Yet Devs are somehow supposed to know to a dollar how much the unknown will cost?


> Why do we as an industry put up with this? Lawyers Don't, Doctors Don't.

It's a class difference. Fussell places the typical doctor or lawyer in the Upper Middle. Most developers are, in their relationships with their employers and their employment (not in terms of income!), Mid-Prole, High-Prole, or solidly Middle, under a Fussellian classification. Elevating us to Upper-Middle would put us on par with much of upper management, and where most middle managers want to be but are not and are constantly irritated that they are not, in terms of freedom and respect.

No surprise that corporations (managers, in particular) resist this inversion of class-liberty compared with the corporate hierarchy, especially since they (managers) set the rules and the tone. It's bad enough we might make more money than they do. And besides, most developers haven't been socialized, in childhood, in school, or in their early career (so, the periods of life when class education occurs) into the Upper-Middle. We don't really expect better, would probably feel uncomfortable or like beggars requesting better, and (truly) may even feel uncomfortable or lost with the resulting freedom.

Put another way: who do doctors in a hospital answer to? Who do lawyers answer to, in a law firm? Classically, doctors and lawyers, right? Notice how upset doctors are about professional management infiltrating hospitals? That's them resisting dropping down in social class.


> Lawyers Don't, Doctors Don't.

And both deliver abysmal cost-benefit and have obfuscated competence to the point that it is nearly impossible to discern good ones from bad ones, as long as the bad ones meet the minimum standards of the license. In fact, both doctors and lawyers have fought hard to prevent any sort of evidence of their relative competence and performance from being accessible to their customers.

> Pretty much any Degree based industry doesn't.

Only non-degreed people should be accountable?

> Yet Devs are somehow supposed to know to a dollar how much the unknown will cost?

"To a dollar"? Straw man.


Agile is, in practice, a way to have the dev team under stricter scrutiny than anyone else in the org. It doesn't happen anywhere else, mostly because other industries have long-standing unions and working rights instead of free "professionals" doing services.


Which is ironic, because one of the driving design decisions in both XP and Scrum was to provide protection for the dev team from overbearing project management.


A lot of the comments here paint a picture of a dysfunctional relationship with business stakeholders, and then suggest defensive techniques to cover your own ass. I understand why one would need to do this in certain situations, but I would also say that your impact and skill growth will inherently be limited in these situations because you're lacking the fundamental thing that cross-functional teams need to succeed: trust.

The bottom line is this: exact estimates, especially for large projects, are a crapshoot. There is often more than one way to solve a business problem, and long-lived consequences for maintenance, operations, and future development. The best solution to a large problem cannot be arrived at by simply throwing an ill-conceived one-liner of a prescribed solution over the wall to an engineering team and saying "estimate this". What works is to bring together a small group of highly skilled practitioners and business operators who have the capability and experience to zoom in and out of the problem space enough to shape a sane low-fidelity plan, and then commission the right discovery and validation to formulate a full plan. This does depend on having the right people in the room and mutual trust between them. It's very easy for one bad apple to derail the whole thing, either through outright incompetence or an inability to listen and understand another point of view. Often on HN we paint the picture of the clueless pointy-haired boss making bad decisions, but equally damaging is the arrogant engineer who is unable to see past their own biases to play out potential tradeoffs with areas they don't have deep expertise in.


At the risk of being downvoted, I would argue that working on a project without first estimating the work is not engineering. It's coding.

A thoughtful estimate shows care in understanding the project and how it fits into the greater whole of the existing platform. It allows all members of the company to trust in the timelines of the engineering team and align their work to meet the milestones.

An estimate is by no means certain. The size of a project and its novelty will affect the certainty of the estimate. However, not doing one is careless.


Here is the problem: estimating is also work. Breaking down the work is hard work. Sometimes it takes more effort to break things down and think them through than to do the work itself.

The fundamental problem is that management does not want estimates. They want quick estimates (i.e., close to zero effort), and then they turn around and use those estimates as deadlines.

Now, as a developer, what are you supposed to do? You're going to get burned a couple of times and be forced into death marches. After that you'll: take your time estimating; pad your estimates to mitigate risk; ruthlessly dissolve complaints about how big the estimates are by pointing out all the things you need to think about and do; and re-estimate everything when anything but the most trivial thing changes.

Everyone loses. Really. Management believes they are squeezing out the maximum amount of value, but they're not even close. Developers end up doing the bare minimum and taking absolutely zero risks, even when a risk would make the product better. Fuck all that agility we claim to have.

Welcome to software development in the 21st century. Oh… I know. I'll use Copilot to write my code, and I'll also update it to estimate stuff! Glorious!!!


As a manager, I can tell you that the pushback on estimates comes primarily from engineers and cheap company owners ("why spend time estimating when I/they can code?"). Estimating, wireframing, documenting, manual testing, security testing, etc. are all hard but necessary.

If you don't schedule time to estimate, the estimates are worthless. Rule of thumb: anything that can be done by one engineer in less than a month should take about a day to estimate; anything under three months, a week; anything longer, up to a sprint (two weeks). As a manager with experience, you should roughly know how much time your team needs to spend planning their work before executing. Chances are, during the estimation work, the engineer(s) will discover questions the product specification hasn't answered that need clarification. And that's the whole point: getting as clear a picture as possible.

Any manager who thinks they can squeeze value out of this is naive about what software engineering is. This is not a manufacturing line.


Well, yes. Estimation is design. Only, if you call it "estimation", the engineer who's been berated into crunch overtime for a miss in the past won't want to do it and will do it badly when forced, and the cheap company owner whose own behaviours drive teams to estimate badly ("that estimate's huge, I'm not paying that, estimate it again but smaller") won't have had good experiences to understand why that design phase is valuable.

There needs to be enough time in the schedule to design to an adequate level for the problem at hand. You don't necessarily want as clear a picture as possible, but you do want as clear a picture as necessary.


My point is, everyone calls their predictions "goals", and then if they do not reach them, it is for some reason external to themselves. It's normal to overfulfil or underperform on goals; this happens all the time. No one hits every goal 100% of the time ("60% of the time, it works every time"); I'd guess no goal in your company is hit at exactly 100%, it's always either over or under.

Marketing has a goal of 5,000 new customers. They do not derive it from conversion history, outreach, customer preferences, and the other inputs a model would need. With some thought, though, they could call it an "estimate". But then they'd have the same problem of being "bad estimators" that developers have.

We in technology call our goals "estimates", and if we're "wrong" we are nailed for it. They are tied to our professional skill in a way goals never are.

Let's call our estimates goals, as everyone else does.

Also: many people confuse estimates and measurements. It's easy to sum five throws of a die; I can do that hundreds of times without error. It's impossible to correctly predict the sum of five throws before the dice are thrown every time; you can only be right on average.
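A quick sketch of the dice point:

    # Measurement is trivial after the fact; prediction only works on average.
    import random

    random.seed(1)
    throws = [random.randint(1, 6) for _ in range(5)]
    print(sum(throws))   # measuring: exact, every time
    print(5 * 3.5)       # predicting: 17.5 on average, but any single set
                         # of five throws can land anywhere from 5 to 30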


> One major “secret” to advancing in a technical career is learning how to give accurate estimates

If Google, Microsoft, Apple, Blizzard, etc can't produce accurate estimates despite employing 'the best of the best', wouldn't that imply it's a nearly impossible task? I can see getting order-of-magnitude estimates, but nothing more accurate than that.


Plans are nothing, planning is everything.

You have to try, even if you know it's going to be bad/inaccurate.


That isn't what the author said though.


Yeah, it's what Dwight D. Eisenhower said. The first line of my previous comment was a quote from him.


It seemed like you and the person you replied to were talking past each other. And now we are.


Ask yourself how long the most similar thing you completed in the past took. Then, without accounting at all for being more skilled or learned now, use that as your value. It sure beats the everliving shit out of every other estimation method I've tried, including really extensive planning and prework.


I do believe a rough estimate is important. But detailed estimates are not always helpful. It is wise to understand what decisions and actions will be driven from an estimate. If it is a general swag to tell an exec when something might ship, that is one estimate. If it is a level of effort question to know which of two equally important features can be delivered more quickly, that is a different estimate. If you are trying to order work to be done in a way to optimize assignments of work to team members, that is something else. And if you simply have to do some work to keep a product afloat and there is no possible way to avoid doing the work, then the estimate is only for reporting purposes and should not block getting started on the work.

Estimates do matter - but blindly doing the same type of estimate for all tasks is missing the point. (Which is why I feel story pointing is overdone.)


I stopped playing such games and currently do something else:

I shave the scope as much as possible and make sure to report on my progress daily - usually by demoing.

This is actually something that was originally suggested by my manager in one of my former projects.

With the scope devoid of non-critical pieces and daily updates it's easier to monitor the progress and notice any roadblocks early on.

Normally you'd do something like this during standups, but there's a world of difference between saying what you did and presenting it.

Generally people are more interested in whether something will be delivered on time than how long the specific pieces will take to finish.

Also, in this system, any accusations of villainy on the part of those you report to never get a chance to happen.


Demoing every day only works if your work is easily visible, but that's beside the point. Micromanagement to this level is not conducive to a healthy development environment.


> Demoing every day only works if your work is easily visible

I don't much like front-end work, but seeing how easy it is for front-end devs, designers, and UX folks to get noticed, makes me seriously reconsider my priorities, sometimes. For them, it's practically effortless, just something that happens.


It's just as easy to get noticed by reproducing some ugly piece-of-shit EMR application for the open-source world in some new language, just to show how efficient a programmer you are, in a weekend project.

(UI/UX can generalize about the other side, too ;D)


Sure, but the difference is that no-one gives a shit about that unless you are attached to, and high up in, a project with massive, successful PR (e.g. React). Showing up in a meeting with some at-least-competent design mock-ups and getting lots of positive reactions and excitement, meanwhile, is the norm, in my experience, even on fairly mundane parts of mundane projects.

Ugly but technically-impressive weekend code projects may impress programmers and gain visibility there. Meanwhile, designs routinely impress non-technical management, stakeholders, product managers, and clients. I mean, the degree to which that's true is so well-known that it's practically a cliché. There's a huge difference in how hard it is to get people who matter (in terms of career advancement, comp, and even just staying off your back about how much work you're doing) to notice your work. It's not at all comparable, and it's entirely to do with how legible one's work is to the rest of an organization.

The downside is that where non-UI developers meet confusion and ignorance ("so... what is it you still need to do? Why will it take so long? What do you mean it doesn't work yet? Oh, you made the query finish in 3% of the time it took before? That's nice, thanks. Moving on...") designers instead get endless suggestions, because every dumb-ass thinks their ideas about UI are good, and some of those dumb-asses really, really want to influence the design (why? Because it's so high-visibility, it's something they can point to for higher-ups or in a portfolio and say "I did that"; they want to acquire some of that natural designer/UI/UX legibility-of-work for themselves).


I am really surprised no one has mentioned that a whole book devoted to this topic has existed for 15 years: https://www.amazon.com/Software-Estimation-Demystifying-Deve...

It's by Steve McConnell (also author of Code Complete) and largely covers what this author does (but more, and in more detail). I have found it consistently one of the more useful books in my library - particularly for its emphasis on error bounds and on how bad people are at estimating confidence intervals...


I really like this engineer's method of software estimation. It's really close to what civil engineers use to estimate load, called Load and Resistance Factor Design (LRFD). That's a fancy way of saying that we come up with an estimate, then multiply by an uncertainty constant called the "factor of safety".

Steel in tension, low uncertainty, LRFD safety factor is 1.75. Steel in compression, medium uncertainty, safety factor is 2.5. Anything to do with soil, high uncertainty, safety factor of 4.

Same principle, really. If it works for life/death scenarios, it'll work for you!
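A minimal sketch of that idea applied to schedule estimates; mapping software work onto the LRFD buckets is my own assumption:

    # Multiply a raw estimate by a "factor of safety" keyed to uncertainty,
    # borrowing the LRFD factors quoted above. The software analogies in
    # the comments are assumptions, just for illustration.
    SAFETY_FACTORS = {
        "low": 1.75,    # well-trodden work ("steel in tension")
        "medium": 2.5,  # familiar but fiddly ("steel in compression")
        "high": 4.0,    # greenfield unknowns ("anything to do with soil")
    }

    def padded_estimate(raw_days, uncertainty):
        return raw_days * SAFETY_FACTORS[uncertainty]

    print(padded_estimate(10, "high"))  # 10 raw days -> 40 scheduled days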


This is just the first article in the series. I'd recommend reading all four articles before dismissing this one.

https://jacobian.org/series/estimation/

For example, here's an excerpt from the SWAG article:

> The tradeoff is time: estimation techniques, including mine, require some time to produce any level of accuracy.

> Sometimes, though, it’s less important that an estimate be accurate than that it be quick.


Oh, wow. Usually I get asked to estimate from very vague descriptions of what I am going to build, with hard pressure on the delivery date. At one point I told a manager that if he pushed hard enough he would get any estimate he wanted. For the scientifically inclined: humans can estimate programming tasks that take around one hour with good precision. Precision declines out to one week, and anything over a month is virtually impossible to estimate.


Complexity exists in the interactions between parts, not necessarily in the parts themselves. Breaking down a project does not take those interactions into account.

Even if we were to try considering interactions we would fail. Like the weather, and other topics in the complex systems domain, software is sensitive to initial conditions. A small change in input (change in data or code) creates a large change in output.

We generally accept the upper bound on weather forecasts to be 7 days, and even then we might bring an umbrella just in case. Forecasting software many months into the future is futile if taken at face value. Used as a general guideline, it is usually OK.

A paradigm shift is needed, both inside and outside the industry. Software is not industrial construction, hence the same logic (project management) does not apply.

Software is creation, conduction, and orchestration, not production or manufacturing. We are not teams of architects, builders, and operators. We are musicians in an orchestra.

How long does it take to write a symphony?


I agreed with you all the way until the last part about the symphony; that is how you lose the attention of managers and leader types. The majority of software is not a symphony but rather a race car held together by zip ties and duct tape, just well enough to give the feel of something big behind the curtains.

This is especially true if you consider the incremental work projects take on. THAT incremental work IS forecastable. The problem is that the information needed to make the forecast is often not all in one place, and the discovery process is often left unaccounted for, accidentally or on purpose, to commit to tighter deadlines. Add to that changing requirements, and you have the state of software estimation we are in.


You're right - not the best analogy in this case. Not sure about the race car though. I will think about that. Analogies aside...

Let's agree that estimation is possible to a certain degree. We know this and accept the inherent uncertainty.

Modern project management is wholesale copied from industrial construction and manufacturing. It seems no one stopped to ask whether the same logic applies to software creation. And it doesn't.

The business side of IT is stuck in a mental model built on construction and manufacturing. Yet the process of creating software contains neither of those concepts, with the exception of automated build and deploy (and the costs of those are negligible).

It is also interesting that no distinct vocabulary for software exists. We build, deploy, construct, have factories and so forth. Again copied from disciplines which are complicated - but not complex.

It is not possible to obtain the information you refer to by analysis. That's a property of a complex system. Analysis of parts neglects the interaction between parts and in software more or less everything is connected.

This is one of the reasons why we cannot forecast weather and why we cannot reliably estimate software.

Now, if I start my explanation this way I'm also sure to lose their attention. So what do we do? Which intellectual approach will captivate these people, retain their attention and at least plant a seed of doubt in the established way of working?


One alternative to estimations are projections via (for example) Monte Carlo simulations. I've been happily using https://getnave.com/ for that. The results seem to be in the same ball-park as my old estimations, but with less stress overall.
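For the curious, a minimal sketch of that kind of throughput-based projection; this is the generic technique, not a description of how that particular tool works:

    # Monte Carlo projection: resample historical weekly throughput to ask
    # "how many weeks until N items are done?" and report percentiles.
    import random

    random.seed(0)
    weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 1, 5, 4]  # items finished/week
    backlog = 40

    def weeks_to_finish():
        done, weeks = 0, 0
        while done < backlog:
            done += random.choice(weekly_throughput)
            weeks += 1
        return weeks

    runs = sorted(weeks_to_finish() for _ in range(10_000))
    print("P50:", runs[len(runs) // 2], "weeks")
    print("P85:", runs[int(len(runs) * 0.85)], "weeks")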


It's funny. I found only one post that reflected agility in its process as per AM. Yet "Agile" is recognized as the industry standard, while the required maturity is lacking on both the dev and product sides. Achieving that level of coherence is fiercely hard, or you're just lucky that circumstances allow it.

In some scenarios you do need projects and full-on estimates, though, e.g. for planning around hard deadlines. But the reason you want to avoid them has to do with the discovery process during development. Everyone conveniently "forgets" this while focusing on their own ends (local optimization).

Quick feedback loops with A/B tests are maybe the easiest way to achieve an understanding of how AM recommends people develop together. Such setups may end up costing a lot, though, unless truly done in the spirit of AM and with recognition of the costs of shortcuts.


I prefer to not give exact estimates whenever possible as the unknowns will ruin your estimate anyway.

If necessary I prefer to get very clear what is needed to [make that sale/give that demo/whatever they need] and give a very conservative range with a bunch of caveats. The more flexible the requirements and timeline the better. It means with a bit of luck you can deliver early and/or throw in some bonus stuff at the end.

If you are inevitably going to miss a deadline, discuss it as early as possible and discuss how to proceed, where to focus etc. You can often reduce the scope, cut more corners, move the deadline, find more resources, or do damage control. Whatever is necessary.

That's why I don't think (accurate) estimates matter too much, it's more about communication, managing expectations, and being flexible enough to adapt along the way.


I remember going to a conference in the 1980s (MacHack), and attending a "Software Project Estimation" workshop.

The guy basically listed excuses for padding the estimate.

Steve McConnell wrote a book about it, using a much more rigorous scientific methodology[0]. He has also written some other stuff about it[1].

This one is really the big one:

"9. Both estimation and control are needed to achieve predictability. "

In my experience, we can accurately estimate software projects that have iron-fisted control. No deviation from the plan. If we use quality-first techniques, like TDD, we can do a fairly good job of hitting targets.

Also in my experience, this results in software that no one wants to use. It doesn't crash, ticks off the punchlist, and basically sucks.

I avoid estimates like the plague (a rare luxury, but I can do it). I like to "wander down the garden path, and see what sights there are," so to speak. I call it "paving the bare spots."[2]

It results in software that comes very close to the user/stakeholder "sweet spot," with great quality. It also tends to come together fairly quickly, and allows for excellent early project visibility.

But that won't work, beyond a fairly humble scope.

[0] https://www.amazon.com/Software-Estimation-Demystifying-Deve...

[1] https://stevemcconnell.com/17-theses-software-estimation/

[2] https://littlegreenviper.com/miscellany/the-road-most-travel...


This is true of all knowledge work. Traditional engineering is fraught with the same problem: you don't know EXACTLY how you will solve the challenges, so how can one accurately estimate them? Even worse, if you don't have a clear set of requirements, or they are expected to change as you progress (a critical feature of Agile), then producing an estimate you can actually hit becomes all but impossible.

The more uncertainty in the path, the less accuracy in the estimate. Kahneman's latest book "Noise" provides some good background on why this happens.

Having multiple people do independent estimates and averaging them probably gives better results, and having a clear process to document assumptions and test the estimates' sensitivity to those assumptions can also help.
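A tiny sketch of that averaging step (the spread check at the end is my own addition):

    # Average independent estimates; a wide spread is itself a signal
    # that the estimators are making different assumptions.
    import statistics

    estimates_days = [12, 20, 15, 45]  # hypothetical independent estimates
    mean = statistics.mean(estimates_days)
    spread = statistics.stdev(estimates_days)
    print(f"combined: {mean:.0f} days, stdev: {spread:.0f} days")
    if spread > 0.5 * mean:
        print("estimates disagree badly -- reconcile assumptions first")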


The problem with estimates is that you can only really estimate the best-case scenario - how long it seems like it would take if there were no surprises. If you base your estimate more on past experience ("I've never done this exactly, but I did something similar to this once before and it took a month"), the people demanding the estimates are going to push back and demand you itemize - mostly for the purposes of "negotiating down" your initial estimate. Which, of course, isn't an "estimate" in their mind, it's a rock-solid guarantee.

After 30 years in this profession, I've lost hope that we'll ever get away from the mindset that developing software is a mindless, mechanical, repetitive task rather than a creative endeavor.


I can't remember where I read it initially, but the take that I love about estimates goes something like this:

Software is trivially copyable, and as such large software projects are unique enough that accurately estimating them is impossible. This is in contrast to something like building a house: I build a single house, figure out how much it cost, and now I have a very good reference point for how much building that same house over and over will cost.

I really like the approach that Basecamp recommends in Shape Up[1], where the team pivots to reasoning about work in terms of appetite rather than expected time.

[1] https://basecamp.com/shapeup/1.2-chapter-03#setting-the-appe...


Shape Up is fantastic!


I completely agree with this essay.

I wrote more on the problems with estimations, and some solutions, here: https://camhashemi.com/posts/accurate-estimations/


Hell no. For the most part, in my limited experience, they're not estimates; we should really stop calling them that. They're at best guesses, at worst guesses coupled with excessive padding and extreme waste.

I don't have any beef with it being hard. I have a major beef with it setting invalid expectations, having no basis in calculated fact, and overall being useless and time-consuming for the people who have to meet these deadlines and participate in said "agile" rituals.

I'm all about capacity. If we can understand what a team is capable of or the capacity of said team, we don't have to guess how much work they agree or don't agree to do or force them to use a crystal ball at the weekly séance.


You can only measure capacity if you know the size of the work you're taking on. If you don't, what does capacity even mean?


You're never going to know that. I'd rather track, for example, DORA metrics like MLT or DF than t-shirt sizes.


Aren't those DevOps KRs? I would track KRs even in software engineering: releases without incident, estimate-to-reality ratio for future planning, etc.


They typically measure the dev part of DevOps.


I work at a place now that ditched the time estimates and the sprint planning meetings and standups that go along with that and it's so much better.

Time estimates are always wrong; the work always slips to the right. This is always used against you. You suffer because of it. Your work suffers because of it.

I get a couple extra hours a week by not doing daily standups, retrospectives, sprint planning, etc etc. This allows tasks to be shipped faster.

If there's a problem I communicate it up and stakeholders understand.

Want to know how long it might take? Look at some historical tasks in Jira and compare timestamps.
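A rough sketch of what that comparison can look like; the dates are made up, and a real version would pull created/resolved timestamps from Jira:

    # Cycle-time percentiles from historical (created, resolved) dates.
    from datetime import date

    history = [  # (created, resolved) for similar past tasks -- hypothetical
        (date(2021, 3, 1), date(2021, 3, 9)),
        (date(2021, 3, 4), date(2021, 3, 25)),
        (date(2021, 4, 2), date(2021, 4, 8)),
        (date(2021, 4, 10), date(2021, 5, 3)),
    ]

    cycle_days = sorted((done - start).days for start, done in history)
    # Upper-median; good enough for a rough gauge.
    print("median:", cycle_days[len(cycle_days) // 2], "days")
    print("worst seen:", cycle_days[-1], "days")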


At my last company I worked on two very different sides of it throughout my time there. One side did no estimates at all as the nature of the work could afford that. The other side heavily relied on estimates and spent a lot of time planning and creating them. I found the non-estimate side of the company vastly more sane, enjoyable, less stressful, etc.

I get why stakeholders want estimates, don't get me wrong. But I can't help but think just letting them go and trusting the team is ultimately more effective in many cases.


Asking for estimates is fundamentally an expression of distrust. It’s obviously unpleasant to have to participate in a regular ritual in which those in power over you express their distrust.

Don’t get me wrong, sometimes distrust or limited trust is justified, but it’s not an ideal.


One person's estimate is different from another person's estimate. The hole most developers dig for themselves starts with a fixation on having perfect numbers and a fear that they'll be punished if they don't. Estimation is often a part of negotiation, separate from commitments, but developers treat it as a pure analysis game.

This results in punishment for poor communication. The punishment starts when they claim they can't provide anything that approaches an estimate, or attempt to weasel out of the discussion. It should be clear to the customer when and how you are delivering commitments. Poor communication turns an estimation discussion into one where a developer has overcommitted.

Even if you can assign zero time estimates to tasks, or even zero estimates for how long the estimating itself will take, you can still provide a view on how you'd go about the work and the relative priorities of your investigation. This is a critical part of building trust, which is necessary for the long-term success of any project. Communicating a clear perspective that does not make commitments is important for building the relationship that will give you wiggle room later on when you have to make adjustments.

When a commitment is made to the customer, any associated estimate needs to be provided along with context. Providing a confidence number leads to misinterpretation because different tasks may require different analyses. It is better to encode it in some other way to highlight things like: multiple interviews conducted, whether a coding spike was done in the area, whether support contracts are in place for the 3rd party service required, etc.

As the project progresses, there should be some kind of update to those commitments. This is where again it gets scary for people because they don't like having these candid discussions.

In all of this, I don't prescribe any methodology. This can fit any methodology, but you have to find a way to fit it in. Waterfall has lots of clear points where commitments are made, but in its purest form it lacks the feedback mechanism. Updating commitments is essentially part of "agile", but the recording and communication can sometimes be a challenge. The job of the developer is to do enough of the right work to set the right commitments and communicate around them.


It's rarely developers turning estimates into commitments. Many even have stories of managers refusing to accept estimates because the deadline was decided already.

A developer trying to communicate a clear perspective that does not make commitments will be seen as attempting to weasel out of a discussion in such an organization.

Customers understand 95% confidence better than they understand coding spikes in my experience.


The best estimation technique I've seen is ROPE: Realistic, Optimistic, Pessimistic, Equilibristic. It's fast to ballpark, effective solo or with teams, great for PMs and managers, and able to go directly into critical chain scheduling or monte carlo simulations.

https://github.com/SixArm/sixarm_project_management_rope_est...


Was the name chosen because four numbers provides exactly enough ROPE for the product owner to hang you with?


Ha! The name ROPE is because a rope is a group of strands, braided together into a larger and stronger form with higher tensile strength.

With ROPE, the four numbers combine together to create a larger and stronger estimate. I do estimates for clients, and ROPE provides a way for each stakeholder to see that estimates are really ranges of probabilities.
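One plausible (unofficial) way to turn the strands into a probability view: sample a triangular distribution from the optimistic/realistic/pessimistic numbers and sanity-check the equilibristic one against the simulated 50/50 point. This is my own reading, not the canonical ROPE procedure:

    # Treat O/R/P as the low/mode/high of a triangular distribution, then
    # compare the stated equilibristic (50/50) number to the simulated median.
    import random

    random.seed(0)
    optimistic, realistic, pessimistic, equilibristic = 5, 10, 30, 13  # days
    samples = sorted(random.triangular(optimistic, pessimistic, realistic)
                     for _ in range(100_000))
    median = samples[len(samples) // 2]
    print(f"simulated 50/50 point: {median:.1f} days "
          f"(stated equilibristic: {equilibristic} days)")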


Generally, I've found that people who believe they can estimate software development projects are severely deluded. Once in a while they're not, but those cases relate to projects that are very similar to several previous projects, undertaken by the same team. Best resource on the subject: https://www.youtube.com/watch?v=v21jg8wb1eU


Personally, I have found it useful to make private estimates, and then be honest with myself about why I did not meet them. It has made me a better developer, and incidentally also better at estimating.

What, if anything, I say about these estimates to anyone else depends on the culture of the environment, but the experience of estimating has made me better at explaining why it is going to take longer than you think, when that case needs to be made.


In my experience there are three things that often get conflated around software estimates:

   1. Effort versus calendar time.

   2. Estimate versus commitment.

   3. Confidence level: are we talking P50? P99? P100 under some set of assumptions?
I don't think I've ever worked in a setting where everyone shared the same understanding on all of these points.


Very true. And lots of people don't even understand the differences on their own.


Agreed, but that's the comparatively easy part...


I've always hated estimating. Then a few years ago I realized that if I'm going to work on a project with a team of six for a year (at a San Francisco tech company), that's on the order of a million-dollar investment (likely more).

If the organization is spending a million dollars it's reasonable for them to ask for an idea of what they'll get and when!


If they insist on features and delivery date, that usually leaves quality/reliability as the thing you can adjust.


What I've seen a lot over the years is that an engineer gets asked about a timeline, thinks for about 10 seconds, blurts something out, and then, if it's off (too short), busts their ass / cuts corners trying to hit their own estimate. Since estimates are usually optimistic, I tend to see a lot of overtime and/or corners cut.

There's certainly an element of personal pride in that dynamic, I think. You're asked to give an estimate. You give it. You now feel like you've staked your professional rep to it.

The author shares his method for coming up with estimates, and it looks like an offline process that involves more than a gut check. I think for sufficiently large features we (eng managers, eng peers) should encourage engineers to _not_ give on-the-spot estimates, given we know how difficult it is to estimate.


A big benefit of putting in the effort to estimate comes from the side effects: having to get clear on what the problem is and what constraints apply. So much useful information gets flushed out when you're both under pressure and trying to give a reasonable estimate.

Then, often, toss the estimate.


Software estimation is only hard when you don't understand what you have to do or how it will be done.

Just keep breaking the problem down into subunits that you or your team understand and whose effort and risk you can gauge fairly accurately (because they've been done before).


And when there are chunks that you haven't done before?

Or when breaking it down will require going to a level of detail that means basically fully designing/writing your system in order to estimate it?


Yes! If you don't understand how something is going to be roughly designed/written, then you're not estimating, you're guessing.


If you already know how to write it, why haven't you automated it? One could say the worst developers are the best at estimating their tasks, because they repeat everything they do all the time. ;)


At some point you probably will want to do a rewrite. I don't think it is surprising that this sort of analytical process of bite-sized task creation also uncovers architectural deficits in your design.


That's the secret though, isn't it? The more likely you are to know all the steps the more likely you're going to make an accurate estimate. The catch there though is that if you know how you're going to do everything, you're already at least halfway through development.


It's going to take a while to break all this down; management wants an estimate of how long these estimates will take. :\


Here is the author's follow-up post on the topic: https://jacobian.org/2021/may/25/my-estimation-technique.


I'm good with software estimates. The issue is that when I give an estimate, folks want to haggle, as if they are trying to buy something for cheap at a mall.

That is what I'm against mostly. If I give you an estimate, you accept it, or don't ask me for one.


Business here.

Guess what: we get that it's hard, but we have to do it so we can plan releases and usage (with Sales) and spit out revenue targets to justify the initial spend.

It's all about confidence of estimates. Many small things are easy to estimate and have high confidence. We're good at that stuff.

Where it gets super difficult is greenfield new products that span multiple teams, both engineering and business. Without burning your entire budget on the estimation process, you just have to get exceptional buy-in from all teams and start working; that's the best way.

Where things get tricky is when one business vertical suddenly has a new urgent #1 priority during the build and has to divert attention and resources elsewhere. Everyone can lose momentum, so it takes some business and engineering craft to hold it together.

All part of the gig.


The problem isn't estimation per se, it's the vicious cycle of

estimation => "commitment" => "failure" => padding => distrust

and much like Global Thermonuclear War the only winning move is not to play.


> I could go on: the point is, there are many situations where an estimate is required.

Please do! I'd find it really valuable. (Not necessarily OP. I'm happy to hear from others as well.)


Billing a customer for a feature they want. How much you charge the customer is going to depend on how many dev hours it will take. An accurate estimate is necessary in order to come up with the appropriate price to charge them. Name a price that's too low and you end up losing money on devs' salaries; name a price too high and the customer walks away from the table.


That sounds like freelance work, not product development. Am I mistaken?


My humorous answer to this topic:

https://gioorgi.com/2021/estimation-rules/

And it works :)


The first major problem with estimates is that there are no good requirements. If you don't know what you have to build, it is pretty nonsensical to give out estimates.


“…just a quick finger in the air estimate really. We’re just looking for a “t-shirt” size. No one will hold feet to the fire over it”.

^^^^^^ This and other lies told by management. :P


Lots of estimation experts out in force today. Obviously Hacker News is full of soothsayers and savants, or maybe they just over-point everything like everyone else.


You know, if you hire me as a Scrum / Agile Expert Consultant (tm), I can get your development department so efficient, you can outsource it all and save a ton of money. Payment in full required up front.

/s


Does this synergistic agilization improve our ability to telepathically write code? This would save so much money on keyboards.


If you want the business to set effective priorities, you've got to provide estimates. You can't figure out cost-benefit without some idea of cost.


Seems like giving time-based estimates just isn't feasible, though. Sure, for some types of problems it's not hard (though it feels like tasks that can be that easily estimated should probably be automated), but many are new/hard problems that need to be solved. Yes, it would be great to have accurate time-based estimates, I don't think anyone disagrees with that. But there are lots of things that would be great that we can't have.

Maybe just ranking tasks by difficulty, or using Fibonacci rankings as recommended by some for Agile story points, would be a better use of everyone's time. That way, you can still say "A is roughly X times harder than B", without trying to rely on (almost certainly wrong) estimates in terms of days/months/years.



The main problem is that doing a good estimate is essentially "doing the design", but everyone considers the design as part of the work of the task.


Low confidence = fast estimate = broad estimate; high confidence = slow estimate = narrow estimate.

It's a scale that you can choose where you want to be.


So the thesis is “Do it anyway, because your boss or their boss is going to say ‘do it anyway’”

Yeah, that is why I do it anyway. This is not insightful.


"Estimations are not cool, you know what is cool? Ballpark-Sizing"... Welcome to Agility.


"Software estimation" is not a specialized problem. Rather, it's just an example of what Daniel Kahneman calls "the planning fallacy." The same phenomena we see with software occur in many, many other fields.

Read "Thinking, Fast and Slow" where he talks about estimating the time to create a new textbook.


Not if you're good at it.

"If you think something is hard or not, both ways you are right"


No battle plan survives contact with the enemy. But you still need to make one.


No. It feels like lying and I don't want to lie.


Maybe we should think of it in a different way.

Resource allocation.


> Many Agile methodologies involve arbitrary scoring systems – story points, t-shirt sizing, etc. – deliberately designed to help avoid giving estimates in time-scale units.

I can't tell if there's a really deep misunderstanding of what the author calls "no-estimate" systems, or broad agreement but with a small/superficial difference in preference on an implementation detail.

> However, sooner or later, someone’s going to ask “when will Feature X ship?”

Story points let you do this.

As I see it, the key idea with what the author calls "no-estimate" scoring systems is to psychologically decouple the act of estimating from "real time units" into "abstract work units", which (the claim is) are more accurate than "real time units". Most engineers are bad at producing time estimates for things, but if you ask them for a "points estimate", they are more likely to compare the new task to representative examples of past work (which tends to be more accurate), whereas asking for a "time estimate" they are more likely to envision themselves completing the task at hand (which leads to overly-optimistic estimates).

Given a set of points estimates for upcoming tasks, you look at your team's velocity of "abstract work units" per unit time, and you can project timelines for your backlog. The goal with scrum story points / t-shirt sizes is not to avoid estimating when a feature will ship, it's to make that process more accurate.
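A minimal sketch of that projection step, with made-up numbers:

    # Forecast "when does feature X ship?" from points and measured velocity.
    backlog_points_until_x = 55          # summed estimates up to feature X
    recent_velocities = [18, 22, 17]     # points completed in recent sprints

    avg_velocity = sum(recent_velocities) / len(recent_velocities)
    sprints = backlog_points_until_x / avg_velocity
    print(f"feature X lands in ~{sprints:.1f} sprints "
          f"(~{sprints * 2:.0f} weeks at two-week sprints)")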

Scrum suggests that you try to keep a few sprint's worth of tasks finely-groomed, and keep the rest of the backlog coarsely groomed (i.e. rough estimates at epic-level, where you might have blocks of work that are multiple developer-months in size). This is using the "lean manufacturing" principle; don't spend time grooming/analyzing/estimating work that you're not going to use immediately, as it takes time to do so, and the backlog is subject to changes which would invalidate the preparation you did. But if you have a specific need to forecast 3-6 months of backlog, then of course you would do so, and points-based systems are capable of doing so without any modification.

There's nothing more to it - if you follow this process you end up with a roadmap/backlog that gives predictions for when everything you've estimated is going to land (i.e. "when will feature X ship"), with uncertainty naturally increasing the further in the future that you are looking.

To be clear though -- if you prefer using "days" as your estimate unit, that's completely fine. One of the key principles about doing lower-case-A agile software development is that you need to experiment and figure out what works for your team. I'd recommend that you retrospect on how many "days estimated" of work you actually complete per day though, because it's likely not to be a 1:1. And then, if you're regularly completing 7 "days" of work per 10-day sprint, wouldn't it be more sensible to forecast that you'll complete 7 "days" per sprint, instead of constantly claiming you'll complete 10 days of work every sprint, and only finishing 7 of them? Now you've re-implemented points. Of course, I think the author would prefer to say "fix your estimates and stop saying you'll do 10 when you only do 7", but in my experience the actual amount of work delivered is very lumpy, and so it's hard to close this feedback loop accurately.

A middle-ground here is to distinguish between "burdened" and "unburdened" days, where an unburdened day is the mythical "if I had no other tasks, how long would this take me?" estimate. These are closer to what an average developer will give if you ask them for an estimate. Then you can convert unburdened=>burdened by some ratio, depending on how much time you allocate to non-task time. These are things like devops work, on-call, code review, architecture review, etc. You can improve the unburdened/burdened time ratio, so it can be nice to be able to keep all your old estimates valid as you remove/add burden from your engineering team. In this terminology, the author advocates for asking developers for fully-burdened estimates, i.e. the estimator is responsible for folding in all of the complexity of non-sprint tasks. In my experience, few engineers (very few below staff level) are good at this process, as it's hard, and is fairly orthogonal to most of the normal task work that non-managers participate in.
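And a minimal sketch of the unburdened-to-burdened conversion; the 0.6 ratio is an assumption, not a recommendation:

    # Convert "unburdened" estimates (no meetings, no on-call) into
    # calendar-facing "burdened" ones. The 0.6 ratio is an assumed
    # measurement: 60% of a developer's time goes to task work.
    TASK_FOCUS_RATIO = 0.6

    def burdened_days(unburdened_days):
        return unburdened_days / TASK_FOCUS_RATIO

    print(burdened_days(6))  # a "6 ideal days" task -> 10.0 calendar days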

Now, the case for "the author is making a superficial disagreement" - if you hop over to the author's technique for estimating (https://jacobian.org/2021/may/25/my-estimation-technique/) you'll see a very sensible process that is to my eyes structurally isomorphic to the standard best-practice "agile" techniques, including using time-boxed spikes to reduce implementation uncertainty, and proactively breaking up large tasks into more easily-estimatable chunks. The main differences I see are that the author estimates in fully-burdened days instead of points, and is more explicit about communicating the uncertainty on the estimates given. (In standard points-based approaches you just decline to give an estimate with "high uncertainty", or would give the pessimistic worst-case estimate, and would prefer scheduling a spike before starting to work on something that's highly uncertain. In some cases I can see where an explicit uncertainty range would be more useful to external stakeholders, so I like the author's process. I also can see that asking engineers to be explicit about their uncertainty might be a good way of achieving the same sort of decoupling-from-the-happy-path that story points are aiming to achieve. So overall it seems a good system.)


> Maybe Sales can close a major deal if they commit to a timeline for some new feature.

Maybe sales can sink the company.


‘You can get good at estimation’

Only if you work in an extremely repeatable well trodden domain. If you are so skilled at estimation, I’m sure Tesla would love you to tell them how long FSD will take and would pay a premium!


You don't need to work in an extremely repeatable domain to get good at estimation. There is very little in my day to day job that is repeatable, but I can roughly look at the project I'm on and compare the scope of it to the projects I've done before.

If you really do think that every project you're working on is completely incomparable to anything you've worked on before you're probably concentrating on too low level details.


Or doing something novel?


I can estimate how long FSD will take. Now where do I get the pay check?


estimate = estimate + 1?


People love to conflate the research with the development in "R&D", but the latter is a lot more predictable than the former. Repeatable, well-trodden domains are 99.9% of development work out there.


The problem has always been that while the ground is well trod, the path is never clear. There are hundreds of ways to build any one widget, not even counting that the widget you start building isn’t the one they wanted in their head (but not in the spec).


Sure, that's why estimation is usually not trivial. But still a lot easier than when you have to invent a new kind of widget first.



