He also links to a conference talk about this: "you can make a claim that estimates are based on past behavior, but the fact is that what you're implementing is something that hasn't been implemented before. So any kind of measurement that you've made of something that has happened in the past is not going to impact what you're doing now" https://www.youtube.com/watch?v=QVBlnCTu9Ms
I'm rather skeptical of this idea that just because a specific feature was never implemented, or a specific bug was never fixed, estimates based on past behavior don't work. That assertion doesn't have any basis in reality. Implementing a feature or fixing a bug is not an isolated event performed with improvised approaches starting from scratch. Teams have processes and procedures that are standardized, take time, and need to be performed sequentially, which means the improvised part at best represents a small fraction of the time invested working on a ticket.
For a concrete example, let's imagine a team with a continuous delivery pipeline that involves a code review step and manual acceptance tests. Let's say that the code review can sit in a queue for a couple of hours, or even slip into the next day, and that the manual acceptance tests require the feature to be deployed to a preprod stage after passing all unit and integration tests, which might take a day.
With this process alone, the ticket already takes at least 2 or 3 days between being assigned to someone and being marked as done.
Now, let's say that the coding part of a random ticket might take anywhere from 5 minutes to 3 days. This means the overall time between the start and end of a ticket is about 4 days ± 2 days, so in the worst-case scenario it takes 6 days to close a ticket.
How is this sort of estimate not possible?
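To make that back-of-the-envelope arithmetic concrete, here is a minimal Monte Carlo sketch of such a pipeline. The distributions and their parameters are made-up assumptions for illustration, not measurements of any real team:

    # Sketch: simulate ticket cycle time from assumed step durations (in working days).
    import random

    def simulate_ticket():
        coding = random.uniform(5 / (60 * 8), 3.0)   # coding: 5 minutes to 3 days
        review_queue = random.uniform(0.25, 1.0)     # review waits a few hours to a day
        acceptance = random.uniform(1.0, 2.0)        # preprod deploy + manual acceptance tests
        return coding + review_queue + acceptance    # total elapsed working days

    samples = sorted(simulate_ticket() for _ in range(10_000))
    print(f"median: {samples[len(samples) // 2]:.1f} days")
    print(f"85th percentile: {samples[int(len(samples) * 0.85)]:.1f} days")
    print(f"worst case: {samples[-1]:.1f} days")

Even with a highly uncertain coding step, the process steps dominate, so the spread of plausible outcomes stays narrow enough to quote as an estimate.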
The problem of providing estimates is not one of predicting the amount of time it takes to close a ticket. The problem of providing estimates is a problem of processes, and how to adequately organize, structure, and classify work. If you don't know what you're doing then you don't know when you're done.
Good point, you hardly ever start with a clean slate. Using historical data, or asking engineers to estimate how long something will take, will always be based on this past performance.
But the point I try to make is that it is hard to take into account all the factors you have to deal with in a complex situation. As a human, you tend to ignore irregular influences. With tracking tools, you get this data right out of the box and it is more accurate in my opinion.
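As a sketch of what "right out of the box" could look like: a percentile-based forecast computed from per-ticket cycle times exported from a tracking tool. The data values and percentile choices below are illustrative assumptions:

    # Sketch: forecast ticket completion from historical cycle times (in days).
    historical_cycle_times = [2.5, 3.0, 4.0, 2.0, 6.0, 3.5, 5.0, 2.5, 4.5, 3.0]  # placeholder data

    def forecast(cycle_times, percentile):
        # Return the cycle time below which the given fraction of past tickets finished.
        ordered = sorted(cycle_times)
        index = min(int(len(ordered) * percentile), len(ordered) - 1)
        return ordered[index]

    print(f"50% of tickets done within {forecast(historical_cycle_times, 0.50)} days")
    print(f"85% of tickets done within {forecast(historical_cycle_times, 0.85)} days")

The point is that the tool captures the irregular influences (queues, reruns, interruptions) automatically, because they are already baked into the recorded cycle times.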
Yeah, I think the problem the OP overlooked is that most of the cycle time is taken by various internal company processes, and these are really not novel every time, so it takes a similar amount of time to deliver different features. This, in my mind, is simply called estimation.
The external clock is going to be some multiple of the internal clock when tasks are piling up behind contended locks. The Ferrari and the skateboard make it through rush-hour traffic at the same speed.