What do you do when your future tasks are unknown or ambiguous?
For example, at my day job my task is to implement banking. The day-to-day tasks change... day to day. There isn't a "number of tasks remaining in the queue," since whatever I'm doing is what I'm doing.
One could say this is poor planning. But due to the nature of Big Banks, each task usually blocks the next one -- in other words, it's not possible to discover or plan what you need to do next until you've finished the current one.
An example of this is when we realized we didn't need to run our banking API using $BigBank's test environment. Their test environment was ... uh ... well, let's just say, when we realized that we could simply switch on "production mode" and bypass their test environment altogether, we collectively facepalmed while rejoicing.
It wouldn't have been possible to add "switch to the production environment" to the queue several days ago, because we didn't discover that we could do that until yesterday during our biweekly sync call.
I'm sympathetic to your writeup, and I like your recommended approach. I just wanted to point out a realistic case of it failing. In fairness, I think every estimation approach would fail us, so don't feel singled out. :)
Perhaps your approach will work in most cases though, and I'm merely stuck in a twilight zone special case.
> The day-to-day tasks change... day to day. There isn't a "number of tasks remaining in the queue," since whatever I'm doing is what I'm doing.
What you are describing is not ambiguity, it's total variability. If your future is 100% random, it is, by definition, impossible to predict. Such a state would also mean a total absence of direction/vision. Predicting dates isn't just impossible; it isn't even a question you can ask, since you don't know what's next.
What I'm going to challenge is the idea that you're actually in such a case, because I don't think you are.
> One could say this is poor planning. [...] in other words, it's not possible to discover or plan what you need to do next until you've finished the current one. [...] because we didn't discover that we could do that until yesterday during our biweekly sync call.
The example you're giving *is* poor planning. You're going into execution without validating base assumptions. That you discover the specifics of a dependency that late in the game means you're going into it without a plan. I'm not judging; in your case maybe no one is asking for any sort of accountability, and just executing is the best recourse with the lowest overhead. But the fact that you can't estimate isn't due to the environment, it's due to the fact that you don't have a plan. Some of the companies I've worked with are fine with that; most are not.
It's like this for day-to-day operations when leadership is absent and no CSI ever gets prioritized over new development. You can say the org is dysfunctional, but there's little leverage workers can use to change such situations, especially when efficiency measures get rewarded with layoffs.
What happens when you complete your work before you know what you need to do next?
If this never happens, then you have some invisible queue, as you do have things to do next.
As for your example, that's a great example of a task that seemed like it would take a long time and ended up being very, very short. Can you describe why this would be bad to add into your task system?
- Add Task: Run banking API in $BigBank test environment
- Start work time clock.
- Find out we don't need to do it, and can switch to prod mode instead
- Switch to prod mode
- Close the task, and stop the time clock
This is now data for your estimates of future tasks, as this will probably happen randomly from time to time in the future.
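To be concrete, here's a minimal sketch of what I mean by "data" (the field names and dates are made up; it assumes nothing fancier than logging when a task is opened and closed, whatever the outcome):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TaskRecord:
    title: str
    opened: datetime
    closed: datetime
    outcome: str  # "done", "invalid", "obsolete", ...

    @property
    def duration_days(self) -> float:
        # Wall-clock days from opening the task to closing it,
        # whatever actually happened in between.
        return (self.closed - self.opened).total_seconds() / 86400

# Even the "turns out we didn't need to do this" case produces a data point.
log = [
    TaskRecord("Run banking API in $BigBank test environment",
               datetime(2024, 1, 8), datetime(2024, 1, 9), outcome="invalid"),
    TaskRecord("Switch to prod mode",
               datetime(2024, 1, 9), datetime(2024, 1, 18), outcome="done"),
]

historical_durations = [t.duration_days for t in log]  # feeds future estimates
```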
Switching to prod mode takes 5 to 7 business days, because we have to order certs from DigiCert and then upload them to $BigBank, whose team requires 5 to 7 business days to activate said certs.
We expected to turn on prod once testing was finished. But we ended up discovering that prod was the only correct test environment, because their test environment is rand() and fork()ed to the point that it doesn't even slightly resemble the prod environment. Hence, "prod am become test, destroyer of estimates."
So for 5 to 7 business days, we'll be building out our APIs by "assuming a spherical cow," i.e. assuming that all the test environment brokenness is actually working correctly (mocking their broken responses with non-broken responses). Then in 5 to 7 business days, hopefully we'll discover that our spherical-cow representation is actually close to the physical cow of the real production environment. Or it'll still be a spherical cow and I'll be reshaping it into a normal cow.
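Concretely (endpoint and field names made up), the spherical cow is just stubbing: we replace $BigBank's broken test responses with what their docs say prod should return, and write our code against that:

```python
from unittest import mock

# What $BigBank's *test* environment actually returns (broken).
BROKEN_TEST_RESPONSE = {"status": "OK", "account": None, "balance": "NaN"}

# What their docs claim prod returns -- the spherical cow we're coding against.
ASSUMED_PROD_RESPONSE = {"status": "OK", "account": "12345678", "balance": "1042.17"}

def fetch_balance(client, account_id):
    # Hypothetical wrapper around the $BigBank API client.
    return client.get_account(account_id)["balance"]

def test_fetch_balance_against_assumed_prod():
    client = mock.Mock()
    client.get_account.return_value = ASSUMED_PROD_RESPONSE
    assert fetch_balance(client, "12345678") == "1042.17"
    # In 5 to 7 business days we find out whether the real cow is this shape.
```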
By the way, if you've never had the pleasure of working with a $BigBank like Scottrade, Thomson Reuters, or $BigBank, let's just say it's ... revealing.
Maybe I'm missing your point. It seems you're attempting to answer the wrong question: is this task's estimate accurate, given all the changes that have happened? That's irrelevant for large-scale estimation.
The question for scheduling prediction is: what distribution of time will it take to mark any task in this queue as FIXED/INVALID/WONTFIX/OBSOLETE/etc? The queue can have any amount of vagueness you want in it.
Regardless of the embedded work, and regardless of whether it changes, becomes invalid, turns out not to exist, etc. - these are all probability weights for any given task/project.
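As a rough sketch of what I mean (the numbers and helper name are made up; it assumes only that you record how long each past task took to reach any terminal state), you can resample those historical close times to get a distribution for draining the current queue:

```python
import random

def forecast_queue_days(historical_close_days, tasks_in_queue, trials=10_000):
    """Monte Carlo: resample per-task close times (FIXED, INVALID, WONTFIX, whatever)
    to get a distribution of total time to drain the current queue."""
    totals = sorted(
        sum(random.choice(historical_close_days) for _ in range(tasks_in_queue))
        for _ in range(trials)
    )
    return {"p50": totals[len(totals) // 2], "p90": totals[int(len(totals) * 0.9)]}

# A task closed as INVALID after an hour weighs in exactly like one that took
# two weeks -- they're all draws from the same distribution.
print(forecast_queue_days([0.1, 0.5, 2, 5, 7, 14], tasks_in_queue=8))
```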