
> I often see complaints about Agile and SCRUM met with, "You're just not using it correctly."

To be fair, this is true a lot. Agile has quickly become "whatever you were doing before Agile but now with stand-ups and kanban boards."

The real problem with Agile is that it's kinda stupid to assume that companies will radically change their development process and that they won't just do "whatever we were doing before but with a couple bits and pieces from this other thing."

It's way easier to keep doing whatever you were doing before and just say that it's "Agile." See, for example, the number of teams that have long-winded standups even though that's pretty explicitly discouraged.

> What would not have helped, or been in any way productive, would have been spending time trying to figure out how many "points" my tasks for the day are worth. I still don't see how that system is actually different from just using hours beyond philosophical arguments.

Totally my take on the whole point system, but:

They're definitely related (hours and points), but basically the whole idea is vagueness. Figuring out if something will take 3 hours or 5 hours is kinda meaningless, because you really have no idea until you get into it. Saying it's worth "5 story points" might suffice as a rough representation of that 3-7 hour chunk.

If it ends up taking 20 hours, oh well - good to know for next time!

The idea is not to bicker over whether something will take 3 hours or 5 hours or 7 hours, but to just slap a vague label on it as "low-to-moderate difficulty." It's supposed to save time in that regard.

> Basically, I prefer a more focused and leaner version of the approaches.

I think any reasonable Agile advocate would tell you to discover what's working for your team and what isn't, and to adjust accordingly.

The point is mostly to move away from that extremely long-winded waterfall process and move towards rapid iteration. Story points and stand-ups are just tools for accomplishing that (by estimating velocity and having regular check-ins).




Agile requires flexible features since the deadlines are fixed. The point system is there to give an idea of how much you can cram in before the deadline and to negotiate with the client about what to axe from the release.

Easy bullshit scrum test: go to a manager and ask which features will be in the release and which will be axed.

If nothing is going to be cut from the requirement docs, the whole castle of points, priority, and speed just falls down (or you have a ridiculously lax deadline, the likes of which I've never seen in practice)


> The point system is there to give an idea of how much you can cram in before the deadline and to negotiate with the client about what to axe from the release

Well, it's more than that though - otherwise you'd just assign each developer 80 "points" worth of work for each two-week sprint.

> If nothing is going to be cut from the requirement docs, the whole castle of points, priority, and speed just falls down (or you have a ridiculously lax deadline, the likes of which I've never seen in practice)

Adding stories to an iteration in progress is also (I'm pretty certain) explicitly discouraged.

If your manager is doing that, you've already kinda lost the Agile battle.

I think that's sorta what people mean when they say people are doing Agile wrong - they do these things that are explicitly discouraged.

Whether there's any way to reconcile that, I can't say.


> I think that's sorta what people mean when they say people are doing Agile wrong - they do these things that are explicitly discouraged.

Usually how it goes. It's like critics of communism. Or libertarianism. Maybe--just maybe--it's a shit ideology that is totally unworkable.

I think my current place is on their, what, 8th "reset" now? Where they try to correct their process because it's "not working." Despite not listening at all to the developers who are telling them week after week exactly what is wrong: management forcing unreasonable deadlines on developers and throwing all process out the window.

It would be more amusing if I wasn't one of those caught in this demented whirlwind.

I think I'll start a new fad. Instead of Waterfall, I'll propose we call it: Typhoon Development. It's where management focuses all their attention on a single project for a short period of time, stressing the developers to the breaking point and eventually moving on. But not before completely destroying the code base in their wake.


> Maybe--just maybe--it's a shit ideology that is totally unworkable.

Could be. Much like communism, it's something that seems to work remarkably well for small groups of people but seems to have problems with large, entrenched groups.

There are enough agile success stories though that I don't think the ideology is completely silly.

> I think my current place is on their, what, 8th "reset" now? Where they try to correct their process because it's "not working."

It's highly unlikely that any methodology would have any effect on your current place, regardless of how brilliant or revolutionary it is.

I think it's tempting to dismiss Agile as stupid or unworkable because it's failed to produce any results at organizations like this (and often just made things worse).

The truth is really that your organization is highly dysfunctional and the problem is management. There's no development methodology that will fix that, and indeed it won't get better until those people see the light (unlikely) or leave/are replaced.

For more functional organizations, or even organizations where management is willing to admit they might be wrong, Agile can be a nice set of guiding principles for development work that makes life easier.


> There are enough agile success stories though that I don't think the ideology is completely silly.

Any link to one such account written by devs themselves, as opposed to the vast majority that are written by consultants selling agile packages?


I have used scrum in two completely different companies and it was working really well in both.

In both cases management was on board with scrum and knew what that meant. Everything was estimated by devs, not managers. If there was an external deadline, scope was adjusted to meet it.


They're explicitly banned in scrum. Scrum is probably the least agile of all the 'agile' methodologies.

I think of scrum as training wheels to get started for organisations that aren't used to operating in an agile fashion.

There are agile techniques that don't have sprints.


> Saying it's worth "5 story points" might suffice as a rough representation of that 3-7 hour chunk.

> If it ends up taking 20 hours, oh well - good to know for next time!

The idea behind points in this situation is to start equating that 5 points = 20 hours (for this type of problem, for this team). Think of it in terms of "this story is about the same complexity as this other one." Whether you call that 5 or 15 or 50 points is largely irrelevant, so long as all 5 point stories are roughly equivalent and a 10 point story is about double the work.

The rest of the uncertainty around complexity and people is supposed to wash out in the averages of the team: for example the fact that one person may typically do 12 points a week while another does 30 does not really matter for figuring out the overall velocity (team points per week), assuming the team does dozens of points per week total.
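
A minimal sketch of that averaging (all numbers and names below are made up, purely to show the arithmetic):

  # Illustrative only: team velocity washes out individual variation.
  def team_velocity(points_per_sprint):
      """Average story points the whole team finishes per sprint."""
      return sum(points_per_sprint) / len(points_per_sprint)

  def sprints_needed(backlog_points, velocity):
      """Rough forecast: sprints to burn down the remaining backlog."""
      return backlog_points / velocity

  # One dev might do 12 points a sprint, another 30; only the team total matters.
  history = [42, 38, 45, 40]            # team points completed, last four sprints
  velocity = team_velocity(history)     # 41.25 points per sprint
  print(sprints_needed(165, velocity))  # 4.0 sprints for a 165-point backlog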

Now that said, I've never successfully used points. Sometimes it was due to endless discussions trying to equate hours to points (mostly from management and others outside the dev team), poor estimates, radically different estimates or skill levels, and probably several other reasons I'm unaware of.

Now my team does t-shirt sizes for long-term things, and hours for the stuff in the sprint (estimated basically as it's started), but it's still often way off.

Does anyone have successful (long term) experience using points? What did you do to make it work? Does it actually have material advantages over other methods?


Yes, we have been using them successfully for quite a while. We only use it to estimate roughly what fits into a sprint.

No comparisons to time or between team members are made. I think that's really important. Management is on board with that. If it wasn't, it probably wouldn't work.


> Now that said, I've never successfully used points. Sometimes it was due to endless discussions trying to equate hours to points (mostly from management and others outside the dev team), poor estimates, radically different estimates or skill levels, and probably several other reasons I'm unaware of.

After two years of managing the same team of developers and gathering metrics, my PM[0] and I were able to successfully forecast tasks to within a few hours of precision for the team's 2-week feature-work sprints.

The key to doing this was having a behind-the-scenes multiplier that was applied to each developer's capacity. Visual Studio Team System (or whatever it is called today) actually supports doing this. Work was still entered in "hours".

I forget how we even did it so that the devs didn't see the multipliers, but it all worked out somehow.
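
The idea boils down to something like this sketch (the names, numbers, and calibration factors are invented for illustration; work is still entered in hours, but planning happens against the scaled capacity):

  # Hypothetical sketch of a hidden per-developer calibration factor.
  NOMINAL_SPRINT_HOURS = 80  # two weeks at 8h/day, before meetings etc.

  multipliers = {  # kept out of the devs' view, derived from past sprints
      "dev_a": 0.65,
      "dev_b": 0.80,
      "dev_c": 0.55,
  }

  def effective_capacity(dev):
      """Hours of estimated task work this dev can realistically absorb."""
      return NOMINAL_SPRINT_HOURS * multipliers[dev]

  team_capacity = sum(effective_capacity(d) for d in multipliers)
  print(team_capacity)  # 160.0 hours to plan against, not the raw 3 * 80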

Another key is to never have a task that is more than 8 hours. Break it down further, and further, and further. However, at some point there is a trade-off between spending time breaking tasks down and actually doing tasks. Learning how to break tasks down is a skill that only comes with a lot of forced practice. No one likes doing it, no one likes learning how to do it, and for a new team I'd say it might not even be possible. Some familiarity with the code base and problem space is needed before 2-4hr tasks can be generated for 100% of all work that needs to be done.

We used t-shirt sizing only for initial planning of what work was going to be done. In our case we had 2 types of requirements: internal engineering work (code quality, refactoring, adding new libraries, etc.) and feature requests coming mostly from external sources. We'd first triage based on t-shirt sizing, saying we had enough capacity in a sprint to do, for example, 2 large, 2 medium, and 4 small items.
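
As a toy sketch of that triage check (the sizes, counts, and capacities are invented for illustration):

  # Does a proposed mix of t-shirt-sized items fit the sprint's capacity?
  sprint_capacity = {"L": 2, "M": 2, "S": 4}   # e.g. 2 large, 2 medium, 4 small
  proposed = ["L", "M", "S", "S", "M", "L", "S"]

  def fits(proposed_sizes, capacity):
      """True if no size bucket is over-committed."""
      counts = {size: proposed_sizes.count(size) for size in capacity}
      return all(counts[size] <= capacity[size] for size in capacity)

  print(fits(proposed, sprint_capacity))  # True: 2 L, 2 M, 3 S all fit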

A senior dev (expensive!), myself, the PM, and the UX lead would all sit down in a room and use the planning method taught in BJ Fogg's boot camp (https://www.bjfogg.com/bootcamp.html), which sounds new-agey and stuff but 101% works and ends with everyone in the room agreeing on exactly what should be done[1]. Once the t-shirt-sized features were decided on, developers would spend a couple hours (or more...) breaking those features down into ~4hr tasks. At this point we'd discard, based on priority, any work that exceeded the available hours. Basically a second-pass filter. (If devs guesstimated that one of the other t-shirt items would actually fit, a quick breakdown would occasionally be done to see if we could squeeze it into the sprint.)

The overhead here was really high. The first half was ~3 hours of prep + meeting for senior leadership: myself, the UX lead, the PM lead, and stakeholders in that sprint's feature work. The second half was basically the better part of a day for the entire dev team[2]. The super cool advantage of this was that at our peak, all the work was super parallelizable. Dependencies between tasks were mapped out, and anyone could go onto the task board and grab work that they knew would slot in just in time with what their team members were doing. I cannot stress enough: having everything broken down into tasks of less than a day provides amazing benefits, and also clarity of thought.
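
The "grab whatever is unblocked" board amounts to something like this sketch (the task names and dependency edges are invented):

  # Tasks not yet done whose prerequisites are all complete are up for grabs.
  dependencies = {
      "layout-engine": set(),
      "storage-driver": set(),
      "render-widget": {"layout-engine"},
      "persist-settings": {"storage-driver"},
      "settings-ui": {"render-widget", "persist-settings"},
  }

  def ready_tasks(done):
      """Return not-yet-done tasks whose dependencies are all in 'done'."""
      return [t for t, deps in dependencies.items()
              if t not in done and deps <= done]

  done = {"layout-engine", "storage-driver"}
  print(ready_tasks(done))  # ['render-widget', 'persist-settings']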

We were kind of in a Goldilocks scenario though: a clean code base, all written by the developers there, multiple years of experience in the same code base, and a PM team who let us take a month+ to refactor when things got out of hand. Even then, it took us ~2 years before we were able to reliably hit our burndown every single sprint.

But wow, it felt good when the machine finally became perfectly oiled and we were able to ship embedded firmware every 6 weeks (3-week sprints: 2 weeks feature work, 1 week code quality[3]; every other version went out) without stress or crazy[4] hours.

[0] A good PM is a force multiplier, a bad PM is a force divider.

[1] It is the best planning technique by far. Once we switched over to his style of planning sessions, everyone walked out of the room genuinely happy with what we were going to do that sprint!

[2] Once we did a session over fresh, made-to-order piña coladas served in coconut shells. I think my PM almost cried from the lack of productivity that day.

[3] Bug fixing and code base improvements. This went up and down accordingly to stay under our bug bar. And after a dev came off of a feature push, they would be told to just work on whatever bugs they felt like for a sprint so as to avoid burn out.

[4] Not counting a couple devs who would refuse to go home, even after I cut their excess feature work and literally came in on weekends to drag them out of their chairs. There was 0 penalty for ICs for not shipping! Design didn't always help: they had cool new visual effects that some of the devs really wanted to see. Doing fancy visuals on an embedded processor with kilobytes of RAM is one of the coolest feelings ever.


well, that's 'cause it's easier to introduce a new thing bit by bit than all at once, innit


  > If it ends up taking 20 hours, oh well - good to know for
  > next time!
If a project is adding real productive value to the business, there won't be a next time, as the tasks should be intrinsically unique. A non-unique task suggests either poor product strategy, poor architecture, or that there's a pre-existing package or product that you should be using instead of reinventing the wheel. At best, unavoidable repetition signals deficiencies in the state of the art (e.g. the overall Linux or Windows ecosystem), but those are rarely the sorts of tasks where you see trouble communicating or planning.

  > The idea is not to bicker over whether something will take
  > 3 hours or 5 hours or 7 hours, but to just slap a vague
  > label on it as "low-to-moderate difficulty." It's supposed 
  > to save time in that regard.
The purpose of scoring is to enhance the precision of time management. Theoretically, you should be bickering over details! The point is to reliably track so-called velocity with increasing precision to assist executives with product planning and orchestration of resources. But as I mentioned above, if you're doing anything of real value you're constantly creating unique solutions to unique problems. Those solutions and problems should constitute the bulk of your effort. As you suggest, your scoring necessarily must be uncertain and imprecise to be accurate and reliable, at least when you're attacking worthwhile problems. But that's completely at odds with the function and aim of scoring in Agile, which is to achieve increasing precision.

Agile methodologies come from the world of industrial automation where you're trying to refine the process of building millions of identical widgets with designs that evolve relatively slowly. Increased precision in resource management (e.g. real-time inventory management) allows managers to squeeze more profit out of the product without any additional expenditures. It's one of the rare management tools where they can uniquely add measurable value. But repetition and slow evolution is pretty much the complete opposite of what you should see from an efficient and capable programming team. At best, something like SCRUM routinizes poor value-add. Agile isn't a recipe for success, it's a recipe for making failure more efficient. (Not the experimental, knowledge-building definition of failure.)

In the context of software, Agile tries to achieve the impossible, which is to mechanize innovation. It's fundamentally broken. If you cherry-pick aspects of SCRUM (or any particular Agile methodology) it's no longer SCRUM, nor is it novel. At best the Agile fad has introduced more programmers to, e.g., unit testing. But Agile is neither necessary nor sufficient to adopt unit testing; nor is unit testing always the most efficient use of time, especially when formal verification is practical. And while fast iteration is ultimately what you want to achieve, again the point of fast iteration is to remove as much repetition and wasted effort (e.g. orchestration latency) as possible. If you're iterating fast, you're spending less time on tasks that are easily articulable, let alone predictable, and thus not amenable to fine-grained, precise time management. This is a consequence of the very purpose of software programming--to shift tasks that exhibit regularity to the domain of computers.

As for me, I left my previous job the moment people started complaining that our velocity was "too fast"; that (I kid you not) we should slow down the pace of work on a big, high-priority project in a futile and misguided attempt to make our time management more predictable, like the more junior teams'. They were oblivious to the tension between accuracy and precision, and to the fact that the inaccuracy of our overly precise scoring was actually evidence (especially in light of other evidence) that we were working particularly efficiently.

As far as I can tell, my prediction of when the project would wrap up if my preferred strategy (for which I was effectively the deciding vote) wasn't followed was accurate nearly down to the week, for a project that was supposed to take 3-4 months, ended up taking 9, and was supposedly the highest priority for the entire (NYSE-listed) company. It easily could have taken 3-4 months if some of the engineers hadn't approached time management through the lens of Agile development. They would have been more tolerant of the predictive errors on the easy tasks and more suspicious of the estimates for the more complex ones, many of which were backloaded in this particular context. The lens of SCRUM in particular, and Agile more generally, primed them to conflate tasks; to see similarity and repetition where there wasn't any.



