Extreme Programming, a Reflection (8thlight.com)
99 points by joseph_cooney on Dec 18, 2013 | 66 comments



The things that I remember XP being controversial for were pair programming and TDD - real TDD, where you write the tests first and let the tests drive the design. And those are the two things that I don't really see as having caught on.

I mean, pairing is a fine approach when training someone up on a codebase, but it tends to be much more effortful for the guy in the driving seat, while the guy looking over his shoulder makes small comments and does researchy lookups. This makes it less than efficient when both people are at the same knowledge level. The value of the extra eyes looking for bugs is debatable; bugs are more thoroughly found with tests.

Test-first TDD is even less popular. Norvig vs Jeffries was enlightening - http://devgrind.com/2007/04/25/how-to-not-solve-a-sudoku/ .

Software does have a much heavier focus on testing than it used to, to the point that in many projects, everything is implemented twice - all features have two representations, one in the form of the implementation, another in the form of tests, and often with the lines of test code outnumbering the implementation.

But other things have suffered IMHO; making code easy to test tends to over-abstract it, making it more parameterized and exposing more implementation details of high-level abstractions.

APIs are often uglier, with a lot more exposed symbols to handle the parameterization: various bits and bobs ask for interfaces that have only one concrete implementation, which can only be created by a factory, and you have to learn the knack of instantiating the useful bits anew for each library. I've got Java squarely in mind, of course, and I'm convinced better language design could solve the problem with less harm to the software.
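
To make the shape of that complaint concrete, here is a rough Python sketch of the kind of indirection being described (the commenter has Java in mind, and every name here is hypothetical): an interface with exactly one real implementation, a factory whose only job is to build it, and a constructor parameter that exists mostly so tests can swap in a fake.

    # Hypothetical sketch of testability-driven indirection: one interface,
    # one production implementation, a factory, and injection added for tests.
    from abc import ABC, abstractmethod
    from typing import Optional

    class UserStore(ABC):                      # interface with a single real implementation
        @abstractmethod
        def load(self, user_id: int) -> dict: ...

    class SqlUserStore(UserStore):             # the only concrete implementation in production
        def load(self, user_id: int) -> dict:
            return {"id": user_id, "name": "alice"}   # stand-in for a real database query

    class UserStoreFactory:                    # factory whose sole job is to build that implementation
        @staticmethod
        def create() -> UserStore:
            return SqlUserStore()

    class GreetingService:                     # the code callers actually wanted in the first place
        def __init__(self, store: Optional[UserStore] = None):
            # The parameter exists mainly so tests can inject a fake store.
            self.store = store or UserStoreFactory.create()

        def greet(self, user_id: int) -> str:
            return "Hello, %s" % self.store.load(user_id)["name"]

A test can pass in any object with a load method, but every caller now has to navigate the interface, the factory, and the injection point just to say hello.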


I don't agree with "making code easy to test tends to over-abstract it". The dependencies are there no matter what, but they become more obvious when you try to test the code in isolation. I think the main benefit of TDD is the way it forces you to break the program apart into smaller, less dependent pieces. The end result is better structure and fewer dependencies between parts.


I fundamentally disagree with you.

If the goal in writing software is to reduce it to a set of pieces that plug together, I'd agree. But it's an arbitrary metric and not an absolute measure of quality, not by a long shot.

Note that I don't say "composable", because composability is something that needs to be designed in, and it isn't usually clear how to do it best until the third or so time around - the rule of three.

Furthermore, I don't say reusable, because reusability is something that also needs to be designed for. In particular, reusability that lets software evolve over time without breaking clients (reusers) of the abstraction demands a tight, constrained contract that is broadened carefully, while testable components demand broad and flexible contracts; otherwise not everything would be available to be tested.

Every part that testing has forced you to break out to be individually addressable is a part that you cannot remove in a refactoring that significantly changes the way a library solves its problem.

A library that has been broken into parts that are neither composable nor reusable is simply over-complex. Almost every extant Java library is like this!

Of course, if you just write end-user software in small teams, none of this is relevant to you. But it is crucial in library design, especially when client code is outside of your organization.


If you split it into little parts, and those parts have no sensible meaning on their own or no sensible interface and semantics -- then you're absolutely right. Testing such parts will be difficult, too, because of this.

But if instead of a monolith, you have a set of components with well defined interfaces that have simple semantics (that do not leak abstractions) -- whether or not these parts are re-usable in other contexts -- then you almost automatically have higher quality software:

* Easier to test means it will likely be better tested

* Well-defined interface and abstraction and a small implementation means that reviewing/correctness becomes easy. You only need to understand a small component to review it. "Obviously no bugs" rather than "no obvious bugs"

* Easier to split the work across developers

* Easier to comprehend the whole as a collection of its parts

The total number of lines of code, or even the total complexity, may increase relative to a monolithic design. But establishing correctness becomes so much easier.

You mention refactoring, too, and IME, refactoring can be both easier or more difficult, depending on whether it is within a single small component or across multiple ones.

If you add architectural/design changes -- then it is night and day. A monolith will likely have to be rewritten to make an architectural change reliably.

A set of components can easily be split, for example, so that a few components are moved to run on a different system with a network protocol between them.


A program made up of loosely coupled pieces has several advantages over one that is more monolithic. It is composable – all you have to do is put the parts together, then you have your complete program. It is easier to modify, because parts can be swapped out with minimal impact. It is easier to test, since the parts can be tested in isolation, which makes testing the complete program much easier.

As for reuse, I like this quote: "Don't aim for reuse. Write small, independent components you can reason about, and the right pieces for reuse will fall out." Jessica Kerr @jessitron on Twitter


How did that work out for the Kernel debates? :)

I don't think anyone disagrees that a "well designed and written" program of loosely coupled pieces has advantages over a monolithic one.[1] The debate is really over which is easier to do. And, the argument you are responding to is essentially, that it is easier to abstract out parts after you have done it in whole a few times. I know, personally, that that is a very compelling argument.

[1] Well, there probably is some debate on the feasibility of making things as loosely coupled as you would like. Back to the kernel debate, how many microkernels have survived with the device support of linux?


The kernel debate is a different one.

Software can easily consist of loosely coupled pieces as source code and be compiled and run as a single monolith with hardly any performance loss at runtime (versus coupled source code).


Maybe I misrepresent the debate, then. My understanding is that the debate was that there was no future in a monolithically sourced and run kernel. Linus took the position that while that had a certain appeal, he just wanted an operating system he could use. If anybody had managed to deliver on the microkernel dream, he would probably not have started the linux kernel.

That is to say, there is appeal to the "loosely coupled" dream of a software solution. Not just in source but in execution. However, the reality is that this is very hard. The contention in this thread is that thinking you can start with a loosely coupled set of parts is very ambitious.[1] It isn't that it is a bad goal. Just that it is akin to wanting to score well in a marathon without first running a few smaller races.

[1] Unless I am misrepresenting that, of course.


The real problem with the debate here is that a kernel is a place where performance is in your top two concerns, fighting it out with correctness, and among other things beating out "effort to create" and "skill level needed to create". If your software does not have performance as its absolute #1 criterion, and you care about the effort it takes to create it and the skill level needed to create it, you'll probably want to go back to easily isolated pieces that can be tested and understood without the whole system being understood, and that may not perform the absolute best that they could. (Although I find this style doesn't produce slow software on its own; at most it costs you a few more pointer traversals than you may like. Slow software is IMHO far more likely to come from highly coupled programs that everyone is terrified to optimize lest the whole thing come apart.)

Trying to use the kernel as a template to guide all software development is not a great idea.


I still think the loose concept of "loosely coupled pieces" is massive hand-waving over the difficulty of splitting something into multiple pieces. It is typically a Goldilocks search: when do you have too many pieces, and when do you not have enough?

I have yet to see a prescriptive approach to this that works. About the best I've seen is the holistic iterative approach. First make something, then look to see where you can isolate changes and make them. Repeat. If this fits a model of TDD, it is new to me.


I disagree that it's massively difficult. I think it's a skill that has to be learned. As I get better at it, I become faster putting together something loosely coupled than tightly coupled... because while writing something big and monolithic may have a momentary short-term advantage, when it comes time to, you know, make sure it works, correctly, my system is a lot easier to verify, test, deploy, and ship than the monolithic one.

Programming speed is not the only consideration when it comes to shipping software. Squishing something together as rapidly as possible may shorten the programming time (and then only for smaller systems), but only at the cost of shoving the time into all the other phases, usually at a ratio greatly in excess of 1:1!

In other news, programmers are generally pretty bad at estimation, and this is probably related. I suspect the estimates for the "squeeze something together" part are pretty good overall; it's the rest that breaks down.

And again, to be clear, I'm not disagreeing that it's challenging. I'm saying that rather than being fundamentally challenging in a way that can never be made easier, it is a skill that can be learned. That makes for a very different cost/benefit set than a task that is fundamentally difficult. And, frankly, few developers are taking the time to learn it; far more are sneeringly dismissive of the skills required to learn it. Rather a shocking amount of our "structure" in programming is still just covering over cowboy programming with terms management can get behind. I think XP actually avoided this, but the average bastardization of XP is a thin patina of words over cowboy programming.


I don't disagree that it is a skill that can be learned. Quite the contrary. I just feel that likely the best way to learn this skill is to first build a few systems that aren't loosely coupled. Consider the analogy of building cars: before you try to build a continuously variable transmission, first get a direct-drive one working. Then determine what would need to be messed with to put basic gearing in place....

Now the major trick here is that this breaks down for categories of problems that are effectively already solved. Which is why many of the examples are obnoxious to the point of being unhelpful. If you already know how to break something down into loosely coupled parts, I feel you should definitely do so.


The Linux kernel is not monolithic in the sense that people are talking about here. There are modules or parts in the Linux kernel that are composed using carefully crafted APIs. The key difference is that the design goals are different from those of most applications, and that Linux leverages every possible way to integrate software components on a von Neumann architecture. It goes well beyond what you normally do in a business app.

Linux is well designed and you can learn from it, but in order to get value from that study you need to be a skilled C programmer and at the top of your game. Therefore it is a bad example for people who mainly use other languages.

In addition, let's not forget that the SOLID principles, DRY, YAGNI and so on are not hard and fast rules. Every extreme programmer will regularly violate those principles. The purpose of the principles is to guide your work, to make you see clearly what you are doing, so that when you violate a principle you do it for a good reason.


Mayhap I am misrepresenting the origins of Linux. Simply put, it was less rigidly modular in construction than it could have been at the beginning.[1] Indeed, I think it is a perfect case for the argument of "first get it done, then figure out how to make it modular." Probably a better argument for keeping the model such that you can keep the full picture in your head when working on it. Not sure.

[1] Consider also this lovely thread: http://www.realworldtech.com/forum/?threadid=65915&curpostid...

    You can do simple
    things easily - and in particular, you can do things where
    the information only passes in one direction quite easily,
    but anything else is much much harder, because there is
    no "shared state" (by design). And in the absence of shared
    state, you have a hell of a lot of problems trying to make
    any decision that spans more than one entity in the
    system.


When people say that the Linux kernel is monolithic they really mean that it is not a microkernel architecture. In microkernels the designer says "Having a minimal microkernel with lots of separate modules is good therefore we will do everything that way". Linux simply says that modularity is good therefore we will use the appropriate integration technique at the appropriate times.

In the Linux kernel, you use printk to print messages on the console. That is modularity. There are device drivers. That is modularity. There is a range of loadable/configurable kernel modules for many things and these used to be more visible when more people would configure and build customized kernels. The Linux kernel has far more of these modules than earlier OSes that I used (TI DX10, UNIX 6th ed., Xenix, 3B2 Unix, SCO UNIX). The Linux kernel is linked into a monolithic binary that runs in kernel mode, but it is composed of many modules, some of which are integrated at link time and some of which are loaded dynamically (lsmod).

There is a good reason why the kernel is more monolithic than a business app, and that is that the kernel is doing a vastly different job at a vastly different layer of abstraction than a business app. You might also note that there are still lots of jobs for C programmers but most of them mention "embedded systems". That's what the Linux kernel is, a big featureful embedded system.

Perhaps some day someone will write a book on integration and cover all the different ways in which functionality can be integrated to produce an application. Most developers lean far too much on only one way of doing it, i.e. the link editor. For most apps, loosely coupled integration techniques are more valuable.


I don't think these terms are as different as you seem to be implying. When someone says "loosely coupled modules" they don't mean "printk" or similar functions. They mean such joys as power management and thread scheduling. These are somewhat modular in the kernel, to be sure. Are they so modular that you could TDD one or the other? My last understanding was not really.

Consider, you can have a device driver that runs fine "on its own" but crashes when run with another driver loaded. This is almost canonically the opposite of loosely coupled modules.


One of the biggest problems I find with people adopting TDD is that they create one test per class. This was never the intention. The idea was that you test units, which may be a single class but equally may be an aggregate made up of smaller classes. Tests should pin down external behaviour without overly restricting internals. Mocking at every class boundary leads to brittle test suites that aren't focused on APIs, not to mention being a PITA to refactor.
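
As a hedged illustration of that point (all class names here are hypothetical), the "unit" below is a small aggregate, and the test pins down its external behaviour without mocking the internal boundaries, so the internals stay free to change:

    # Hypothetical example: the "unit" is an aggregate, not a single class.
    import unittest

    class LineItem:
        def __init__(self, price: float, quantity: int):
            self.price, self.quantity = price, quantity

    class Discount:
        def __init__(self, rate: float):
            self.rate = rate
        def apply(self, amount: float) -> float:
            return amount * (1 - self.rate)

    class Order:                      # aggregate built from the smaller classes above
        def __init__(self, items, discount: Discount):
            self.items, self.discount = items, discount
        def total(self) -> float:
            subtotal = sum(i.price * i.quantity for i in self.items)
            return self.discount.apply(subtotal)

    class OrderTest(unittest.TestCase):
        def test_total_applies_discount(self):
            # Test the aggregate's external behaviour; no mocks for LineItem or
            # Discount, so the internals can be refactored without breaking the test.
            order = Order([LineItem(10.0, 2), LineItem(5.0, 1)], Discount(0.1))
            self.assertAlmostEqual(order.total(), 22.5)

    if __name__ == "__main__":
        unittest.main()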


I really appreciate your response. It's very measured and, to me, seems the result of experience, as opposed to gung-ho idealism. People need to realize that software is complicated, and that even though computers operate on unambiguous rules, the people creating the software often can't depend on concepts in the same way. Following test-driven development fanatically is not a shortcut to designing re-usable, composable software. There is no shortcut to that.


Pair programming is supposed to be two people thinking through the problem together. It works when both people are immersed in solving the problem together.

When it works well, having two brains working together brings the benefit of different perspective and experience. It gives the opportunity to riff off each other's ideas. You spot issues with design and implementation earlier because having to communicate your ideas means working harder on them before you try to turn them into code.

I find it fun to work with someone else who is smart and engaged. It's magic when you become warmed up enough that things really start to flow. I've actually managed to get into flow before while working with someone else.

That said, it's pretty difficult to get right. I found it quite hard to let someone else see my process. If both sides aren't engaged in problem solving it can be really boring. I've also found that it takes me a while to figure out how to work productively with new people. The dynamic between any particular pair of people is a bit different. I think you need to build trust with your pair.


A "simpler" version is discussing the problem with your colleague in front of a whiteboard (drawing almost always seems to help). Once you've worked out how to implement it, you go off and implement it yourself.


>But other things have suffered IMHO; making code easy to test tends to over-abstract it, making it more parameterized and exposing more implementation details of high-level abstractions.

I think this highlights a deficiency in testing tools. It's quite hard to, for example, change the system time when running a test which often means that you have to abstract out that part from the method you're testing and pass it through as a parameter.

You shouldn't need to do that (in python you don't!).
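
For example, a minimal sketch of the sort of thing being alluded to: Python's standard unittest.mock can patch time.time for the duration of a test, so a hypothetical is_expired function can read the clock directly and still be tested at a fixed instant.

    # Minimal sketch: patching the clock in a test with the standard library.
    import time
    import unittest
    from unittest.mock import patch

    def is_expired(issued_at: float, ttl_seconds: float) -> bool:
        # Production code reads the system clock directly.
        return time.time() - issued_at > ttl_seconds

    class ExpiryTest(unittest.TestCase):
        def test_token_expires_after_ttl(self):
            # Freeze "now" at t=1000 for the duration of the test.
            with patch("time.time", return_value=1000.0):
                self.assertTrue(is_expired(issued_at=0.0, ttl_seconds=500.0))
                self.assertFalse(is_expired(issued_at=900.0, ttl_seconds=500.0))

    if __name__ == "__main__":
        unittest.main()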

Also, there are a lot of APIs out there with very real-world effects and integrations that are pretty cumbersome to build mock-ups of. Most API providers just don't offer them, either.

Mocking what would happen, say, when a twitter oauth token expires, isn't as easy as it should be.

On the plus side, UI testing tools seem a lot better nowadays.

But yeah, there's a serious dearth in good testing tools and bad language design (cough Java) that ends up making code unreadable.


> It's quite hard to, for example, change the system time when running a test which often means that you have to abstract out that part from the method you're testing and pass it through as a parameter.

Passing the time into your function isn't necessarily a thankless chore, however. It's actually quite similar to strengthening your induction hypothesis when doing a proof. Now your function doesn't just claim to work correctly for one time (the implicit clock time) but for all times. This stronger claim (if true) makes it easier to reason about the code that relies on the function, including not only the testing code but also the rest of your application code.
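
As a hedged sketch of that idea (hypothetical function and values): once the time is a parameter, the function makes a claim about every instant, and both tests and callers can pick the instants they care about.

    # Sketch: the function claims to be correct for *all* times, not just the current one.
    from datetime import datetime, timedelta

    def is_expired(issued_at: datetime, ttl: timedelta, now: datetime) -> bool:
        # No hidden clock; the caller supplies the time explicitly.
        return now - issued_at > ttl

    issued = datetime(2013, 12, 18, 12, 0, 0)
    ttl = timedelta(hours=1)

    # The test (and any caller) can reason about arbitrary points in time.
    assert not is_expired(issued, ttl, now=datetime(2013, 12, 18, 12, 30, 0))
    assert is_expired(issued, ttl, now=datetime(2013, 12, 18, 13, 30, 0))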


The question of time in relation to tests is interesting. If you write the code with the mindset that you should be able to test it (including time dependent behavior), you can end up with testable code without too much trouble. The key is to make time external to the code. I just blogged about this in "TDD, Unit Tests and the Passage of Time" http://henrikwarne.com/2013/12/08/tdd-unit-tests-and-the-pas...


Regarding pair programming, I also see it as paying the salaries for 2x developers, yet gaining very little from it. Productivity may even be lower than with a single programmer, at least in my experience.

To elaborate, I've tried pair programming myself and it was completely inefficient when we tried it. I'm not going to dismiss it entirely though, perhaps we approached it the wrong way. Personally I just need a bit of space before I can start focusing in-depth about certain problems.

This is also why I like to be well-prepared before attending team design decisions, because coming up with good ideas "right there and then" is difficult for me.


I think of pair programming like dancing. How much practice does it take to be able to dance with a partner before it's natural? More than a week, that's for sure.

I pair on all production code at work with only two other guys. I've worked with them for the last year. Together, any combination of the three of us is easily twice as effective as the fastest in our team. Something about the rhythm of the session, alternating roles, support when tackling boring parts, and the camaraderie frees us up to just get stuff done.

But, we work in a very complex domain that, a year in and many seminars by product later, we are only barely starting to grasp, with a large, difficult-to-grasp system, sometimes solving problems just outside our comfort zone. It's not just CRUD and forms. So, maybe pairing is the four wheel drive of the programming world: it uses more gas on the highway, but depending on your terrain, it might be the most fuel-efficient way to get across a mountain.


> I've tried pair programming myself and it was completely inefficient when we tried it

Or: I've tried Vim and my writing/editing speed halved. Or: I tried APL but it took me half a day to write one line of code. Or: I tried playing guitar and it sounded horrible.

I get the feeling that maybe the outcome would be different if you'd try doing it for a while longer. No guarantees, though.


> Regarding pair programming, I also see it as paying the salaries for 2x developers, yet gaining very little from it.

I think the trick is to use it when developers feel it's necessary to stay productive. No point having someone slogging away at something they find difficult and frustrating if a second pair of eyes and maybe some more specific knowledge of the area can help.


I had similar experiences with pairing; often it was mandated by folks in charge who didn't seem to have a great grasp of what the benefits were. Productivity/velocity tended to suffer noticeably on many teams. It worked for others, but I think most times it wasn't understood fully why it worked for a given team.

What bothered me about how I saw pairing used was that people seemed to make blanket assumptions about its benefits. Many times I saw people pairing up on trivially easy tasks. Seemed to me that pairing was a lot like everything else: it can be done well or poorly.

Pairing should be a naturally occurring process IMO; ie I don't know how to best accomplish a task or 'story', so I ask a team member w/expertise or experience to help point me in the right direction. If I need help beyond that, it becomes a pairing/knowledge-transfer exercise. I came to refer to it as "informal pairing". In general I tended to gravitate toward pairing on the exceptionally difficult tasks or ones that would have far reaching design implications.


> Test-first TDD is even less popular

I never bought into TDD, although I do write unit tests for most of the code where it makes sense.

No one has been able to show me a usable way to do TDD when coding native UIs, mobile OS, embedded systems or when using third party libraries not built with testing in mind.

Plus TDD makes it very hard to properly design algorithms and data structures, which should be done beforehand at the whiteboard.


"No one has been able to show me an usable way to do TDD when coding native UIs, mobile OS, embedded systems or when using third party libraries not built with testing in mind."

I agree. However, I do not consider this a strike against testing; I consider it a strike against native UIs, mobile OSs, embedded systems, and third party libraries that don't support testing. You may not do TDD (I generally don't), you may not strive for 100% coverage, but testing is a fundamental aspect of serious software engineering, and anything that actively fights your attempts to test it is a big strike against that tech. I only use the ones that fight you that hard because there's unfortunately no competition, but it's still a disgrace. In 2013, testing ought to be a fundamental first-class concern of any new UI library, yet here we are.


Have a look at Test-Driven Development for Embedded C by James Grenning for a viewpoint on how TDD might work in the embedded world. I found it to be a great resource.

http://pragprog.com/book/jgade/test-driven-development-for-e...


This is a fantastic book!


Thanks, I know that book.


Why do you think that TDD prohibits you from using a whiteboard to design your algorithms and data structures beforehand? As far as I can tell, doing that work, then writing your first test is just TDD done well.


Because that is what TDD advocates sell at agile conferences: design by coding.


I practice what essentially amounts to pair programming very regularly. On the other hand, any kind of TDD seems to be mostly moot for the projects I work on. I assume this is because most of my projects involve disparate components with continuously changing interfaces, and just getting together works better than producing an interface specification complete enough to base any tests on (and to be clear: it's not about two people being in the same room and each hacking away at his own component, but about both working alternately on each component).

I'm somewhat surprised that this even works well when pairing a programmer with a hardware engineer, but that requires management that believes in their engineers' skills (rapid iteration and hardware tends to get very expensive very fast) and programmers who have meaningful insight into hardware.

On the other hand projects I work on are probably not very representative of anything as most of them are weird :)


My recommendation on pair programming is that it is a good idea, you should try it, and you should do it 5%, 10%, 20% of the time, whatever you want; I don't think it is a good idea to do it 100% of the time. I see two main benefits of pair programming:

  - Knowledge transfer. Learn new and better ways of working: IDE usage, shortcuts, etc.

  - On a complex piece of software, it is better to have two sets of eyes checking everything.

TDD: I enjoy doing it, though I am not strict about doing tests first; most of the time I don't. I usually shoot for 70% coverage. Indeed, the tests take a significant part of the effort, and often they are brittle and you need to refactor them, but I really think they improve your overall design and your confidence in the robustness of the system.


I was thinking the same thing reading this. I don't see many instances of pair programming. Granted, my data points are limited, but that seems to be the exception, not the rule.

I've seen more people extoll (and consultants sell) automated testing than folks actually use it, let alone TDD. As an idea, I get it, but the implementation still seems mixed.


In regards to pair programming - I realized recently that I'm opposed to it just because I don't enjoy it. I don't like having someone look over my shoulder while I'm programming, or looking over someone else's shoulder while they are programming. It is exhausting. Collaborating in front of a whiteboard for a few hours is fine - I just don't want to do it all day. I also don't like feeling guilty about taking a 5 minute break to read Hacker News, or my personal email.

And I'm in the programming industry because I enjoy programming, so I want to work in an environment that I enjoy - and in today's market, I have the luxury of picking my employer.

I'm sure not everyone feels the same way, but I suspect I'm not alone in that opinion.


You looking over someone's shoulder, or them looking over yours, is definitely not pair programming though. It is exhausting, I agree with that part.


I'm exactly the same. And it annoys me greatly that being hypersocial and actually wanting to sit around in noisy environments is considered a prerequisite for many jobs. Not everyone wants to work that way.


From my experience it's not all or nothing. In fact I would say I have never done an XP project where we did everything, but we almost always add tests (what %, simply depends on the project), and they always gave us tremendous insights. We did some level of pairing (mostly very little, on either super hard or super critical code) and it always helped us get through the code or provide what mgmt needed: more than one person understanding something that was too critical to leave to only one person. People need to enjoy what they do, and employers need to understand that if you want strong talent you need both to work as a team (not shove "do it this way" down people's throats) and XP requires BUY IN. - BTW I am a CTO who has been pushing XP on all dev projects for the past 5 years and I don't code :-)


Putting my headphones in and drifting into my own private world of code is, to me, one of the simple pleasures of life. I'm OK with short meetings and tight, small schedules and the like, but put someone watching my screen while I'm coding and I could easily commit a homicide.


I don't enjoy pair-programming. I avoid it as much as I can get away with. But looking at the results, it's an inescapable fact that I produce higher-quality code when I do - so for those critical pieces that need to be bug-free, I force myself to do it.


Is that personal experience or a general research result?

Is the result better than with careful code reviews [of the critical pieces] (both after writing, but also short checks during development over a code listing and coffee)?

Intuitively, everything with coffee involved ought to be better! :-)


Just personal experience.

Better than some theoretically perfect practice of careful code reviews? I don't know. Better than code reviews as actually implemented everywhere I've worked? Yes. (In particular I find it's really hard to maintain the discipline of carefully going through each other's code when you know that most of the time you won't find anything)


Coffee is certainly my pair! :)


Check out my reply on another comment: https://news.ycombinator.com/item?id=6927170.

That being said, I do miss solo work, because I could get into the zone, and even if I was going down the wrong path, it was ME doing it, master of my own domain. It feels like I'm alone on my boat sailing into uncharted waters, an adventure.

At its best, pairing feels like being part of a tactical response team, at its worst: like there is a machine that turns my brain cycles into money, and they let me keep some of the money at the end.

Edit: Rereading both comments, apparently pairing makes me wax with metaphors, like a... nah


The XP book was hugely important for me. I read Kent Beck's article on Extreme Programming in IEEE Software in October 1999, and got the book as soon as it came out. For the first time I saw a methodology that reflected how I actually liked to work. I hadn't done pair programming then, but I was working in small increments, with lots of tests, rewriting, etc. - more agile, in short. Previously, I wrote programs despite the methodology (like RUP, or company-internal methodologies), but here was a system that actually helped. Truly revolutionary at the time.


The strange thing is that this is how everyone begins to program: you write a little code and get something working. Then you add a little more, reorganise things a little and keep going. It's a natural process that somehow gets trained out of us in CS courses. We're taught how to plan a whole system and then build it. Long-running projects interacting with an unfathomable number of users put paid to that, though. I think the thing extreme programming got most right is that software is about people. Telling the computer what to do is the easy bit. Figuring out what the users want it to do is the hard part.


The main thing for me is that its virtually impossible to build a complex system (essentially complex that is) without having good feedback and then adjusting the design to fit. That feedback comes from testing, review and the customer. The shorter the feedback cycle the quicker you settle on the target. The XP book helped spur this on in an age where a lot of people had lost the run of themselves. However these ideas were around and practiced since software development started:

http://www.craiglarman.com/wiki/downloads/misc/history-of-it...

And before that for other activities: "Plans are nothing; planning is everything." - Dwight D. Eisenhower


Exactly. You grow software. I really like this quote (by John Gall):

"A complex system that works is invariably found to have evolved from a simple system that worked."


Another post by Uncle Bob saying nothing. His ability to say nothing never ceases to amaze me -_-.

He's one of the good guys, that's for sure - but every time I read his blog I keep waiting for that something new and it never comes.


In the short section entitled "Success" toward the end of the article, the author repeats the sentence "Extreme Programming succeeded" five times, slightly rephrased each time. To be perfectly honest, I find it hard to comment on this without sounding derisive. There is no hint at what "succeeding" even means for a software methodology, and even if we assume we know what "succeeding" means in this context, there is not a shred of evidence for the truth of the statement. A poorly defined statement repeated five times without evidence or proof.


It's still relevant - I doubt that 2% of teams do all 12 (mostly it's the pair programming) but it is without a doubt the seminal work on software in business of the last 15 years.


In the enterprise world I move in, it is mostly a checklist to say a project is agile.

Many companies are still run waterfall-like, or without any kind of process.

When we bring in agile methodologies into the project, it starts slowly, but eventually everything is in place and everyone is doing it in an agile way (XP, SCRUM, whatever).

When the first project escalation arrives, or the deadlines turn out to be impossible to meet, the developers slowly start going back to the original way of working.

In the end you get mini-waterfall projects with a sprint duration, but the management puts agile on the project bullet points.


I know the feeling - and my view is a bit brutal - but it's down to tools and people.

For most enterprises, automated build, test and deploy (i.e. CI/CD) is the one missing tool, and the one absolutely necessary tool, to capture and keep the benefits of agile - it's capital.

And also for most enterprises you could lose 1/3 of the IT staff without noticing.


As edw519 tweeted:

(Great Process + Average People) < (Average Process + Great People)


Fully agree. In the end what we get from those projects is waterfall with CI, which is still better than how things used to be in the old days.

Although one has this feeling of playing Don Quixote all the time.



I love pair programming. Frankly though, it takes several hours of committed time to be really useful. It's hard to get two production programmers to sit down together for a few hours to focus on this. To get around the time hurdle, I've proven to upper mgmt that the amount of bug fixes or efficiency improvements is easily greater than the amount of product the two alone could have achieved. There's a Tesla-ish resonance that occurs when two people are focusing on the exact same problem. LOVE IT.


The Ten Year Agile Retrospective: http://msdn.microsoft.com/en-us/library/hh350860(v=vs.100).a...

From June 2011, this article covers four key success factors for the next 10 years of agile. Great thinking on software development.


I was developing software back then when extreme programming came about. It wasn't controversial at all. In fact, everyone I knew thought it was a much better way of programming than the current waterfall method. The only problem was inertia from management, which is always the case.


I like a lot of what XP brought to the table, except pair programming. I find it's genuinely not effective most of the time. It is very effective when teaching someone. But in general I have found it to be slower, produce worse code, reduce accountability, and cause frustration. I'm currently working on a long blog post detailing my thoughts on it.


I'm interested in seeing your blog post - subscribed in anticipation! I definitely agree with the slower bit, but my experience has shown that paired-on code is generally higher quality, as the engineers discuss what really should be done and the semantics of what's being built.

There are definitely tasks for which I would avoid pairing - specifically those that are either very ill-defined (like a spike or bug hunt) or too easy (write some data transformations). However, the tasks that should result in a clean, well-tested API with edge cases taken care of tend to be higher quality when pairing, in my experience.


I was pretty shocked to see a programmer that delusional about XP and its influence. Then I got to the bottom and saw it was an XP snake-oil salesman writing it, not a programmer. Surprise. All 12 of those things predate XP, and most of them are situational. There is virtually never a case where following all 12 of those things makes sense. But that is exactly what XP was: it demanded you do all 12. Saying "because some people still do some things in that list, XP mattered" when those things were being done before XP came out is absurd. "Which ones don't you do?" All of them, except avoiding overtime.



