Massive Downside?

14 05 2016

A friend of mine sent me a link to this article in Inc. magazine by Adam Fridman: The Massive Downside of Agile Software Development.  Since I’ve been doing Agile software development now for twelve or thirteen years, I was eager to learn about this massive downside.

Here’s what he has to say, in five points.

1. Less predictability.

For some software deliverables, developers cannot quantify the full extent of required efforts. This is especially true in the beginning of the development life cycle on larger products. Teams new to the agile methodology fear these unknowns. This fear drives frustration, poor practices, and often poor decisions. The more regimented, waterfall process makes it easy to quantify the effort, time, and cost of delivering the final product.

For all software deliverables, developers cannot quantify the full extent of required efforts.  This is because every time a developer builds something, it’s the first time he’s ever built it.  (If it weren’t the first time, he’d just use what he built the first time instead of doing it over.)  If he is very experienced with similar things, he might have an idea how long it will take him; but he’ll still run into unfamiliar issues that will require unanticipated effort.

What the more regimented, waterfall process makes it easy to do is lie about the effort, time, and cost of delivering the final product, and maintain the illusion until nearly the end of the project, which is where all the make-or-break emergencies are.  Anyone who estimates that a software project will reach a specified scope in eighteen months is just making stuff up, whether he realizes it or not.  Heck, the team I just rolled off made capacity estimates every two weeks for almost a year, and hit it right on the nose only once.  And that time it was probably accidental.

If a collection of actual boots-in-the-trenches developers in the middle of a project can’t give accurate estimates for two weeks in the future, then a project manager isn’t going to be able to give accurate estimates for eighteen months in the future before anybody really knows what the project will involve.

However, we were able to give data–real historical data about the past, not blithe fantasies about the future–on those discrepancies to our product owners every two weeks.  Agile teams are no more able to make long-range predictions than waterfall teams are: but at least they’re honest about it.
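
Just to make that concrete: here is a sketch, in Java, of the sort of committed-versus-completed history I mean.  The numbers are invented for the illustration, not our team’s actual figures; the point is that it’s a report about what already happened, handed to a product owner without pretending to see eighteen months ahead.

```java
// A made-up illustration (not real project data): committed vs. completed
// story points per two-week iteration, reported from history rather than
// predicted up front.
public class VelocityReport {

    public static void main(String[] args) {
        int[] committed = {21, 23, 20, 22, 25};
        int[] completed = {17, 24, 16, 22, 19};

        int totalCommitted = 0;
        int totalCompleted = 0;
        for (int i = 0; i < committed.length; i++) {
            System.out.printf("Iteration %d: committed %d, completed %d%n",
                    i + 1, committed[i], completed[i]);
            totalCommitted += committed[i];
            totalCompleted += completed[i];
        }
        System.out.printf("Historical completion rate: %.0f%%%n",
                100.0 * totalCompleted / totalCommitted);
    }
}
```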

2. More time and commitment.

Testers, customers, and developers must constantly interact with each other. This involves numerous face-to-face conversations, as they are the best form of communication. All involved in the project must have close cooperation. Daily users need to be available for prompt testing and sign off on each phase so developers can mark it off as complete before moving on to the next feature. This might ensure the product meets user expectations, but is onerous and time-consuming. This demands more time and energy of everyone involved.

3. Greater demands on developers and clients.

These principles require close collaboration and extensive user involvement. Though it is an engaging and rewarding system, it demands a big commitment for the entirety of the project to ensure success. Clients must go through training to aid in product development. Any lack of client participation will impact software quality and success. It also reflects poorly on the development company.

I think these are both good points.  You should only expend real effort on software projects you want to succeed.  The ones you don’t care about, you shouldn’t waste the testers’ or customers’ time on.

Or the developers’, either.

4. Lack of necessary documentation.

Because requirements for software are clarified just in time for development, documentation is less detailed. This means that when new members join the team, they do not know the details about certain features or how they need to perform. This creates misunderstandings and difficulties.

Have you ever been a new member joining a development team?  Me too.  Have you been a new member joining a development team that has its codebase documented?  Me too.  Have you ever gotten any information out of that documentation that you were confident enough in to use without having to ask somebody else on the team whether it was obsolete or not?  Me either.

Comprehensively documenting an emerging system on paper is a losing proposition that turns into a money pit and a useless effort.  Comprehensively documenting a nonexistent system on paper is even worse.

You know what kind of documentation of an emerging system isn’t useless?  Properly written automated tests, that’s what kind.  First, they’re written not in prose that has to be translated in an error-prone operation to technical concepts in the reader’s head, but in the same code that’s used to represent those technical concepts in the codebase the reader will be dealing with.  Second, they’re always up to date, never obsolete: they have to be, or they’ll fail.
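
For instance, here’s roughly what I mean, sketched in Java with JUnit.  The PasswordPolicy class is invented for the illustration; the point is that the test names and assertions state the rules in the same language the codebase uses, and they can’t quietly go stale.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class PasswordPolicyTest {

    // These tests are the documentation: if the rules change, they fail.
    @Test
    public void passwordsShorterThanEightCharactersAreRejected() {
        assertFalse(new PasswordPolicy().accepts("short1!"));
    }

    @Test
    public void passwordsNeedAtLeastOneDigit() {
        assertFalse(new PasswordPolicy().accepts("nodigitshere!"));
        assertTrue(new PasswordPolicy().accepts("hasadigit1!"));
    }
}

// Minimal implementation so the sketch stands on its own.
class PasswordPolicy {
    boolean accepts(String candidate) {
        return candidate.length() >= 8 && candidate.matches(".*\\d.*");
    }
}
```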

And if you want new members to come up to speed quickly, don’t give them technical documentation–even clearly written, up-to-the-minute technical documentation.  Instead, pair them with experienced team members who will let them drive.  That’s the fastest way for them to learn what’s going on: much, much faster than reading technical documentation–or even automated tests, for that matter.

Can’t spare the time to pair?  Deadline too close?  Need everyone on his own computer to improve velocity?  Well, first, you don’t understand pairing; but that’s a separate issue.  Here’s the point: your new guy is going to be pestering your old guys one way or another, whether he’s trying to find out which parts of the technical docs are obsolete or whether he’s officially pairing with them.  Pairing is much faster.

5. Project easily falls off track.

This method requires very little planning to get started, and assumes the consumer’s needs are ever changing. With so little to go on, you can see how this could limit the agile model. Then, if a consumer’s feedback or communications are not clear, a developer might focus on the wrong areas of development. It also has the potential for scope creep, and an ever-changing product becomes an ever-lasting one.

Is the implication here that the waterfall model handles this situation better?

Seriously?

In a properly run Agile project, there is no predetermined track to fall off.  The project goes where the customers take it, and the most valuable work is always done first.  If the communications are not clear, the discrepancy shows up immediately and is instantly corrected.  There is no scope creep in an Agile project, by definition: we call it “customer satisfaction” instead.

Since the most valuable things are done first, the product is finished when either A) the money runs out, or B) the business value of the next most valuable feature is less than what the developers would cost to develop it.

On the other hand, if the customers continue to point out more business value that can be exploited by further development, that’s a good thing, not a bad thing.  The customers are happier, the company’s market share increases, and the developers continue to have interesting work to do.


Now, the fact that I disagree with much of what Mr. Fridman says in his article should not be taken to mean that I think Agile has no downside.  I think it has at least two major problems; but Mr. Fridman’s article didn’t mention either of them.

4 responses

15 05 2016
Raul Miller

I think Sturgeon’s Law applies here.

And, on a related note – those automated tests can themselves be buggy, misleading and/or obsolete.

Which brings me back to the documentation thing. Documentation about purpose (why? why? why? why was this code written? why was this test written? …) can help immensely.

Mostly you should want to stay out of the code development mindset when writing documentation, and you should probably want to stay in the user’s context space. That means real live users (the good, capable ones, not the trolls who happen to be spending money) need to have some say in the revision cycle for some of the docs. (You might also need some basic principle docs – what ties “business need” to “physical world” if you are working on a project that accomplishes something useful.)

Documentation is not about solving problems for developers, but well written documentation can be quite useful for bringing unfamiliar people up-to-speed. (And if your business has people implying that everyone must be automatically familiar with everything you should maybe replace those arrogant sorts with competent people.)

Anyways… the big problem tends to be finding and focusing on the issues which matter, and that is rarely as easy as it seems.

15 05 2016
Dan Wiebe

Tests and code are like password and confirmation fields: they give you the chance to say exactly the same thing twice. In the tests-and-code case, however, you not only say the same thing twice, you say it in two completely different ways (or at least you should). The objective of password and confirmation fields is that it’s unlikely that you’ll accidentally get it wrong twice in exactly the same way. Similarly, it’s unlikely that you’ll mis-express yourself in exactly the same way in both tests and code–much more unlikely than that you’ll mis-express yourself if you have just code and no tests. So in the overwhelming majority of cases, tests with bugs in them will fail until you get the bugs out of them (or, if you’re not paying attention, into the code as well; but we try to avoid that).

Now, tests can easily be _incomplete;_ that happens when inexperienced TDDs write a line of code that is not demanded into existence by a failing test. The discipline to avoid that is the sort of thing that is conveyed best by pairing.
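
Here’s a contrived sketch of what I mean by saying the same thing in two completely different ways (the LeapYear class is made up for the example): the test states the rule as hard-coded facts, the code states it as a formula, and a mistake would have to be made in both notations at once to slip through.

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class LeapYearTest {

    // The rule expressed as concrete facts...
    @Test
    public void centuriesAreLeapYearsOnlyWhenDivisibleBy400() {
        assertTrue(LeapYear.isLeap(2000));
        assertFalse(LeapYear.isLeap(1900));
        assertTrue(LeapYear.isLeap(2016));
        assertFalse(LeapYear.isLeap(2015));
    }
}

// ...and the same rule expressed as a formula.
class LeapYear {
    static boolean isLeap(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }
}
```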

Misleading? I don’t follow; I’ll need an example.

Obsolete? Yes, as long as the code they’re testing is also obsolete. Automated tests can’t find dead code. That’s not their function. But neither can the waterfall process.

Documentation…it sounds like you’re talking about user documentation, not technical documentation. If user documentation is necessary (for example, a mobile app or a customer-facing website should not need separate user documentation, or else you have a big UX problem), then it should be developed along with the code, although to avoid too much churn it probably ought to stay an iteration or two behind the code, documenting only features that have a chance of remaining stable as the product owners refactor the requirements and the developers refactor the code to address new challenges.

As for what you call documentation about purpose–why? documentation–I have found that to be the most pernicious kind, much more annoying than the “// set y to 3” kind of documentation. When the latter kind of documentation becomes obsolete, it’s easy to tell, because the line of code it’s documenting will have ceased to mention y or 3 or both. When the purpose documentation becomes obsolete, it can be half a day or more before I discover that I’ve been sent in the wrong direction.

My general MO these days is to just assume all the documentation is wrong and figure out what’s going on by looking at the tests. Where there are good tests, it seems much faster that way. Where there aren’t good tests, I either ask somebody or–if that’s not an option–write some of my own tests to figure out what works and what doesn’t.
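
To show what “write some of my own tests” looks like, here’s a characterization test, probing java.lang.String as a stand-in for whatever undocumented class I’m really exploring.

```java
import static org.junit.Assert.assertArrayEquals;

import org.junit.Test;

public class SplitCharacterizationTest {

    // Not a spec, just a probe: write down a guess, run it, and let the
    // failure message correct it. The finished test then records what the
    // code actually does today.
    @Test
    public void whatHappensToTrailingEmptyFields() {
        String[] pieces = "a,,b,,".split(",");

        // First guess was {"a", "", "b", "", ""}; the failure said otherwise:
        // trailing empty strings are dropped.
        assertArrayEquals(new String[] {"a", "", "b"}, pieces);
    }
}
```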

Admittedly there have been a few occasions when I have spent a fair amount of time figuring something out on my own only to discover later that all that time there had been a comment right next to it saying exactly what I spent all that time figuring out; but in my experience comments like that are pretty rare in active codebases.

15 05 2016
Raul Miller

Obsolete tests can be a good example of misleading tests.

But tests can also mislead by focusing attention away from the business issues (for example: dealing with systemic failure modes), and towards code structure (for example: dealing with some internal state).

More specifically, the whole “test first” thing requires some really good judgement about what is relevant and what is not. It also requires a developer focus on things that tests can be written for:

If what you are solving is a physics problem involving moving parts, or a user interface “feel” issue, or something like that, the most important testable issues can be what a TDD developer would reject as “integration tests”. It’s only when someone else has done the hard part and you’re working against a solid API with concrete specs that you can settle down and write the tests which let you define the algorithms. In some business contexts, that is not at all a problem. In others, though, it can be a serious problem.

Anyways, I guess my point is that TDD by itself is a tool. It can be a great tool. It can be a misused tool. But I imagine you knew that already.

15 05 2016
Dan Wiebe

Sounds like it’d be fun to spend some time pairing with you. Like as not, we’d both learn something.
