Massive Downside?

14 05 2016

A friend of mine sent me a link to this article in Inc. magazine by Adam Fridman: The Massive Downside of Agile Software Development.  Since I’ve been doing Agile software development now for twelve or thirteen years, I was eager to learn about this massive downside.

Here’s what he has to say, in five points.

1. Less predictability.

For some software deliverables, developers cannot quantify the full extent of required efforts. This is especially true in the beginning of the development life cycle on larger products. Teams new to the agile methodology fear these unknowns. This fear drives frustration, poor practices, and often poor decisions. The more regimented, waterfall process makes it easy to quantify the effort, time, and cost of delivering the final product.

For all software deliverables, developers cannot quantify the full extent of required efforts.  This is because every time a developer builds something, it’s the first time he’s ever built it.  (If it weren’t the first time, he’d just use what he built the first time instead of doing it over.)  If he is very experienced with similar things, he might have an idea how long it will take him; but he’ll still run into unfamiliar issues that will require unanticipated effort.

What the more regimented, waterfall process makes it easy to do is lie about the effort, time, and cost of delivering the final product, and maintain the illusion until nearly the end of the project, which is where all the make-or-break emergencies are.  Anyone who estimates that a software project will reach a specified scope in eighteen months is just making stuff up, whether he realizes it or not.  Heck, the team I just rolled off made capacity estimates every two weeks for almost a year, and hit one right on the nose only once.  And that time it was probably accidental.

If a collection of actual boots-in-the-trenches developers in the middle of a project can’t give accurate estimates for two weeks in the future, then a project manager isn’t going to be able to give accurate estimates for eighteen months in the future before anybody really knows what the project will involve.

However, we were able to give data–real historical data about the past, not blithe fantasies about the future–on those discrepancies to our product owners every two weeks.  Agile teams are no more able to make long-range predictions than waterfall teams are: but at least they’re honest about it.

2. More time and commitment.

Testers, customers, and developers must constantly interact with each other. This involves numerous face-to-face conversations, as they are the best form of communication. All involved in the project must have close cooperation. Daily users need to be available for prompt testing and sign off on each phase so developers can mark it off as complete before moving on to the next feature. This might ensure the product meets user expectations, but is onerous and time-consuming. This demands more time and energy of everyone involved.

3. Greater demands on developers and clients.

These principles require close collaboration and extensive user involvement. Though it is an engaging and rewarding system, it demands a big commitment for the entirety of the project to ensure success. Clients must go through training to aid in product development. Any lack of client participation will impact software quality and success. It also reflects poorly on the development company.

I think these are both good points.  You should only expend real effort on software projects you want to succeed.  The ones you don’t care about, you shouldn’t waste the testers’ or customers’ time on.

Or the developers’, either.

4. Lack of necessary documentation.

Because requirements for software are clarified just in time for development, documentation is less detailed. This means that when new members join the team, they do not know the details about certain features or how they need to perform. This creates misunderstandings and difficulties.

Have you ever been a new member joining a development team?  Me too.  Have you been a new member joining a development team that has its codebase documented?  Me too.  Have you ever gotten any information out of that documentation that you were confident enough in to use without having to ask somebody else on the team whether it was obsolete or not?  Me either.

Comprehensively documenting an emerging system on paper is a losing proposition that turns into a money pit and a useless effort.  Comprehensively documenting a nonexistent system on paper is even worse.

You know what kind of documentation of an emerging system isn’t useless?  Properly written automated tests, that’s what kind.  First, they’re written not in prose that has to be translated in an error-prone operation to technical concepts in the reader’s head, but in the same code that’s used to represent those technical concepts in the codebase the reader will be dealing with.  Second, they’re always up to date, never obsolete: they have to be, or they’ll fail.
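As a sketch of what that kind of documentation looks like, here's a hypothetical example (the Cart class and its test are invented for illustration, not taken from any codebase discussed here):

```ruby
require "minitest/autorun"

# A hypothetical class under test.
class Cart
  def initialize
    @prices = []
  end

  def add(price)
    @prices << price
  end

  def total
    @prices.sum
  end
end

# The "documentation": the rule is stated in the same code the reader
# will be working in, and if Cart#total ever stops summing the prices,
# this test fails instead of silently going stale the way prose would.
class CartTest < Minitest::Test
  def test_total_is_the_sum_of_the_prices_added
    cart = Cart.new
    cart.add(300)
    cart.add(200)
    assert_equal 500, cart.total
  end
end
```

The test name reads as a sentence stating the requirement, which is the point: a new team member can trust it precisely because it cannot pass while being obsolete.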

And if you want new members to come up to speed quickly, don’t give them technical documentation–even clearly written, up-to-the-minute technical documentation.  Instead, pair them with experienced team members who will let them drive.  That’s the fastest way for them to learn what’s going on: much, much faster than reading technical documentation–or even automated tests, for that matter.

Can’t spare the time to pair?  Deadline too close?  Need everyone on his own computer to improve velocity?  Well, first, you don’t understand pairing; but that’s a separate issue.  Here’s the point: your new guy is going to be pestering your old guys one way or another, whether he’s trying to find out which parts of the technical docs are obsolete or whether he’s officially pairing with them.  Pairing is much faster.

5. Project easily falls off track.

This method requires very little planning to get started, and assumes the consumer’s needs are ever changing. With so little to go on, you can see how this could limit the agile model. Then, if a consumer’s feedback or communications are not clear, a developer might focus on the wrong areas of development. It also has the potential for scope creep, and an ever-changing product becomes an ever-lasting one.

Is the implication here that the waterfall model handles this situation better?


In a properly run Agile project, there is no predetermined track to fall off.  The project goes where the customers take it, and the most valuable work is always done first.  If the communications are not clear, the discrepancy shows up immediately and is instantly corrected.  There is no scope creep in an Agile project, by definition: we call it “customer satisfaction” instead.

Since the most valuable things are done first, the product is finished when either A) the money runs out, or B) the business value of the next most valuable feature is less than what the developers would cost to develop it.

On the other hand, if the customers continue to point out more business value that can be exploited by further development, that’s a good thing, not a bad thing.  The customers are happier, the company’s market share increases, and the developers continue to have interesting work to do.

Now, the fact that I disagree with much of what Mr. Fridman says in his article should not be taken to mean that I think Agile has no downside.  I think it has at least two major problems; but Mr. Fridman’s article didn’t mention either of them.


The Only Legitimate Measure of Agility

29 04 2016

“Oh no you di’unt.  I know you didn’t just say there’s only one legitimate measure of agility!”

Oh yes I did.  That’s exactly what I did.  Not only is there just one legitimate measure of agility, but it’s a very simple measure, requiring only one metric, and a completely objective one at that.

“Obviously, you don’t understand.  Agility is a tremendously complex proposition, requiring many moving pieces, and any attempt to measure it is going to have to take into account at least dozens if not hundreds of different metrics, which will be different from methodology to methodology, and most of those metrics will be untidily subjective!”


An agile culture is indeed tremendously complex, but I’m not going to measure the culture. I’m going to measure the success of the culture, which is a simpler task.


Agility is a measure of how frequently new functionality is released to customers in production.  Significantly, this is not how frequently it’s released to the business in the UAT region.

Why customers?  Why not the business?

Those of us who are developers are familiar with the phenomenon, discovered only after wading through thundering torrents of human misery, that it’s difficult-bordering-on-impossible for developers to understand what the business wants from them without an iterative process of stepwise refinement.  That’s why we have the business in the room with us.  That’s why we’re constantly pestering them with questions.  That’s why trying to cope without a product owner is such a disastrous idea.

But as hard as it is for us developers to understand what the business wants, it’s at least as hard for the business to understand what the market wants without an iterative process of stepwise refinement.  Harder, because they have not only to know what the market wants, they have to predict what it will want when the software is finished.  The business is undoubtedly better than we developers are at predicting the market, but that’s not saying enough.  Without iterative stepwise refinement of the project’s goals, it will end up being much less responsive to customers’ needs–and therefore much less valuable–when it is finally released. If it is finally released.

Of course, that iterative process of stepwise refinement for the business is frequent releases to production. This is so that the customers, who pay the bills, can see and react to the trajectory of the product, and the business can either A) adjust that trajectory for better effect, or B) abandon the project, if it turns out to have been a bad idea, early before millions of dollars are irretrievably gone.

That’s what the word “agility” means: the ability to respond quickly to unexpected stimuli and change direction rapidly.

The only legitimate measure of agility is this simple formula:

agility = 100% / w

where w is the number of weeks between releases of new functionality to production.

If you release every week, you’re 100% agile.

If you release every month, you’re 23% agile.

If you release every year, you’re 1.9% agile.

If you release twice a day, or ten times a week, you’re 1000% agile!
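Since the measure is just division, those percentages are easy to check.  Here's the arithmetic as a quick Ruby sketch (the method name is mine):

```ruby
# Agility as a percentage: 100 divided by the number of weeks
# between releases of new functionality to production.
def agility_percent(weeks_between_releases)
  100.0 / weeks_between_releases
end

agility_percent(1)          # weekly releases     -> 100.0
agility_percent(52.0 / 12)  # monthly releases    -> ~23.1
agility_percent(52)         # yearly releases     -> ~1.9
agility_percent(0.1)        # ten times a week    -> 1000.0
```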

I know what you’re thinking.  You’re thinking, “What about the single startup developer with execrable engineering practices whose production server is running on his development machine, and who ‘releases’ every time he commits a code change to source control?  Are you going to try to claim that he’s agile?”


Yes I am.

He may not be very good at agile, but he definitely is agile, responding constantly to customer needs.  He’s certainly more agile than a big company that releases only once a year but has dutifully imposed all the latest engineering best practices on its developers.

And if he doesn’t go out of business, it will be his very agility that will force him to develop and adopt better engineering practices, purely in self-defense.  Which is to say, enthusiastically and wholeheartedly, as opposed to the way he’d have adopted them if a manager forced them on him.

You see, that’s the way it works.  Demos aren’t agility. Retrospectives aren’t agility. Pair programming isn’t agility. Not even test-driven development is agility. Releasing to production is agility, and all those other things are supportive practices that have been empirically demonstrated to enable agility to be sustained and increased.

A client says, “If everything goes right, we release about once a year.  We’d like to be more agile, but we have a lot of inertia, and we’d like to take it slow and gradual.  So we put up this card wall, and now we’re considerably more agile.”


No they’re not.

They were 1.9% (max) agile before they put up the card wall, and they’re 1.9% (max) agile now. The card wall may or may not be a good idea, depending on how they use it, but it has zero effect on their agility.

I think agile transformations ought to start on the other end.

Do whatever it takes to develop the ability to release to production every week, and then start doing it.  Every week.

If you have new code since last week to release, then release it.  If you have no new code, then re-release exactly the same code you released last week, and complain bitterly to everyone within earshot.

Presently, perhaps with the help of a powerful director or vice president or executive, you’ll start getting some code.  Probably it won’t be very good code, and minutes after you release it you’ll have to roll it back and return it for repair; but you’ll be agile, and your agility will drive the development team to adopt the engineering practices necessary to support that agility.

That’s the way we do things in an agile culture, isn’t it? We try something for a short time to see if it works, and if it fails we figure out what to do to improve the process and try again.

Maybe the dev team will choose to adopt card walls and demos and retrospectives and TDD and continuous integration and all the rest of the standard “agile” practices, and one or more of us will get rich teaching them how.

Or…maybe…they’ll come up with something better.


Riding Herd on Developers

28 03 2012

When I was a kid, I learned a song in Sunday school about motivation.  It was called The Hornet Song*, and part of it went like this:

If a nest of live hornets were brought to this room
And the creatures allowed to go free,
You would not need urging to make yourself scarce:
You’d want to get out; don’t you see?

They would not take hold and by force of their strength
Throw you out of the window; oh, no!
They would not compel you to go ‘gainst your will;
They’d just make you willing to go.

A client architect once told me, “We’ve been trying really hard to get on the software-craftsmanship bandwagon.  We’ve been having lunch-and-learns, and offering training, and everything we can think of; but maybe two or three people show up.”

When you’re a developer, and there are three or four layers of inspection and signoff between you and production (as there were in this case), there’s no real reason to care about things like software craftsmanship, unless you just naturally have a fascination with such things, in which case you’re certainly not going to be working in a place where there are three or four layers of inspection and signoff between you and production.

What makes you care about software craftsmanship is being right up against production so that the first person who sees what leaves your fingers is a customer, and the first person who gets a call when something goes blooie is you.

My architect friend said this as part of a conversation about whether IT should control the dev teams (and through the dev teams the business interests they serve) or whether the business interests should control the dev teams (and through the dev teams IT).

He objected that unless IT rides very close herd on the dev teams, they all do different things, and build engineering and operations becomes an intractable nightmare.

This struck me as a perfect case of trying to hang onto the soap by squeezing it tighter.

He was describing a classic case of moral hazard.  In his world, it was the developers’ job to make a horrible mess, and the job of build engineering and operations to clean it up; and the only way that job would ever be small enough to be kept under control by build engineering and operations was to bear down ever harder on developers.

As always in a situation like this, the real solution will turn out not to be figuring out how to cram one party into an ever tinier, more inescapable prison, but how to eliminate the moral hazard that creates the problem in the first place.

First, let’s talk about build engineering.  What is build engineering?  In this case, it was one huge farm of CI servers that knew about every project in the company, and a predetermined set of builds and test environments that every checkin from everywhere automatically went through, and a group of people to oversee and maintain this system and explain rather impatiently to dev teams why (or at least that) no, they couldn’t do that either, because the build system wasn’t set up that way.

I watched the reaction in that company to the news that a dev team was planning to set up its own team-local CI server, with a bunch of VMs to simulate the application environment for testing, so that it could run certain tests after every checkin that the company-wide system could only run once a day.  There was a moment of slack-jawed horror, followed by absolute, nonnegotiable, and emphatic prohibition–even though a team-local CI server would not require the least change or modification to any code or configuration or practice having to do with the main system.

If the build engineering team reacted this way to a suggestion that doesn’t involve them doing anything different, imagine their reaction to a suggestion that they should change something!

In their defense, though, it’s not hard to understand.  As any developer knows, build is a terrifically delicate process, especially when it involves enough configuration to get a nontrivial application to run successfully in a variety of different environments (on the order of a dozen, as I remember) with different levels of mocking each time.  Getting that all working once is already something to write home about: imagine getting it working—and keeping it working—for 150 completely unrelated applications whose natures, characteristics, and requirements you know next to nothing about!

It strikes me as being about as difficult as playing all the positions on a baseball team simultaneously.

Which is why, in the real world, we have baseball teams.  The job of playing all the positions at once doesn’t exist, because there’s no need for it, once we’ve got the first baseman playing first base and the pitcher pitching and the catcher catching and the outfielders outfielding and so on.

In my opinion, that whole huge company-wide build system should have been thrown out—or at least broken up into tiny pieces—and the individual teams should have been responsible for their own CI and testing processes.  Those processes undoubtedly would have been completely different from team to team, but that would have been a good thing, not a bad thing, because it would mean each team was doing what was best for its project.

I suspect my architect friend would have protested that a huge build system like that was the only way they could be sure that every team put their code through all the proper testing environments before it went into production.

My response would have been that the most important thing was not that all code go through every environment envisaged by the build engineers, but that all code worked properly in production.  Put the dev teams right up against production, as suggested above, and they’ll find a way to make sure their code gets whatever testing it needs, or else they’ll be doing 24/7 production support instead of software development.  (That‘ll get ’em to your lunch-and-learns, for dang sure.)

But what about operations?  Do we solve that problem by putting the dev teams in charge of operations as well?

I don’t think so—not in most cases.  It’s an issue of boundaries.  Just as it’s a moral hazard for the build team to be compelling all sorts of behavior inside the dev teams, it’s a moral hazard for the dev teams to be specifying behavior inside the operations team.

The operations team should say, “Based on our skillset, expertise, and practical constraints, here’s a short list of services we can provide to you, along with the technologies we’re willing to use to provide them.  This list is subject to change, but at our behest, not yours.”  The dev teams should design their CI processes to spit out something compatible with the services provided by the operations team, if at all possible.

When a dev team can’t do without something operations can’t provide—or could theoretically provide but not fast enough—that’s when the dev team needs to think about doing its own operations temporarily; but that’s a sign of a sick organization, and one way or another, that situation probably won’t be allowed to persist long.

To sum this all up, morale is very important on a dev team.  You don’t want developers who lie in bed staring at the brightening ceiling thinking to themselves, “I wonder what insufferable crap they’re going to make me do today.”  You want developers who say, “If my pair and I get our story done early today, I’ll get to spend some time playing with that new framework the guy tweeted about last night.”

To be in a state of high morale, developers need to be constantly in an innovative state of mind, slapping down challenges right and left and reaching for ever more velocity multiplication in the form of technology, skill, and experience.  (Notice how I didn’t mention super-high salaries or super-low hours or team-building trips.)

You don’t make or attract developers like this by insulating them from the real world with signoffs and approvals and global build systems and stupid, counterproductive rules, and then imposing a bunch of stuff on them that—even if it’s really great stuff, which is rare—they’ll resent and regard very suspiciously and cynically.

You make them by tossing them into the deep end—preferably with several senior developers who already know how to navigate the deep end comfortably—and making sure that when they grab out for something to stay afloat, it’s there for them.  (If they do the grabbing for it, they’re not going to see it as an imposition.  See?)

The term “riding herd” comes from the great cattle drives when America was young, where several dozen cowboys would drive a herd of thousands of dumb cattle a distance of thousands of miles over several months to be slaughtered.

Do you want developers like those cattle?

Then ride herd on them, like the cowboys did.

Otherwise don’t.

Update 3/30/2012:

My architect friend saw this article and got in touch with me about it.  We discussed what he said was the major reason for all the heavyweight ceremony and process in his company: Sarbanes-Oxley.  SarbOx means that the company CFO is accountable for everything individual developers do, and if they do something the government doesn’t like, he goes to prison: hence, he is pretty much required to ride herd on them.

SarbOx is an area that I haven’t yet been seriously involved in.

I understand that it’s a government’s job to kill people, devastate lives, destroy liberties, and generally cock things up as far as it can without its politicians ending up swinging from lampposts in droves; but it’s also true that for decades now software developers have been wading into cocked-up situations with the help of domain experts and making them convenient, fast, and smooth.

Is SarbOx really such a competently-conceived atrocity that even Agile developers can find no way around, through, or over it?  Somehow it seems unlikely to me that politicians could be that smart.

*The complete Hornet Song used the quoted analogy to explain the behavior of the Hivites, Canaanites, and Hittites in Exodus 23:28, and the behavior of Jonah in Jonah 3:3.

A Conversation I’d Like to Have

3 03 2012

Do you ever talk to yourself while you’re driving alone, playacting how things might go if you were able to say what you wanted to the people to whom you wanted to say them?

I do.  It doesn’t usually do me any good, but I do it anyway.  Especially these days, since I have a much longer commute than I used to have.  For example:

“Would you stop screwing around with your Agile teams that way?  You can’t just keep breaking them up and stirring them around and re-forming them and expect any decent velocity from them!

“People are not commodities.  They’re not interchangeable.  They’re unique individuals, with unique aptitudes and unique skills and unique strengths and unique weaknesses. Put a small number of them on a team and leave them there, and they’ll learn all those things about each other and grow to be able to combine their strengths and cancel out their weaknesses.

“A brand-new, immature Agile team that has only one or two projects under its belt isn’t particularly valuable, true, except that it has the potential to grow into an experienced, mature Agile team, which is particularly valuable.

“For example, here’s a challenge. I am under no illusion that you will take me up on this challenge, but I’m not bluffing. Take Kevin and DJ and Gabbie and me, and put us on a team.  I don’t mean add us to an existing team: I mean make a new team out of us.  Give us two of your better QA folks—say Matt and Kay—and a product owner who’s in a bind.  We will blow the frickin’ doors off anything you folks have ever seen in this company before.

“Why—because we’re superintelligent?  Hey: you know me well enough by now to say definitively and with confidence that I, at least, am certainly not superintelligent.  Because we’re expert programmers?  No.  Because we’re an experienced, mature Agile team.

“We know each other; we’ve worked with each other long enough that each of us knows how the others work.  And we won’t stand for any frickin’ bullcrap.  Our overriding concern is not following IT’s established rules and processes: our overriding concern is creating business value for our product owner as quickly and efficiently as humanly possible.  If your IT people support us, we’ll work with them.  If they get in our way, we’ll bypass them.  We will have something in production by the end of the second week, come hell or high water, and at the end of every week thereafter.

“But I warn you: after experiencing a team like ours, that product owner will be forever corrupted.  He’s no longer going to be interested in waiting for approvals from committees of architects or waiting for monthly releases or waiting for weeks of big design up front to be complete…or really in waiting for much of anything, for that matter.  He’s going to want what he wants, and he’s going to want it right now, and if you tell him you can’t give it to him on his time schedule he’s going to point to us and say, ‘They can.'”


“Yeah…listen, I heard something a little disturbing yesterday.  Do I understand correctly that you have an application you want us to put into production?  Because we haven’t heard anything about this application: it’s not in our process or on our schedule.”

“No, you don’t understand correctly.  The application is already in production—has been for almost two months now.  We hit 200 simultaneous users last week.”

“[indulgent chuckle] I’m afraid your information is mistaken.  I would certainly know if your application was in production, because—as you know—I oversee the deployment process; and I haven’t put it in production.”

“It’s your information that’s mistaken.  We talked to you folks—what, ten weeks ago or so now?—about our application, and you told us we’d have to write and submit a proposal to the proper committee and have it discussed and approved at the next quarterly meeting, and we simply didn’t have time for that; so we hired a hosting service in the Cloud and deployed it there.”

“No—you see, in order to do that you’d have to…wait.  Wait.  What?!”

“I told my team that I needed the application in production as soon as possible.  They talked to you guys, then decided it was possible to put it into production sooner than you could, so that’s what they did.  They’re Agile, you know.”

“But you’re not allowed to do that!  We don’t host our code on third-party servers, we host it on our own servers, under our control.  You’re going to have to take it down and have us put it up on our farm.”

“What did you say to me?  I’m not allowed?  Listen, let’s talk for a minute about what I am, rather than what I’m not.  I am the business.  You’re IT.  I’m your reason for being.  You exist to serve me, not to tell me what I’m allowed to do or give me orders.  My team is leading me to really understand that for the first time.

“But look, I’ll tell you what.  You’ve got a proposed requirement for our application.  Fine.  Write me up a story card, and I’ll put it in the backlog. I’ll prioritize it among all the others written by my team and me.  If you get fast enough to give me the same level of service my team gives me with our hosting service, or if the application gets mature enough that we change it so seldom that your latency doesn’t really matter, your card might get somewhere near the top of the backlog.”

“It’s not a good idea to talk to me that way.  Since the application didn’t go through our established processes, we can simply choose not to take the responsibility for supporting it.”

“We’re not interested in having you support it.  My team is doing a fine job supporting it themselves.  As a matter of fact, the defect count is so low there hasn’t been much in the way of support to do, anyhow.”

“They didn’t consult with any of our senior architects.”

“Or the junior ones either—yes, I know.  That’s one of the reasons they were able to move so quickly.  Another is that they wrote it in Scala, which enabled them to use a couple of new frameworks they say really gave them a boost.”

“Scala?  Scala?!  We don’t support Scala!”

“Yeah, we noticed.  You don’t support git or IntelliJ or Jenkins either.”

“We can’t maintain that—none of our maintenance people know any of that stuff.”

“We’re not interested in having you maintain it.  My team is going to maintain it.  They’re the ones who wrote it, after all—it makes sense for them to maintain it.”

“Well, okay, what are you interested in having us do?”

“Hey, I didn’t call you; you called me.  You just go on doing whatever it is you do—make a rule, require an approval, establish a process; I’m just guessing here.  I probably won’t have a lot of time to talk to you for awhile: I’ve got about two more iterations of stories on this project, and then I’m going to have my team start another one.  I’ll tell you about it if you like, but I’m not interested in having you support it or anything.  My guess is that we’ll have it serving users before you could have gotten it through your process.

“This Agile stuff is really cool.  You should try it sometime.”

Should Agile Testers Code?

13 01 2012

This morning at CodeMash, I had an argument over breakfast with Jeff “@chzy” Morgan (more commonly known as Cheezy) of LeanDog about Agile testers.  After some advancing and retreating around the battlefield, it became clear that we seemed to generally agree about almost everything regarding how Agile teams should be run, except for this one issue.

The story test that every story card should produce when it first arrives in the Dev In Progress column: should that story test be written by a developer or by a tester?

Cheezy says a tester should write it; I say a developer should write it (after a detailed conversation with a tester or product owner, of course).

I’m going to reproduce here, as well as I can remember them, the salient points of the argument, and I’d love to have people having more experience with testing than I have address them in the comments.

Before I say what I’m going to say, though, I’d like to say what I’m not going to say.  As a developer, I’m not going to disrespect testers.  I’m not going to imply that coding is an exalted, morally superior activity that should be undertaken only by developers, since developers are the moral superiors of testers.  I live in awe of good Agile testers, and–after having tried once or twice–I have no lingering illusions that I could do what they do.  Agile testers have brought my cheese in out of the wind so many times that I’ll be indebted to them for a long time.

Essentially, my arguments will all boil down to this: people should do what they’re good at.

Finally: I’m going to do my best to be fair to Cheezy’s arguments as he presented them.  Cheezy, if you read this, feel free to comment publicly or privately about anything I get wrong, and I’ll post an update.


Cheezy says that since testers will almost always know more about the problem domain than developers will, it is the testers who should have the responsibility of translating the requirements from story-card language into executable language, like Cucumber over Ruby.
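To make that translation concrete, here’s a toy sketch in plain Ruby—not real Cucumber, just an imitation of how Gherkin-style English binds to step definitions.  Every name in it is hypothetical; the point is only to show the two layers that someone (tester or developer) has to connect.

```ruby
# A toy illustration (plain Ruby, not real Cucumber) of the translation
# being argued about: story-card English on one side, executable step
# definitions on the other. All names here are hypothetical.

STEPS = []

def step(pattern, &block)
  STEPS << [pattern, block]
end

def run(line)
  pattern, block = STEPS.find { |p, _| p.match?(line) }
  raise "Undefined step: #{line}" unless pattern
  block.call(*pattern.match(line).captures)
end

# The "executable language" side -- written by a tester in Cheezy's
# world, by a developer (after a detailed conversation) in mine.
step(/^a cart containing (\d+) widgets$/) { |n| @cart = { widgets: n.to_i } }
step(/^the customer checks out$/)         { @total = @cart[:widgets] * 3 }
step(/^the total is (\d+)$/)              { |t| raise "expected #{t}, got #{@total}" unless @total == t.to_i }

# The story-card side, in something close to plain English.
feature = [
  "a cart containing 4 widgets",
  "the customer checks out",
  "the total is 12",
]

feature.each { |line| run(line) }
puts "story test passed"
```

The regex-matching and block-dispatching layer is the part that’s genuinely developer work; the English lines on the bottom are the part a tester owns either way.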

I’ve been on teams before where this was advocated.  It sounds good, both to Cheezy and to me.  What I find in the real world, though, is that writing good tests is absolutely the hardest technical part of a developer’s job.  It can sometimes take me a couple of days to get a story test written and failing for the right reasons, given all the investigation and forgotten last-minute spiking that has to go into it; once I do that, the code to make it pass is generally much easier.

So in this case you’re not just saddling testers with development work: you’re saddling them with the very hardest kind of development work.  For this, it’s not good enough to take an excellent tester and teach him a little coding.  You have to actually make him into a real honest-to-goodness developer as well as a tester, if he’s going to be successful.

I was on a team once with a tester who was really good at writing story tests.  In his case, if I remember correctly, he was writing easyb tests and backing them up with Java.  As a matter of fact, he liked it so much that he ended up crossing over and becoming a developer instead of a tester.  I was rotated off the project shortly after that, but I imagine that with both testing and development skills, he went far and fast.  If he did so, however, it was as a developer, not as a tester.

The rest of the testers on that team didn’t really cotton to easyb or Java.  They struggled at it for a while, frequently turning to the developers for help, and eventually stopped writing automated tests altogether, whereupon the task fell to…that’s right, the developers.

To support Cheezy’s side of the argument, I should say that I was also on a team that used Fitnesse over Java for ATDD, and on this team the tester really did write the acceptance tests in Fitnesse, and it worked out great.  On my side of the argument, though, the Fitnesse tests had to be supported by fixtures written by the developers, and it’s somewhat of a stretch to call filling out a Fitnesse wiki table “coding.”

Cheezy says that the most dangerous part of the software development process is the place where requirements get translated from English to code; therefore it’s always safest for that to happen all inside one brain, rather than having it depend on one person understanding another person’s explanation.

I agree that that would be the best place for it if that kind of thing worked; but in my experience, in a large percentage of cases, the tester gets stuck while writing the code–remember, this is hard work, not easy work–and enlists a developer to help him.  When that happens, the developer has to look at the tester’s existing code and figure out what he’s trying to do–error-prone–talk to the tester and figure out what he wants to do–error-prone–and guide him through the rest of the process of getting his thoughts down in code–again, error-prone.

I don’t know of a way that we can eliminate errors in translation.  We can reduce them by listening really, really carefully and asking lots of questions; that’s the way Agile developers are trained to operate.  But eliminate?  You tell me.

There’s another issue here too that didn’t come up in our discussion, but which I’d like to raise here.

Many times, especially when a card represents a foray into an as-yet-unexplored wing of the project, there will be spikes preceding it to discover things like, “Once I’ve sent a set of instructions over the network to that remote node, how do I get it to tell me what it’s done in response to them?” or “How do I take control of this legacy subsystem so that I can make it behave as my tests need it to?”

Those spikes won’t be completed by testers: they’ll be completed by developers.  Hopefully they’ll be completed by the same developers who will eventually play the cards they precede.  Having those developers employ the technology they’ve just spiked out is certainly less error-prone than having them explain it to testers so that the testers can write the story tests (quite possibly in a completely different language from that of the spike) that use it.

Cheezy says that having testers automate their testing frees them from regression testing so that they can do the most valuable thing testers do: manual exploratory testing.  I agree that exploratory testing is the crowning glory of an Agile tester; I agree that manual regression testing is worse than scrubbing the gymnasium floor with a toothbrush.  But my experience with testers writing automated tests is that they spend so much time struggling with the code that it cuts into their exploratory time just as much if not more.

And as for manual regression testing, nobody should have to do that–except perhaps after a system-wide refactoring, like moving from a SQL database to a NoSQL database or switching Web frameworks, where the vulnerabilities move around.  When a tester discovers a defect during his exploratory testing, a developer needs to write a test that fails because of that defect, then make it pass, leaving it as a permanent part of the test suite, so that the tester never has to regress that problem again.
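Here’s a sketch of what pinning a defect like that looks like; the bug and all the names are hypothetical—say a tester found that discounts pushed an empty cart’s total negative.

```ruby
# Sketch of pinning a defect found in exploratory testing so it never
# needs manual regression again. The defect and names are hypothetical:
# a tester found that discounts made an empty cart's total go negative.

def discounted_total(prices, discount)
  subtotal = prices.sum
  # The fix: never let a discount push the total below zero.
  [subtotal - discount, 0].max
end

# The pinning test, written by a developer the day the tester found the
# bug. It failed before the fix above; now it passes as a permanent part
# of the suite, so no human ever has to regress this by hand.
raise "empty cart went negative" unless discounted_total([], 5) == 0
raise "normal discount broken"   unless discounted_total([10, 20], 5) == 25
puts "regression pinned"
```

Once that assertion lives in the suite, the tester’s discovery is paid for exactly once, and his exploratory time stays exploratory.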

Cheezy pointed out that it’s very expensive when cards move backwards on the wall, and opined that having testers write story tests could eliminate that.  I’m skeptical, but if Cheezy–if anybody–can come up with a way to eliminate having cards move backwards on the wall, I suggest that the very next step should be getting rid of testing altogether: why would it be needed if a card never moves backward anyway?

A final point I’d like to make is that I’ve found that one of the reasons testers are so valuable is that they don’t instinctively think like developers; they think like users.  Making them code would force them to think like developers at least a little–that is, enough to be able to do the hardest thing developers do.  Hopefully it wouldn’t pull them completely over to the Dark Side, but in the final analysis I think I really would rather have them stare blankly at me as I yammer on about some developer excuse and then say, “But it doesn’t work.”  If they start accepting my excuses because they understand my viewpoint, that’s no good for anybody.

Thanks again, Cheezy, for the discussion this morning; it has helped me think about Agile testing in new ways.  Perhaps your comments and others will help me further.