Travails of a Newly-Embedded TDDer

17 06 2012

So, as someone who has done a lot of building and repair of things strictly electrical, but who has done practically nothing having to do with analog or digital electronic hardware, I’ve taken delivery of an Arduino microcontroller board and have been playing around with it.  I’ve got a fairly decent breadboard, a set of wire jumpers, and a mess of resistors, capacitors, and LEDs.   I’ve put together several meaningless toy “devices,” ranging from a program that just repeatedly blinked “Shave and a Haircut, Two Bits” on an on-board LED to one that sequenced green outboard LEDs until you pulled a particular pin low, at which point it would sequence red outboard LEDs until you let the pin go, at which point it would go back to sequencing the green LEDs.

But as a TDDer, I’m beginning to feel really, really crowded by the difficulty of test-driving embedded microcontroller code.  I just found a bug in one of my programs that crept in because I was refactoring without tests; if I hadn’t happened across it, there’s no reliable way I could have found it.

So I’m thinking: If I could wish into existence whatever I wanted to support my Arduino development, what would I wish?

At first I thought, hey, this isn’t so hard.  Arduino provides me with a C library full of API calls like pinMode(), which sets a pin to input or output mode, digitalWrite(), which sets an output pin high or low, analogRead(), which reads a value from one of the ADC pins, and so on.  All I have to do is write an instrumented static-mock version of that library.  When I want to run tests, I can compile the microcontroller code for my development machine instead of for the microcontroller, link in my static mock instead of the Arduino library, and run tests with Google Test.  I can set up the static mock so that I can attach prerecorded “analog” signals (essentially sequences of floating-point numbers) to particular input pins, and attach “recorder” objects to particular output pins.  In the Assert portion of the tests, I can examine the recordings and make sure they recorded what should have happened.

But then I thought about timing.  Timing’s really important to microcontrollers.  For example, suppose I wanted to test-drive handling of the situation where an interrupt is received while a value is being written to EEPROM.  How am I going to arrange the input signal on the interrupt pin so that it changes state at exactly the right time during the run of the code under test?  Without access to the hardware on which that code is running, how do I even simulate an interrupt at all?

Now I’m thinking I need to write a complete software emulator of the microcontroller–keeping the idea of prerecording input signals and recording output signals during the test, and having asserts to compare them, but with debugger-like awareness of just what’s executing in the test: for example, you could have prerecorded signals where the numbers weren’t just numbers, but could be tagged with clock cycle counts or time delays, or even with something like “Fire this sample the third time execution passes address XXXX with memory location XXXX set to XXXX.”

There is at least one software emulator in existence for my microcontroller–the AVR ATmega2560–but it doesn’t offer the kind of control that a TDDer (this one, anyway) would want.

But I’m thinking there’s got to be an easier way.





How Do You Measure Position on a Fingerboard?

17 06 2012

My first idea was just to put long ribbon controllers (like laptop touchpads, but long and skinny and sensitive only to one dimension) under the strings.  But there are two problems with that.  First, nobody seems to make ribbon controllers any longer than about 100mm.  Second, my bass strings can, over time, wear grooves in even ultra-hard ebony fingerboards.  I’d expect plastic ribbon controllers to last maybe ten minutes.

My second idea was to put a little piezoelectric transducer on the bridge end of the string and have it fire periodic compression pulses down the string, listen for the reflection from the point where the string is held down, and compute position from the delay.  But there are three problems with that.  First, it’s going to be difficult to damp those pulses out at the bridge end; most likely, they’ll reflect off each end a couple of times before they disappear into the noise floor.  How am I going to tell the reflection of the pulse I just sent from the continuing echoes of its predecessors?  Second, holding a string against a fingerboard definitely stops transverse waves; but I have no reason for believing that it would cause a clean echo of a compression wave.  My instinct is that there’d be a very weak reflection from the fingered position followed by a nice sharp one from the other end of the string, so I’d have to find the weak, fuzzy one among all the sharp, clear ones.  Third, a compression pulse travels at about 20,000 feet per second down a steel string.  If I want, say, 1mm resolution in positioning, then even if the other two problems evaporate I’m going to have to be able to measure the delay with a granularity of 164 nanoseconds, which isn’t particularly realistic for the sort of microcontroller hardware one expects to find in a MIDI controller.

My third idea was to make the fingerboard of some hard, conductive metal and the strings of resistance wire–that is, wire that has a significant resistance per unit length, unlike copper or aluminum.  Then you could use an analog-to-digital converter to measure the resistance between the fingerboard and the bridge end of each string, and you’d get either infinity–if the string wasn’t being fingered at all–or a value roughly proportional to the distance of the fingered point from the bridge.

There are a whole host of problems with this third idea.

Resistance is affected by more than where the string is fingered.  For example, a small contact patch (light pressure) will have more resistance than a large contact patch (heavy pressure).  A place on the string or fingerboard with more oxidation will have more resistance than a place with less oxidation.  But…stainless steel doesn’t oxidize (much), and neither does nichrome resistance wire, if you’re not using it for a heating element.

Nichrome resistance wire doesn’t have as much resistance as one might imagine.  In one diameter that’s roughly the size of a middling guitar string, its resistance is about one third of an ohm per foot.  I can crank my ADC’s reference voltage down as far as 1.0V, but if I run 1.0V through about eight inches of wire (what I expect to be left over at the end of the fingerboard once a string is fingered in the highest position), Ohm’s Law says I’ll get 4.5 amps of current, which will burn my poor little ADC to a smoking crisp, as well as frying the battery and even heating up the strings significantly.  But…I could limit the current to one milliamp and use an operational amplifier with a gain of 1000 or so to get a low-current variable voltage between 0 and 1V for the ADC.

A/D converters don’t measure resistance; they measure voltage.  Sure, it’s easy to convert resistance into voltage; but it’s easy to convert all sorts of things into voltage, and much harder than you would think to avoid converting them–things like electromagnetic interference from elevators, power lines, air conditioners, and so on.  The signal arriving at the ADC is liable to be pretty noisy, especially given the required amplification.  But…I could add another string on the neck and put it somewhere where it couldn’t be played.  That string would be subject to all the same noise and interference as the one being played.  The only difference would be that it wouldn’t be being played; so you could use the differential mode of the op amp to subtract one from the other and cancel out the common stuff.

One idea I had to get rid of the amplifier, and all the noise it would amplify, would be to ditch the 16ga nichrome wire (about the size of a guitar string) and use 40ga instead: very skinny, with a significantly higher resistance per foot.  Then coat it with insulating enamel, like magnet wire, and wrap it around a central support wire made of steel or bronze, so that it looked very much like a real guitar string.  Then sand off the enamel from the outside of the string so that the nichrome can contact the steel fingerboard.  In this case, our nichrome wire is much thinner, with more resistance per foot, and much longer, because it’s wound in a helix instead of being straight.  I ran the numbers and discovered that I could expect about 10,000 ohms per foot for a string like that: almost perfect for the ADC.

Then I remembered that the resistance of the human body is about 10,000 ohms: the resistance of the system would be significantly modified if anyone touched it without insulating gloves.  So…ditch that idea.  (A 10,000-ohm parallel resistance will have no perceptible influence on the 1-ohm resistance you get from a three-foot 16ga nichrome wire.)

So there are problems with this third approach too, but all of them at least appear to be surmountable.  It might be worth a try.





Looks Like a New Project Coming Up

17 06 2012

So this is a little different subject.

In my life away from Agile software development, one of the things I do is play the electric bass.  Not particularly well, but it’s something I enjoy.  My favorite bass has six strings and no frets.

Since I’m a computer geek, though, I’ve been playing around with MIDI for years.

So for quite some time, I’ve been developing a blue-sky fantasy about a fretless stringed MIDI controller–that is, something that would have a user interface similar to that of a fretless bass (but better, because it wouldn’t have to tolerate certain physical realities that actual fretless basses do) and would allow the player to control a synthesizer.

One thing that any MIDI controller must be able to do is determine what pitch the user wishes it to direct the synth to play.  I’ve been able to find two different methods by which this is done by existing guitar-shaped MIDI controllers (EGSMCs).

The most common way is to use a real electric guitar with real guitar strings and a real pickup feeding a real note into an analog-to-digital converter and running a pitch-extraction algorithm on the resulting stream of samples. I don’t like this way because of the unavoidable inherent latency.  You have to identify the attack transient and wait for it to pass; then you have to identify at least two clean cycles of the actual note after it has begun to ring; then you have to do a Fourier transform to find the fundamental frequency; then you have to convert it into (at least) a three-byte MIDI message; then you have to transmit it over a serial link at 31,250 bits per second (just over 3,000 bytes per second); then the synthesizer has to initiate the note.  Even with a lot of processing power, that can be 50-60ms of latency or even more, especially with low-frequency notes.  It’s hard to play decent music when you have to wait that long after playing a note before hearing it.

The other way I’ve seen is to use a series of switches, one for each fret/string combination, that the player depresses in much the same way he’d press a string against a fret on a real instrument.  This provides considerably less latency, because the entire pitch-extraction computation is eliminated; but it doesn’t lend itself to fretlessness.

I’d like to see a MIDI controller that determined which MIDI note number it would send exclusively by position rather than pitch, like the discrete-switch solution, but in which the position measurement would be continuous rather than discrete, so that if you wanted to you could send MIDI pitch-wheel messages as well as note-on messages.

If you could do this, one of the coolnesses that would result would be that note spacing would be entirely soft.

On a real stringed instrument, the note positions, whether fretted or not, are significantly further apart near the head (the small end of the guitar that usually has the tuning keys on it) than they are near the bridge.  This has two tradeoff effects, at least for a fretless instrument.  First, playing quickly is more difficult near the head, because one’s fingers have to span a much larger physical distance, meaning more hand movement.  Second, playing with good intonation (that is, in tune) is more difficult near the bridge, where a half-step interval might be less than half an inch wide, making the positional tolerances very small.

Also on a real stringed instrument, the available pitch range is determined in a minor way by the number of strings, but in a major way by the length of the fingerboard in relation to the strings.  Is the fingerboard half the length of the strings?  Then you can play an octave on each string.  Three-quarters the length of the strings?  Then you can play two octaves.  Want three octaves?  Then you need seven-eighths of the string length underlaid by fingerboard, meaning that the notes are extremely close together for that last octave, and there is very little room between the end of the fingerboard and the bridge for pickup electronics, and usually the body of the instrument has to be mutilated to allow the player’s hand to get to that last octave.  Four octaves?  Forget it.

On a MIDI controller that depended on position rather than pitch to determine note number, you could:

  • define whatever length of fingerboard you had to hold whatever number of notes you wanted it to;
  • space those notes however you wanted, with no regard for the physics of a vibrating string; and
  • map position to note number in whatever way you liked–specifically, you could round a fingered position up to the next note, the way a fret does, if you wanted to simulate frets.

You could make different definitions for each string, if you wanted, enabling simulation of a five-string banjo with its fifth short string.  You could do diatonic quantization, like a mountain dulcimer, where the frets are irregularly spaced to prevent playing non-scale notes.  You could space the notes uniformly from one end to the other, so finger spacing was always the same.  You could make some of the strings “fretted” and not the others, or you could make some or all of the strings “fretted” in some places and not in others.

You could set up several of these different arrangements and impose them successively in the course of a single song, if you wanted to, with simple MIDI program-change messages.

So being able to sense continuous position admits a host of wonderful possibilities; but how do we sense it?

Well, I’ve been thinking about that.





Riding Herd on Developers

28 03 2012

When I was a kid, I learned a song in Sunday school about motivation.  It was called The Hornet Song*, and part of it went like this:

If a nest of live hornets were brought to this room
And the creatures allowed to go free,
You would not need urging to make yourself scarce:
You’d want to get out; don’t you see?

They would not take hold and by force of their strength
Throw you out of the window; oh, no!
They would not compel you to go ‘gainst your will;
They’d just make you willing to go.

A client architect once told me, “We’ve been trying really hard to get on the software-craftsmanship bandwagon.  We’ve been having lunch-and-learns, and offering training, and everything we can think of; but maybe two or three people show up.”

When you’re a developer, and there are three or four layers of inspection and signoff between you and production (as there were in this case), there’s no real reason to care about things like software craftsmanship, unless you just naturally have a fascination with such things, in which case you’re certainly not going to be working in a place where there are three or four layers of inspection and signoff between you and production.

What makes you care about software craftsmanship is being right up against production so that the first person who sees what leaves your fingers is a customer, and the first person who gets a call when something goes blooie is you.

My architect friend said this as part of a conversation about whether IT should control the dev teams (and through the dev teams the business interests they serve) or whether the business interests should control the dev teams (and through the dev teams IT).

He objected that unless IT rides very close herd on the dev teams, they all do different things, and build engineering and operations becomes an intractable nightmare.

This struck me as a perfect case of trying to hang onto the soap by squeezing it tighter.

He was describing a classic case of moral hazard.  In his world, it was the developers’ job to make a horrible mess, and the job of build engineering and operations to clean it up; and the only way that job would ever be small enough to be kept under control by build engineering and operations was to bear down ever harder on developers.

As always in a situation like this, the real solution will turn out not to be figuring out how to cram one party into an ever tinier, more inescapable prison, but how to eliminate the moral hazard that creates the problem in the first place.

First, let’s talk about build engineering.  What is build engineering?  In this case, it was one huge farm of CI servers that knew about every project in the company, and a predetermined set of builds and test environments that every checkin from everywhere automatically went through, and a group of people to oversee and maintain this system and explain rather impatiently to dev teams why (or at least that) no, they couldn’t do that either, because the build system wasn’t set up that way.

I watched the reaction in that company to the news that a dev team was planning to set up its own team-local CI server, with a bunch of VMs to simulate the application environment for testing, so that it could run certain tests after every checkin that the company-wide system could only run once a day.  There was a moment of slack-jawed horror, followed by absolute, nonnegotiable, and emphatic prohibition–even though a team-local CI server would not require the least change or modification to any code or configuration or practice having to do with the main system.

If the build engineering team reacted this way to a suggestion that doesn’t involve them doing anything different, imagine their reaction to a suggestion that they should change something!

In their defense, though, it’s not hard to understand.  As any developer knows, build is a terrifically delicate process, especially when it involves enough configuration to get a nontrivial application to run successfully in a variety of different environments (on the order of a dozen, as I remember) with different levels of mocking each time.  Getting that all working once is already something to write home about: imagine getting it working—and keeping it working—for 150 completely unrelated applications whose natures, characteristics, and requirements you know next to nothing about!

It strikes me as being about as difficult as playing all the positions on a baseball team simultaneously.

Which is why, in the real world, we have baseball teams.  The job of playing all the positions at once doesn’t exist, because there’s no need for it, once we’ve got the first baseman playing first base and the pitcher pitching and the catcher catching and the outfielders outfielding and so on.

In my opinion, that whole huge company-wide build system should have been thrown out—or at least broken up into tiny pieces—and the individual teams should have been responsible for their own CI and testing processes.  Those processes undoubtedly would have been completely different from team to team, but that would have been a good thing, not a bad thing, because it would mean each team was doing what was best for its project.

I suspect my architect friend would have protested that a huge build system like that was the only way they could be sure that every team put their code through all the proper testing environments before it went into production.

My response would have been that the most important thing was not that all code go through every environment envisaged by the build engineers, but that all code worked properly in production.  Put the dev teams right up against production, as suggested above, and they’ll find a way to make sure their code gets whatever testing it needs, or else they’ll be doing 24/7 production support instead of software development.  (That’ll get ’em to your lunch-and-learns, for dang sure.)

But what about operations?  Do we solve that problem by putting the dev teams in charge of operations as well?

I don’t think so—not in most cases.  It’s an issue of boundaries.  Just as it’s a moral hazard for the build team to be compelling all sorts of behavior inside the dev teams, it’s a moral hazard for the dev teams to be specifying behavior inside the operations team.

The operations team should say, “Based on our skillset, expertise, and practical constraints, here’s a short list of services we can provide to you, along with the technologies we’re willing to use to provide them.  This list is subject to change, but at our behest, not yours.”  The dev teams should design their CI processes to spit out something compatible with the services provided by the operations team, if at all possible.

When a dev team can’t do without something operations can’t provide—or could theoretically provide but not fast enough—that’s when the dev team needs to think about doing its own operations temporarily; but that’s a sign of a sick organization, and one way or another, that situation probably won’t be allowed to persist long.

To sum this all up, morale is very important on a dev team.  You don’t want developers who lie in bed staring at the brightening ceiling thinking to themselves, “I wonder what insufferable crap they’re going to make me do today.”  You want developers who say, “If my pair and I get our story done early today, I’ll get to spend some time playing with that new framework the guy tweeted about last night.”

To be in a state of high morale, developers need to be constantly in an innovative state of mind, slapping down challenges right and left and reaching for ever more velocity multiplication in the form of technology, skill, and experience.  (Notice how I didn’t mention super-high salaries or super-low hours or team-building trips.)

You don’t make or attract developers like this by insulating them from the real world with signoffs and approvals and global build systems and stupid, counterproductive rules, and then imposing a bunch of stuff on them that—even if it’s really great stuff, which is rare—they’ll resent and regard very suspiciously and cynically.

You make them by tossing them into the deep end—preferably with several senior developers who already know how to navigate the deep end comfortably—and making sure that when they grab out for something to stay afloat, it’s there for them.  (If they do the grabbing for it, they’re not going to see it as an imposition.  See?)

The term “riding herd” comes from the great cattle drives when America was young, where several dozen cowboys would drive a herd of thousands of dumb cattle a distance of thousands of miles over several months to be slaughtered.

Do you want developers like those cattle?

Then ride herd on them, like the cowboys did.

Otherwise don’t.


Update 3/30/2012:

My architect friend saw this article and got in touch with me about it.  We discussed what he said was the major reason for all the heavyweight ceremony and process in his company: Sarbanes-Oxley.  SarbOx means that the company CFO is accountable for everything individual developers do, and if they do something the government doesn’t like, he goes to prison: hence, he is pretty much required to ride herd on them.

SarbOx is an area that I haven’t yet been seriously involved in.

I understand that it’s a government’s job to kill people, devastate lives, destroy liberties, and generally cock things up as far as it can without its politicians ending up swinging from lampposts in droves; but it’s also true that for decades now software developers have been wading into cocked-up situations with the help of domain experts and making them convenient, fast, and smooth.

Is SarbOx really such a competently-conceived atrocity that even Agile developers can find no way around, through, or over it?  Somehow it seems unlikely to me that politicians could be that smart.


*The complete Hornet Song used the quoted analogy to explain the behavior of the Hivites, Canaanites, and Hittites in Exodus 23:28, and the behavior of Jonah in Jonah 3:3.





Moral Hazard: The Implacable Enemy of Agile

25 03 2012

This is from Wikipedia:

In economic theory, moral hazard is a tendency to take undue risks because the costs are not borne by the party taking the risk. The term defines a situation where the behavior of one party may change to the detriment of another after a transaction has taken place. For example, a person with insurance against automobile theft may be less cautious about locking their car, because the negative consequences of vehicle theft are now (partially) the responsibility of the insurance company. A party makes a decision about how much risk to take, while another party bears the costs if things go badly, and the party insulated from risk behaves differently from how it would if it were fully exposed to the risk.

In more general terms, moral hazard is a situation where it is the responsibility of one person or group to make a mess, and the responsibility of another person or group to clean it up.

For example, for a limited time while I was a kid, it was my job to play with the toys in my room, and my mother’s job to keep the floor uncluttered enough that she wouldn’t fall and break her neck when she had to walk across it in the dark.

So…who won, do you think?  Was the floor always clean, or was the floor always messy?

Of course the floor was always messy.  It was always messy until a rule was made–and enforced–that if my room wasn’t cleaned by a specified time, then I either wouldn’t get something I wanted or would get something I didn’t want.  (Decades later, I honestly don’t remember which.)  The trick was to take the incentive out of the wrong place and put it into the right place.

But this post isn’t about parenting, it’s about company culture.  Here are some more relevant examples of moral hazard:

  • A company has a maintenance team–or a group of them–whose job it is to jump on production problems and fix them when they crop up.  That is, it’s the development team’s job to create production problems and the maintenance team’s job to fix them.
  • Architects come up with designs, which developers then implement.  That is, it’s the architects’ job to create dozens of small but annoying and confusing incompatibilities and discrepancies between their overarching concept of how the system should work and the reality of how the system actually does work, and the development team’s job to make that design work anyway (in the process creating even more discrepancies between the architects’ picture of the system and reality).
  • Some uber-architect in the IT organization writes a Checkstyle (or other) coding standard and mandates that all code must pass the standard before it’s allowed into production.  That is, it’s the uber-architect’s job to create thousands and thousands of build errors, and the development teams’ job to clean them up.
  • Technical people outside the development team must sign off on all code before it goes into production.  That is, it’s IT’s job to create deployment delays at the last moment based on various criteria that have little or nothing to do with actual business functionality, and the product owner’s job to keep customers from leaving because of those delays.
  • Agile coaches explain all the things that each part of the Agile process is supposed to accomplish, and their clients take them seriously.  That is, it’s management’s job to spend a large portion of the development team’s time on meaningless meetings, and development’s job to deliver anyway.
  • Ideas from the dev team for making things better or easier or faster or more efficient are submitted in writing to a special committee, which at its next meeting–or the one after that, or at the very latest the one after that–will gravely consider them and all their associated consequences.  If an idea is judged to be worthy, the machinery for imposing it on the entire organization will be set in motion as soon as the proper approvals can be applied for and awarded.  That is, it’s the committee’s job to impose incomprehensible, irrelevant, and counterproductive regulations on distant teams that already had an environment all set up and working for them, and the job of those teams to deal with the unintended consequences.

Moral hazard puts incentives in the wrong places for getting anything done.  It destroys morale, it dulls the wits, and it creates endless loops of excuses.  For example:

Product owner: “We didn’t go into production over the weekend?  Why not?”

Development team: “It was the operations team’s fault: they rejected our package.”

Product owner: “Why did you reject their package?”

Operations team: “It wasn’t our fault: the Checkstyle analysis blew up with over 700 coding-standard violations.”

Product owner: “Why did you submit a package with over 700 coding-standard violations?”

Development team: “It’s the architect’s fault: there’s no way we can track down and fix that many violations and still get your functionality into production on time.”

Product owner: “Why did you create a Checkstyle configuration that was so out of whack with the way our team codes?”

Architect: “It’s the development team’s fault: we have to have some standards, otherwise we’ll end up with a whole base of impenetrable code that nobody dares change!”

Product owner: “Why don’t you have any standards?”

Development team: “Uh…”

The real answer to that one, of course, is one that the development team probably won’t give: coding standards are somebody else’s responsibility, not theirs, so they aren’t used to thinking about it and don’t really care.  Likewise things like simple design.  They’re not responsible for the design, and making official changes to the design is so painful and time-consuming that it’s much easier not to think about design at all, and just hack in fixes where the official design doesn’t fit reality.  Testing?  Sure, they’ll have to make a token effort in that direction, but as long as the Emma coverage numbers are such that the code passes the externally-imposed checks, it doesn’t really matter how good or bad the tests are, because if there are problems the maintenance team will catch them.

In a situation like this, the drive to excellence inside the team is largely extinguished, for two complementary reasons.  First, the culture is designed so that excellence is defined by people outside the team, so that any ideas the team has to make things even better are therefore by definition not excellence (or else said people outside the team would already have had them), so they are resisted or even punished.  Second, it’s just not the way things are done.  Mediocrity is a way of life, and anything completely intolerable will be taken care of by somebody else in some other layer of company bureaucracy.

Any team member who has a hunger for excellence must surmount both the why-bother and the you’re-gonna-get-in-trouble obstacles before he can accomplish anything.

The thing that has made Agile so successful, in the places where it has been successful, is the fact that it puts the incentives in the right places, not the wrong places.  To the greatest extent possible, the rewards for your successes and the consequences for your failures come directly to you, not to somebody else.  You are given not only the power but also the incentive to better yourself, the project, and the organization, and whatever stands in your way is made big and visible, and then removed with alacrity and enthusiasm.

Many organizations believe that they have adopted Agile by instituting certain of the practices that Agile coaches tell them are part of Agile, but they retain a company culture shot through with moral hazard where many of the incentives are counterproductive.

For example, in a truly Agile environment:

  • There is no separate maintenance team.  The development team is responsible for a project from conception to end-of-life.  If there’s a production problem that means somebody rolls out of bed at 3:00 in the morning, then it’s somebody from the development team responsible for that project.  If the project is so buggy that its dev team spends 80% of its time fixing production issues, then until the project is much healthier, that dev team isn’t going to get many new projects.  Hence, the dev team is going to do everything it possibly can by way of testing to make sure the project is ready for production, and by way of logging so that if there are issues forensics will be quick and easy, and by way of simple and open design so that any problems can be fixed fast.  And they’ll do this without any scowling remonstrances from a stern-faced architect outside the team, because the built-in reward for success and consequence for failure is worth much more to them than anything he could offer anyway.
  • Developers come up with designs, which they then vet–should they decide it’s necessary to avoid becoming a maintenance team–by consulting with architects who know the whole system better than they do.  The architects have no power or responsibility to approve or reject; their job is merely to act as information resources.  Hence, the designs will be better and simpler, because they spring up from reality, rather than down from an ideal.  The architects will know more about what’s really going on, because they’ll be doing more listening and less telling–and because if they don’t, development teams will prefer to interact directly with other development teams to find out what they need to know and make the highly-paid, expensive architects superfluous.
  • If Checkstyle or one of its competitors is used at all, it’ll be used because the team has decided it will be helpful, and the standard it enforces will be tailored to the team by the team, not imposed on the entire organization by somebody with a Vision who actually writes very little code.
  • The product owner, not anyone in IT, decides when the code goes into production and is responsible for the result.  If the product owner puts code in too soon, he’ll be responsible for the negative experience the customers have: but that’s appropriate, because they’re his customers and he knows what’s important to them.  At least he’s not likely to put the code in too late, because as the product owner he knows which barriers to production are important to his customers and which they don’t care about.  Wanna place any bets on where he’ll stand on Checkstyle violations?
  • The team’s cadence–iteration length, end/start day, standard meetings, impromptu meetings, etc.–is determined by the team based on the requirements of the project and the needs of the product owner.  Fly-by-night Agile coaches may pass through and provide consultation and advice, but it’s the team–not management, not IT–and the product owner that decides what works best for the project.
  • Finally, anyone on the team who has an idea to make things better or easier or faster or more efficient can directly implement that idea for the team immediately, with no submission process or lengthy approvals more involved than bringing it up after standup one day and getting thumbs-up or thumbs-down.  Whatever it is, it doesn’t have to affect or be imposed upon anyone else in the company.  If it turns out to be a good idea, other teams will notice, inquire, and imitate: voluntarily, on their own, because they’re attracted by it.  If it turns out to be a bad idea–and let’s face it, nobody can say for sure whether a newly suggested practice will have unforeseen and unintended consequences–then it will slow down only one team for an iteration or two before being abandoned, and the company will cheaply learn something valuable about what doesn’t work.

If a company labors to adopt Agile, but insists on keeping the moral hazard in its culture, the change will be limited to a few modifications in practice, with no significant increase in velocity, efficiency, quality, or morale.  Furthermore, people in an environment like this will look around and say, “So this is Agile, huh?  What a crock!  This sucks!”

But what if the company is also willing to put the incentives where they belong, as well as adopting practices that those incentives have shown to be useful in other organizations?  What if IT is told that its purpose is to serve and support development teams as they direct, and the development teams are told that if IT doesn’t support them properly, they’re to bypass it and do whatever’s necessary to get into production?  What if development teams are told that their purpose is to serve and support the product owner, and that if they don’t satisfy their product owner, he’ll go find another team?  What if the product owner is told that his purpose is to serve the business, and any demand made on him by development or IT that doesn’t move toward satisfying the needs of the business can be rejected out of hand?

In a situation like that, moral hazard will quickly become Big And Visible, and it can be dealt with promptly.

As a matter of fact, one might say that an Agile transformation should consist chiefly of a quest to expose and eliminate moral hazard, and that the various practices for which Agile is so well known will automatically trail along behind because they’re the best way anyone’s found so far to operate in a low-moral-hazard environment.

If you adopt economical driving habits, you’ll end up putting less gasoline in your tank.  But if you skip past the economical driving habits and just put less gas in your tank, you’ll end up muttering grim imprecations as you trudge down the highway with a gas can.





A Conversation I’d Like to Have

3 03 2012

Do you ever talk to yourself while you’re driving alone, playacting how things might go if you were able to say what you wanted to the people to whom you wanted to say them?

I do.  It doesn’t usually do me any good, but I do it anyway.  Especially these days, since I have a much longer commute than I used to have.  For example:

“Would you stop screwing around with your Agile teams that way?  You can’t just keep breaking them up and stirring them around and re-forming them and expect any decent velocity from them!

“People are not commodities.  They’re not interchangeable.  They’re unique individuals, with unique aptitudes and unique skills and unique strengths and unique weaknesses. Put a small number of them on a team and leave them there, and they’ll learn all those things about each other and grow to be able to combine their strengths and cancel out their weaknesses.

“A brand-new, immature Agile team that has only one or two projects under its belt isn’t particularly valuable, true, except that it has the potential to grow into an experienced, mature Agile team, which is particularly valuable.

“For example, here’s a challenge. I am under no illusion that you will take me up on this challenge, but I’m not bluffing. Take Kevin and DJ and Gabbie and me, and put us on a team.  I don’t mean add us to an existing team: I mean make a new team out of us.  Give us two of your better QA folks—say Matt and Kay—and a product owner who’s in a bind.  We will blow the frickin’ doors off anything you folks have ever seen in this company before.

“Why—because we’re superintelligent?  Hey: you know me well enough by now to say definitively and with confidence that I, at least, am certainly not superintelligent.  Because we’re expert programmers?  No.  Because we’re an experienced, mature Agile team.

“We know each other; we’ve worked with each other long enough that each of us knows how the others work.  And we won’t stand for any frickin’ bullcrap.  Our overriding concern is not following IT’s established rules and processes: our overriding concern is creating business value for our product owner as quickly and efficiently as humanly possible.  If your IT people support us, we’ll work with them.  If they get in our way, we’ll bypass them.  We will have something in production by the end of the second week, come hell or high water, and at the end of every week thereafter.

“But I warn you: after experiencing a team like ours, that product owner will be forever corrupted.  He’s no longer going to be interested in waiting for approvals from committees of architects or waiting for monthly releases or waiting for weeks of big design up front to be complete…or really in waiting for much of anything, for that matter.  He’s going to want what he wants, and he’s going to want it right now, and if you tell him you can’t give it to him on his time schedule he’s going to point to us and say, ‘They can.'”

Later:

“Yeah…listen, I heard something a little disturbing yesterday.  Do I understand correctly that you have an application you want us to put into production?  Because we haven’t heard anything about this application: it’s not in our process or on our schedule.”

“No, you don’t understand correctly.  The application is already in production—has been for almost two months now.  We hit 200 simultaneous users last week.”

“[indulgent chuckle] I’m afraid your information is mistaken.  I would certainly know if your application was in production, because—as you know—I oversee the deployment process; and I haven’t put it in production.”

“It’s your information that’s mistaken.  We talked to you folks—what, ten weeks ago or so now?—about our application, and you told us we’d have to write and submit a proposal to the proper committee and have it discussed and approved at the next quarterly meeting, and we simply didn’t have time for that; so we hired a hosting service in the Cloud and deployed it there.”

“No—you see, in order to do that you’d have to…wait.  Wait.  What?!”

“I told my team that I needed the application in production as soon as possible.  They talked to you guys, then decided it was possible to put it into production sooner than you could, so that’s what they did.  They’re Agile, you know.”

“But you’re not allowed to do that!  We don’t host our code on third-party servers, we host it on our own servers, under our control.  You’re going to have to take it down and have us put it up on our farm.”

“What did you say to me?  I’m not allowed?  Listen, let’s talk for a minute about what I am, rather than what I’m not.  I am the business.  You’re IT.  I’m your reason for being.  You exist to serve me, not to tell me what I’m allowed to do or give me orders.  My team is leading me to really understand that for the first time.

“But look, I’ll tell you what.  You’ve got a proposed requirement for our application.  Fine.  Write me up a story card, and I’ll put it in the backlog. I’ll prioritize it among all the others written by my team and me.  If you get fast enough to give me the same level of service my team gives me with our hosting service, or if the application gets mature enough that we change it so seldom that your latency doesn’t really matter, your card might get somewhere near the top of the backlog.”

“It’s not a good idea to talk to me that way.  Since the application didn’t go through our established processes, we can simply choose not to take the responsibility for supporting it.”

“We’re not interested in having you support it.  My team is doing a fine job supporting it themselves.  As a matter of fact, the defect count is so low there hasn’t been much in the way of support to do, anyhow.”

“They didn’t consult with any of our senior architects.”

“Or the junior ones either—yes, I know.  That’s one of the reasons they were able to move so quickly.  Another is that they wrote it in Scala, which enabled them to use a couple of new frameworks they say really gave them a boost.”

“Scala?  Scala?!  We don’t support Scala!”

“Yeah, we noticed.  You don’t support git or IntelliJ or Jenkins either.”

“We can’t maintain that—none of our maintenance people know any of that stuff.”

“We’re not interested in having you maintain it.  My team is going to maintain it.  They’re the ones who wrote it, after all—it makes sense for them to maintain it.”

“Well, okay, what are you interested in having us do?”

“Hey, I didn’t call you; you called me.  You just go on doing whatever it is you do—make a rule, require an approval, establish a process; I’m just guessing here.  I probably won’t have a lot of time to talk to you for a while: I’ve got about two more iterations of stories on this project, and then I’m going to have my team start another one.  I’ll tell you about it if you like, but I’m not interested in having you support it or anything.  My guess is that we’ll have it serving users before you could have gotten it through your process.

“This Agile stuff is really cool.  You should try it sometime.”





Should Agile Testers Code?

13 01 2012

This morning at CodeMash 2.0.1.2, I had an argument over breakfast with Jeff “@chzy” Morgan (more commonly known as Cheezy) of LeanDog about Agile testers.  After some advancing and retreating around the battlefield, it became clear that we seemed to generally agree about almost everything regarding how Agile teams should be run, except for this one issue.

The story test that every story card should produce when it first arrives in the Dev In Progress column: should that story test be written by a developer or by a tester?

Cheezy says a tester should write it; I say a developer should write it (after a detailed conversation with a tester or product owner, of course).

I’m going to reproduce here, as well as I can remember them, the salient points of the argument, and I’d love to have people with more experience in testing than I have address them in the comments.

Before I say what I’m going to say, though, I’d like to say what I’m not going to say.  As a developer, I’m not going to disrespect testers.  I’m not going to imply that coding is an exalted, morally superior activity that should be undertaken only by developers, since developers are the moral superiors of testers.  I live in awe of good Agile testers, and–after having tried once or twice–I have no lingering illusions that I could do what they do.  Agile testers have brought my cheese in out of the wind so many times that I’ll be indebted to them for a long time.

Essentially, my arguments will all boil down to this: people should do what they’re good at.

Finally: I’m going to do my best to be fair to Cheezy’s arguments as he presented them.  Cheezy, if you read this, feel free to comment publicly or privately about anything I get wrong, and I’ll post an update.

Onward.

Cheezy says that since testers will almost always know more about the problem domain than developers will, it is the testers who should have the responsibility of translating the requirements from story-card language into executable language, like Cucumber over Ruby.

I’ve been on teams before where this was advocated.  It sounds good, both to Cheezy and to me.  What I find in the real world, though, is that writing good tests is absolutely the hardest technical part of a developer’s job.  It can sometimes take me a couple of days to get a story test written and failing for the right reasons, given all the investigation and forgotten last-minute spiking that has to go into it; once I do that, the code to make it pass is generally much easier.
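For concreteness, here’s roughly the shape of thing I mean by a story test, boiled down to framework-free Java (the cart, its API, and the discount rule are all invented for illustration—a real one would live in Cucumber, easyb, or whatever the team uses):

```java
import java.math.BigDecimal;

// Hypothetical production code standing in for the real system.
class OrderCart {
    private BigDecimal subtotal = BigDecimal.ZERO;

    void addItem(BigDecimal price) {
        subtotal = subtotal.add(price);
    }

    // The rule from the story card: 10% off when the subtotal exceeds $100.
    BigDecimal total() {
        if (subtotal.compareTo(new BigDecimal("100.00")) > 0) {
            return subtotal.multiply(new BigDecimal("0.90"));
        }
        return subtotal;
    }
}

public class StoryTest {
    public static void main(String[] args) {
        // Story card: "A customer's cart total reflects a 10% discount
        // on orders over $100."
        OrderCart cart = new OrderCart();
        cart.addItem(new BigDecimal("60.00"));
        cart.addItem(new BigDecimal("60.00"));  // subtotal 120.00: discount applies

        // compareTo, not equals: BigDecimal equality is scale-sensitive.
        if (cart.total().compareTo(new BigDecimal("108.00")) != 0) {
            throw new AssertionError("expected 10% discount over $100, got " + cart.total());
        }
        System.out.println("story test passes");
    }
}
```

The hard part isn’t the dozen lines of code; it’s the investigation that tells you the rule is “over $100” and not “at least $100,” and getting the test to fail for the right reasons before any production code exists.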

So in this case you’re not just saddling testers with development work: you’re saddling them with the very hardest kind of development work.  For this, it’s not good enough to take an excellent tester and teach him a little coding.  You have to actually make him into a real honest-to-goodness developer as well as a tester, if he’s going to be successful.

I was on a team once with a tester who was really good at writing story tests.  In his case, if I remember correctly, he was writing easyb tests and backing them up with Java.  As a matter of fact, he liked it so much that he ended up crossing over and becoming a developer instead of a tester.  I was rotated off the project shortly after that, but I imagine that with both testing and development skills, he went far and fast.  If he did so, however, it was as a developer, not as a tester.

The rest of the testers on that team didn’t really cotton to easyb or Java.  They struggled at it for a while, frequently resorting to the developers for help, and eventually stopped writing automated tests altogether, whereupon the task fell to…that’s right, the developers.

To support Cheezy’s side of the argument, I should say that I was also on a team that used Fitnesse over Java for ATDD, and on this team the tester really did write the acceptance tests in Fitnesse, and it worked out great.  On my side of the argument, though, the Fitnesse tests had to be supported by fixtures written by the developers, and it’s somewhat of a stretch to call filling out a Fitnesse wiki table “coding.”

Cheezy says that the most dangerous part of the software development process is the place where requirements get translated from English to code; therefore it’s always safest for that to happen all inside one brain, rather than having it depend on one person understanding another person’s explanation.

I agree that that would be the best place for it if that kind of thing worked; but in my experience, in a large percentage of cases, the tester gets stuck while writing the code–remember, this is hard work, not easy work–and enlists a developer to help him.  When that happens, the developer has to look at the tester’s existing code and figure out what he’s trying to do–error-prone–talk to the tester and figure out what he wants to do–error-prone–and guide him through the rest of the process of getting his thoughts down in code–again, error-prone.

I don’t know of a way that we can eliminate errors in translation.  We can reduce them by listening really, really carefully and asking lots of questions; that’s the way Agile developers are trained to operate.  But eliminate?  You tell me.

There’s another issue here too that didn’t come up in our discussion, but which I’d like to raise here.

Many times, especially when a card represents a foray into an as-yet-unexplored wing of the project, there will be spikes preceding it to discover things like, “Once I’ve sent a set of instructions over the network to that remote node, how do I get it to tell me what it’s done in response to them?” or “How do I take control of this legacy subsystem so that I can make it behave as my tests need it to?”

Those spikes won’t be completed by testers: they’ll be completed by developers.  Hopefully they’ll be completed by the same developers who will eventually play the cards they precede.  Having those developers employ the technology they’ve just spiked out is certainly less error-prone than having them explain it to testers so that the testers can write the story tests (quite possibly in a completely different language from that of the spike) that use it.

Cheezy says that having testers automate their testing frees them from regression testing so that they can do the most valuable thing testers do: manual exploratory testing.  I agree that exploratory testing is the crowning glory of an Agile tester; I agree that manual regression testing is worse than scrubbing the gymnasium floor with a toothbrush.  But my experience with testers writing automated tests is that they spend so much time struggling with the code that it cuts into their exploratory time just as much if not more.

And as for manual regression testing, nobody should have to do that–except perhaps after a system-wide refactoring, like moving from a SQL database to a NoSQL database or switching Web frameworks, where the vulnerabilities move around.  When a tester discovers a defect during his exploratory testing, a developer needs to write a failing test that demonstrates the defect, then make that test pass and leave it as a permanent feature of the test suite, so that the tester never has to regress that problem again.
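Here’s a sketch of that pinning move, with a made-up formatter and a made-up defect standing in for the real thing: the tester finds the problem by hand once, the developer writes the failing test, and from then on the suite regresses it automatically:

```java
import java.util.List;

// Hypothetical code standing in for the real system; the "defect" is invented.
class NameFormatter {
    // Exploratory testing turned up a stray trailing comma on one-element
    // lists.  The fix is the i > 0 guard; the test below pins it forever.
    static String join(List<String> names) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < names.size(); i++) {
            if (i > 0) {
                sb.append(", ");  // separator goes between names only
            }
            sb.append(names.get(i));
        }
        return sb.toString();
    }
}

public class RegressionTest {
    public static void main(String[] args) {
        // Written to fail while the defect existed; kept passing ever since,
        // so nobody ever has to check this case by hand again.
        if (!NameFormatter.join(List.of("Ann")).equals("Ann")) {
            throw new AssertionError("single name must have no trailing comma");
        }
        if (!NameFormatter.join(List.of("Ann", "Bob")).equals("Ann, Bob")) {
            throw new AssertionError("names must be comma-separated");
        }
        System.out.println("regression tests pass");
    }
}
```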

Cheezy pointed out that it’s very expensive when cards move backwards on the wall, and opined that having testers write story tests could eliminate that.  I’m skeptical, but if Cheezy–if anybody–can come up with a way to eliminate having cards move backwards on the wall, I suggest that the very next step should be getting rid of testing altogether: why would it be needed if a card never moves backward anyway?

A final point I’d like to make is that I’ve found that one of the reasons testers are so valuable is that they don’t instinctively think like developers; they think like users.  Making them code would force them to think like developers at least a little–that is, enough to be able to do the hardest thing developers do.  Hopefully it wouldn’t pull them completely over to the Dark Side, but in the final analysis I think I really would rather have them stare blankly at me as I yammer on about some developer excuse and then say, “But it doesn’t work.”  If they start accepting my excuses because they understand my viewpoint, that’s no good for anybody.

Thanks again, Cheezy, for the discussion this morning; it has helped me think about Agile testing in new ways.  Perhaps your comments and others will help me further.