Thursday, May 1, 2008

Test Automation Contexts

Over the past couple of months, I've been leading a cross-product group at work that is discussing test automation and trying to increase our efficiency/effectiveness. In the first couple of meetings, we ran into a lot of confusion and disagreement. At the suggestion of one of the participants, we took a collective step back to come up with a problem statement. Without too much difficulty, we were able to agree on the following problem statement for automation.

The goal of Test Automation is to create reusable test assets that reduce or eliminate the asymmetry of effort between test and development for new releases.

So what the heck does that mean? By asymmetry of effort we mean that as the product becomes more complex, the test load for each release climbs faster than the dev load. We have to test both the new features and the integration of new features with old features, but the dev team is only producing new features. So on Release 1, dev builds N features and testing tests N features. On Release 2, dev builds N features again, while testing tests N new features plus roughly N^2 interactions between the new features and the old ones. The longer the product lives, the more features get integrated and the bigger that gap gets.

By reusable test asset, we mean tests that can be run against multiple releases of the product, multiple hardware platforms, operating systems, etc., without having to be completely rewritten for each release. Sometimes that isn't possible: when dev changes how a feature works, the test has to change too. But we need to make that change without breaking the version of the test that runs against older versions of that feature.

Having a problem statement and some agreed-upon definitions helped, but we continued to have a surprising level of disagreement. For instance, to me it was obvious that we needed a database to keep track of the history of test results, but another member of the group argued strongly that a database was irrelevant; the only thing he cared about was the results of the current test run. I spent a lot of time and thought trying to make sense of this. Eventually it occurred to me that he was a developer working in a development team, rather than a tester working in a test team. His context was different from mine.

That realization turned out to be the key. I had one on one conversations with several members of the group, and then brought up the idea in the meeting. As we talked it over, we came up with four contexts in which our company uses test automation.

  • Developer Test Automation

  • Development Team Test Automation

  • Project Automation

  • Product-line Automation

Different groups within the company were creating automation in different contexts, but the automation rarely crossed boundaries, so (for example) the test team wasn't getting the benefit of the automation being built by developers.

What is the difference between the different contexts? What are the constants? Migrating tests from one context to another would give a big boost to our return on investment. How do we make it possible (and easy) for tests to migrate from one context to another?
