Thursday, April 24, 2008

How big is a test case?

I run an automated testing team, and I've noticed something interesting lately. As we add more test automation, and compare our numbers with the test automation numbers reported by other teams, it has become impossible to tell at a glance what the scope of a test case is.

What does it mean to say that you have 10 automated tests for a particular feature area? How does that compare to 10 manual tests?

Some of the tests we run in our automation system are pretty much direct ports of manual test cases, often defined by another tester at some point in the past. Many of the tests currently being automated exercise the rules engine: you create a rule, apply it to the device under test, pass traffic of various types through the device, and check the results. The test may set 5-20 parameters on the device and verify a similar number of distinct pieces of traffic sent through the DUT. All of that counts as one test.
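To make that concrete, here is a rough sketch of the shape of one of these rules-engine tests. This is not our actual framework; FakeDevice, the parameter names, and the traffic types are made up for illustration. The point is the structure: one rule, a handful of settings, several traffic checks, one reported test case.

    # Sketch only: hypothetical stand-ins for the DUT and the test harness.
    class FakeDevice:
        """Stand-in for the device under test (DUT)."""
        def __init__(self):
            self.rules, self.params = [], {}
        def apply_rule(self, match, action):
            self.rules.append((match, action))
        def set_parameter(self, name, value):
            self.params[name] = value
        def handle(self, flow):
            # Drop any flow that matches a configured rule; forward the rest.
            for match, action in self.rules:
                if match in flow:
                    return action
            return "forwarded"

    def test_rule_blocks_matching_traffic():
        dut = FakeDevice()
        dut.apply_rule(match="https", action="dropped")
        for name, value in {"logging": "on", "timeout": 30, "mode": "strict"}.items():
            dut.set_parameter(name, value)        # real tests set 5-20 of these
        expectations = {"https": "dropped", "http": "forwarded", "dns": "forwarded"}
        for flow, expected in expectations.items():
            assert dut.handle(flow) == expected   # many checks, one reported test

    test_rule_blocks_matching_traffic()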

In contrast, some of our automated tests contain an enormous number of setting changes and traffic checks per reported test case. The SSL test suite that we've built around Codenomicon sends 7,200 discrete pieces of traffic for each of the 144 tests that we report in the automation summary. Our compression test suite tests 432 discrete settings for each of its test cases, and we have 480 test cases in the compression suite, for a total of 207,360 checks.

If we were to say that a manual test case averages 15 checks, then the compression test suite would be the equivalent of nearly 14,000 manual tests. Obviously it isn't really equivalent; the manual test suite for the entire project only contains about 2,000 tests.
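For anyone who wants to check the arithmetic, the counting works out like this. All the figures are the ones quoted above; the 15 checks per manual test is just the assumption stated, not a measured number.

    # Counting behind the comparison above (numbers from this post).
    ssl_checks = 144 * 7200          # SSL suite: 1,036,800 traffic checks
    compression_checks = 480 * 432   # compression suite: 207,360 checks
    manual_equivalent = compression_checks / 15   # ~13,824 "manual-sized" tests
    print(ssl_checks, compression_checks, round(manual_equivalent))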

So how big should a test case be? Is it possible to define tests in such a way that you can meaningfully compare manual tests to automated tests? Would that be worth doing?
