Monday, February 24, 2014

Notes on Software Testing

It seems software testing is one of those really hard things to get right.  I very often run into projects where the testing is inadequate or where, in an overzealous effort to ensure there are no bugs, the test cases are too invasive and test things that shouldn't be tested.  This article attempts to summarize what I have learned from experience, in the hope that it is of use to others.

The Two Types of Tests


Software testing has a couple of important functions, and these are different enough to form the basis of a taxonomy of test cases.  These functions include:
  1. Ensuring that a change to a library doesn't break something else
  2. Ensuring general usability by the intended audience
  3. Ensuring sane handling of errors
  4. Ensuring safe and secure operation under all circumstances
These functions fall into roughly two groups.  The first ensures that the software functions as designed, and the second ensures that where undefined behavior exists, it occurs in a sane and safe way.

The first type of test, then, ensures that behavior conforms to the designed outlines of the contract with downstream developers or users.  This is what we may call "design-implementation testing."

The second type of test ensures that behavior outside the designed parameters is either appropriately documented or appropriately handled, and that the software can be deployed and used in a safe and secure manner.  This, generally, reduces to error testing.

These two types of tests are different enough that they need to be written by different groups.  The design-implementation tests are best written by the engineers designing the software, while the error tests need to be handled by someone somewhat removed from that process.

Why Software Engineers Should Write Test Cases


Design-implementation tests are a formalization of the interface specification.  As such, the people best prepared to write good software contract tests are those specifying the software contracts, namely the software engineers.

There are a couple of ways this can be done.  Engineers can write quick pseudocode intended to document interfaces along with test cases to define the contracts, they can develop a quick prototype with test cases before handing off to developers, or the engineers and the developers can be closely integrated.  In any case, the engineers are in the best position, knowledge-wise, to write test cases that check whether the interface contracts are violated.
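As a sketch of what such a contract test might look like (the `parse_price` function and its documented behavior here are hypothetical, invented for illustration):

```python
# Hypothetical contract: parse_price("$1,234.50") returns the amount
# in cents as an int, per the documented interface specification.
def parse_price(text: str) -> int:
    cleaned = text.replace("$", "").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

# Contract test: formalizes the documented interface, nothing more.
def test_documented_examples():
    assert parse_price("$1,234.50") == 123450
    assert parse_price("$0.99") == 99
    assert parse_price("$5") == 500
```

The test reads like the specification itself, which is exactly why the engineer who wrote the specification is best placed to write it.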

This works best with a short initial iteration cycle for the prototypes.  The full development, however, can run on a much larger cycle, so this approach is not limited to agile development environments.

Having the engineers write these sorts of test cases ensures that a few very basic principles are not violated:

  1. The tests do not test the internals of dependencies beyond necessity
  2. The tests focus on interface instead of implementation
These rules help avoid test cases that break needlessly when dependencies fix bugs.
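To illustrate the difference between the two rules being followed and violated, a minimal sketch (the cache-backed `lookup` function is hypothetical):

```python
# Hypothetical module under test: a lookup function backed by an
# internal cache whose layout is an implementation detail.
_cache = {}

def lookup(key: str) -> str:
    if key not in _cache:
        _cache[key] = key.upper()   # stand-in for an expensive fetch
    return _cache[key]

# Interface-focused test: asserts only on documented behavior, so it
# survives any internal rewrite that honors the contract.
def test_lookup_returns_uppercased_value():
    assert lookup("abc") == "ABC"

# Implementation-focused test (the antipattern): reaches into the
# private cache, so it breaks if the caching strategy ever changes,
# even though the contract is still honored.
def test_lookup_populates_cache():   # fragile -- avoid
    lookup("abc")
    assert _cache["abc"] == "ABC"
```

The second test is exactly the kind that breaks needlessly when a dependency fixes a bug or changes an internal data structure.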

Why You Still Need QA Folks Who Write Tests After the Fact


Interface and design-implementation tests are not enough.  They cover the basics and ensure that correct operation will continue.  However, they generally do not cover error handling very well, nor do they cover security-critical questions.

For good error handling tests, you really need an outside set of eyes, not too deeply tied to the current design or code.  It is easier for an outsider to spot a "user is an idiot" message that was left in as a placeholder than it is for the developer or the engineer.  Some of these problems can also be caught by cross-team review of changes as they come in.

A second problem is that to test security-sensitive failure modes, you really need someone who can think about how to break an interface, not just about what it was designed to do.  The more invested one is, brain-cycle-wise, in implementing the software, the harder it often is to see this.
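A sketch of that adversarial mindset (the `validate_username` rule here is hypothetical): error-path tests deliberately feed hostile or malformed input and assert that failure is clean and explicit rather than silent.

```python
import re

# Hypothetical validator: usernames must be 3-20 word characters.
def validate_username(name):
    if not isinstance(name, str):
        raise TypeError("username must be a string")
    if not re.fullmatch(r"\w{3,20}", name):
        raise ValueError("invalid username")
    return name

def raises(exc, fn, *args):
    """Small helper: True only if fn(*args) raises exc."""
    try:
        fn(*args)
    except exc:
        return True
    return False

# Error-path tests, written from the attacker's perspective:
# injection attempts, empty input, oversized input, wrong types.
def test_rejects_hostile_and_malformed_input():
    assert raises(ValueError, validate_username, "a'; DROP TABLE users;--")
    assert raises(ValueError, validate_username, "")
    assert raises(ValueError, validate_username, "x" * 10000)
    assert raises(TypeError, validate_username, None)
```

None of these cases appear in the interface's happy-path contract, which is why they are so much easier for an outsider to think of.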

Conclusion


Software testing is best woven relatively deeply into the development process, and it should happen both before and after main development.  Writing test cases is often harder than writing code, and that goes double for writing good test cases versus writing good code.

Now obviously testing SQL stored procedures is different from testing C code, and there may be cases where you can dispense, to a small extent, with some after-the-fact testing (particularly in declarative programming environments).  After all, you don't have to test what you can prove, but you cannot prove that an existing contract will be maintained into the future.
