Testing Times

Developers, testers and testing

In the world of software, there are developers and there are testers. The developers often design and implement the software while the testers define and execute the test plans. Software engineering requires both testers and developers, and together they make quality software; one by finding problems and the other by solving problems¹. At least, that's the way it should be. Unfortunately, many developers (including myself) have found themselves in situations where the QA department is nonexistent, where testing and the associated test plan updates lurk at the end of every development cycle or feature implementation.

Just to be clear, we're not talking unit tests like those used in test-driven development (TDD) with frameworks like NUnit or MSTest. Unit tests and TDD are somewhat unique in that they take the developer's strength of solving problems and trick (sorry, persuade) developers into seeing testing as yet another problem in need of resolution (just how do you prove a requirement was met – to the TDD Cave, Codeman!).
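For anyone not yet acquainted with the TDD Cave, a unit test really is just a small program that states a requirement and then checks that it holds, which is why it appeals to the problem-solver in us. Here's a minimal sketch of what that looks like in NUnit; the Adder class and its Add method are invented purely for illustration.

    // A made-up requirement: Add should return the sum of its two arguments.
    using NUnit.Framework;

    [TestFixture]
    public class AdderTests
    {
        [Test]
        public void Add_TwoNumbers_ReturnsTheirSum()
        {
            var adder = new Adder();          // hypothetical class under test

            var result = adder.Add(2, 3);

            // The assertion is the "proof" that the requirement was met.
            Assert.That(result, Is.EqualTo(5));
        }
    }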

Sadly, the manual tests found in system testing, integration testing and regression testing are not so exciting. They don't usually present cunning problems to be solved; instead they demand mind-numbing hours of following detailed, inane instructions where the result feels obvious and the rewards are few. At least, that's my experience as a developer performing tests; the same cannot be said of testers. I've worked with some very talented, passionate quality assurance professionals whose joy in their craft was inspiring and of whom I have been envious whenever I have found myself burdened by testing.

Finding those team mates who take pride in testing and making a product better is like striking gold, but even those who find schadenfreude in identifying a colleague's mistakes are a better option for a developer than having to run the testing themselves. However, dedicated resources for quality assurance are often seen as a luxury², leaving developers with little option but to take that responsibility on themselves.

To be clear, what I'm trying to say is that developers generally hate testing and, more specifically, that I hate testing, but we'll do it anyway if pushed.

WHHHHAAAT??!

At this point you may be surprised to discover that I recently found myself testing some software. Whether it was a poorly defined test, a flaky feature, or just the mundanity of repeating the same operations (albeit with subtle adjustments) over and over and over again, it left me frustrated, weary and disengaged. Testing is just not my thing, but I do it because I have to – releasing untested software should never be an option for a professional software developer; our users are not our QA department. The all-too-familiar experience reminded me of steps that developers can take when they're the ones who have to update and execute manual testing; steps that I've seen in action and that make testing almost pleasurable (almost).

Just update the test plan

Have you ever updated a test plan without checking the test was correct, or perhaps executed a test plan that was incorrect? Updating a test plan is tedious: we have to check that existing tests are still relevant and work out where there are gaps in the test coverage. This usually means looking at requirements documents and change requests and determining the various test paths, expected results, etc. It can be a lot of work and it is all too easy to fall into the trap of skipping steps, like validating that a test is correctly defined, or pretending that there's no way the existing plan missed something. Not only that, but if you have diligently updated the test plan, validating each test as you go, executing it all over again is even more painful because you already know what does and does not work from updating the tests in the first place.

So, do it once and do it right. If you carefully update the test plan, validating existing tests, updating others and creating new ones, you will find yourself testing the product anyway. As tests that should work don't, change requests will get raised and the product will improve. Not only that, but you'll only need to update the document once and you won't need to run the tests more than is absolutely necessary. To cap it off, the act of defining tests is pretty close to problem solving, making it a little less tedious for a developer to perform (though it is documentation, so, you know, don't hurt yourself or anything).

Assume the tester knows nothing (and is a little slow)

All too often, I come across test plans that are written like a kindergarten story.

Start the application. And then open a file. And then click OK. And then check the background is white and the caption says "Bite me!".

Paragraphs of simple instructions, often with steps missing that the author assumes the tester will know, and without any explanation of what it means if the test fails. Instead of this mess, introduce each test with an overview of its purpose and what failure means, followed by test instructions each on a separate line. This not only helps you and your team mates when running the tests, but it also helps when they come to update the test plan. Think of the test as code: you wouldn't expect the processor to guess when you miss out lines of code (I hope), so don't expect a tester to do the same; don't forget to add comments where more detail is needed (such as why it's important to change what locale the system is using); and number each step so that it can be referred to easily in notes and change requests, e.g. "Test 2.6, step 10 failed with a value of 20 where 21 was expected"³. If you do this, you will thank yourself later.

Provide context for the results

When performing the test, you will want to record results for each step. When reviewing results, you will usually want to see the test step that garnered them, especially if there is a failure or an ambiguous result. Save yourself some time by specifying your tests as a table with a column for results. That way, results are recorded next to the test definition, making both recording and reviewing much easier. Not only that, but you don't need to maintain a results sheet and the test definitions separately, or contend with different people recording the results in different formats.

Conclusions

If you follow these three simple steps, you should end up with test definitions that look less like an account of your weeks at summer camp when you were 7 and more like the example below.

This test checks the flange sprocket exposes the doobrey flap.

Step | Instructions                                                        | Results
10   | Open the flange sprocket. You should see the flange sprocket open. | Pass – opened
20   | Press the doobrey flap.                                             | Fail – unable to locate doobrey flap. Test lacking sufficient detail or doobrey flap was not exposed.
30   |                                                                     |

Of course, all this assumes you don't have a QA team or QA team members (or even some tools that help you define and execute manual testing). If you do, that's great; respect your QA team members (or your tools) and the work they do to keep your users from deploying their wrath upon thee. For the rest of us, stuck with nothing but ourselves and our office productivity applications in which to define and record our testing, following these tips will make our testing life (and that of those around us) just that little bit less tedious. Who knows, some of you might even start enjoying it.

  1. This is a very simplistic overview, I know.
  2. There are valid and not so valid reasons for this, but we're not going to get into that here.
  3. You might also consider spacing step numbers by 10 so it's easier to insert additional steps without renumbering all subsequent steps.