Testing AngularJS: Directives

So far in this series of AngularJS-related posts, we have looked at some utility factories for tracking web requests and guarding against navigation, whether in-app or not, and at how to write tests for those factories using Jasmine. In this post, we will look at how I generally test directives and how that impacts their structure.

Directives

Directives are the things we write in AngularJS that allow us to extend and modify the HTML language. We provide a template, some code-behind, and data binding, and AngularJS performs its magic to create a new behavior or control that is inserted via tag, attribute or CSS class. However, directives can at first seem difficult to test.

Let's look at a convoluted example.

sa.directive('saEditableField', function () {
    return {
        restrict: 'E',
        require: 'ngModel',
        template: '<label>{{fieldName}}</label><input type="text" ng-model="fieldValue" /><button ng-click="showField()">Alert!</button>',
        scope: {
            fieldName: '@saFieldName',
            fieldValue: '=ngModel'
        },
        controller: function($scope) {
            $scope.showField = function() {
                window.alert("Your " + $scope.fieldName + " is " + $scope.fieldValue);
            };
        }
    };
});

In this example we have a very simple directive. It specifies a template, some scope, and a controller where the business logic lives. It is used like this:

<sa-editable-field sa-field-name="First Name" ng-model="fieldValue"></sa-editable-field>

As it stands, there are one or two things that either make this difficult to test or (I feel) could be clearer1. Let's refactor this directive with testing in mind.

Tests

When I have a working directive that I want to test, I start writing tests. I realise this seems obvious, but it is worth noting: I do not tend to do much refactoring before writing tests, and I do not tend to write tests before I have something to test. I find it much more useful to get a concept working and then think about what tests should be there and how I can implement them. When I find friction in authoring those tests, I identify my refactoring priorities.

Considering the directive presented above, what tests do we need? Here is the list of things that I think should be tested:

  • That the directive exists
  • That the directive compiles
  • That the alert shows when the button is clicked
  • That the alert mentions the field name
  • That the alert mentions the field value

Testing that the directive exists is easy (note that we have to add Directive to the end for the directive to be injected).

describe 'exists', ->
  When => inject (saEditableFieldDirective) =>
    @saEditableField = saEditableFieldDirective
  Then => expect(@saEditableField).toBeDefined()

Checking it compiles is a little more convoluted, but not difficult.

describe 'compiles', ->
  Given => inject ($rootScope) =>
    @scope = $rootScope.$new()
    @scope.field = 'test'
    @htmlFixture = angular.element('<sa-editable-field sa-field-name="Test" ng-model="field"></sa-editable-field>');
  When => inject ($compile) =>
    $compile(@htmlFixture)(@scope)
    @scope.$apply()
  Then => @htmlFixture.find('label').length == 1
  And => @htmlFixture.find('label').text() == 'Test'
  And => @htmlFixture.find('input').length == 1
  And => @htmlFixture.find('button').length == 1

We call the linking function that $compile returns because $compile can succeed even if our fixture has a typo.

Using knowledge of our directive template, we can verify that the linking worked without going so far as to validate every aspect of the magic Angular does for us. In fact, since this is really intended to test the code that gets called inside the directive when it is linked to a scope, the template for the directive could be as simple as <div></div> (we will take a closer look in a future post at how we can make the directive template replaceable).

Testing the Alert

Testing that the directive exists, compiles and links is relatively straightforward when compared with testing that the alert is shown and shows the right thing. In order to test the alert, we need to be able to invoke showField, the method that shows it. We also need to check that showField actually shows the alert as we would like.

To invoke showField, we might try using the scope that is passed to the linking function returned from $compile. Since the method is added to the scope in the controller, this should work, right? Of course not; this directive has an isolate scope, so the method has been added to a new scope that is isolated from the one we passed in. We can get to that isolate scope from the compiled element using the isolateScope() method:

describe '#showField', ->
  Given => inject ($rootScope) =>
    @scope = $rootScope.$new()
    @scope.field = 'test'
    @htmlFixture = angular.element('<sa-editable-field sa-field-name="Test" ng-model="field"></sa-editable-field>');
  When => inject ($compile) =>
    $compile(@htmlFixture)(@scope)
    @scope.$apply()
  Then => expect(@htmlFixture.isolateScope().showField).toBeDefined()

…but I feel like such a test involves too much setup ceremony just to get to the test2, and it relies on too many things outside of the bit we want to validate. Instead, what if we could get at the controller independently of the directive?

sa.directive('saEditableField', function () {
    return {
        restrict: 'E',
        require: 'ngModel',
        template: '<label>{{fieldName}}</label><input type="text" ng-model="fieldValue" /><button ng-click="showField()">Alert!</button>',
        scope: {
            fieldName: '@saFieldName',
            fieldValue: '=ngModel'
        },
        controller: 'saFieldController'
    };
});

sa.controller('saFieldController', ['$scope', function($scope) {
    $scope.showField = function() {
        window.alert("Your " + $scope.fieldName + " is " + $scope.fieldValue);
    };
}]);

Now our controller is completely separate from the directive, which refers to the controller by name. This means we can write what I feel are clearer directive tests. Using a fake for the controller, we isolate the directive from the real thing and any side-effects it may have:

describe 'saEditableField', ->
  Given -> module ($provide) ->
    fakeFieldController = jasmine.createSpyObj 'saFieldController', ['showField']
    $provide.value 'saFieldController', fakeFieldController;return
  
  describe 'exists', ->
    When => inject (saEditableFieldDirective) =>
      @saEditableField = saEditableFieldDirective
    Then => expect(@saEditableField).toBeDefined()
      
  describe 'compiles', ->
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
      @scope.field = 'test'
      @htmlFixture = angular.element('<sa-editable-field sa-field-name="Test" ng-model="field"></sa-editable-field>');
    When => inject ($compile) =>
      $compile(@htmlFixture)(@scope)
      @scope.$apply()
    Then => @htmlFixture.find('label').length == 1
    And => @htmlFixture.find('label').text() == 'Test'
    And => @htmlFixture.find('input').length == 1
    And => @htmlFixture.find('button').length == 1

…and now the controller can have its own tests too:

describe 'saFieldController', ->
  describe 'exists', ->
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($controller) =>
      @saFieldController = $controller 'saFieldController', { $scope: @scope }
    Then => expect(@saFieldController).toBeDefined()
      
  describe '#showField', ->
    describe 'exists', ->
      Given => inject ($rootScope) =>
        @scope = $rootScope.$new()
      When => inject ($controller) =>
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      Then => expect(@scope.showField).toEqual jasmine.any(Function)

$window

However, we are not quite finished. How do we know that the alert is shown when our showField method is called? We could spy on the global window object, but that's not really very Angular-y, and without some kind of fake our alert will actually get shown during testing (goodness knows what other side-effects we might face by using the global window object). What we need is an injectable version of window that we can replace with a fake during our tests. Unsurprisingly, AngularJS has us covered with the $window service.

sa.controller('saFieldController', ['$scope', '$window', function($scope, $window) {
    $scope.showField = function() {
        $window.alert("Your " + $scope.fieldName + " is " + $scope.fieldValue);
    };
}]);

Now with our controller rewritten to use $window, we can tidy up and complete its test cases.

describe 'saFieldController', ->
  Given -> module ($provide) ->
    $provide.value '$window', jasmine.createSpyObj('$window', ['alert']);return
      
  describe 'exists', ->
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($controller) =>
      @saFieldController = $controller 'saFieldController', { $scope: @scope }
    Then => expect(@saFieldController).toBeDefined()
      
  describe '#showField', ->
    describe 'exists', ->
      Given => inject ($rootScope) =>
        @scope = $rootScope.$new()
      When => inject ($controller) =>
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      Then => expect(@scope.showField).toEqual jasmine.any(Function)
        
    describe 'calls $window.alert', ->
      Given => inject ($rootScope, $controller, $window) =>
        @scope = $rootScope.$new()
        @windowService = $window
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      When => @scope.showField()
      Then => expect(@windowService.alert).toHaveBeenCalled()
      
    describe 'alert includes $scope.fieldName', ->
      Given => inject ($rootScope, $controller, $window) =>
        @scope = $rootScope.$new()
        @scope.fieldName = "Test Name"
        @windowService = $window
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      When => @scope.showField()
      Then => expect(@windowService.alert.calls.mostRecent().args[0]).toMatch /.*Test Name.*/
        
    describe 'alert includes $scope.fieldValue', ->
      Given => inject ($rootScope, $controller, $window) =>
        @scope = $rootScope.$new()
        @scope.fieldValue = "Test Value"
        @windowService = $window
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      When => @scope.showField()
      Then => expect(@windowService.alert.calls.mostRecent().args[0]).toMatch /.*Test Value.*/

And there you have it. Some simple steps that make unit testing directives a little easier.

Finally…

In this post, we have taken a very brief look at how to structure a directive to simplify unit testing by separating the directive declaration from its controller and taking advantage of Angular services such as $window.

Although we have not covered some of the more complex directive concepts such as the link function and DOM manipulation3, these simple steps should take you a long way towards providing better test coverage of your AngularJS widgets.

Until next time, take care and don't forget to leave a comment.

  1. Not unexpected since I wrote it to discuss these things
  2. we have to compile a fixture with a scope and then get the isolate scope from the element
  3. I may cover that in a future post if there is interest or the whim takes me

Testing AngularJS: inject, spies and $provide

Testing is an important part of software development. To a software developer, automating that testing is an equally important part of software development; that is because, to a software developer (at least one like me), testing is boring…unless we can make it seem like software development.

In my last few posts on AngularJS, we looked at a way to monitor HTTP activity and guard against page navigation when requests were pending. However, we didn't validate that the code actually worked. Fortunately, AngularJS provides great ways to encapsulate client-side business logic and isolate it from the user experience, and this, coupled with excellent support from angular-mocks, makes testing AngularJS code easy. In this post, we will take a glimpse at how.

Jasmine, CoffeeScript and Chutzpah

There are a few approaches to JavaScript testing, but they usually involve the same general components: a test framework, a test runner, and a test language. Thankfully (for me and you), this post is not an exhaustive discussion of testing options or their pros and cons. Instead, I will be stating what I use and assuming that they are the best choice1. A great place to start is the JavaScript Testing Tactics presentation from Justin Searls, which can be found here along with other talks he has given.

Based on Justin's testing tactics, which I saw at SEMjs, I write all my tests using Jasmine, CoffeeScript and jasmine-given. The outcome is a terse testing DSL2 that is low on ceremony and high on readability3.

Chutzpah

While it is common on greenfield projects to use a test runner such as Karma launched by Gulp or Grunt, I started my work on a legacy project where the build process was maintained using Visual Studio and MSBuild. Chutzpah is a suite of tools that fits this development process nicely, including a test runner as a NuGet package and some simple Visual Studio integration via extensions.

In addition, Chutzpah supports multiple testing frameworks4, multiple languages5 and code coverage metrics using Blanket.js. Basically, Chutzpah is awesome; fact.
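
Chutzpah reads its settings from a chutzpah.json file placed alongside the tests; that file names the test framework and lists framework-level references such as AngularJS and angular-mocks. As a rough sketch only (the paths below are hypothetical and will vary by project), such a configuration might look something like this:

{
    "Framework": "jasmine",
    "References": [
        { "Path": "Scripts/angular.js" },
        { "Path": "Scripts/angular-mocks.js" },
        { "Path": "Scripts/jasmine-given.js" }
    ]
}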

Testing an Angular Factory

With our test framework, test runner and test language selected, we can look at our first test. We are going to test saHttpActivityInterceptor, and the very first thing we should test is that saHttpActivityInterceptor actually exists. The following test does exactly that.

###
## <reference path="../angular-sa.js"/>
###

describe 'saHttpActivityInterceptor', ->
  Given -> module 'somewhatabstract'
  describe 'exists', ->
    When => inject (saHttpActivityInterceptor) =>
      @saHttpActivityInterceptor = saHttpActivityInterceptor
    Then => expect(@saHttpActivityInterceptor).toBeDefined()

Just as with AngularJS implementations, AngularJS tests start with some setup: a reference path to the file under test (other files such as the Jasmine framework, AngularJS and angular-mocks are included via the Chutzpah configuration file), a describe call under which to group all tests for saHttpActivityInterceptor, and a Given call that ensures the somewhatabstract module is loaded at the start of each child test (the module method is provided by Angular Mocks).

The actual unit test is declared with describe 'exists', ->. This test is very simple; it states that when we try to inject our factory, we should get something other than undefined. The => syntax in CoffeeScript (also known as "fat arrow" syntax6) ensures that the When and Then calls share the same this context so that the @saHttpActivityInterceptor variable is shared between them (the @ symbol preceding a variable in CoffeeScript indicates a context-level variable). The value stored in the @saHttpActivityInterceptor variable is obtained by asking AngularJS to inject it using the inject function, a helpful utility from angular-mocks.

This test works (you can verify it easily by commenting out the interceptor declaration in the JavaScript file we referenced7), but it is not a great test. If saNavigationGuard does not exist, this test will fail even though our interceptor still exists. What we have done is create a simple integration test instead of a unit test; we need to isolate the thing under test, saHttpActivityInterceptor, from its dependency, saNavigationGuard.

inject and $provide

To isolate our item under test, we need to provide our own version of saNavigationGuard. We can do this using a fake: an object that pretends to be the real thing. We will use a Jasmine spy as a fake to represent saNavigationGuard and then provide it to AngularJS using the $provide service8. Because AngularJS uses the most recent definition when injecting dependencies and because our newly created spy is the most recent definition of saNavigationGuard, it is that spy which ultimately gets injected into saHttpActivityInterceptor when the test runs.

###
## <reference path="../angular-sa.js"/>
###

describe 'saHttpActivityInterceptor', ->
  Given -> module 'somewhatabstract'
  Given -> module ($provide) ->
    fakeNavigationGuard = jasmine.createSpyObj 'saNavigationGuard', ['registerGuardian']
    $provide.value 'saNavigationGuard', fakeNavigationGuard;return
      
  describe 'exists', ->
    When => inject (saHttpActivityInterceptor) =>
      @saHttpActivityInterceptor = saHttpActivityInterceptor
    Then => expect(@saHttpActivityInterceptor).toBeDefined()

Not only are we now isolating saHttpActivityInterceptor, but because our fake is being used in place of the real saNavigationGuard, we can check that any guardian registered with our fake works properly9.

###
## <reference path="../angular-sa.js"/>
###

describe 'saHttpActivityInterceptor', ->
  Given -> module 'somewhatabstract'
  Given -> module ($provide) ->
    fakeNavigationGuard = jasmine.createSpyObj 'saNavigationGuard', ['registerGuardian']
    $provide.value 'saNavigationGuard', fakeNavigationGuard;return
      
  describe 'exists', ->
    When => inject (saHttpActivityInterceptor) =>
      @saHttpActivityInterceptor = saHttpActivityInterceptor
    Then => expect(@saHttpActivityInterceptor).toBeDefined()
      
  describe '#guardian', ->
    describe 'is registered', ->
      Given => inject (saNavigationGuard) =>
        @saNavigationGuard = saNavigationGuard
      When => inject (saHttpActivityInterceptor) =>
      Then => expect(@saNavigationGuard.registerGuardian).toHaveBeenCalledWith jasmine.any(Function)
      
    describe 'returns undefined when there are no pending requests', ->
      Given => inject (saNavigationGuard) =>
        saNavigationGuard.registerGuardian.and.callFake (guardian) => @guardian = guardian
        inject (saHttpActivityInterceptor) =>
      When => @actual = @guardian()
      Then => expect(@actual).toBeUndefined()
    
    describe 'returns message when there are pending requests', ->
      Given => inject (saNavigationGuard) =>
        saNavigationGuard.registerGuardian.and.callFake (guardian) => @guardian = guardian
        inject (saHttpActivityInterceptor) => saHttpActivityInterceptor.request { method: "POST" }
      When => @actual = @guardian()
      Then => expect(@actual).toBeDefined()

And there we have it: a little test suite that validates saHttpActivityInterceptor. Almost. You may note that to test the guardian, we had to actually use one of the factory functions, request, but how do we know that the request function works if we haven't tested it? We should add some tests, but since we can only check the functionality of request, response, and responseError via the guardian call, and we can only check the guardian call via the functionality of those other methods, we don't have a good way to gain high confidence in the functionality of any of these methods. Therefore, if we really wanted to test this effectively, we would need to refactor the count functionality into its own factory. That way we could inject and validate the count state independently of the thing under test. For now, that's an exercise for another time.
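
As a rough illustration of where that refactoring might head, the pending-request count could live in its own factory; the name saPendingRequestCounter and its API below are a hypothetical sketch, not part of the existing code.

sa.factory('saPendingRequestCounter', function () {
    // Holds the number of in-flight requests so that both the interceptor and
    // its tests can read and manipulate that state through a single injectable.
    var pending = 0;
    return {
        increment: function () { pending++; },
        decrement: function () { if (pending > 0) { pending--; } },
        hasPending: function () { return pending > 0; }
    };
});

The interceptor would then take saPendingRequestCounter as a dependency, and its tests could provide a fake counter (or inspect the real one) without having to drive request, response and responseError just to influence the guardian.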

Finally…

In this post, I have shown how we can test a simple AngularJS factory using Jasmine, Jasmine-Given and the built-in testing support of the AngularJS framework via Angular Mocks; specifically, module,  inject and $provide.

While not exhaustive, I hope this look at testing Angular-based code encourages you to begin testing your own applications. Code discussed in this and earlier related posts can be found in a repository on GitHub. The repository includes a Visual Studio solution and project to run the tests, including NuGet and Bower restore to get the appropriate packages for running the tests. I intend to expand the code on GitHub as I write more posts on AngularJS and AngularJS testing.

In the next post, we will take a look at how I structure directives to simplify testing and what that testing looks like. We might even see some of the cooler testing tricks for AngularJS that enable us to synchronously test asynchronous operations and validate web requests. Until then, feel free to ask questions in the comments and carry on coding.

  1. I recommend that you investigate for yourself before choosing what works best for you and your development processes
  2. Domain-specific Language
  3. IMHO YMMV
  4. Jasmine, QUnit and Mocha
  5. CoffeeScript, TypeScript or plain old JavaScript
  6. Be careful when using this "fat arrow" syntax – sharing context across unit tests can cause side-effects including false pass and fail results
  7. Go on, try it. The code is on GitHub
  8. Make sure to use $provide before any calls to inject
  9. Though we could also have done this by spying on the real saNavigationGuard.registerGuardian function using Jasmine's spyOn function, such an approach assumes the remainder of saNavigationGuard has no unwanted side-effects, which is not necessarily true

Creating and using your own AngularJS filters

I have been working on the client-side portion of a rather complex feature and I found myself needing to trim certain things off a string when binding it in my AngularJS code. This sounded like a perfect job for a filter. For those familiar with XAML development on .NET-related platforms like WPF, Silverlight and WinRT, a filter in Angular is similar to a ValueConverter. The set of built-in filters for Angular is pretty limited and did not support my desired functionality, so I decided to write a new filter of my own called trim. I even wrote some simple testing for it, just to make sure it works.

Testing

For the sake of argument, let's presume I followed TDD or BDD principles and wrote my test spec up front. I used Jasmine to describe each of the behaviours I wanted1.

describe('trim filter tests', function () {
	beforeEach(module('awesome'));

	it('should trim whitespace', inject(function (trimFilter) {
		expect(trimFilter(' string with whitespace ')).toBe('string with whitespace');
	}));
		
	it('should trim given token', inject(function (trimFilter) {
		expect(trimFilter('stringtoken', 'token')).toBe('string');
	}));
		
	it('should trim token and remaining whitespace', inject(function (trimFilter) {
		expect(trimFilter(' string token ', 'token')).toBe('string');
	}));
});

An important point to note here is that for your filter to be injected, you have to append the word Filter onto the end. So if your filter is called bob, your test should have bobFilter as its injected parameter.
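
To make that concrete, here is a small sketch using the bob example (the filter itself does nothing and exists only to demonstrate the naming convention; it assumes the same awesome module as the spec above):

angular.module('awesome').filter('bob', function () {
	return function (input) {
		// A do-nothing filter; the interesting part is the name.
		return input;
	};
});

describe('bob filter', function () {
	beforeEach(module('awesome'));

	it('should be injected as bobFilter', inject(function (bobFilter) {
		expect(bobFilter('anything')).toBe('anything');
	}));
});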

Implementing the Filter

With the test spec written, I could implement the filter. Like many things in Angular that aren't directives, filters are pretty easy to write. They are a specialization of a factory: they return a function that takes an input and some arbitrary parameters, and that function returns the filter output.

You add a filter to a module using the filter method. Below is the skeleton for my filter, trim.

// Passing a dependencies array creates the module; omitting it retrieves an existing one.
var myModule = angular.module('awesome', []);

myModule.filter( 'trim', function() {
    return function (input, tokenToTrim) {
        var output = input;
        // Do stuff and return the result
        return output;
    };
});

Here I have created a module called awesome and then added a new filter called trim. My filter takes the input and a token that is to be trimmed from the input. However, currently, the filter does nothing with that token; it just returns the input. We can use this filter in an Angular binding as below.

<p style='font-style:italic'>Add More {{someValue | trim:'Awesome'}} Awesome</p>

You can see that I am applying the trim filter and passing the token, "Awesome". If someValue was "Awesome", this would output:

Add More Awesome Awesome

You can see that "Awesome" was not trimmed because we didn't actually implement the filter yet. Here is the implementation.

myModule.filter('trim', function () {
	return function (input, token) {
		var output = input.trim();

		if (token && output.substr(output.length - token.length) === token) {
			output = output.substr(0, output.length - token.length).trim();
		}
		return output;
	};
});

This takes the input and removes any extra spaces from the start and end. If we have a token and the trimmed input value ends with the token value, we take the token off the end, trim any trailing space and return that value. Our binding now gives us:

Add More Awesome

Perfect.
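
As an aside, the same filter can be applied from code as well as from a binding by injecting AngularJS's $filter service; the controller below is purely a hypothetical example.

myModule.controller('exampleController', ['$scope', '$filter', function ($scope, $filter) {
	// $filter('trim') looks up the filter by name and returns the filter function,
	// so this is equivalent to {{' Add More Awesome ' | trim:'Awesome'}} in a template.
	$scope.trimmed = $filter('trim')(' Add More Awesome ', 'Awesome'); // "Add More"
}]);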

  1. Try not to get hung up on the quality of my tests, I know you are in awe

When the clipboard says, "No!"

Cut, Crash, Paste

I was recently investigating an annoying bug with my WPF DataGrid. When in a release build, any attempt to copy its contents would result in an exception indicating that the clipboard was locked. The C in Ctrl+C stood for Crash instead of Copy. This is a big usability issue. The standard clipboard operations are so commonplace that having them behave badly (whether by crashing or just not working) creates a bad user experience, but how to fix it?

Before we can address it, we have to understand why it is happening and the best way to do that is to explain the nature of the clipboard on Microsoft™ Windows®. On Windows, the clipboard is a shared resource. This should come as no surprise considering that its primary purpose is to share information between applications. Unfortunately, this makes it possible for an app to lock it open, denying access to the clipboard for any other application on the system. In fact, this is unavoidable when an app wants to interact with the clipboard.

Mitigation

Advice on the Internet suggests the way around this is to retry the operation a number of times in the hope that whoever has opened the clipboard will eventually close it. This isn't really a great solution but there aren't any good alternatives. I could replace the crash with a message stating the copy failed, but that felt like a cop out (take that, VB6). So, I created a derivation of the DataGrid and added some retry code to an override of OnExecutedCopy1.

protected override void OnExecutedCopy(
    System.Windows.Input.ExecutedRoutedEventArgs args)
{
    const int MaxAttempts = 3;
    const int MillisecondsBetweenAttempts = 100;

    int attempts = 0;
    while (attempts >= 0 && attempts < MaxAttempts)
    {
        try
        {
            base.OnExecutedCopy(args);
            attempts = -1;
        }
        catch (ExternalException)
        {
            // The copy failed. Increment our attempt count.
            attempts++;

            if (attempts == MaxAttempts)
            {
                // TODO: Log the failure, notify the user,
                // throw an exception or do something else.
                // Whatever is appropriate to your app.
            }
            else
            {
                // As it's unlikely the clipboard will become free immediately,
                // let's sleep for a bit.
                Thread.Sleep(MillisecondsBetweenAttempts);
            }
        }
    }
}

With something in place to mitigate the issue, it was time to test it. I recompiled the application in the release configuration and ran it. The problem could no longer be reproduced. Success! Right?

Wrong. A breakpoint showed that the first copy attempt wasn't failing anymore. The bug had gone away. Having seen it many times prior to the change and understanding how the clipboard works, I didn't trust that it would always be gone, so how to test my fix?

Using the ClipboardLock, that's how.

What's the ClipboardLock?

Great question! The ClipboardLock is a little class I wrote that opens the clipboard and keeps it open for as long as you require, allowing you to lock it open in one place while you test how something else copes with trying to use it. I've included it below. Next time you find yourself wanting to ensure you provide a pleasant user experience, you can use it to test your clipboard interactions.

Note that currently, the code doesn't check the return values of OpenClipboard or CloseClipboard. This means that it is itself susceptible to these calls failing. Bear that in mind when you use it; you may want to make some modifications to mitigate these possible failures instead of just ignoring them like I have here.

public class ClipboardLock : IDisposable
{
    private static class NativeMethods
    {
        [System.Runtime.InteropServices.DllImport("user32.dll")]
        public extern static bool OpenClipboard(IntPtr hWnd);

        [System.Runtime.InteropServices.DllImport("user32.dll")]
        public extern static bool CloseClipboard();
    }

    public ClipboardLock() : this(IntPtr.Zero)
    {
    }

    public ClipboardLock(IntPtr windowHandle)
    {
        NativeMethods.OpenClipboard(windowHandle);
    }

    ~ClipboardLock()
    {
        Dispose(false);
    }

    private bool disposed;
    private void Dispose(bool disposing)
    {
        if (!this.disposed)
        {
            this.disposed = true;

            if (disposing)
            {
            }

            // Free up native resources here.
            NativeMethods.CloseClipboard();
        }
    }

    #region IDisposable Members

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    #endregion
}
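
For completeness, here is a minimal sketch of how it might be used, assuming the class is dropped into a small scratch console project: hold the clipboard open, switch to the application under test, and try to copy.

using System;

class ClipboardLockDemo
{
    static void Main()
    {
        // Keep the clipboard open until Enter is pressed, then release it by
        // disposing the lock. While this is running, Ctrl+C in the app under
        // test should exercise the retry (and, eventually, the failure) path.
        using (new ClipboardLock())
        {
            Console.WriteLine("Clipboard held open. Try copying in the app under test.");
            Console.WriteLine("Press Enter to release the clipboard.");
            Console.ReadLine();
        }
    }
}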

  1. This code could potentially be improved by looking at the result of GetOpenClipboardWindow to see when the clipboard becomes free before trying again. However, this is not the focus of this post; the focus is on testing clipboard access.

Testing Times

Developers, testers and testing

In the world of software, there are developers and there are testers. The developers often design and implement the software while the testers define and execute the test plans. Software engineering requires both testers and developers, and together they make quality software; one by finding problems and the other by solving problems1. At least, that's the way it should be. Unfortunately, many developers (including myself) have found themselves in situations where the QA department is nonexistent, where testing and the associated test plan updates lurk at the end of every development cycle or feature implementation.

Just to be clear, we're not talking about unit tests like those used in test-driven development (TDD) with frameworks like NUnit or MSTest. Unit tests and TDD are somewhat unique in that they take the developer's strength of solving problems and trick (sorry, persuade) developers into seeing testing as yet another problem in need of resolution (just how do you prove a requirement was met – to the TDD Cave, Codeman!).

Sadly, manual tests found in system testing, integration testing and regression testing are not so exciting. They don't usually present cunning problems to be solved but instead provide mind-numbing hours following detailed, inane instructions where the result feels obvious and the rewards are few. At least, that's my experience as a developer performing tests; the same cannot be said of testers. I've worked with some very talented, passionate quality assurance professionals whose joy in their craft was inspiring and of whom I have been envious when I too have found myself burdened by testing.

Finding those team mates who take pride in testing and making a product better is like striking gold, but even those who find schadenfreude in identifying a colleague's mistakes can be a better option for a developer than having to run the testing themselves. However, dedicated resources for quality assurance are often seen as a luxury2, leaving developers with little option but to take that responsibility on themselves.

To be clear, I'm trying to say that developers generally hate testing and more specifically, I hate testing, but we'll do it anyway if pushed.

WHHHHAAAT??!

At this point you may be surprised to discover that I recently found myself testing some software. Whether it was a poorly defined test, a flaky feature, or just the mundanity of repeating the same operations (albeit with subtle adjustments) over and over and over again, it left me frustrated, weary and disengaged. Testing is just not my thing, but I do it because I have to – releasing untested software should never be an option for a professional software developer; our users are not our QA department. The all too familiar experience reminded me of steps that developers can take when they're the ones that have to update and execute manual testing; steps that I've seen in action and that make testing almost pleasurable (almost).

Just update the test plan

Have you ever updated a test plan without checking the test was correct, or perhaps executed a test plan that was incorrect? Updating a test plan is tedious: we have to check that existing tests are still relevant and work out where there are gaps in the test coverage. This usually means looking at requirements documents and change requests and determining various test paths, expected results, etc. It can be a lot of work and it is all too easy to fall into the trap of skipping some steps, like validating that the test is correctly defined, or pretending that there's no way the existing plan missed something. Not only that, but if you've diligently updated the test plan, validating each test as you go, executing it all over again is even more painful because you already know what does and does not work from updating the tests in the first place.

So, do it once and do it right. If you carefully update the test plan, validating existing tests, updating others and creating new ones, you will find yourself testing the product anyway. As tests that should work don't, change requests will get raised and the product will improve. Not only that, but you'll only need to update the document once and you won't need to run the tests more than is absolutely necessary. To cap it off, the act of defining tests is pretty close to problem solving, making it a little less tedious for a developer to perform (though it is documentation, so, you know, don't hurt yourself or anything).

Assume the tester knows nothing (and is a little slow)

All too often, I come across test plans that are written like a kindergarten story.

Start the application. And then open a file. And then click OK. And then check the background is white and the caption says "Bite me!".

Paragraphs of simple instructions, often with steps missing that the author assumes the tester will know and without any explanation of what it means if that test fails. Instead of this mess, introduce each test with an overview of its purpose and what failure means, followed by test instructions each on a separate line. This not only helps you and your team mates when running the tests but it also helps when they come to update the test plan. Think of the test as code; you wouldn't expect the processor to guess when you miss out lines of code (I hope) so don't expect a tester to do the same; don't forget to add comments where more detail is needed (such as why it's important to change what locale the system is using); and number each step so that it can be referred to easily in notes and change requests, e.g. "Test 2.6, step 10 failed with a value of 20 where 21 was expected"3. If you do this, you will thank yourself later.

Provide context for the results

When performing the test, you will want to be recording results for each step. When reviewing results, you will usually want to see the test step that garnered them, especially if there is a failure or an ambiguous result. Save yourself some time by specifying your tests as a table with a column for results. That way, results are recorded next to the test definition making both recording and reviewing much easier. Not only that, but you don't need to maintain a results sheet and the test definitions separately or contend with different people recording the results in different formats.

Conclusions

If you follow these three simple steps, you should end up with test definitions that look less like an account of your weeks at summer camp when you were 7 and more like the example below.

This test checks the flange sprocket exposes the doobrey flap.

Step | Instructions | Results
10   | Open the flange sprocket. You should see the flange sprocket open. | Pass – opened
20   | Press the doobrey flap. | Fail – unable to locate doobrey flap. Test lacking sufficient detail or doobrey flap was not exposed.
30   | |

Of course, all this assumes you don't have a QA team or team members (or even some tools that help you define and execute manual testing). If you do, that's great; respect your QA team members (or your tools) and the work they do to keep your users from deploying their wrath upon thee. For the rest of us, stuck with ourselves and our office productivity applications in which to define and record our testing, following these tips will make our testing life (and that of those around us) just that little bit less tedious. Who knows, some of you might even start enjoying it.

  1. This is a very simplistic overview, I know.
  2. There are valid and not so valid reasons for this, but we're not going to get into that here.
  3. You might also consider spacing step numbers by 10 so it's easier to insert additional steps without renumbering all subsequent steps.