TestMethod, TestInitialize, and TestCleanup in XUnit 2

In the last post, I briefly described how to automatically migrate your MSTest tests to XUnit by using the XUnitConverter utility. Of course, nothing is ever that simple; MSTest has some concepts that XUnit expresses very differently1, like how to share code between tests, whether that is setup, fixtures, cleanup, or data. Some of these concepts are implemented differently enough that automating the migration from one to the other would be very difficult, if not impossible. However, some of it really is that simple. Before we look at the difficult examples, I thought it would be useful to illustrate how some of the simple concepts map from MSTest to XUnit using an example2.

So, let's look at an MSTest example (contrived, of course):

[TestClass]
public class MyTests
{
    private MemoryStream _stream;

    [TestInitialize]
    public void Initialize()
    {
        _stream = new MemoryStream();
    }

    [TestCleanup]
    public void Cleanup()
    {
        _stream.Dispose();
    }

    [TestMethod]
    public void Test1()
    {
        //Arrange
        //Act
        //Assert
    }

    [TestMethod]
    public void Test2()
    {
        //Arrange
        //Act
        //Assert
    }
}

Clearly, I cheated by not actually making the tests do anything, but the content of the test methods is mostly irrelevant; you set some stuff up, you do something, and you assert a result – it's all the same regardless of the test framework. However, this is a simple example of a test class written for the MSTest framework. There are attributes to tell the framework that the class is a test class, which methods inside of it are test methods, and which methods should be called before and after each test. In this case, our test initialization creates a stream, which is then disposed of in the cleanup method; each test method would get sandwiched in the middle.

After converting to XUnit with the converter tool, the same class will look something like this:

public class MyTests : IDisposable
{
    private MemoryStream _stream;

    public MyTests()
    {
        _stream = new MemoryStream();
    }

    public void Dispose()
    {
        _stream.Dispose();
    }

    [Fact]
    public void Test1()
    {
        //Arrange
        //Act
        //Assert
    }

    [Fact]
    public void Test2()
    {
        //Arrange
        //Act
        //Assert
    }
}

There are a few things that happened.

  1. The class no longer has an attribute. XUnit knows the class is a test class because it contains tests3.
  2. The tests are decorated with a [Fact] attribute, which is equivalent to [TestMethod].
  3. The [TestInitialize] and [TestCleanup] attributes are gone. Instead, the class constructor is used for test initialization, and the Dispose method, along with implementing IDisposable, indicates that there is test cleanup code.

Overall, I love how the XUnit syntax works with C# syntax and .NET idioms in declaring tests. Not only does this reduce the ceremony around defining tests by cutting down on the various decorators, but it also allows for cleaner coding practices. For example, we can now correctly mark our memory stream member variable as readonly.

private readonly MemoryStream _stream;

By relying on C# syntax and standard interfaces in this way, the lifecycle of a test is clearer too; XUnit will construct and dispose the class for each test in the class, making it easy to see how each test will run. This idiomatic way of declaring tests allows for separation of concerns, keeping test classes light and focused. This will be illustrated when we later look at other concepts in MSTest like [ClassInitialize] and [ClassCleanup], TestContext, and [DeploymentItem], and how XUnit tackles the problems these concepts solved.
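To make that lifecycle concrete, here is a minimal sketch (the class, test names, and output are mine, purely for illustration) showing that XUnit creates and disposes a fresh instance around every test:

public class LifecycleTests : IDisposable
{
    public LifecycleTests()
    {
        // Runs before each test; XUnit constructs a new instance per test.
        Console.WriteLine("constructed");
    }

    public void Dispose()
    {
        // Runs after each test.
        Console.WriteLine("disposed");
    }

    [Fact]
    public void First() { }

    [Fact]
    public void Second() { }
}

Running both tests prints "constructed" and "disposed" twice, one pair per test (though the order in which the tests themselves run is not guaranteed).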

  1. and for good reasons, IMHO []
  2. XUnit documentation has a handy table but I don't think it's as illustrative as it could be []
  3. why MSTest did not make assumptions like this, I do not know []

Migrating from MSTest to XUnit 2

We recently migrated most of our testing from the MSTest framework1 to XUnit 2 (from here on in, I will be referring to this as just XUnit). This was not a change taken lightly since it touched a lot of files, but we were motivated by a number of XUnit features, including reduced need to attribute test classes, easier data-driven tests, and parallel test execution.

Sadly, if you try this you may discover, as we did, that the XUnit documentation is equal parts super helpful and woefully lacking, depending on what you are trying to do. After hearing yet another colleague lament how hard it was to find information on some feature or other of XUnit, I thought it might be a good idea to document some of the things I have learned and, hopefully, introduce yet another helpful XUnit resource to the Internet2.

For those not familiar with XUnit, the basics are pretty easy. In fact, the existing XUnit documentation includes a handy table mapping concepts in other test frameworks to their XUnit equivalents. You can check that table out for details, but basically, through the use of attributes, constructors, IDisposable, and other interfaces, XUnit uses what I would describe as a more natural approach than other frameworks to concepts like tests, test initialization and cleanup, and test fixtures. Of course, this means that migrating from one framework to XUnit involves a bunch of file editing, but fear not, for there is help.

XUnitConverter

The bulk of the migration was handled by the XUnitConverter, a tool available in the dotnet/codeformatter GitHub repository. It does not take care of everything (beware if you have multiple test classes per file) and, depending on your preferred code format, it can mess your formatting up a bit, but it does make the migration a lot easier.

The XUnitConverter runs against a csproj file. You can use PowerShell to recurse your solution and process all your projects like this:

Get-ChildItem -Recurse *.csproj | %{XUnitConverter $_.FullName}

Once the converter has done its thing, it is easy to identify further changes by using the compiler (things don't like to build if something did not work right). Although most things get converted with ease ([TestMethod] becomes [Fact], [TestInitialize] becomes a constructor, and so on), complex tests will need a little more assistance to fully migrate. For example, XUnit uses interfaces and fixture classes to replace the kind of shared initialization and cleanup that MSTest provides via the [ClassInitialize] and [ClassCleanup] attributes. We will start tackling these issues next time.
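To give a flavour of what that looks like, here is a minimal sketch of XUnit's class fixture pattern (the fixture class and test content are illustrative, not taken from our migration): the fixture is constructed once, shared with every test in the class through constructor injection, and disposed after the class's tests have finished.

public class StreamFixture : IDisposable
{
    // Created once, before any test in the class runs.
    public MemoryStream Stream { get; } = new MemoryStream();

    public void Dispose()
    {
        // Runs once, after all tests in the class have finished.
        Stream.Dispose();
    }
}

public class MyTests : IClassFixture<StreamFixture>
{
    private readonly StreamFixture _fixture;

    public MyTests(StreamFixture fixture)
    {
        // XUnit injects the single shared fixture instance.
        _fixture = fixture;
    }

    [Fact]
    public void Test1()
    {
        Assert.NotNull(_fixture.Stream);
    }
}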

  1. Version 1 of MSTest, not the new and improved MSTest version 2 []
  2. I would like to document my anecdotal information before I even consider tackling something a little more structured like contributing to the official documentation []

Testing AngularJS: $resource

This is the fourth post taking a look at testing various aspects of AngularJS. Previously, I covered:

  • testing simple factories
  • testing directives
  • testing asynchronous code with $timeout, $interval, and $q

In this installment, we'll take a look at what I do to isolate components from their use of the $resource service and why.

$resource

The $resource service provides a simple way to define RESTful API endpoints in AngularJS and get updated data without lots of promise handling unnecessarily obfuscating your code. If you have created a resource for a specific route, you can get the returned data into your scope as easily as this:

$scope.data = myResource.get();

The $resource magic ensures that once the request returns, the data is updated. It's clever stuff and incredibly useful.
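For context, a resource like the myResource used above might be declared along these lines (the module, factory name, and URL here are illustrative assumptions, not from a real app):

sa.factory('myResource', ['$resource', function ($resource) {
    // Defines a RESTful endpoint; :id is filled in from an object's id property.
    return $resource('/api/things/:id', { id: '@id' });
}]);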

However, when testing components that use resources, I want to isolate the components from those resources. While I could use $httpBackend or a cache to manipulate what results the resource returns, these can be cumbersome to set up and add unnecessary complexity and churn to unit tests1. To avoid this complexity, I use a fake that can be substituted for $resource.

spyResource

My fake $resource is called spyResource. It is not quite a 1-1 replacement, but it does support the more common situations one might want (and it could be extended to support more). Here it is.

spyResource = function (name) {
    // The constructor spy mimics `new resource()` by copying an empty
    // object over `this`, much as $resource creates instances.
    var resourceSpy = jasmine.createSpy(name + ' resource constructor').and.callFake(function () { angular.copy({}, this); });

    // The same spies are attached to the definition and its prototype, so
    // calls can be verified whether made on the resource or on an instance.
    resourceSpy['get'] = resourceSpy.prototype['get'] = jasmine.createSpy('get');
    resourceSpy['$save'] = resourceSpy.prototype['$save'] = jasmine.createSpy('$save');
    resourceSpy['$delete'] = resourceSpy.prototype['$delete'] = jasmine.createSpy('$delete');

    return resourceSpy;
};

First of all, it is just a function. Since it is part of my testing framework, there is no need to wrap it in some fancy AngularJS factory, though we certainly could if we wanted.

Second, it mimics the $resource service by returning a function that ultimately copies itself. This is useful because you do not necessarily have access to the instances of a resource that are created in your code before posting an object update to your RESTful API. Because the definition and its instances share the same spies, you can verify a $save() call on the main spyResource definition even when it was actually made on an instance returned by it.
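For example (a sketch; the resource name is arbitrary):

var fakeResource = spyResource('someResource');

var instance = new fakeResource();  // the constructor copies {} over `this`
instance.$save();

// Passes: the definition and the instance share the same $save spy.
expect(fakeResource.$save).toHaveBeenCalled();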

To use this in testing, the $provide service can be used to replace a specific use of $resource with a spyResource. For example, if you defined a resource called someResource, you might have:

describe 'testing something that uses a resource', ->
  Given => module ($provide) =>
    $provide.value 'someResource', spyResource('someResource');return
  ...

Now, the fake resource will be injected instead of the real one, allowing us to not only spy on it, but to also ensure there are no side-effects that we have not explicitly set up.

Finally…

I have covered a very simple technique I use for isolating components from, and spying on, their usage of AngularJS resources. The simple fake resource I provided for this purpose can be easily tailored to cater to more complex scenarios. For example, if the code under test needs data from the get() method, or expects a $promise property on the get() return value, the spy can be updated to return that data.
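Such an update might look something like this (a sketch, assuming Jasmine 2 and a $q reference available in the test; the data is arbitrary):

var fakeResource = spyResource('someResource');
fakeResource.get.and.callFake(function () {
    // Return fake data carrying a resolved $promise, much as code written
    // against a real $resource would expect.
    var result = { name: 'test' };
    result.$promise = $q.when(result);
    return result;
});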

Using this fake resource instead of $httpBackend or a cache to manipulate the behavior of a real AngularJS resource not only simplifies the testing in general, but also reduces code churn by isolating the tests from the API routes that can often change during development.

As always, please leave a comment if you find this useful or have other feedback.

  1. API routes can often change during development, which would lead to updating `$httpBackend` test code so that it matches []

Testing AngularJS: Asynchrony

So far, we have looked at some techniques for testing simple AngularJS factories and directives. However, things are rarely simple when it comes to web development, and one area that complicates things is that of asynchronous operations such as web requests, timeouts and promises.

Eventually, when writing AngularJS, you will rely on the $timeout, $interval, or $q services to defer an action by some interval or indefinitely using promises. I will not go very deep into their use here (you can read much of that in the AngularJS documentation), but since it is likely that you will use them, how do you test them? How do you test asynchronous code without horribly complex and unreliable tests?

$timeout

Consider this simple example where we have a controller that defers some action using $timeout.

somewhatAbstract.controller('deferredController', ['$scope', '$timeout', function ($scope, $timeout) {
    $scope.started = false;

    $timeout(function () {
        $scope.started = true;
    });
}]);

Here we have a variable, started, that is initialised to false, and a deferred function that changes that value to true. A first stab at testing this might look a little like this:

describe 'deferred execution tests', ->
  describe 'started should be set to true', ->
    Given => module 'somewhatAbstract'
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($timeout, $controller) =>
      $controller 'deferredController', { $scope: @scope, $timeout: $timeout }
    Then => expect(@scope.started).toBeTruthy()

Unfortunately, such a test will not pass because the deferred code does not execute until after the expectation is tested. We can mitigate this by using some AngularJS magic provided by the ngMock module.

The ngMock module adds the $timeout.flush() method so that code deferred using $timeout can be executed deterministically1. The test can therefore be modified such that it passes by adding the call to $timeout.flush() shown below.

describe 'deferred execution tests', ->
  describe 'started should be set to true', ->
    Given => module 'somewhatAbstract'
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($timeout, $controller) =>
      $controller 'deferredController', { $scope: @scope, $timeout: $timeout }
      $timeout.flush()
    Then => expect(@scope.started).toBeTruthy()

$q

For promises that were deferred using $q (including the promise returned from using $interval), we can use $scope.$apply() to complete a resolved or rejected promise and execute any code depending on that promise.

In the following contrived example, we have a controller with a start() method that returns a promise and a started() method that resolves that promise.

somewhatAbstract.controller('deferredController', ['$scope', '$q', function ($scope, $q) {
    var deferred = $q.defer();

    $scope.start = function () {
        return deferred.promise;
    };

    $scope.started = function () {
        deferred.resolve({ started: true });
    };
}]);

And here is a test for it:

describe 'deferred execution tests', ->
  describe 'started should be set to true when promise resolves', ->
    Given => module 'somewhatAbstract'
    Given => inject ($rootScope, $q, $controller) =>
      @scope = $rootScope.$new()
      $controller 'deferredController', { $scope: @scope, $q: $q }
    When =>
      @scope.start().then (result) => @actual = result
      @scope.started()
      @scope.$apply()
    Then => expect(@actual).toEqual { started: true }

The preceding test validates our controller and its promise. If you removed the call to @scope.$apply(), you would see that the test fails because the resolved promise is never completed.

Finally…

In this post, we have taken a brief look at how AngularJS supports the testing of asynchronous code execution deferred using $timeout, $interval, or $q. The ability to synchronously control otherwise asynchronous actions not only allows us to test that deferred code, but also to prevent it executing at all. This can be incredibly useful when isolating different parts of our code by reducing how much of it has to run to validate a specific method.
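For example, the very test that failed earlier becomes a useful guarantee that deferred code has not yet run, simply by never calling flush() (a sketch reusing the controller from above):

describe 'deferred execution tests', ->
  describe 'started is still false before the timeout is flushed', ->
    Given => module 'somewhatAbstract'
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($timeout, $controller) =>
      $controller 'deferredController', { $scope: @scope, $timeout: $timeout }
    Then => expect(@scope.started).toBeFalsy()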

Of course, quite often, a promise is only resolved after an HTTP request responds or fails, such as when using $resource. When writing unit tests, you may not have, nor want, the luxury of a back-end server that responds appropriately to test requests. Instead, you either want to fake out $resource, or fake out and validate the HTTP requests and responses2. In upcoming posts, we'll look at a simple $resource fake for the former and the special $httpBackend service that AngularJS provides for the latter. Until then, please leave your comments.

  1. The `flush()` method even takes a delay parameter to control which timeouts will execute, and a similar method exists for `$interval` []
  2. The requests that you expect your code to make and the responses that your code expects to receive – or doesn't, as the case may be []

Testing AngularJS: Directives

So far in this series of AngularJS-related posts, we have looked at some utility factories for tracking web requests and preventing navigation requests, whether in-app or not, and how to write tests for those factories using Jasmine. In this post, we will look at how I generally test directives and how that impacts their structure.

Directives

Directives are the things we write in AngularJS that allow us to extend and modify the HTML language. We provide a template, some code-behind, and data binding, and AngularJS performs its magic to create a new behavior or control that is inserted via tag, attribute or CSS class. However, directives can at first seem difficult to test.

Let's look at a convoluted example.

sa.directive('saEditableField', function () {
    return {
        restrict: 'E',
        require: 'ngModel',
        template: '<label>{{fieldName}}</label><input type="text" ng-model="fieldValue" /><button ng-click="showField()">Alert!</button>',
        scope: {
            fieldName: '@saFieldName',
            fieldValue: '=ngModel'
        },
        controller: function($scope) {
            $scope.showField = function() {
                window.alert("Your " + $scope.fieldName + " is " + $scope.fieldValue);
            };
        }
    };
});

In this example we have a very simple directive. It specifies a template, some scope, and a controller where the business logic lives. It is used like this:

<sa-editable-field sa-field-name="First Name" ng-model="fieldValue"></sa-editable-field>

As it stands, there are one or two things that either make this difficult to test or (I feel) could be clearer1. Let's refactor this directive with testing in mind.

Tests

When I have a working directive that I want to test, I tend to start writing tests. I realise this seems obvious, but it is worth noting as I do not tend to do much refactoring before writing tests and I don't tend to write tests before I have something to test. I find it much more useful to get a concept working and then think about what tests should be there and how I can implement those tests. When I find friction in authoring those tests, I identify my refactoring priorities.

Considering the directive presented above, what tests do we need? Here is the list of things that I think should be tested:

  • That the directive exists
  • That the directive compiles
  • That the alert shows when the button is clicked
  • That the alert mentions the field name
  • That the alert mentions the field value

Testing that the directive exists is easy (note that we have to add Directive to the end for the directive to be injected).

describe 'exists', ->
  When => inject (saEditableFieldDirective) =>
    @saEditableField = saEditableFieldDirective
  Then => expect(@saEditableField).toBeDefined()

Checking it compiles is a little more convoluted, but not difficult.

describe 'compiles', ->
  Given => inject ($rootScope) =>
    @scope = $rootScope.$new()
    @scope.field = 'test'
    @htmlFixture = angular.element('<sa-editable-field sa-field-name="Test" ng-model="field"></sa-editable-field>');
  When => inject ($compile) =>
    $compile(@htmlFixture)(@scope)
    @scope.$apply()
  Then => @htmlFixture.find('label').length == 1
  And => @htmlFixture.find('label').text() == 'Test'
  And => @htmlFixture.find('input').length == 1
  And => @htmlFixture.find('button').length == 1

The linking function that $compile returns is called because $compile can succeed even if our fixture has a typo (the element would simply not be transformed).

Using knowledge of our directive template, we can verify that the linking worked without going so far as to validate every aspect of the magic Angular does for us. In fact, since this is really intended to test the code that gets called inside the directive when it is linked to a scope, the template for the directive could be as simple as <div></div> (we will take a closer look in a future post at how we can make the directive template replaceable).

Testing the Alert

Testing that the directive exists, compiles, and links is relatively straightforward when compared with testing that the alert is shown and shows the right thing. In order to test the alert, we need to be able to invoke showField, the method that shows it. We also need to check that showField actually shows the alert as we would like.

To invoke showField, we might try using the scope that is passed to the linking function returned from $compile. Since the method is added to the scope in the controller, this should work, right? Of course not; this directive has an isolate scope, and as such the method has been added to its own copy of the passed scope. We can get to that isolate scope from the compiled element using the isolateScope() method;

describe '#showField', ->
  Given => inject ($rootScope) =>
    @scope = $rootScope.$new()
    @scope.field = 'test'
    @htmlFixture = angular.element('<sa-editable-field sa-field-name="Test" ng-model="field"></sa-editable-field>');
  When => inject ($compile) =>
    $compile(@htmlFixture)(@scope)
    @scope.$apply()
  Then => expect(@htmlFixture.isolateScope().showField).toEqual jasmine.any(Function)

…but I feel like such a test involves too much setup ceremony just to get to the test2, and it relies on too many things outside of just the bit we want to validate. Instead, what if we could get at the controller independently of the directive?

sa.directive('saEditableField', function () {
    return {
        restrict: 'E',
        require: 'ngModel',
        template: '<label>{{fieldName}}</label><input type="text" ng-model="fieldValue" /><button ng-click="showField()">Alert!</button>',
        scope: {
            fieldName: '@saFieldName',
            fieldValue: '=ngModel'
        },
        controller: 'saFieldController'
    };
});

sa.controller('saFieldController', ['$scope', function($scope) {
    $scope.showField = function() {
        window.alert("Your " + $scope.fieldName + " is " + $scope.fieldValue);
    };
}]);

Now our controller is completely separate from the directive, which refers to the controller by name. This means we can write what I feel are clearer directive tests. Using a fake for the controller, we isolate the directive from the real thing and any side-effects it may have;

describe 'saEditableField', ->
  Given -> module ($provide) ->
    fakeFieldController = jasmine.createSpyObj 'saFieldController', ['showField']
    $provide.value 'saFieldController', fakeFieldController;return
  
  describe 'exists', ->
    When => inject (saEditableFieldDirective) =>
      @saEditableField = saEditableFieldDirective
    Then => expect(@saEditableField).toBeDefined()
      
  describe 'compiles', ->
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
      @scope.field = 'test'
      @htmlFixture = angular.element('<sa-editable-field sa-field-name="Test" ng-model="field"></sa-editable-field>');
    When => inject ($compile) =>
      $compile(@htmlFixture)(@scope)
      @scope.$apply()
    Then => @htmlFixture.find('label').length == 1
    And => @htmlFixture.find('label').text() == 'Test'
    And => @htmlFixture.find('input').length == 1
    And => @htmlFixture.find('button').length == 1

…and now the controller can have its own tests too;

describe 'saFieldController', ->
  describe 'exists', ->
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($controller) =>
      @saFieldController = $controller 'saFieldController', { $scope: @scope }
    Then => expect(@saFieldController).toBeDefined()
      
  describe '#showField', ->
    describe 'exists', ->
      Given => inject ($rootScope) =>
        @scope = $rootScope.$new()
      When => inject ($controller) =>
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      Then => expect(@scope.showField).toEqual jasmine.any(Function)

$window

However, we are not quite finished. How do we know that the alert is shown when our showField method is called? We could spy on the global window object, but that's not really very Angular-y, and our alert would get shown during testing (goodness knows what other side-effects we might face by using the global window object). What we need is an injectable version of window that we can replace with a fake during our tests. Unsurprisingly, AngularJS has us covered with the $window service.

sa.controller('saFieldController', ['$scope', '$window', function($scope, $window) {
    $scope.showField = function() {
        $window.alert("Your " + $scope.fieldName + " is " + $scope.fieldValue);
    };
}]);

Now, with our controller rewritten to use $window, we can tidy up and complete its test cases.

describe 'saFieldController', ->
  Given -> module ($provide) ->
    $provide.value '$window', jasmine.createSpyObj('$window', ['alert']);return
      
  describe 'exists', ->
    Given => inject ($rootScope) =>
      @scope = $rootScope.$new()
    When => inject ($controller) =>
      @saFieldController = $controller 'saFieldController', { $scope: @scope }
    Then => expect(@saFieldController).toBeDefined()
      
  describe '#showField', ->
    describe 'exists', ->
      Given => inject ($rootScope) =>
        @scope = $rootScope.$new()
      When => inject ($controller) =>
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      Then => expect(@scope.showField).toEqual jasmine.any(Function)
        
    describe 'calls $window.alert', ->
      Given => inject ($rootScope, $controller, $window) =>
        @scope = $rootScope.$new()
        @windowService = $window
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      When => @scope.showField()
      Then => expect(@windowService.alert).toHaveBeenCalled()
      
    describe 'alert includes $scope.fieldName', ->
      Given => inject ($rootScope, $controller, $window) =>
        @scope = $rootScope.$new()
        @scope.fieldName = "Test Name"
        @windowService = $window
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      When => @scope.showField()
      Then => expect(@windowService.alert.calls.mostRecent().args[0]).toMatch /.*Test Name.*/
        
    describe 'alert includes $scope.fieldValue', ->
      Given => inject ($rootScope, $controller, $window) =>
        @scope = $rootScope.$new()
        @scope.fieldValue = "Test Value"
        @windowService = $window
        @saFieldController = $controller 'saFieldController', { $scope: @scope }
      When => @scope.showField()
      Then => expect(@windowService.alert.calls.mostRecent().args[0]).toMatch /.*Test Value.*/

And there you have it. Some simple steps that make unit testing directives a little easier.

Finally…

In this post, we have taken a very brief look at how to structure a directive to simplify unit testing by separating the directive declaration from its controller and taking advantage of Angular services such as $window.

Although we have not covered some of the more complex directive concepts, such as the link function and DOM manipulation3, these simple steps should take you a long way towards providing better test coverage of your AngularJS widgets.

Until next time, take care and don't forget to leave a comment.

  1. Not unexpected since I wrote it to discuss these things []
  2. we have to compile a fixture with a scope and then get the isolate scope from the element []
  3. I may cover that in a future post if there is interest or the whim takes me []