Creating and using your own AngularJS filters

I have been working on the client-side portion of a rather complex feature, and I found myself needing to trim certain things off a string when binding it in my AngularJS code. This sounded like a perfect job for a filter. For those familiar with XAML development on .NET-related platforms like WPF, Silverlight and WinRT, a filter in Angular is similar to a ValueConverter. The set of built-in filters for Angular is pretty limited and did not support my desired functionality, so I decided to write a new filter of my own called trim. I even wrote some simple tests for it, just to make sure it works.

Testing

For the sake of argument, let's presume I followed TDD or BDD principles and wrote my test spec up front. I used Jasmine to describe each of the behaviours I wanted1.

describe('trim filter tests', function () {
	beforeEach(module('awesome'));

	it('should trim whitespace', inject(function (trimFilter) {
		expect(trimFilter(' string with whitespace ')).toBe('string with whitespace');
	}));
		
	it('should trim given token', inject(function (trimFilter) {
		expect(trimFilter('stringtoken', 'token')).toBe('string');
	}));
		
	it('should trim token and remaining whitespace', inject(function (trimFilter) {
		expect(trimFilter(' string token ', 'token')).toBe('string');
	}));
});

An important point to note here is that for your filter to be injected, you have to append the word Filter onto the end of its registered name. So if your filter is called bob, your test should have bobFilter as its injected parameter.
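
For example, a spec for a hypothetical filter registered as bob would look like this:

it('should inject a filter named bob', inject(function (bobFilter) {
	expect(bobFilter).toBeDefined();
}));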

Implementing the Filter

With the test spec written, I could implement the filter. Like many things in Angular that aren't directives, filters are pretty easy to write. They are a specialization of a factory, returning a function that takes an input and some arbitrary parameters and returns the filter output.

You add a filter to a module using the filter method. Below is the skeleton for my filter, trim.

var myModule = angular.module('awesome', []);

myModule.filter( 'trim', function() {
    return function (input, tokenToTrim) {
        var output = input;
        // Do stuff and return the result
        return output;
    };
});

Here I have created a module called awesome and then added a new filter called trim. My filter takes the input and a token that is to be trimmed from the input. However, currently, the filter does nothing with that token; it just returns the input. We can use this filter in an Angular binding as below.

<p style="font-style:italic">Add More {{someValue | trim:'Awesome'}} Awesome</p>

You can see that I am applying the trim filter and passing the token, "Awesome". If someValue was "Awesome", this would output:

Add More Awesome Awesome

You can see that "Awesome" was not trimmed because we didn't actually implement the filter yet. Here is the implementation.

myModule.filter('trim', function () {
	return function (input, token) {
		// Guard against null or undefined input, which bindings can
		// pass during early digest cycles.
		if (!input) {
			return input;
		}

		var output = input.trim();

		// If a token was supplied and the trimmed input ends with it,
		// remove the token along with any whitespace it leaves behind.
		if (token && output.substr(output.length - token.length) === token) {
			output = output.substr(0, output.length - token.length).trim();
		}
		return output;
	};
});

This takes the input and removes any extra whitespace from the start and end. If we have a token and the trimmed input ends with it, we take the token off the end, trim any remaining whitespace and return the result. Our binding now gives us:

Add More Awesome

Perfect.
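
As a side note, you don't have to use a binding to apply a filter; filters can also be invoked programmatically via Angular's $filter service. Here is a minimal sketch (the DemoController is hypothetical):

myModule.controller('DemoController', function ($scope, $filter) {
	// $filter('trim') returns the filter function registered above.
	$scope.trimmed = $filter('trim')(' Add More Awesome ', 'Awesome');
	// $scope.trimmed is now 'Add More'
});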

  1. Try not to get hung up on the quality of my tests; I know you are in awe.

Unit testing attribute-driven late-binding

I've been working on a RESTful API using ASP.NET Web API. It has been a great experience so far. Behind the API is a custom framework that involves some late-binding. I decorate certain types with an attribute that associates the decorated type with another type1. The class orchestrating the late-binding takes a collection of IDecorated instances. It uses reflection to look at their attributes to determine the type they are decorated with and then instantiates that type.
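
To give a rough sketch of the shape of this arrangement (using the same obfuscated names as the code later in this post; the real types are internal to our framework):

public interface IDecorated { }

// Associates a decorated type with the converter type that handles it,
// much like TypeConverterAttribute in the BCL.
[AttributeUsage( AttributeTargets.Class )]
public sealed class BlahConverterAttribute : Attribute
{
    public BlahConverterAttribute( Type converterType )
    {
        ConverterType = converterType;
    }

    public Type ConverterType { get; private set; }
}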

It's not terribly complicated. At least, it wasn't until I tried to test it. As part of my development, I have been using TDD, so I wanted unit tests for my late-binding code, but I soon hit a hurdle: when mocking IDecorated, how do I make sure the mocked concrete type has the appropriate attribute?

var mockedObject = new Mock<IDecorated>();

// TODO: Add attribute

binder.DoSpecialThing( new [] { mockedObject.Object } ).Should().BeAwesome();

I am using Moq for my mocking framework, accompanied by FluentAssertions for my asserts2. Up until this point, Moq seemed to have everything covered, yet try as I might, I couldn't resolve this problem of decorating the generated type. After some searching around, I eventually found a helpful Stack Overflow question and answer that directed me to TypeDescriptor.AddAttributes, a .NET Framework method that provides a means to add attributes at run-time!

var mockedObject = new Mock<IDecorated>();

TypeDescriptor.AddAttributes(
    mockedObject.Object.GetType(),
    new BlahConverterAttribute( typeof(SuperCoolThing) ) );

binder.DoSpecialThing( new [] { mockedObject.Object } )
    .Should()
    .BeAwesome();

Yes! Runtime modification of type decoration. Brilliant.

So, why didn't it work?

My binding class that I was testing looked a little like this:

public IEnumerable<Blah> DoSpecialThing( IEnumerable<IDecorated> decoratedThings )
{
    return from thing in decoratedThings
           let converter = GetBlahConverter( thing.GetType() )
           where converter != null
           select converter.Convert( thing );
}

private IConverter GetBlahConverter( Type type )
{
    var blahConverterAttribute = Attribute
        .GetCustomAttributes( type, true )
        .OfType<BlahConverterAttribute>()
        .FirstOrDefault();

    if ( blahConverterAttribute != null )
    {
        // Instantiate the converter type named by the attribute.
        return (IConverter)Activator.CreateInstance( blahConverterAttribute.ConverterType );
    }

    return null;
}

Looks fine, right? Yet when I ran it in the debugger and took a look, the result of GetCustomAttributes was an empty array. I was stumped.

After more time trying different things that didn't work than I'd care to admit, I returned to the Stack Overflow question and started reading the comments; why was the answer accepted when it clearly didn't work? Lurking in the comments was the missing detail: if you use TypeDescriptor.AddAttributes to modify the attributes, then you have to use TypeDescriptor.GetAttributes to retrieve them.
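
A quick sketch shows the difference in isolation (TypeDescriptor lives in System.ComponentModel):

// Attributes added via TypeDescriptor are invisible to plain reflection.
TypeDescriptor.AddAttributes(
    typeof(object),
    new ObsoleteAttribute( "example" ) );

// Does not contain the ObsoleteAttribute...
var viaReflection = Attribute.GetCustomAttributes( typeof(object), true );

// ...but this does.
var viaDescriptor = TypeDescriptor.GetAttributes( typeof(object) );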

I promptly refactored my code with this detail in mind.

public IEnumerable<Blah> DoSpecialThing( IEnumerable<IDecorated> decoratedThings )
{
    return from thing in decoratedThings
           let converter = GetBlahConverter( thing.GetType() )
           where converter != null
           select converter.Convert( thing );
}

private IConverter GetBlahConverter( Type type )
{
    var blahConverterAttribute = TypeDescriptor
        .GetAttributes( type )
        .OfType<BlahConverterAttribute>()
        .FirstOrDefault();

    if ( blahConverterAttribute != null )
    {
        // Instantiate the converter type named by the attribute.
        return (IConverter)Activator.CreateInstance( blahConverterAttribute.ConverterType );
    }

    return null;
}

Voila! My test passed and my code worked. This was one of those things that had me stumped for longer than it should have. I am sharing it in the hopes of making sure there are more hits when someone else goes Internet fishing for help. Now, I'm off to update Stack Overflow so that this is clearer there too.

  1. Something similar to TypeConverterAttribute usage in the BCL
  2. Though I totally made up the BeAwesome() assertion in this blog post

Testing Times

Developers, testers and testing

In the world of software, there are developers and there are testers. The developers often design and implement the software while the testers define and execute the test plans. Software engineering requires both testers and developers, and together they make quality software; one by finding problems and the other by solving problems1. At least, that's the way it should be. Unfortunately, many developers (including myself) have found themselves in situations where the QA department is nonexistent, where testing and the associated test plan updates lurk at the end of every development cycle or feature implementation.

Just to be clear, we're not talking about unit tests like those used in test-driven development (TDD) with frameworks like NUnit or MSTest. Unit tests and TDD are somewhat unique in that they take the developer's strength of solving problems and persuade developers into seeing testing as yet another problem in need of resolution (just how do you prove a requirement was met – to the TDD Cave, Codeman!).

Sadly, the manual tests found in system testing, integration testing and regression testing are not so exciting. They don't usually present cunning problems to be solved; instead, they mean mind-numbing hours spent following detailed, inane instructions where the result feels obvious and the rewards are few. At least, that's my experience as a developer performing tests; the same cannot be said of testers. I've worked with some very talented, passionate quality assurance professionals whose joy in their craft was inspiring and of whom I have been envious when I too have found myself burdened by testing.

Finding those team mates who take pride in testing and making a product better is like striking gold, but even those that find schadenfreude in identifying a colleague's mistakes can be a better option for a developer than having to run the testing themselves. However, dedicated resources for quality assurance are often seen as a luxury2, leaving developers with little option but to take that responsibility on themselves.

To be clear, I'm trying to say that developers generally hate testing and more specifically, I hate testing, but we'll do it anyway if pushed.

WHHHHAAAT??!

At this point, you may be surprised to discover that I recently found myself testing some software. Whether it was a poorly defined test, a flaky feature, or just the mundanity of repeating the same operations (albeit with subtle adjustments) over and over and over again, it left me frustrated, weary and disengaged. Testing is just not my thing, but I do it because I have to – releasing untested software should never be an option for a professional software developer; our users are not our QA department. The all-too-familiar experience reminded me of steps that developers can take when they're the ones who have to update and execute manual testing; steps that I've seen in action and that make testing almost pleasurable (almost).

Just update the test plan

Have you ever updated a test plan without checking the test was correct, or perhaps executed a test plan that was incorrect? Updating a test plan is tedious; we have to check that existing tests are still relevant and work out where there are gaps in the test coverage. This usually means looking at requirements documents and change requests and determining various test paths, expected results, etc. It can be a lot of work, and it is all too easy to fall into the trap of skipping some steps, like validating that a test is correctly defined, or pretending that there's no way the existing plan missed something. Not only that, but if you've diligently updated the test plan, validating each test as you go, executing it all over again is even more painful because you already know what does and does not work from updating the tests in the first place.

So, do it once and do it right. If you carefully update the test plan, validating existing tests, updating others and creating new ones, you will find yourself testing the product anyway. As tests that should work don't, change requests will get raised and the product will improve. Not only that, but you'll only need to update the document once and you won't need to run the tests more than is absolutely necessary. To cap it off, the act of defining tests is pretty close to problem solving, making it a little less tedious for a developer to perform (though it is documentation, so, you know, don't hurt yourself or anything).

Assume the tester knows nothing (and is a little slow)

All too often, I come across test plans that are written like a kindergarten story.

Start the application. And then open a file. And then click OK. And then check the background is white and the caption says "Bite me!".

Paragraphs of simple instructions, often with steps missing that the author assumes the tester will know, and without any explanation of what it means if the test fails. Instead of this mess, introduce each test with an overview of its purpose and what failure means, followed by the test instructions, each on a separate line. This not only helps you and your team mates when running the tests, but it also helps when they come to update the test plan. Think of the test as code; you wouldn't expect the processor to guess when you miss out lines of code (I hope), so don't expect a tester to do the same. Don't forget to add comments where more detail is needed (such as why it's important to change what locale the system is using), and number each step so that it can be referred to easily in notes and change requests, e.g. "Test 2.6, step 10 failed with a value of 20 where 21 was expected"3. If you do this, you will thank yourself later.

Provide context for the results

When performing the test, you will want to record results for each step. When reviewing results, you will usually want to see the test step that garnered them, especially if there is a failure or an ambiguous result. Save yourself some time by specifying your tests as a table with a column for results. That way, results are recorded next to the test definition, making both recording and reviewing much easier. Not only that, but you don't need to maintain a results sheet and the test definitions separately or contend with different people recording the results in different formats.

Conclusions

If you follow these three simple steps, you should end up with test definitions that look less like an account of your weeks at summer camp when you were 7 and more like the example below.

This test checks the flange sprocket exposes the doobrey flap.

Step | Instructions | Results
10 | Open the flange sprocket. You should see the flange sprocket open. | Pass – opened
20 | Press the doobrey flap. | Fail – unable to locate doobrey flap. Test lacking sufficient detail or doobrey flap was not exposed.
30 | |

Of course, all this assumes you don't have a QA team or team members (or even some tools that help you define and execute manual testing). If you do, that's great; respect your QA team members (or your tools) and the work they do to keep your users from deploying their wrath upon thee. For the rest of us, stuck with ourselves and our office productivity applications in which to define and record our testing, following these tips will make our testing life (and that of those around us) just that little bit less tedious. Who knows, some of you might even start enjoying it.

  1. This is a very simplistic overview, I know.
  2. There are valid and not so valid reasons for this, but we're not going to get into that here.
  3. You might also consider spacing step numbers by 10 so it's easier to insert additional steps without renumbering all subsequent steps.

Ann Arbor Day of .NET

On Saturday (29th Oct), I attended the Ann Arbor Day of .NET. I thought it would be nice to summarise what I heard. I doubt these notes on their own will be greatly useful, but I hope they act as a launch pad into deeper dives on the topics covered as well as a review of which topics were covered. There were five different tracks for the day: Cloud, Frameworks & Platforms, Soft Skills, Tools and Mobile. I chose talks based on the talk itself, rather than the track to which it belonged (I ruled out presentations that I had seen a variation of before, such as David Giard's (@DavidGiard) Introduction to Microsoft Windows Workflow and Jay R. Wren's (@jayrwren) Let's Go to C# On The iPhone, though they were excellent when I saw them).

Be A Better Developer

I started out the day with Mike Wood (@mikewo) and his session, Being A Better Developer. This was a soft skills talk, meaning it was not there to show off some cool .NET feature or technology, or to teach me all about C#. Instead, the focus was on what makes a great developer and what we can do to attain that status.

Mike explored the various roles that developers have to take on, the hats we have to wear: from the student learning new things every day, to the teacher imparting knowledge to those around them; from the janitor maintaining what already exists, to the researcher investigating and choosing frameworks, languages, platforms, etc. Using these roles as a foundation, we then moved on to some tips, such as setting up time blocks in which to work. If the time limit is reached and the problem isn't solved, turn to someone else for help (or somewhere else, like the Internet1) to avoid thrashing and time wasting. This seems somewhat obvious, and yet I'm betting that many of us don't do it as often as we should. The other tips were equally useful, obvious and often compromised in our daily development lives:

  • organize
  • prioritize
  • know your tools
  • set SMART2 goals
  • be a catalyst for change
  • be lazy…

Right, that last one is maybe a little less obvious, but the point wasn't: don't do more than you have to.

One of the best pieces of advice from this talk was to choose a good mentor. I was very fortunate when I started out my career to have several excellent mentors and I miss working with them almost every day. Even now, I imagine what they might have said in order to guide my efforts3. For an hour, Mike filled that role.

There was much more to this talk than what I've written here. This session was an excellent way to spend an hour. While much of what Mike presented could be considered common sense, it was reassuring and also provided some new tricks for my arsenal, to be deployed in any situation, not just day-to-day software development.

How I Learned To Love Dependency Injection

Next, on to James Bender (@jamesbender) and his presentation on how much he loves dependency injection4. This talk started out looking at the way things were and the ideas behind a loosely-coupled system: a system where each component knows as little as possible about the other components in its parent system, whether it uses the services those components provide or not. Tightly-coupled systems don't promote reuse, are brittle and are inherently difficult to test.

James told a compelling story, starting out with familiar concepts (a constructor that takes various interfaces through which the created object can obtain various services, the factory pattern, etc.), but soon we were looking at an overview of dependency injection frameworks, what they do and how they do it.

And then, code. Code about cooking bagels. The only bad part about this was the lack of bagels to eat5. The talk moved quickly on to the various features of Ninject, an open source dependency injection framework. I would've preferred more emphasis on dependency injection, using Ninject to provide examples, rather than the "how to use Ninject" approach that was given. However, this was still very informative and laid a path towards the next part of the talk, which showed how dependency injection and TDD6 go hand in hand. This in turn led to an introduction to mocking (the mock framework of choice in these examples was Rhino Mocks, but James recommended Moq for new work).
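
To give a flavour of what the demos looked like, here is a minimal Ninject sketch of my own (the bagel-themed types are hypothetical):

public interface IBagelOven { void Bake(); }
public class GasOven : IBagelOven { public void Bake() { /* bake the bagels */ } }

// Bindings map abstractions to concrete types...
var kernel = new StandardKernel();
kernel.Bind<IBagelOven>().To<GasOven>();

// ...so consumers ask the kernel for the abstraction rather than
// constructing the concrete type themselves.
var oven = kernel.Get<IBagelOven>();
oven.Bake();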

A Field Guide for Moving to the Cloud

We're back with Mike Wood (@mikewo) for this one. I've never done any Cloud development but I'm really interested in it and what it may do for me and the work I do, so I'm hanging a lot on this introduction (no pressure, Mike).

Mike started off with a Batman reference, tying the reason why I'm so tired (Batman: Arkham City) to the reason why I'm here. He then fired off some acronyms: IaaS, SaaS, PaaS. This was a great starting point for me, as terminology is often the last refuge of miscommunication and I hate not understanding what all those acronyms and terms mean. One participant immediately asked, "What's the difference between IaaS and PaaS?" and most of us nodded, realising we didn't know either. To paraphrase: IaaS gives the most control, as you're responsible for patching your OS, upgrading the frameworks, etc.; PaaS manages all that for you. Mike did a great job explaining this (unlike my paraphrasing – Mike used a whiteboard and everything) and we moved on, that bit more informed and ready to learn more.

At this point, Mike gave us a run through of the Windows Azure platform, again making sure we're all talking the same language as the presentation progresses. Mike's presentation style is nice and fluid, taking questions and interruptions in his stride, and he clearly knows his topic well (Mike is an Azure MVP, after all). He walked us through the various parts of Windows Azure, Microsoft SQL Azure and Windows Azure AppFabric before we moved on to planning for our move to the Cloud.

Mike discussed identifying suitable applications for moving to the Cloud: the scale of the application and the independence of scale, the services used, tight integration versus loose coupling (not the first time we've heard this today and, I would hope, not the first time in our careers either; otherwise, you're doing it wrong), usage patterns, latency, security and many other facets to be considered when moving to the Cloud.

The final point related to whether the move would save money or not and the importance of answering that question before making the move. This kind of information was great to see and may prove very useful when talking with project managers or business development types. Mike also pointed out that techniques like multipurpose worker roles and disposable compute instances can save as much as 50% in costs.

And then it was lunch.

Develop IT: Intro to PowerShell

I admit it: I have only ever used PowerShell for things that I could've done from a regular command prompt, so this talk was one I didn't want to miss. I want to know more so I can do more. I feel like PowerShell is an exclusive club for productive individuals and I'd at least like to take a look inside, so this was my opportunity. The presenter for this session was Sarah Dutkiewicz (@sadukie), a C# MVP and co-author of Automating Microsoft Windows Server 2008 R2 with Windows PowerShell 2.0. This talk was entirely presented using PowerShell, which certainly made it stand apart from the other presentations given so far today.

The initial examples given by Sarah quickly demonstrated how PowerShell provides similar behaviour to the traditional command prompt but also how it is different, providing .NET objects (dir w* | Get-Member demonstrated how dir provides an object – very cool). We then learned all about the standard PowerShell syntax that provides an easily discoverable set of commands (known as Cmdlets in the PowerShell world) and some useful Cmdlets like Get-Help and Out-GridView (which outputs things to its own filterable grid in a window).
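
For instance, here is a sketch of the sort of commands demonstrated, runnable in any PowerShell prompt:

# Pipe dir's output to Get-Member to inspect the .NET type behind it.
dir w* | Get-Member

# Get-Help discovers usage for any Cmdlet.
Get-Help Get-ChildItem

# Out-GridView shows results in a filterable grid window.
dir | Out-GridView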

Sarah continued introducing us to a variety of PowerShell concepts and features including but not limited to:

  • functions
  • modules
  • manifests
  • PowerShell ISE7
  • providers
  • aliases
  • registry interaction

My biggest takeaway is how easy it can be to work with the registry from within PowerShell (just open PowerShell and enter cd hkcu: then dir to see what I mean). Overall, a great introduction that has given me a starting point for exploring PowerShell and becoming more efficient.

Stone Soup or Creating a Culture of Change

For the final session of the day, I rejoined James Bender (@jamesbender). I was really looking forward to this, having faced many challenges in changing culture as part of my efforts to meet the requirements of CMMI8. The event organisers expected this to be a popular talk, and I still feel that it should have been; however, the turnout was disappointingly low. This made for a more intimate session and certainly did not detract from the informative content. James expressed that this was probably the last time he would present this talk, which is a shame, as I found the anecdotes and the lessons drawn from them to be very insightful.

The things I've learned will definitely help me in my work and elsewhere. Things like:

  • Go for low hanging fruit
  • Don't change too much at once
  • Support the change and let it simmer
  • Don't judge
  • Know your tools
  • Only introduce changes you believe in
  • Understand the business
  • Know when to say when
  • Evangelize
  • Build a network of like-minded people
  • Be a politician
  • Be a therapist
  • Realise that it might be difficult to reach everyone
  • When all else fails, buy doughnuts
  • Be patient

There's not much more I could say about this talk that would do it justice (not that my notes have really done justice to the earlier talks either), but suffice it to say, this presentation was very relevant to me and I am very grateful to have been able to see it.

To conclude, I had a great day. The organisers, sponsors and speakers deserve a huge "thank you" for setting up and supporting this event. Wandering the hallways of Washtenaw Community College, attending talks in rooms and lecture halls reminded me a little of being back at university, but the speed at which the day flew by certainly did not. It was a very informative and enjoyable way to spend the day and among the best $10 I've spent this year.

  1. Use Internet search before you ask someone.
  2. Specific, Measurable, Achievable, Realistic/Relevant, Trackable
  3. Besides, "Shut up, Jeff!"
  4. An appropriate amount as allowed by law.
  5. Mmm, bagels.
  6. Test Driven Development
  7. Integrated Scripting Environment
  8. Capability Maturity Model Integration