Change Requests

Much like my previous post on Meeting Etiquette, this is a topic I feel strongly about. I am sure there are good reasons why people will hate some of my suggestions and I'd love to hear them, but here are my views on change requests1, based on my personal experiences.

All work is a change

I loathe projects that differentiate new work from changes to existing work. It creates two different process flows for little gain and introduces opportunities for confusion and mistakes. If all work items, whether new features, bug fixes or enhancements to existing features, are raised as change requests, the workflow is the same. Everything should be tied back to requirements regardless of the type of work, so arguments that claim there is a difference just don't wash with me. Consider a new feature as a change from not having it to having it; after all, that's exactly what it is.

Whatever work is being conducted, it must still be implemented, reviewed and tested against requirements. Why make that harder than it needs to be?

Specify requirements, not solutions

There are many times I've been assigned a requirement that tells me how to fix something, not what needs fixing. Let's face it, everyone has opinions, but change requests are not the place to express them (except perhaps as a suggestion in the comments somewhere). A change request should clearly state the requirements that drive the change (i.e. the things that can be used to identify when the change request has been resolved) and any other information that may help (for example, steps to reproduce a bug or some rationale behind the change required).

Be descriptive

If I see one more change request with a summary or title like "Change to menu dropdown" or "Display control update", I will be rather miffed and may hurt someone (I'm British, "miffed" is just above "peeved" on the British Standards Anger Scale2). The title of a change request is very important and should give a clear indication of what the change request actually requires. Think of it a bit like Twitter; it's much nicer reading some useful information in a tweet than it is to learn that someone just had a coffee. If the title is not clear, time is wasted in going to look at the description every time someone encounters that change request. Every status meeting, every discussion, click click click. Save everyone the effort and get it right first time, and if you spot a title that isn't clear enough, fix it right away.

Add value

Finally, when adding comments, additional description, attachments or anything else to a change request, make sure it adds value. Leave a trail for those who follow in your footsteps so that they can discover what changed and why. Document important discussions and decisions. If you don't, you are destined to go around in circles.

Manage releases by managing change

Target changes at releases and review new changes regularly. This way, new requests raised during that release cycle can be considered for inclusion and deferred changes can be ignored until after the release. Each time a new release is started, review all the open requests and determine if they should be rejected, deferred or included in that release. Justify and document rejections in case a duplicate is raised and make sure to link duplicate issues as they can add value to one another.

Have meaningful states

Any sane process for managing change requests needs, I feel, only the following states.

  1. Raised
  2. Assigned
  3. In progress
  4. Ready for review
  5. Passed review
  6. Merged to trunk
  7. Rejected
  8. Closed

These are clear, unambiguous states.

  • If something is marked as "Raised", it hasn't been assigned to any release and no work should be happening on it.
  • If it's "Assigned", it should be targeted at a release (even if it's only intended for investigation at first – it can always be removed from the release back to "Raised" or rejected and closed).
  • If someone is working on something, that something should be marked as "In progress" as this helps to track progress at a glance and can also be useful if resources become available and things need reassigning.
  • If something passes testing, close it, unless you really don't trust the test team, in which case have a "Passed test" state and then review the results before closing3.

Considering these basic states, the workflow looks something like this:

State flow diagram

You could add additional states if you so desired, but I feel that these cover the bases well enough and provide an easy-to-follow workflow.
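To pin that down, here's a minimal sketch in C# of these states and the transitions I would allow between them, based on the flow above. The type and member names are mine, and in practice a tracking tool would encode this in its configuration rather than in code; treat it as illustration, not prescription.

    using System.Collections.Generic;
    using System.Linq;

    public enum ChangeRequestState
    {
        Raised, Assigned, InProgress, ReadyForReview,
        PassedReview, MergedToTrunk, Rejected, Closed
    }

    public static class ChangeRequestWorkflow
    {
        // The legal transitions described above; anything not listed is invalid.
        private static readonly Dictionary<ChangeRequestState, ChangeRequestState[]> Allowed =
            new Dictionary<ChangeRequestState, ChangeRequestState[]>
            {
                { ChangeRequestState.Raised,         new[] { ChangeRequestState.Assigned, ChangeRequestState.Rejected } },
                { ChangeRequestState.Assigned,       new[] { ChangeRequestState.InProgress, ChangeRequestState.Raised, ChangeRequestState.Rejected } },
                { ChangeRequestState.InProgress,     new[] { ChangeRequestState.ReadyForReview } },
                { ChangeRequestState.ReadyForReview, new[] { ChangeRequestState.PassedReview, ChangeRequestState.InProgress } },
                { ChangeRequestState.PassedReview,   new[] { ChangeRequestState.MergedToTrunk } },
                { ChangeRequestState.MergedToTrunk,  new[] { ChangeRequestState.Closed } },
                { ChangeRequestState.Rejected,       new[] { ChangeRequestState.Closed } },
                { ChangeRequestState.Closed,         new ChangeRequestState[0] } // closed means closed (see below)
            };

        public static bool CanMove(ChangeRequestState from, ChangeRequestState to)
        {
            return Allowed[from].Contains(to);
        }
    }

A guard like CanMove makes the "clear, unambiguous states" point concrete: if a transition isn't in the table, the process shouldn't permit it.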

Closed means closed

Change request management gets messy when the process allows for closed requests to be re-opened. If a closed request seems like it really is now needed, raise a new request. Don't close change requests just because they aren't being done right away; if it's a real issue, then it should remain open until it is resolved.

Not always, but mostly

The guiding principle for me when it comes to change requests is simplicity. Don't make your process more complicated than it needs to be. Make it easy to follow and hard to get wrong. While I've yet to encounter a project that required anything different to what I've suggested, I am certain there are exceptions, so if you have any, let me know. This is often a contentious, polarising topic, so I expect someone, somewhere to emphatically insist I am wrong. I can accept that, so I look forward to finding out even better ways to manage change.

  1. You may also know them as issues, bugs, defects or some other moniker that ultimately means "a repository of things to do". []
  2. I made that up []
  3. Flippant remarks aside, this may be valuable if you need to perform a round of customer acceptance testing after internal testing before closing out change requests. []

Testing Times

Developers, testers and testing

In the world of software, there are developers and there are testers. The developers often design and implement the software while the testers define and execute the test plans. Software engineering requires both testers and developers, and together they make quality software; one by finding problems and the other by solving problems1. At least, that's the way it should be. Unfortunately, many developers (including myself) have found themselves in situations where the QA department is nonexistent, where testing and the associated test plan updates lurk at the end of every development cycle or feature implementation.

Just to be clear, we're not talking unit tests like those used in test-driven development (TDD) with frameworks like NUnit or MSTest. Unit tests and TDD are somewhat unique in that they take the developer's strength of solving problems and persuade developers to see testing as yet another problem in need of resolution (just how do you prove a requirement was met – to the TDD Cave, Codeman!).

Sadly, the manual tests found in system testing, integration testing and regression testing are not so exciting. They don't usually present cunning problems to be solved but instead provide mind-numbing hours of following detailed, inane instructions where the result feels obvious and the rewards are few. At least, that's my experience as a developer performing tests; the same cannot be said of testers. I've worked with some very talented, passionate quality assurance professionals whose joy in their craft was inspiring and of whom I have been envious whenever I have found myself burdened by testing.

Finding those team mates who take pride in testing and making a product better is like striking gold, but even one who takes schadenfreude in identifying a colleague's mistakes is a better option for a developer than having to run the testing themselves. However, dedicated quality assurance resources are often seen as a luxury2, leaving developers with little option but to take that responsibility on themselves.

To be clear, I'm trying to say that developers generally hate testing and more specifically, I hate testing, but we'll do it anyway if pushed.

WHHHHAAAT??!

At this point you may be surprised to discover that I recently found myself testing some software. Whether it was a poorly defined test, a flaky feature, or just the mundanity of repeating the same operations (albeit with subtle adjustments) over and over and over again, it left me frustrated, weary and disengaged. Testing is just not my thing, but I do it because I have to – releasing untested software should never be an option for a professional software developer; our users are not our QA department. The all too familiar experience reminded me of steps that developers can take when they're the ones that have to update and execute manual testing; steps that I've seen in action and that make testing almost pleasurable (almost).

Just update the test plan

Have you ever updated a test plan without checking the test was correct, or perhaps executed a test plan that was incorrect? Updating a test plan is tedious: we have to check that existing tests are still relevant and work out where there are gaps in the test coverage. This usually means looking at requirements documents and change requests and determining various test paths, expected results, etc. It can be a lot of work and it is all too easy to fall into the trap of skipping some steps, like validating that a test is correctly defined, or pretending that there's no way the existing plan missed something. Not only that, but if you've diligently updated the test plan, validating each test as you go, executing it all over again is even more painful because you already know what does and does not work from updating the tests in the first place.

So, do it once and do it right. If you carefully update the test plan, validating existing tests, updating others and creating new ones, you will find yourself testing the product anyway. As tests that should work don't, change requests will get raised and the product will improve. Not only that, but you'll only need to update the document once and you won't need to run the tests more than is absolutely necessary. To cap it off, the act of defining tests is pretty close to problem solving, making it a little less tedious for a developer to perform (though it is documentation, so, you know, don't hurt yourself or anything).

Assume the tester knows nothing (and is a little slow)

All too often, I come across test plans that are written like a kindergarten story.

Start the application. And then open a file. And then click OK. And then check the background is white and the caption says "Bite me!".

Paragraphs of simple instructions, often with steps missing that the author assumes the tester will know, and without any explanation of what it means if the test fails. Instead of this mess, introduce each test with an overview of its purpose and what failure means, followed by the test instructions, each on a separate line. This not only helps you and your team mates when running the tests but also when they come to update the test plan. Think of the test as code: you wouldn't expect the processor to guess when you miss out lines of code (I hope), so don't expect a tester to guess either; don't forget to add comments where more detail is needed (such as why it's important to change what locale the system is using); and number each step so that it can be referred to easily in notes and change requests, e.g. "Test 2.6, step 10 failed with a value of 20 where 21 was expected"3. If you do this, you will thank yourself later.

Provide context for the results

When performing the test, you will want to record results for each step. When reviewing results, you will usually want to see the test step that garnered them, especially if there is a failure or an ambiguous result. Save yourself some time by specifying your tests as a table with a column for results. That way, results are recorded next to the test definition, making both recording and reviewing much easier. Not only that, but you don't need to maintain a results sheet and the test definitions separately, or contend with different people recording the results in different formats.
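If it helps to picture that structure, here's a tiny C# sketch of the idea; the types are invented for illustration and, in reality, your test plan will more likely live in a word processor or a spreadsheet than in code.

    using System;
    using System.Collections.Generic;

    public class TestStep
    {
        public int Number { get; set; }         // spaced by 10 so steps can be inserted later
        public string Instruction { get; set; }
        public string Result { get; set; }      // recorded right next to the step that produced it
    }

    public static class TestReport
    {
        public static void Print(string purpose, IEnumerable<TestStep> steps)
        {
            Console.WriteLine(purpose);
            Console.WriteLine("{0,-6}{1,-70}{2}", "Step", "Instructions", "Results");
            foreach (var step in steps)
            {
                Console.WriteLine("{0,-6}{1,-70}{2}", step.Number, step.Instruction, step.Result);
            }
        }
    }

The point is simply that each step carries its own results column, so recording and reviewing happen in the same place.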

Conclusions

If you follow these three simple steps, you should end up with test definitions that look less like an account of your weeks at summer camp when you were 7 and more like the example below.

This test checks the flange sprocket exposes the doobrey flap.

Step | Instructions                                                        | Results
10   | Open the flange sprocket. You should see the flange sprocket open. | Pass – opened
20   | Press the doobrey flap.                                             | Fail – unable to locate doobrey flap. Test lacking sufficient detail or doobrey flap was not exposed.
30   |                                                                     |

Of course, all this assumes you don't have a QA team or team members (or even some tools that help you define and execute manual testing). If you do, that's great; respect your QA team members (or your tools) and the work they do to keep your users from deploying their wrath upon thee. For the rest of us, stuck with just ourselves and our office productivity applications for defining and recording our testing, following these tips will make our testing life (and that of those around us) just that little bit less tedious. Who knows, some of you might even start enjoying it.

  1. This is a very simplistic overview, I know. []
  2. There are valid and not so valid reasons for this, but we're not going to get into that here. []
  3. You might also consider spacing step numbers by 10 so it's easier to insert additional steps without renumbering all subsequent steps. []

Ann Arbor Day of .NET

On Saturday (29th Oct), I attended the Ann Arbor Day of .NET. I thought it would be nice to summarise what I heard. I doubt these notes on their own will be greatly useful, but I hope they act as a launch pad into deeper dives on the topics covered as well as a review of what topics were covered. There were five different tracks for the day: Cloud, Frameworks & Platforms, Soft Skills, Tools and Mobile. I chose talks from the first four of these based on the talk itself, rather than the track to which it belonged (I ruled out presentations that I had seen a variation of before, such as David Giard's (@DavidGiard) Introduction to Microsoft Windows Workflow and Jay R. Wren's (@jayrwren) Let's Go to C# On The iPhone, though they were excellent when I saw them).

Be A Better Developer

I started out the day with Mike Wood (@mikewo) and his session, Being A Better Developer. This was a soft skills talk, meaning it was not there to show off some cool .NET feature or technology, or teach me all about C#. Instead, the focus was on what makes a great developer and what we can do to attain that status.

Mike explored the various roles that developers have to take on, the hats we have to wear. From the student learning new things every day, to the teacher imparting knowledge to those around them. From the janitor (maintaining what already exists) to the researcher (investigating and choosing frameworks, languages, platforms, etc.). Using these roles as a foundation, we then moved on to some tips, such as setting up time blocks in which to work. If the time limit is reached and the problem isn't solved, turn to someone else for help (or somewhere else, like the Internet1) to avoid thrashing and time wasting. This seems somewhat obvious and yet I'm betting that many of us don't do it as often as we should. The other tips were equally useful, obvious and often compromised in our daily development lives:

  • organize
  • prioritize
  • know your tools
  • set SMART2 goals
  • be a catalyst for change
  • be lazy…

Right, that last one is maybe a little less obvious, but the point wasn't: don't do more than you have to.

One of the best pieces of advice from this talk was to choose a good mentor. I was very fortunate when I started out my career to have several excellent mentors and I miss working with them almost every day. Even now, I imagine what they might have said in order to guide my efforts3. For an hour, Mike filled that role.

There was much more to this talk than what I've written here. This session was an excellent way to spend an hour. While much of what Mike presented could be considered common sense, it was reassuring and also provided some new tricks for my arsenal, to be deployed in any situation, not just day-to-day software development.

How I Learned To Love Dependency Injection

Next, on to James Bender (@jamesbender) and his presentation on how much he loves dependency injection4. This talk started out looking at the way things were and the ideas behind a loosely-coupled system: a system where each component knows as little as possible about the other components in its parent system, whether it uses the services those components provide or not. Tightly-coupled systems don't promote reuse, are brittle and are inherently difficult to test.

James told a compelling story, starting out with familiar concepts—a constructor that takes various interfaces through which the created object can obtain various services, the factory pattern, etc., but soon we were looking at an overview of dependency injection frameworks, what they do and how they do it.

And then, code. Code about cooking bagels. The only bad part about this was the lack of bagels to eat5. The talk moved quickly on to the various features of Ninject, an open source dependency injection framework. I would've preferred it if there was more emphasis on dependency injection, using Ninject to provide examples, rather than the "how to use Ninject" approach that was given. However, this was still very informative and laid a path towards the next part of the talk which showed how dependency injection and TDD6 go hand in hand. This in turn led to an introduction of mocking (the mock framework of choice in these examples was Rhino Mocks, but James recommended Moq for new work).
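To give a flavour of the code (this is my own minimal sketch rather than James's example, though I've kept the bagel theme; IOven, ConventionalOven and Bakery are invented for illustration), constructor injection with Ninject looks something like this:

    using System;
    using Ninject;

    public interface IOven
    {
        void Bake(string item);
    }

    public class ConventionalOven : IOven
    {
        public void Bake(string item)
        {
            Console.WriteLine("Baking " + item + "...");
        }
    }

    // Bakery depends on the IOven abstraction rather than a concrete oven,
    // so it is loosely coupled and trivial to test with a mock oven.
    public class Bakery
    {
        private readonly IOven oven;

        public Bakery(IOven oven) // the dependency arrives via the constructor
        {
            this.oven = oven;
        }

        public void MakeBagel()
        {
            oven.Bake("bagel");
        }
    }

    public static class Program
    {
        public static void Main()
        {
            IKernel kernel = new StandardKernel();
            kernel.Bind<IOven>().To<ConventionalOven>();

            // Ninject inspects Bakery's constructor and supplies the IOven for us.
            Bakery bakery = kernel.Get<Bakery>();
            bakery.MakeBagel();
        }
    }

Swap ConventionalOven for a mock implementation of IOven and you can test Bakery without baking anything, which is where the TDD tie-in comes from.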

A Field Guide for Moving to the Cloud

We're back with Mike Wood (@mikewo) for this one. I've never done any Cloud development but I'm really interested in it and what it may do for me and the work I do, so I'm hanging a lot on this introduction (no pressure, Mike).

Mike started off with a Batman reference, tying the reason why I'm so tired (Batman: Arkham City) to the reason why I'm here. He then fired off some acronyms: IaaS, SaaS, PaaS. This was a great starting point for me, as terminology is often the last refuge of miscommunication and I hate not understanding what all those acronyms and terms mean. One participant immediately asked, "What's the difference between IaaS and PaaS?" and most of us nodded, realising we didn't know either. To paraphrase: IaaS gives the most control, as you're responsible for patching your OS, upgrading the frameworks, etc.; PaaS manages all that for you. Mike did a great job explaining this (unlike my paraphrasing—Mike used a whiteboard and everything) and we moved on, that bit more informed and ready to learn more.

At this point, Mike gave us a run through of the Windows Azure platform, again making sure we're all talking the same language as the presentation progresses. Mike's presentation style is nice and fluid, taking questions and interruptions in his stride, and he clearly knows his topic well (Mike is an Azure MVP, after all). He walked us through the various parts of Windows Azure, Microsoft SQL Azure and Windows Azure AppFabric before we moved on to planning for our move to the Cloud.

Mike discussed how to identify applications suitable for moving to the Cloud, covering the scale of the application and the independence of that scale, the services used, tight integration versus loose coupling (not the first time we'd heard this today, and I would hope not the first time in our careers either; otherwise, you're doing it wrong), usage patterns, latency, security and many other facets to be considered when moving to the Cloud.

The final point related to whether the move would save money or not, and the importance of answering that question before making the move. This kind of information was great to see and may prove very useful when talking with project managers or business development types. Mike also pointed out that techniques like multipurpose worker roles and disposable compute instances can save as much as 50% in costs.

And then it was lunch.

Develop IT: Intro to PowerShell

I admit it, I have only ever used PowerShell for things that I could've done from a regular command prompt, so this talk was one I didn't want to miss. I want to know more so I can do more. I feel like PowerShell is an exclusive club for productive individuals and I'd at least like to take a look inside, so this was my opportunity. Sarah Dutkiewicz (@sadukie) was the presenter for this session, a C# MVP and co-author of Automating Microsoft Windows Server 2008 R2 with Windows PowerShell 2.0. This talk was entirely presented using PowerShell, which certainly made it stand apart from other presentations given so far today.

The initial examples given by Sarah quickly demonstrated how PowerShell provides similar behaviour to the traditional command prompt but also how it is different, providing .NET objects (dir w* | Get-Member demonstrated how dir returns objects—very cool). We then learned all about the standard PowerShell syntax that provides an easily discoverable set of commands (known as Cmdlets in the PowerShell world) and some useful Cmdlets like Get-Help and Out-GridView (which outputs things to its own filterable grid in a window).

Sarah continued introducing us to a variety of PowerShell concepts and features including but not limited to:

  • functions
  • modules
  • manifests
  • PowerShell ISE7
  • providers
  • aliases
  • registry interaction

My biggest takeaway is how easy it can be to work with the registry from within PowerShell (just open PowerShell and enter cd hkcu: then dir to see what I mean). Overall, a great introduction that has given me a starting point for exploring PowerShell and becoming more efficient.
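If you want to poke around yourself, here are a few commands along the lines of those shown in the session (my own reconstruction, so the specifics may differ from Sarah's examples):

    # dir returns .NET objects; Get-Member reveals their properties and methods
    dir w* | Get-Member

    # Cmdlets are discoverable: ask any of them for help
    Get-Help Get-ChildItem

    # Out-GridView sends output to a filterable grid in its own window
    Get-Process | Out-GridView

    # The registry is just another provider; navigate it like a file system
    cd hkcu:
    dir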

Stone Soup or Creating a Culture of Change

For the final session of the day, I rejoined James Bender (@jamesbender). I was really looking forward to this one, having faced many challenges in changing culture as part of my efforts to meet the requirements of CMMI8. The event organisers expected this to be a popular talk, and I still feel that it should have been; however, the turnout was disappointingly low. This made for a more intimate session and certainly did not detract from the informative content. James said that this was probably the last time he would present this talk, which is a shame, as I found the anecdotes and the lessons drawn from them to be very insightful.

The things I've learned will definitely help me in my work and elsewhere. Things like:

  • Go for low hanging fruit
  • Don't change too much at once
  • Support the change and let it simmer
  • Don't judge
  • Know your tools
  • Only introduce changes you believe in
  • Understand the business
  • Know when to say when
  • Evangelize
  • Build a network of like-minded people
  • Be a politician
  • Be a therapist
  • Realise that it might be difficult to reach everyone
  • When all else fails, buy doughnuts
  • Be patient

There's not much more I could say about this talk that would do it justice (not that my notes have really given justice to the earlier talks), but suffice to say this presentation was very relevant to me and I am very grateful to have been able to see it.

To conclude, I had a great day. The organisers, sponsors and speakers deserve a huge "thank you" for setting up and supporting this event. Wandering the hallways of Washtenaw Community College, attending talks in rooms and lecture halls reminded me a little of being back at university, but the speed at which the day flew by certainly did not. It was a very informative and enjoyable way to spend the day and among the best $10 I've spent this year.

  1. Use Internet search before you ask someone. []
  2. Specific, Measurable, Achievable, Realistic/Relevant, Trackable []
  3. Besides, "Shut up, Jeff!" []
  4. An appropriate amount as allowed by law. []
  5. Mmm, bagels. []
  6. Test Driven Development []
  7. Integrated Scripting Environment []
  8. Capability Maturity Model Integration []