Prerelease packages with branches, Appveyor, and MyGet

We use a workflow where Appveyor is set up for CI and automatically pushes newly built NuGet packages to a MyGet feed for consumption. Perhaps in a future post I will go over how to set all this up, but for now, let's assume you already have this working: you push changes to a branch in your GitHub repo, which then gets built and tested on Appveyor before being pushed to MyGet. Everything is nice and smooth.

Unfortunately, the magic ended there. Since there was no differentiation between prerelease and release changes, I found that I would either have to limit which branches were built on Appveyor or spend a lot of time curating MyGet to remove intermediate builds I did not want used. I knew that MyGet supported prerelease packages, but no matter what I tried, I could not get Appveyor to build them. Unsurprisingly, I found this frustrating. Then I stumbled upon a particular answer on Stack Overflow.

However, I had some issues with the approach it suggested.

  1. It seemed wrong that I had to use an after_build or on_success step to explicitly build my NuGet package
  2. I didn't want every build to be prerelease
  3. It didn't work

The first point smelled bad enough that I wanted to see if I could avoid it entirely, and the second point seemed really important.

So, I delved a little deeper and discovered that the nuspec file's handy $version$ substitution takes its value from the AssemblyInformationalVersion attribute, which I had not declared in my AssemblyInfo.cs. Since it was not there, the Appveyor step declared to patch it did nothing. This was easy to fix, so I edited my AssemblyInfo.cs to include the attribute and tried again. This time the version updated as I wanted, even without the after_build or on_success shenanigans.
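For reference, the attribute goes in AssemblyInfo.cs alongside the other assembly attributes; a minimal sketch (the version shown is just a placeholder for Appveyor to patch):

using System.Reflection;

// Placeholder value; Appveyor patches this during the build.
[assembly: AssemblyInformationalVersion("1.0.0")]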

However, it still was not quite right: now, every build was marked as prerelease. While that is a potential workflow, where the appveyor.yml is updated when you finally reach a release, what I wanted was for releases to occur when I tagged a branch. For that, I looked at tweaking how the Appveyor build version was updated and at which environment variables Appveyor defined that I could leverage.

It turns out that Appveyor defines APPVEYOR_REPO_TAG, which is set to true if the build was started by a tag being pushed. It also defines APPVEYOR_REPO_BRANCH containing the name of the branch being built. Armed with these two variables, I updated my appveyor.yml to have two init scripts.

init:
- ps: $env:customnugetversion = if ($env:APPVEYOR_REPO_TAG -eq $True) { "$env:APPVEYOR_BUILD_VERSION" } else { "$env:APPVEYOR_BUILD_VERSION-$env:APPVEYOR_REPO_BRANCH" }
- ps: Update-AppveyorBuild -Version $env:customnugetversion

The first script creates a new environment variable. If the APPVEYOR_REPO_TAG is set to true, the new variable gets set to the value of APPVEYOR_BUILD_VERSION; if not, it is set to APPVEYOR_BUILD_VERSION-APPVEYOR_REPO_BRANCH. So, for example, if the build was going to be version 2.4.0, it was not a tag, and the branch was master, then the new variable would be set to 2.4.0-master; however, if it was a tag, it would just be 2.4.0.

The second script calls the Update-AppveyorBuild cmdlet provided by Appveyor, passing the value of the new environment variable as the -Version parameter value.

These two init scripts, plus the AssemblyInformationalVersion attribute in the AssemblyInfo.cs (and the corresponding assembly_informational_version field under the assembly_info section of the appveyor.yml), were all I needed. Now, whenever I push to a branch, I get a new prerelease NuGet package that I can use in my development coding, and whenever I create a new tag, I get a release package instead. Not only does this reduce my need to manually manage my NuGet packages on MyGet, but it also means I can take advantage of the different retention policy settings between prerelease and release packages.

All in all, I find this workflow much nicer than what I had before. Hopefully some of you do too. Examples of the appveyor.yml file and associated AssemblyInfo.cs change can be seen in the following Gist.

Octokit and the Documentation Nightmare

Before I get into the meat of this series of posts, I would like to set the scene. Like many organisations that perform some level of software development these days, we use GitHub. Here at CareEvolution, some developers use the web interface extensively, some use the command line, and others use the GitHub desktop client1, but most use a combination of two or more, depending on the task. This works great for developers, who have each found a comfortable workflow for getting things done, but it is not so great for those involved with DevOps, QA, or documentation, who need user-friendly details of what the developers actually did.

Quite often, a feature or bug fix involves several commits; each has a comment or two, and perhaps an associated pull request (PR) or issue has a general description, but there is no definitive list of "this is what release X contains" that can be presented to a customer. Not only that, but sometimes a PR or issue is resolved in an earlier release and merged forward. While we have lists of what a release is going to include, quite often there is more detail we would like to capture, and we often have additional changes as we adapt to the changing requirements of our customers. All this means that one or more people end up trawling the commits, trying to determine what the changes are. It is not a happy task.

"There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things."

Niccolo Machiavelli
The Prince (1532)

Now, I know that this could all be avoided if people documented changes more clearly, perhaps adding release notes to commits, raising issues for documentation changes, or creating release notes on the release when it is made. However, no matter how noble change may be, anyone who has worked in process definition for any length of time will know that changing the behaviour of people is the hardest task of all, and therefore it should be avoided unless absolutely necessary. It was with that in mind that I decided mining the existing data for information would be an easier first step than jumping straight to asking people to change. So, with the aim of making life a little easier, I started looking at ways to automate the trawling.

I figured that by throwing out noisy, typically non-descriptive developer commits like "fixed spelling" or "updated comment", and by combining commits under the corresponding PR or issue, I could create a useful summary of changes. This would not be customer-ready, but it would be ready for someone to turn into a release note without needing to trawl git history. In fact, if I included details of who committed the changes, it might even provide a feedback loop that would improve the quality of developer commit messages; developers do not like interruptions, so being asked for more detail about a commit should reinforce that better commit, PR, and issue descriptions mean fewer interruptions.
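As a rough illustration of the filtering and grouping idea (the noise list and the shape of the commit data here are entirely made up):

// Made-up commit data; in practice this would come from the GitHub API.
var noise = new[] { "fixed spelling", "updated comment" };
var commits = new[]
{
    new { PullRequest = 12, Message = "Add widget frobnication" },
    new { PullRequest = 12, Message = "fixed spelling" },
    new { PullRequest = 15, Message = "Support squirrel-proof branches" }
};

// Drop the noisy commits and group what remains by PR to form a summary.
var summary = commits
    .Where(c => !noise.Any(n => c.Message.StartsWith(n, StringComparison.OrdinalIgnoreCase)))
    .GroupBy(c => c.PullRequest);

foreach (var group in summary)
{
    Console.WriteLine("PR #{0}:", group.Key);
    foreach (var commit in group)
    {
        Console.WriteLine("  {0}", commit.Message);
    }
}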

Octokitty2

After dismissing the idea of using git locally to perform this task (I figured those who might need this tool would probably not want to clone the repository locally) and reading up on the GitHub API a little, I cracked open LINQPad (my tool of choice for hacking) and went looking for a NuGet package to help. It was during that search that I happily stumbled on Octokit, the official GitHub library for interacting with the GitHub API. At the time of writing, Octokit reflects the polyglot nature of GitHub users, providing variants for Ruby, .NET, and Objective-C, as well as experimental versions for Python and Go. I installed the Octokit NuGet package into LINQPad and started hacking (there is also a reactive version for `IObservable` fans).

Poking around the various objects and reading some documentation on GitHub (Octokit is open source), I got a feel for how the library wrapped the APIs. Though I had not yet got any code running, I was making progress. Confident that this would enable me to create the tool I wanted, I started writing some code to gather a list of releases for a specific repository and stumbled over my first hurdle: authentication. It turns out it is not quite as straightforward as I thought (the days of username and password are quite rightly behind us3), and so, my adventure began.
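To give a flavour of what I was attempting, here is a rough sketch of the kind of query involved (with the Octokit namespace imported; the owner and repository names are placeholders, and the exact client members may differ slightly between Octokit versions):

// Identify the calling application; the name is arbitrary.
var client = new GitHubClient(new ProductHeaderValue("release-note-guidance"));

// Unauthenticated calls work against public repositories but are heavily
// rate-limited; authentication is the subject of the next post.
var releases = await client.Repository.Release.GetAll("some-owner", "some-repo");

foreach (var release in releases)
{
    Console.WriteLine("{0}: {1}", release.TagName, release.Name);
}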

And then…

This is a good place to stop for this week, I think. As the series progresses, I will be piecing together the various parts of my "release note guidance" tool and hopefully end up with a .NET library to augment Octokit with some useful history-mining functionality. Next time, we will take a look at authentication with Octokit (and there will be code).

  1. OSX and Windows variants
  2. or, James Bond for kids
  3. OK, that's a lie, but I want to encourage good behaviour

Getting Information About Your Git Repository With C#

During a hackathon not so long ago, I wanted to incorporate some source control data into my .NET assembly version information for the purposes of troubleshooting installations, making it easier for people to report the code in which they found a bug, and making it easier for people to find the code in which a bug was found1. The plan was to automatically encode the branch, the commit hash, and whether there were local commits or local changes into the `AssemblyConfiguration` attribute of my assemblies during the build.
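For example, the kind of value I had in mind would look something like this (a hypothetical illustration rather than real build output):

using System.Reflection;

// Hypothetical value; the real one would be generated during the build.
[assembly: AssemblyConfiguration("branch: feature/widgets; commit: 1a2b3c4; uncommitted changes: true")]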

At the time, I hacked together the `RepositoryInformation` class below, which wraps the command line tool to extract the required information. This class supported detecting whether a directory was a repository, checking for local commits and changes, getting the branch name and the name of the upstream branch, and enumerating the log. Though it felt a little wrong just wrapping the command line (and seemed pretty fragile too), it worked. Unfortunately, it was dependent on git being installed on the build system; I would prefer the build to get everything it needs using package management like NuGet and npm2.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;

class RepositoryInformation : IDisposable
{
    public static RepositoryInformation GetRepositoryInformationForPath(string path, string gitPath = null)
    {
        var repositoryInformation = new RepositoryInformation(path, gitPath);
        if (repositoryInformation.IsGitRepository)
        {
            return repositoryInformation;
        }
        return null;
    }
    
    public string CommitHash
    {
        get
        {
            return RunCommand("rev-parse HEAD");
        }
    }
    
    public string BranchName
    {
        get
        {
            return RunCommand("rev-parse --abbrev-ref HEAD");
        }
    }
    
    public string TrackedBranchName
    {
        get
        {
            return RunCommand("rev-parse --abbrev-ref --symbolic-full-name @{u}");
        }
    }
    
    public bool HasUnpushedCommits
    {
        get
        {
            return !String.IsNullOrWhiteSpace(RunCommand("log @{u}..HEAD"));
        }
    }
    
    public bool HasUncommittedChanges
    {
        get
        {
            return !String.IsNullOrWhiteSpace(RunCommand("status --porcelain"));
        }
    }
    
    public IEnumerable<string> Log
    {
        get
        {
            int skip = 0;
            while (true)
            {
                string entry = RunCommand(String.Format("log --skip={0} -n1", skip++));
                if (String.IsNullOrWhiteSpace(entry))
                {
                    yield break;
                }
                
                yield return entry;
            }
        }
    }
    
    public void Dispose()
    {
        if (!_disposed)
        {
            _disposed = true;
            _gitProcess.Dispose();
        }
    }
    
    private RepositoryInformation(string path, string gitPath)
    {
        var processInfo = new ProcessStartInfo
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            // gitPath, if provided, should be the full path to the git executable; otherwise fall back to git.exe on the PATH
            FileName = File.Exists(gitPath) ? gitPath : "git.exe",
            CreateNoWindow = true,
            WorkingDirectory = (path != null && Directory.Exists(path)) ? path : Environment.CurrentDirectory
        };
        
        _gitProcess = new Process();
        _gitProcess.StartInfo = processInfo;
    }
    
    private bool IsGitRepository
    {
        get
        {
            return !String.IsNullOrWhiteSpace(RunCommand("log -1"));
        }
    }
    
    // Runs git with the supplied arguments and returns the trimmed standard output.
    private string RunCommand(string args)
    {
        _gitProcess.StartInfo.Arguments = args;
        _gitProcess.Start();
        string output = _gitProcess.StandardOutput.ReadToEnd().Trim();
        _gitProcess.WaitForExit();
        return output;
    }
    
    private bool _disposed;
    private readonly Process _gitProcess;
}

If I were to approach this again today, I would use the LibGit2Sharp NuGet package or something similar3. Below is an updated version of `RepositoryInformation` that uses LibGit2Sharp instead of the git command line. Clearly, you could forgo any kind of wrapper around LibGit2Sharp, and I probably would if I were incorporating this into a bigger task like the one I originally had planned.

using System;
using System.Collections.Generic;
using System.Linq;
using LibGit2Sharp;

class RepositoryInformation : IDisposable
{
    public static RepositoryInformation GetRepositoryInformationForPath(string path)
    {
        if (LibGit2Sharp.Repository.IsValid(path))
        {
            return new RepositoryInformation(path);
        }
        return null;
    }
    
    public string CommitHash
    {
        get
        {
            return _repo.Head.Tip.Sha;
        }
    }
    
    public string BranchName
    {
        get
        {
            return _repo.Head.Name;
        }
    }
    
    public string TrackedBranchName
    {
        get
        {
            return _repo.Head.IsTracking ? _repo.Head.TrackedBranch.Name : String.Empty;
        }
    }
    
    public bool HasUnpushedCommits
    {
        get
        {
            return _repo.Head.TrackingDetails.AheadBy > 0;
        }
    }
    
    public bool HasUncommittedChanges
    {
        get
        {
            return _repo.RetrieveStatus().Any(s => s.State != FileStatus.Ignored);
        }
    }
    
    public IEnumerable<Commit> Log
    {
        get
        {
            return _repo.Head.Commits;
        }
    }
    
    public void Dispose()
    {
        if (!_disposed)
        {
            _disposed = true;
            _repo.Dispose();
        }
    }
    
    private RepositoryInformation(string path)
    {
        _repo = new Repository(path);
    }

    private bool _disposed;
    private readonly Repository _repo;
}
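Either version of the class is used in much the same way; here is a hypothetical usage sketch (the repository path is a placeholder):

using (var info = RepositoryInformation.GetRepositoryInformationForPath(@"C:\source\my-repo"))
{
    if (info != null)
    {
        Console.WriteLine("Branch:              {0}", info.BranchName);
        Console.WriteLine("Commit:              {0}", info.CommitHash);
        Console.WriteLine("Uncommitted changes: {0}", info.HasUncommittedChanges);
    }
}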

I have yet to use any of this outside of my hackathon work or this blog entry, but now that I have resurrected it from my library of coding exploits past to write about, I might just resurrect the original plans I had too. Whether that happens or not, I hope you found this useful or at least a little interesting; if so, or if you have some suggestions related to this post, please let me know in the comments.

  1. Sometimes, like a squirrel, you want to know which branch you were on
  2. I had looked at NuGet packages when I was working on the original hackathon project, but had decided not to use one for some reason or another (perhaps the available packages did not do everything I wanted at that time)
  3. PowerShell could be a viable replacement for my initial approach, but it would suffer from the same issue of needing git on the build system; by using a NuGet package, the build includes everything it needs

Caching with LINQPad.Extensions.Cache

One of the tools that I absolutely adore during my day-to-day development is LINQPad. If you are not familiar with this tool and you are a .NET developer, you should go to www.linqpad.net right now and install it. The basic version is free and feature-packed, though I recommend upgrading to the professional version. Not only is it inexpensive, but it also adds some great features like IntelliSense1 and NuGet package support.

I generally use LINQPad as a simple coding environment for poking around my data sources, crafting quick coding experiments, and debugging. Because LINQPad does not have the overhead of a solution or project like a development-oriented tool such as Visual Studio, it is easy to get stuck into a task. I no longer write throwaway console or WinForms apps; instead, I just throw together a quick LINQPad query. I could go on about the virtues of this tool2, but I would like to touch on one of its utility features.

As part of LINQPad, you get some useful methods and types for extending LINQPad, dumping information to LINQPad's output window, and more. Two of these methods are LINQPad.Extensions.Cache and Util.Cache. Using either Cache method, you can execute some code and cache the result locally, then use the cached value for all subsequent runs of that query. This is incredibly useful for caching the results of an expensive database query or a computationally intensive calculation. To cache an IEnumerable<T> or IObservable<T>, you can do something like this:

var thingThatTakesALongTime = from x in myDB.Thingymabobs
                              where x.Whatsit == "thingy"
                              select x;
var myThing = LINQPad.Extensions.Cache(thingThatTakesALongTime);

Or, since it's an extension method,

var myThing = thingThatTakesALongTime.Cache();

For other types, Util.Cache will cache the result of an expression.

var x = Util.Cache(() => { /* something expensive that produces a value */ return 42; });

The first time I run my LINQPad code, my lazily evaluated query or the expression is executed by the Cache method and the result is cached. From then on, each subsequent run of the code uses the cached value. Both Cache methods also take an optional name for the cached item, in case you want to differentiate items that might otherwise be indistinguishable (such as caching a loop computation).
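For example, something like this keeps two similar queries from colliding in the cache (the queries themselves are made up; the name parameter is the important part):

// Hypothetical queries; the names passed to Cache keep their cached results separate.
var activeThings = activeThingsQuery.Cache("activeThings");
var archivedThings = archivedThingsQuery.Cache("archivedThings");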

This is, as I alluded earlier, one of many utilities provided within LINQPad that make it a joy to use. What tools do you find invaluable? Do you already use LINQPad ? What makes it a must have tool for you? I would love to hear your responses in the comments.

Updated to correct the casing of LINQPad, draw attention to Cache being an extension method for some uses, and add a note about Util.Cache3.

  1. including for imported data types from your data sources
  2. such as its support for F#, C#, SQL, etc. or its built-in IL disassembly
  3. because, apparently, I am not observant to this stuff the first time around. SMH