Octokit and the Authenticated Access

Last week, I introduced Octokit and my plans to write a tool that will mine our GitHub repositories for information that can be used to craft release notes. This week, we will look at the first step: authentication. I am using Octokit.NET for my hackery; if you choose to use another variant of Octokit, some of the types and methods available may be different, but you should be able to follow along. In addition, I have no intention of documenting every aspect of Octokit and the GitHub API, so if you are intrigued by anything that I do not discuss, I encourage you to explore the relevant documentation.

The main GitHubClient class, used to access the GitHub APIs, has several constructors, some that take credentials (sort of) and some that do not. All but one of the constructors take a ProductHeaderValue instance, which provides some basic information about the application that is accessing the API. According to the documentation, this information is used by GitHub for analytics purposes and can be whatever you want.

Now, if you only want to read information about publicly accessible repositories, you do not need to provide any authentication at all. You can create a client instance and just get stuck in, like this:
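
For example, something along these lines is enough to start querying (the product header name "my-cool-app" is just a placeholder, and the public octokit/octokit.net repository serves as an example; in LINQPad, or inside any async method, the calls can be awaited directly):

    using Octokit;

    var client = new GitHubClient(new ProductHeaderValue("my-cool-app"));

    // Unauthenticated, read-only access to a public repository
    var repository = await client.Repository.Get("octokit", "octokit.net");
    Console.WriteLine(repository.Description);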

However, unauthenticated access only allows read-only tasks on public repositories and, unless you are performing the most trivial of tasks, you will soon hit the rate limits for unauthenticated requests.

NOTE: All of the Octokit.NET calls are awaitable

Authentication can be achieved in several ways: via an implementation of ICredentialStore passed to a constructor of GitHubClient, by providing credentials to the GitHubClient.Connection.Credentials property, or by using the OAuth API exposed as GitHubClient.Oauth. The OAuth API allows an application to authenticate without ever having access to a user's credentials; it is understandably a little more complex than approaches that just take credentials. Since, at this point, our focus is to craft some methods for extending the API functionality, we will worry about the OAuth workflow another time. The other two approaches are quite similar, although the constructor-based approach requires a little extra effort. The following two examples will both give you authenticated access, though I think the constructor-based approach feels a little less hacky:
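
A rough sketch of both, assuming placeholder credentials and the same hypothetical "my-cool-app" product header (InMemoryCredentialStore, under Octokit.Internal, is one ready-made ICredentialStore that ships with Octokit.NET):

    using Octokit;
    using Octokit.Internal;

    // Constructor-based: pass an ICredentialStore implementation when creating the client
    var client = new GitHubClient(
        new ProductHeaderValue("my-cool-app"),
        new InMemoryCredentialStore(new Credentials("username", "password")));

    // Property-based: create the client first, then assign credentials
    // (GitHubClient.Credentials writes through to the underlying connection)
    var otherClient = new GitHubClient(new ProductHeaderValue("my-cool-app"));
    otherClient.Credentials = new Credentials("username", "password");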

Two-factor Authentication

Of course, using your username and password is futile because you have two-factor authentication enabled1. Luckily there is a constructor on the Credentials class that takes a token, which you can generate on GitHub.

First, log into your GitHub account and choose Settings from the drop-down at the upper-right. On the left, select Personal Access Tokens.

The right-hand side will change to the list of personal access tokens you have already created for your account (you may have created these yourself or an application may have created them via OAuth). Click the Generate New Token button and give it a useful name. You can now use this token as your credentials when using Octokit. I keep my token in the LINQPad password manager2 so that I can reference it in my code using the name I gave it, like this:
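
Roughly like this, assuming the token was saved under the made-up name "github-token" and retrieved with LINQPad's Util.GetPassword:

    var client = new GitHubClient(new ProductHeaderValue("my-cool-app"))
    {
        // The token acts as the credentials; no username, password, or 2FA code required
        Credentials = new Credentials(Util.GetPassword("github-token"))
    };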

In conclusion…

And that is it for this week. In the next entry of this series on Octokit, we will start getting to grips with releases and some of the basic pieces for my release note utility library.

My apologies for the late post; I have been having hosting issues lately and my blog has not had high availability this week. I am looking for new hosts, so suggestions are welcome.

  1. If you do not, you should rectify that 

  2. The LINQPad password manager is available via the File menu in LINQPad 

Octokit and the Documentation Nightmare

Before I get into the meat of this series of posts, I would like to set the scene. Like many organisations that perform some level of software development these days, we use GitHub. Here at CareEvolution, some developers use the web interface extensively, some use the command line, and others use the GitHub desktop client1, but most use a combination of two or more, depending on the task. This works great for developers, who have each found a comfortable workflow for getting things done, but it is not so great for those involved with DevOps, QA, or documentation, who need user-friendly details of what the developers did. Quite often, a feature or bug fix involves several commits; each has a comment or two, and perhaps an associated pull request (PR) or issue has a general description, but there is no definitive list of "this is what release X contains" that can be presented to a customer. Not only that, but sometimes a PR or issue is resolved in an earlier release and merged forward. While we have lists of what a release is going to include, quite often there is more detail that we would like to provide, and we often have additional changes as we adapt to the changing requirements of our customers. All this means that one or more people end up trawling the commits, trying to determine what the changes are. It is not a happy task.

"There is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things."

Niccolo Machiavelli
The Prince (1532)

Now, I know that this could all be avoided if people documented changes more clearly, perhaps added release notes to commits, raised issues for documentation changes, or created release notes on the release when it is made. However, no matter how noble change may be, anyone who has worked in process definition for any length of time will know that changing the behaviour of people is the hardest task of all, and therefore it should be avoided unless absolutely necessary. It was with that in mind that I decided mining the existing data for information would be an easier first step than jumping straight to asking people to change. So, with the aim of making life a little easier, I started looking at ways to automate the trawling.

I figured that by throwing out noisy, non-descriptive developer commits like "fixed spelling" or "updated comment", and by combining commits under the corresponding PR or issue, I could create a useful summary of changes. This would not be customer-ready, but it would be ready for someone to turn into a release note without needing to trawl git history. In fact, if I included details of who committed the changes, it might even provide a feedback loop that would improve the quality of developer commit messages; developers do not like interruptions, so anyone asking for more detail on a commit they made should reinforce that better commits, PRs, and issues mean fewer interruptions.


After dismissing using git locally to perform this task (I figured those who might need this tool would probably not want to clone the repository locally) and reading up on the GitHub API a little, I cracked open LINQPad (my tool of choice for hacking) and went looking for a Nuget package to help. It was during that search that I happily stumbled on Octokit, the official GitHub library for interacting with the GitHub API. At the time of writing, Octokit reflects the polyglot nature of GitHub users, providing variants for Ruby, .NET, and Objective-C, as well as experimental versions for Python and Go. I installed the Octokit Nuget package into LINQPad and started hacking (there is also a reactive version for IObservable fans).

Poking around the various objects, and reading some documentation on GitHub (Octokit is open source), I got a feel for how the library wrapped the APIs. Though I had not yet got any code running, I was making progress. Confident that this would enable me to create the tool I wanted, I started writing some code to gather a list of releases for a specific repository and stumbled over my first hurdle: authentication. It turns out it is not quite as straightforward as I thought (the days of username and password are quite rightly behind us3), and so, my adventure began.

And then…

This is a good place to stop for this week, I think. As the series progresses, I will be piecing together the various parts of my "release note guidance" tool and hopefully end up with a .NET library to augment Octokit with some useful history-mining functionality. Next time, we will take a look at authentication with Octokit (and there will be code).

  1. OSX and Windows variants 

  2. or, James Bond for kids 

  3. OK, that's a lie, but I want to encourage good behaviour 

Monitoring My Blog Using Uptime Robot and IF

A week or two ago I discovered that my blog was not loading; I had no idea why it was returning a 500 error nor how long it had been doing so. Having experienced this once or twice before, I went into my administration dashboard, stopped the website and application pool, then started them again. This fixed the immediate issue and my blog was back online, but I was not satisfied. I no longer wanted to discover this issue by chance, so I went looking for ways to monitor my site.

I found several methods that could help, including one that uses my site's RSS feed as an IF trigger on IFTTT1, but I did not like this approach, so I looked around a little more. Eventually, after reading over a few options, I settled on using Uptime Robot. Uptime Robot allows up to 30 monitors on their free tier, which can be checked at frequencies down to every five minutes (if you want more monitors that are checked more frequently, you can look at their various paid options). Using this service, not only will I find out if my site goes down, but I will also get stats over time on the reliability of my site.

Setting up a monitor on an HTTP(s) URL

Setting up a monitor was really easy and a quick test resulted in an email telling me the site was down, followed by another telling me it was back up once the site was restored. This was great although I felt an email was not enough. While Uptime Robot provides SMS support for sending alerts, they also provide you with an RSS feed on your account that syndicates your uptime alerts. Using an IF rule and the IF app on my phone, I was able to set up phone notifications for when my blog transitioned state between being up and down.

My Settings provides link to RSS feed for monitors
Trigger settings for IF rule to send notification to phone

I retested the monitor (this meant taking the site down and waiting until the next monitor cycle) and convinced myself that the IF trigger and action were working satisfactorily. Now, whenever my blog experiences a glitch, I will know within about five minutes or so. Not only that, but if it fixes itself before I get a chance to do so, I will have some stats that I can use to determine if there is a fundamental issue with my site's uptime. Uptime Robot provides a dashboard for managing monitors and viewing stats.

Uptime Robot dashboard view

There is also a "TV Mode" for showing live stats, should you want a more permanent display in your office, for example. All of these views have a responsive layout, making it easy to check statuses from a mobile device.

Uptime Robot TV Mode

Since setting the monitor up, my site has been down a lot. I do not know for sure if this is more or less than usual because I was not monitoring it this closely before, but I learned that my hosting provider has been updating servers recently. These hardware changes have caused all sorts of havoc with the reliability of site uptime for a lot of people, it seems2. Thankfully, due to both Uptime Robot and the responsiveness of my hosting provider's support team, most issues were discovered and resolved in a reasonable time.

During these availability issues, I learned that just finding out when my site went down was not sufficient, so I added an additional "site back up" rule to IFTTT. This turns out to be really useful when the site goes down while I am sleeping, as it removes the need to check whether it is back up upon waking.

In Conclusion

While I am disappointed that my site was down, I was really happy to see that my Uptime Robot monitoring was doing exactly what I wanted. Not only that, but I have a screen grab showing less-than-perfect stats, which makes for a great addition to this blog.

Overview from dashboard

Uptime Robot is a nice discovery and a welcome addition to my suite of tools. The inclusion of an RSS feed for checking monitor status, as well as an API that I have yet to explore, makes it easy to integrate the information from Uptime Robot monitors into other tools.

  1. If This Then That: ifttt.com 

  2. I will refrain from going into detail on what I think about a company failing at their core purpose when doing something relating to that core business (feels a little like not serving food in a restaurant because they were buying new food)  

C#6: Collection Initializers

Patterns and Collection Initializers

Some of the cool parts of C# are pattern-based, rather than type-based as one might expect. For example, foreach does not need the enumerated type to implement IEnumerable in order to work; it just requires that it has a GetEnumerator() method. Another place where pattern-based compilation occurs, and one that happens to illustrate how useful this approach can be, is collection initializers like this:
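
A simple example of the familiar syntax (any IEnumerable-implementing type with a suitable Add() method would do; List<int> keeps it minimal):

    var numbers = new List<int> { 1, 1, 2, 3, 5, 8 };

    // The compiler expands the initializer into successive Add() calls:
    // var numbers = new List<int>();
    // numbers.Add(1); numbers.Add(1); numbers.Add(2); ...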

When this gets compiled, for each value in the initializer the C# compiler1 looks for an Add() method on the collection type with an appropriate number of arguments of the appropriate types, which it then calls for that value. The benefit of using a pattern-based approach is that the compiler does not need to know about every possible compatible type up front or what Add() methods it might support. It only enforces that the type derives from IEnumerable and that it has an Add() method that matches the initializer values. This allows us to create collection types that support a variety of different ways to add values, without the compiler ever needing to know our type exists. For example, we could create a collection of names with Add() methods that take one or two strings and then initialize elements with either just the surname or first name and surname2.
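
A sketch of what such a collection might look like (NameCollection is a hypothetical type invented for this example):

    using System.Collections;
    using System.Collections.Generic;

    public class NameCollection : IEnumerable<string>
    {
        private readonly List<string> names = new List<string>();

        // Add a surname only
        public void Add(string surname)
        {
            names.Add(surname);
        }

        // Add a first name and a surname
        public void Add(string firstName, string surname)
        {
            names.Add(firstName + " " + surname);
        }

        public IEnumerator<string> GetEnumerator() { return names.GetEnumerator(); }
        IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
    }

    // Both Add() overloads are available to the initializer
    var names = new NameCollection
    {
        "Machiavelli",
        { "Niccolo", "Machiavelli" }
    };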

Collection initializers in C#6

In C#6, a new collection initializer syntax has been added and the way the compiler interprets the existing syntax has been modified. Before we look at the newly added syntax, let us look at how the compilation of the existing syntax has changed. To do so, consider a collection of DateTimeOffset values where we want to simplify adding dates and times from parsable string values. To support this we could implement an entirely new type with the appropriate calls, or we could derive from an existing collection type such as List<DateTimeOffset> and then implement a new Add() method to support string values.
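
A sketch of the derived-type approach (DateTimeOffsetList is a hypothetical name for this example):

    using System;
    using System.Collections.Generic;
    using System.Globalization;

    public class DateTimeOffsetList : List<DateTimeOffset>
    {
        // An extra overload alongside the inherited Add(DateTimeOffset)
        public void Add(string value)
        {
            Add(DateTimeOffset.Parse(value, CultureInfo.InvariantCulture));
        }
    }

    // The initializer picks the Add() overload that matches each element
    var dates = new DateTimeOffsetList
    {
        "2015-07-01T09:00:00Z",
        new DateTimeOffset(2015, 7, 4, 17, 30, 0, TimeSpan.FromHours(-4))
    };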

Of course, not all collections are open for extension, and creating new types for this is cumbersome since we want a list of DateTimeOffset; we just happen to want to initialize it from another type. To get around sealed types and the need to implement wrapper types or derivations, VB.NET has supported using extension methods to expand the Add() options on a type. I like this idea since, in the previous example, our list is really still a list of DateTimeOffset and we want others to see it that way; we just happen to support adding string values, so why should we be forced to use a different type for that? Alas (cue Top Gear voice), this feature was not included in C#…until now. As of C#6, this disparity between VB.NET and C# is no more; the compiler will use a matching Add() extension method in lieu of an appropriate Add() method on the type itself.
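
A sketch of how that might look in C#6, keeping the collection as a plain List<DateTimeOffset> and supplying the extra Add() as an extension method (the class name is made up for this example):

    using System;
    using System.Collections.Generic;
    using System.Globalization;

    public static class DateTimeOffsetListExtensions
    {
        public static void Add(this List<DateTimeOffset> list, string value)
        {
            list.Add(DateTimeOffset.Parse(value, CultureInfo.InvariantCulture));
        }
    }

    // In C#6, the initializer can resolve to the Add() extension method,
    // so no wrapper or derived type is needed
    var dates = new List<DateTimeOffset>
    {
        "2015-07-01T09:00:00Z",
        DateTimeOffset.UtcNow
    };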

Interestingly, this change to how C# resolves overloaded methods is very specific in that it only supports Add() extension methods and not extension methods in other pattern-based scenarios like GetEnumerator(). I am not certain why this is so, since I can imagine some cases where enumerating an existing non-enumerated type might be quite nice3, though I expect it is because it would not be clear what was going to get enumerated and therefore the code would be ambiguous and hard to follow4. The Add() method usage in an initializer does not have this ambiguity, as the compiler makes it clear whether it found a suitable Add() method matching both the collection type and the type of the element being added.

Index Initializers

The other change to collection initializers in C#6 is the introduction of index initializer syntax. This new syntax is similar to the existing collection initializer syntax we have discussed, except that instead of using Add() methods, it uses indexers. With index-based collection initialization we can specify values for specific indices in a collection. This works for any indexer that a collection implements. Traditionally, we might initialize a Dictionary<string,string> using the Add() method pattern like this:
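
Something like this (the culture names and descriptions are arbitrary example data):

    var cultures = new Dictionary<string, string>
    {
        { "en-GB", "English (United Kingdom)" },
        { "en-US", "English (United States)" },
        { "fr-FR", "French (France)" }
    };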

But with the index initializer syntax, we can make it clear that one string indexes the other, making this much more readable:
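
The same example dictionary, expressed with the C#6 index initializer syntax:

    var cultures = new Dictionary<string, string>
    {
        ["en-GB"] = "English (United Kingdom)",
        ["en-US"] = "English (United States)",
        ["fr-FR"] = "French (France)"
    };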

I cannot speak for anyone else, but I think this really makes the code easier to read. Note, however, that this new index syntax cannot be mixed with traditional initializer syntax; for example, the following is invalid:
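
For example, mixing the two styles like this produces a compile-time error:

    // Does not compile: Add()-style elements and index initializers cannot be mixed
    var cultures = new Dictionary<string, string>
    {
        { "en-GB", "English (United Kingdom)" },
        ["en-US"] = "English (United States)"
    };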

I think it is okay that they cannot be mixed. One way is using Add() method overload resolution to set values and the other is using indexers; these use different semantics and often have different implementations and connotations. By mixing them, the code becomes muddled and loses meaning; are we specifying records in a collection or are we mapping specific indexes to their records?

In Conclusion

Both of these changes to collection initialization are reasonably subtle. Of all the features C#6 brings us, these are perhaps going to be used the least. In fact, when I started writing this post I was unsure of their value. However, as I wrote and thought of usage examples, I came to the realisation that although they cater to perhaps infrequent scenarios, these changes to collection initializers each provide nice additions to the C# language. Index initializers remove a little ambiguity from the initialization of indexed collections, such as dictionaries, whereas the expansion of Add() method overload resolution to include extension methods reduces the number of frivolous types we have to create. In short, they allow us to write simpler, clearer code, and that is a beautiful thing.

  1. pre-C#6 

  2. A contrived example to be sure, but illustrative none-the-less 

  3. Such as enumerating the lines from a file stream 

  4. Much clearer to write a LineEnumerator wrapper for FileStream and use it explicitly 

C#6: The nameof Operator

Before discussing the nameof operator in C#6, I want us to consider why nameof exists at all. So, let's head back ten years to the heady days of 2005.

Wayne's World Flashback

When version 2.0 of the .NET framework arrived, it transitioned the fledgling platform from a sketch of what might be to a fully-formed platform that could support ongoing and future desktop and web development. Since then, each release of the framework and its associated languages has added a variety of bells and whistles that simplify and enhance the way we develop. Among the many concepts and types introduced by .NET 2.0 was System.ComponentModel.INotifyPropertyChanged, part of the enhanced data binding introduced to Windows Forms development. This interface turned out to be a workhorse and introduced many developers to a new problem: making sure the string that named a variable matched the name of an actual variable.

Now, you may well object to this claim since various versions of ArgumentException already demanded this of developers, but I think we both know that until our tooling got smarter (like FxCop and Resharper), many of us just did not fill that argument out if we could help it. After all, the stack trace would tell us where the crash happened, we could put something meaningful in the exception message, and keeping that variable name up-to-date after refactoring was a pain. With the advent of INotifyPropertyChanged the benefit of putting the variable name in a string started to outweigh the costs. Quickly, patterns emerged to try and simplify this, from dubiously performant uses of reflection to build-time code generation. As tools matured, we could get refactorings that took these strings into account and warnings that could shout at us if a variable was mentioned that didn't exist. Few of these were particularly elegant or entirely foolproof, and none were both1. In addition to ArgumentException and INotifyPropertyChanged, property names would be used for logging and debugging.
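
To illustrate the problem, here is a sketch of the classic pre-C#6 pattern (Person and Forename are invented for this example); the "Forename" string has to be kept in sync with the property name by hand, and nothing complains if it drifts:

    using System.ComponentModel;

    public class Person : INotifyPropertyChanged
    {
        private string forename;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Forename
        {
            get { return forename; }
            set
            {
                forename = value;
                var handler = PropertyChanged;
                if (handler != null)
                {
                    // Rename the property and this string silently becomes wrong
                    handler(this, new PropertyChangedEventArgs("Forename"));
                }
            }
        }
    }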

In the Name of Progress

There were calls for a new operator to accompany typeof; the new operator, infoof2, would provide the corresponding reflection information for a particular code construct (like MethodInfo or PropertyInfo), simplifying not just obtaining the name of something, but also any reflection operation involving that something. All this use and discussion of meta-information did not go unnoticed. Eric Lippert blogged about infoof and why it would be useful, why it was so difficult to implement, and indirectly foreshadowed where we would be today. However, amid the discussion, there was little action.

In 2012, .NET 4.5 brought us the CallerMemberNameAttribute type and its siblings, CallerLineNumberAttribute and CallerFilePathAttribute. These new attributes enabled developers to decorate method arguments, indicating that the appropriate piece of information was to be injected into that argument when the method was called. This fell short of an infoof operator, but it greatly simplified use of INotifyPropertyChanged (and INotifyPropertyChanging, introduced in .NET 3.5). Alas, argument exceptions, logging, debugging, and other uses of method, variable, and property names were left as they were, often leading to mismatched error messages, obscure data binding bugs, and other problems.
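
A sketch of how CallerMemberNameAttribute tidied that up (again using a hypothetical Person type); the compiler injects the calling property's name into the decorated argument:

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    public class Person : INotifyPropertyChanged
    {
        private string forename;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Forename
        {
            get { return forename; }
            set
            {
                forename = value;
                OnPropertyChanged(); // the compiler supplies "Forename"
            }
        }

        private void OnPropertyChanged([CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }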

That changed in 2015 with the new releases of both .NET and C#, and the new nameof operator in C#6. The nameof operator is sublimely simple; in fact, its concept seems so obvious that it's a wonder it took so long to appear3. Using nameof, we can inject the names of variables, types, methods, events, and properties into all sorts of places at compile-time4, knowing that if we change the name, our refactoring tools can update all references with confidence. Not only that, but our intent is clear; we want the name of this thing to be here and not just some string that happens to look like the name of some thing. While the nameof operator does not replace CallerMemberNameAttribute, which so deftly simplified INotifyPropertyChanged5, it does simplify other scenarios like throwing ArgumentException, logging errors, and outputting debug information.
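
A couple of hypothetical examples of the operator in action (Widget, Rename, and the debug message are made up; only nameof itself is the point):

    using System;
    using System.Diagnostics;

    public class Widget
    {
        public string Name { get; private set; }

        public void Rename(string newName)
        {
            if (newName == null)
            {
                // Refactoring the parameter name updates the exception text too
                throw new ArgumentNullException(nameof(newName));
            }

            Debug.WriteLine($"{nameof(Rename)} called with {nameof(newName)} = {newName}");
            Name = newName;
        }
    }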

In Conclusion

When I first contemplated writing a whole blog entry dedicated to nameof, I thought it was too simple a feature to warrant such focus; now that I have finished, I believe nameof to be entirely worthy of the attention. Along with the fantastic string interpolation in C#6, I believe nameof is one of the simplest and most useful additions to the C#6 language. Like many C# and .NET features we now take for granted, nameof is a beautifully simple concept that we will come to rely upon. I believe it will save us countless hours of fixing erroneous refactoring, arguing over coding style and code reviews, and head-scratching at spurious errors.

  1. IMHO 

  2. pronounced, "Info Of" 

  3. As is often the case in software development, we were all too busy discussing the most complex use-case we could think of rather than the one that really needed solving 

  4. unlike reflection-based solutions that do all the work at run-time 

  5. nameof does provide a more wordy alternative for that scenario