International Nodebots Day 2015

For the last few weeks, SEMjs, Girl Develop It Ann Arbor, and some volunteers from the local developer community have been working toward putting on an event for International Nodebots Day, a day of events held around the world for building and hacking bots using JavaScript, Node.js, and Johnny-Five. Most of the Ann Arbor team got to enjoy the fruits of our labor on Saturday.

International Nodebots Day event locations

While most of the day was dedicated to building the bots and getting them moving, the last couple of hours were set aside for the exciting finale: watching the bots battle each other in the Octagon. The aim of each battle is simple: push the other bot out of the arena. It is always fun to see the vastly different bots come to blows as an audience of competitors and impartial spectators bay for bot blood. Sadly, due to concerns over a bot being badly damaged (a somewhat rare occurrence), technical problems, or a simple desire not to compete, not every bot enters the battle tournament. There were two bots independently created on Saturday that I really wish had battled each other, shown below: Tank Kitty and The Mouse1.

Tank Kitty and The Mouse

We had 17 competitors in our BattleBot tournament, with no two bots alike (always surprising to me, since the base kit for each bot is identical). The winning bot for this event was Brick. As the name implies, Brick was a brick, and a nimble brick at that. With wheels strapped to the bottom, an Arduino board strapped on top, and a hefty weight advantage, Brick maneuvered, dodged, and pushed its way to a victory that seemed inevitable from the first round. After watching Brick dominate the brilliantly (though perhaps optimistically) named Gunther, The Destroyer of Dreams, there seemed little doubt that Brick would demolish every opponent it faced. No flipper was powerful enough to lift Brick, and no wheel nor anchor had the grip to stop a brick in its tracks. It turns out that motorizing a ten-pound brick is a solid strategy, though I still question the mind of a person who has the forethought to bring a brick to such an event. You can check out Brick's performance below in the highlight reel filmed and edited by John Chapman of SEMjs.

In organising this event, we drew on our experience of the in-house nodebot hackathon at CareEvolution, which Brian Genisio had organised, and the two-day CodeMash events, also organised by Brian with the assistance of a small volunteer team. From lessons like "propellers should be banned" to assets like the lab of components and tools and the fantastic CAD files for laser-cutting bot chassis and wheels (thanks to Ken Fox), we always learn something new from each event, and Saturday was no different. Thanks to Brick, we now know that a weight limit needs to be added to the battle rules2.

Not everyone who participated was working toward a showdown in the Octagon. There were quite a few people there doing their own thing, like the creator of FlipflopBot, who decided to work on a novel line following creation.

It was a long but immensely satisfying day, thanks to the creative kids and adults who participated, the hard work of the volunteer team (Ronda Bergman, Dennis Burton, Julie Cameron, John Chapman, Ken Fox, Brian Genisio, Jeanette Head, Chrissy Yates, and me), the support of their families, and the generosity of the sponsors: Menlo Innovations, which provided the space to host the event; CareEvolution, which sponsored the nodebots lab that participants used to build and customize their bots; and Quantum Signal, which generously sponsored awesome t-shirts for each of our attendees and volunteers.

International Nodebots Day Ann Arbor Participants

With that, another successful nodebots event is behind us. It was barely over before the team started discussing what we might do next time. What might we do next time? I guess you'll have to wait and see.


If you're interested in more information on our event or nodebots in general, a good starting point would be nodebots.semjs.org or the nodebotsday GitHub repository.


  1. A name I have invented since I failed to get the bot name from its creators 

  2. and perhaps a new "BrickBot" event needs to be added for those who happen to bring their brick to battle

Use Inbox, Use Bundles, Change Your Life

I love Google Inbox. At work, it has enabled me to banish pseudo-spam1, prioritize work, and be more productive. At home, it has enabled me to quickly find details about upcoming trips including up-to-date flight information, remind myself of tasks that I need to do around the house, and generally be more organized. While Inbox doesn't replace Gmail for all situations, it does replace it as a day-to-day email client. Gmail is awesome; Inbox is awesomer.

The main feature of Inbox that has enabled me to build a healthier relationship with my email (and achieve virtual Inbox Zero without having to make email management my day job) is Bundling. Bundles in Inbox are a lot like filters that label emails in Gmail; they allow you to group emails based on rules. Not only can you define filters to determine what emails go in the bundle, but you can also decide if the bundled messages go to your inbox (and how often you see the bundle) or if they are just archived away.

Inbox Sidebar

Inbox comes with some standard bundles: Purchases, Finance, Social, Updates, Forums, Promos, Low Priority, and Trips. Trips is a magic bundle that you cannot configure; it gathers information about things like hotel stays, flight reservations, and car rentals, and combines them into trips like "Weekend in St. Louis" or "Trip to United Kingdom". The other bundles, equivalent to the tabs that Gmail added a year or two ago, have predefined filters that you cannot edit, but they do allow you to control the behavior of the bundle.

When bundled emails appear in your Inbox, they appear as a single item that can be expanded to view the emails inside. You can also mark the entire bundle as done2, if you desire. These features mean you can group emails into bundles and have each bundle appear in your Inbox as messages arrive, once a day (at a time of your choice), once a week (at a time and day of your choice), or never3 (bundling some of the pseudo-spam and only having it appear once a day or once a week has drastically improved the signal-to-noise ratio of my email).

Trips
Trip to NYC

While the default bundles are useful, the real power is in defining your own. You can start fresh or use an existing label. Each label is shown in Inbox as unbundled and has a settings gear that allows you to set up bundling rules. In my example, I added a rule to label emails relating to the Ann Arbor .NET Developers group.

Label settings without any bundle rules
Adding a filter rule
Label settings showing bundling rules

With the bundle defined, every email that comes in matching the rule will be labelled and added to the bundle, which will appear in my inbox whenever a message arrives. Any messages I mark as done are archived4, removing them from the main inbox. However, they can be seen quickly by clicking the bundle name in the left-hand pane. This is great, except for one thing: the bundle definition only works for new emails as they arrive. It does not include messages you received before the bundle was set up.

This just felt untidy to me, so I was determined to fix it. As it turns out, Gmail provides all the tools needed to complete this part of Inbox bundles. Since each bundle is a set of filter rules and a label, you can edit those filters in Gmail, and Gmail includes the additional ability to apply a rule to existing emails. To do this, go to Gmail and click the Settings gear, then the Filters tab within settings.

Filters that define my bundle in Gmail

Find the filters that represent your new bundling rules, then edit each one in turn. On the first settings box for the filter, click Continue in the lower right. On the following screen, check the "Apply to matching conversations" checkbox and click the "Update filter" button.

First filter edit screen
Last filter editing screen
Apply to existing messages

After performing this action for each of my bundles, I returned to Inbox and checked the corresponding bundles; all my emails were now organised as I wanted.

In summary, if you haven't tried Inbox (and you have an Android or iPhone, a strange limitation that I wish Google would lift), I highly recommend spending some time with it, getting the bundles set up how you want, and using it as your primary interface for your Google email. The combination of bundling with the ability to treat emails as tasks (mark them as done, snooze them, pin them) and see them in a single timeline with your Google reminders makes Inbox a powerful yet simple way to manage day-to-day email. Before Inbox, I had long abandoned Inbox Zero as a "fake work" task that held no value whatsoever; my Gmail inbox had hundreds of read and unread emails in it. Now that I have Inbox, I reached Inbox Zero in a day with minimal effort that one might consider to be just "reading email". I'm not saying Inbox Zero is valuable, I'm just saying that it is realistically achievable with Inbox because Inbox gets daily email management right.

Use bundles, change your life.


  1. I use the term "pseudo-spam" to describe those emails that you don't necessarily want to banish entirely as spam, so that you can search them later, but that aren't important to you at all, such as support emails for projects you don't work on, or wiki update notifications

  2. One of the great features of Inbox is the ability to treat emails as tasks, adding reminders to deal with them later or marking them as done; this makes drop, delegate, defer, do a lot easier to manage

  3. If a bundle is marked as never, it is considered "unbundled" and works just as a filter that applies a label 

  4. This can be changed to "Move to Trash" in the Inbox settings 

The Need For Speed

Hopefully, those who are regular visitors to this blog1 have noticed a little speed boost of late. That is because I recently spent several days overhauling the site's appearance and performance with the intent of making the blog less frustrating and a little more professional. However, my efforts turned out to have other pleasant side effects as well.

I approached the performance issues as I would when developing software: with data. In fact, it was data that drove me to look at the problem in the first place. Like many websites, this site uses Google Analytics, which allows me to poke around the usage of my site, see which of the many topics I have covered are of interest to people, what search terms bring people here (assuming visitors allow their search terms to be shared), and how the site performs on various platforms and browsers. One day I happened to notice that my page load speeds, especially on mobile platforms, were pretty bad and that there appeared to be a direct correlation between page load speed and the likelihood that a visitor would view more than one page before leaving2. Thankfully, Google provides tips on how to improve a site via their free PageSpeed Insights product. Armed with these tips, I set out to improve things.

Google PageSpeed Insights

Now, in hindsight, I wish I had been far more methodical and documented every step (it would have made for a great little series of blog entries, or at least improved this one), but I did not, so instead I want to summarise some of the tasks I undertook. Hopefully, this will be a useful overview for others who want to tackle performance on their own sites. The main changes I made can be organized into three areas: server configuration, site configuration, and content.

The simplest to resolve from a technical perspective was content, although it remains the last one to be completed, mainly due to the time involved. It turns out that I got a little lazy when writing some of my original posts and did not compress images as much as I probably should have. The larger an image file is, the longer it takes to download, and this is only amplified by less powerful mobile devices. For new posts, I have been resolving this as I go by using a tool called PNGGauntlet to compress my images as either JPEG or PNG before uploading them to the site. Sadly, for images already uploaded to the site, I could only find plugins that ran on Apache (my installation of WordPress is on IIS for reasons that I might go into another time), would cost a small fortune to process all the images, or had reviews that implied the plugin might work great or might just corrupt my entire blog. I decided, for now, to leave things as they are and update images manually when I get the opportunity. This means, unfortunately, it will take a while. Thankfully, the server configuration options helped me out a little.

On the server side, there were two things that helped. The first, to ensure that the server compressed content before sending it to the web browser, did not help with the images, but it did greatly reduce the size of the various text files (HTML, CSS, and JavaScript) that get downloaded to render the site. However, the second change made a huge difference for repeat visitors. This was to make sure that the server told the browser how long it could cache content for before it needed to be downloaded again. Doing this ensured that repeat visitors to the site would not need to download all the CSS, JS, images, and other assets on every visit.
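Both of these changes can be made in web.config for an IIS-hosted site like mine. A minimal sketch might look something like the following (the seven-day cache lifetime is an arbitrary value chosen for illustration):

    <configuration>
      <system.webServer>
        <!-- Compress responses before sending them to the browser -->
        <urlCompression doStaticCompression="true" doDynamicCompression="true" />

        <!-- Tell browsers they may cache static assets for up to seven days -->
        <staticContent>
          <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
        </staticContent>
      </system.webServer>
    </configuration>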

With the content and the server configuration modified to improve performance, the next and most important focus was the WordPress site itself. The biggest change was to introduce caching. WordPress generates HTML from PHP code; this takes time, so by caching the HTML it produces, the speed at which pages are available to visitors is greatly increased. A lot of caching solutions for WordPress are developed with Apache deployments in mind. Thankfully, I found that with some special IIS-specific tweaking, WP Super Cache works great3.

At this point, the site was noticeably quicker and almost all the PageSpeed issues were eliminated. To finish off the rest, I added a few plugins and got rid of one as well. I used the Autoptimize plugin to concatenate, minify, compress, and perform other magic on the HTML, CSS, and JS files, which improved download times just a touch more by reducing both the number of files the browser must request and the size of those files. I added JavaScript to Footer, a plugin that moves JavaScript to the bottom of the page so that content appears before the JavaScript has loaded. I updated the ad code (from Google) to use their latest asynchronous version. Finally, I removed the social media plugin I was using, which was not only causing poor performance but was also doing some nasty things with cookies.

Along this journey of optimizing my site, I also took the opportunity to tidy up the layout, audit the cookies that are used, improve the way advertisers can target my ads, and add a sitemap generator to improve some of the ways Google (and other search engines) can crawl the site4. In all, it took about five days to get everything up and running in my spare time.

So, was it worth it?

Before and after

From my perspective, it was definitely worth it (please let me know your perspective in the comments). The image above shows the average page load, server response, and page download times before the changes (from January through April, top row) and after the changes (June, bottom row). While the page download time has only decreased slightly, the other measures show a large improvement. Though I cannot tell for certain which changes were specifically responsible (nor what role, if any, the posts I have been writing have played5), I have not only seen the speed improve, but I have also seen roughly a 50-70% increase in visitors (especially from Russia, for some reason), a three-fold increase in ad revenue6, and a small decrease in Bounce Rate, among other changes.

I highly recommend taking the time to look at performance for your own blog. While there are still things that, if addressed, could improve mine (such as hosting on a dedicated server), and there are some things PageSpeed suggested that are outside of my control, I am very pleased with where I am right now. As with so many things in my life, this has led me to the inevitable thought, "What if I had done this sooner?"


  1. hopefully, there are regular visitors 

  2. The percentage of visitors that leave after viewing only one page is known as the Bounce Rate 

  3. Provided you don't do things like enable compression in both WP Super Cache and IIS at the same time, for example. This took me a while to understand, but the browser is only going to strip away one layer of that compression, so all it sees is garbled nonsense.

  4. Some of these things I might blog about another time if there is interest (the cookie audit was an interesting journey of its own). 

  5. though I possibly could with some deeper use of Google Analytics 

  6. If that is sustained, I will be able to pay for the hosting of my blog from ad revenue for the first time 

Getting Information About Your Git Repository With C#

During a hackathon not so long ago, I wanted to incorporate some source control data into my .NET assembly version information for the purposes of troubleshooting installations, making it easier for people to report the code in which they found a bug, and making it easier for people to find the code in which a bug was found1. The plan was to automatically encode the branch, the commit hash, and whether there were local commits or local changes into the AssemblyConfiguration attribute of my assemblies during the build.
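For illustration, the kind of value the build might stamp into that attribute could look something like this (the format and the branch and commit values here are entirely made up):

    using System.Reflection;

    // Hypothetical example of a build-generated configuration string
    [assembly: AssemblyConfiguration("Release; branch: feature/build-info; commit: 1a2b3c4d; local changes: true")]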

At the time, I hacked together the RepositoryInformation class below that wraps the command line tool to extract the required information. This class supported detecting if the directory is a repository, checking for local commits and changes, getting the branch name and the name of the upstream branch, and enumerating the log. Though it felt a little wrong just wrapping the command line (and seemed pretty fragile too), it worked. Unfortunately, it was dependent on git being installed on the build system; I would prefer the build to get everything it needs using package management like NuGet and npm2.
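A trimmed-down sketch of that sort of wrapper looks something like this (the member names and the exact git invocations here are illustrative rather than a faithful copy of the original class):

    using System.Diagnostics;

    public class RepositoryInformation
    {
        private readonly string _workingDirectory;

        public RepositoryInformation(string workingDirectory)
        {
            _workingDirectory = workingDirectory;
        }

        // True if the directory is inside a git working tree
        public bool IsRepository =>
            RunGit("rev-parse --is-inside-work-tree").Trim() == "true";

        // Name of the currently checked-out branch
        public string BranchName => RunGit("rev-parse --abbrev-ref HEAD").Trim();

        // SHA of the current commit
        public string CommitHash => RunGit("rev-parse HEAD").Trim();

        // True if there are uncommitted local changes
        public bool HasUncommittedChanges =>
            RunGit("status --porcelain").Trim().Length > 0;

        private string RunGit(string arguments)
        {
            var startInfo = new ProcessStartInfo("git", arguments)
            {
                WorkingDirectory = _workingDirectory,
                RedirectStandardOutput = true,
                UseShellExecute = false,
                CreateNoWindow = true
            };

            using (var process = Process.Start(startInfo))
            {
                string output = process.StandardOutput.ReadToEnd();
                process.WaitForExit();
                return output;
            }
        }
    }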

If I were to approach this again today, I would use the LibGit2Sharp NuGet package or something similar3. Below is an updated version of RepositoryInformation that uses LibGit2Sharp instead of the git command line. Clearly, you could forgo any kind of wrapper around LibGit2Sharp, and I probably would if I were incorporating this into a bigger task like the one I originally had planned.
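A sketch of that LibGit2Sharp-based version might look something like this (the member names are illustrative, and LibGit2Sharp property names have shifted a little between versions):

    using System;
    using LibGit2Sharp;

    public class RepositoryInformation : IDisposable
    {
        private readonly Repository _repository;

        private RepositoryInformation(string path)
        {
            _repository = new Repository(path);
        }

        // Returns null when the path is not a git repository
        public static RepositoryInformation GetRepositoryInformationForPath(string path)
        {
            return Repository.IsValid(path) ? new RepositoryInformation(path) : null;
        }

        public string CommitHash => _repository.Head.Tip.Sha;

        public string BranchName => _repository.Head.FriendlyName;

        public string TrackedBranchName =>
            _repository.Head.IsTracking ? _repository.Head.TrackedBranch.FriendlyName : string.Empty;

        public bool HasUnpushedCommits => _repository.Head.TrackingDetails.AheadBy > 0;

        public bool HasUncommittedChanges => _repository.RetrieveStatus().IsDirty;

        public void Dispose()
        {
            _repository.Dispose();
        }
    }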

I have yet to use any of this outside of my hackathon work or this blog entry, but now that I have resurrected it from my library of coding exploits past to write about, I might just resurrect the original plans I had too. Whether that happens or not, I hope you found this useful or at least a little interesting; if so, or if you have some suggestions related to this post, please let me know in the comments.


  1. Sometimes, like a squirrel, you want to know which branch you were on 

  2. I had looked at NuGet packages when I was working on the original hackathon project, but had decided not to use one for some reason or another (perhaps the available packages did not do everything I wanted at that time)  

  3. PowerShell could be a viable replacement for my initial approach, but it would suffer from the same issue of needing git on the build system; by using a NuGet package, the build includes everything it needs 

LINQ: Notation, Syntax, and Snags

Welcome to the final post in my four-part series on LINQ. So far, we've talked about deferred execution and the query chain, among other things.

For our last look into LINQ (at least for this mini-series), I want to tackle the mini-war of "dot notation" versus "query syntax", and look at some of the pitfalls that can be avoided by using LINQ responsibly.

Let Battle Commence…

Anyone who has written LINQ using C# (or VB.NET) is probably aware that there is more than one way to express a query (only two of which sane people might use):

  1. Old school static method calls
  2. Method syntax
  3. Query syntax

No one in their right mind should be using the first of these options; extension methods were invented to alleviate the pain that would be caused by writing LINQ this way1. Extension methods, static methods that can be called as though they were member methods, are the reason we have the second option of method syntax (more commonly known as dot notation or fluent notation). The final option, query syntax, is "syntactic sugar": a handful of language keywords that can make queries easier to write. These keywords map to concepts found in the LINQ methods, and query syntax is what gives LINQ its name: Language INtegrated Query2.
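To make the three options concrete, here is a trivial sketch of the same query written each way (just doubling some numbers; the variable names are mine):

    using System.Collections.Generic;
    using System.Linq;

    int[] numbers = { 1, 2, 3, 4, 5 };

    // Option 1: plain static method calls on Enumerable
    IEnumerable<int> doubledStatic = Enumerable.Select(numbers, n => n * 2);

    // Option 2: the same call written against the extension method (dot notation)
    IEnumerable<int> doubledFluent = numbers.Select(n => n * 2);

    // Option 3: query syntax, which the compiler translates into the call above
    IEnumerable<int> doubledQuery = from n in numbers select n * 2;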

They all map to the same thing: a sequence of method calls that are either executed directly or translated into an expression tree, evaluated by a LINQ provider, and then executed. Anything written in one of these approaches can be written using the others. There is often contention over whether to use dot notation or query syntax, as if one were inherently better than the other, but as we all know, only the Sith deal in absolutes3. Hopefully, by the end of these examples you will see how each has its merits.

Why are LINQ queries not always called like regular methods?

Because sometimes, such as in LINQ-to-SQL or LINQ-to-Entity Framework, the method calls need to be translated into SQL or some other querying syntax, allowing queries to take advantage of server-side querying optimizations. For a more in-depth look at all things LINQ, including the way the language keywords map to the method calls, I recommend looking at Jon Skeet's Edulinq series, which is available as a handy e-book.

Before we begin, here is a quick summary of the C# keywords we have for writing queries in query syntax: from, group, orderby, let, join, where, and select. There are also contextual keywords to be used in conjunction with one or two of the main keywords: in, into, ascending, descending, by, on, and equals. Each of these keywords has a corresponding equivalent method (or methods) in LINQ, although it can sometimes be a little more complicated, as we shall see.

So, let us look at an example and see how it can be expressed using dot notation and query syntax4: a simple projection of people to their last names.
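As a minimal sketch (the Person type, its properties, and the people collection here are hypothetical, invented purely for illustration), the projection might look like this in each style:

    using System.Collections.Generic;
    using System.Linq;

    public class Person
    {
        public string LastName { get; set; }
        public int BirthYear { get; set; }
        public string Interest { get; set; }
    }

    public static class LinqExamples
    {
        public static void Projection(List<Person> people)
        {
            // Dot notation (method syntax)
            IEnumerable<string> lastNames = people.Select(person => person.LastName);

            // Query syntax
            IEnumerable<string> lastNamesFromQuery = from person in people
                                                     select person.LastName;
        }
    }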

These two queries do the exact same thing, but I find that the dot notation wins out because it takes less typing and it looks clearer. However, if we decide we want to only get the ones that were born before 1980, things look a little more even.
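Continuing with the hypothetical people collection from the sketch above, the filtered versions might look like this:

    // Dot notation
    var earlierLastNames = people
        .Where(person => person.BirthYear < 1980)
        .Select(person => person.LastName);

    // Query syntax
    var earlierLastNamesFromQuery = from person in people
                                    where person.BirthYear < 1980
                                    select person.LastName;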

Here, there is not much difference between them, so I'd probably leave this to personal preference5. However, as soon as we want a distinct list, the dot notation starts to win out again because C# does not contain a distinct keyword (though VB.NET does).
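A sketch of the distinct version; because Distinct() has no query syntax equivalent in C#, dot notation handles it directly:

    var distinctLastNames = people
        .Where(person => person.BirthYear < 1980)
        .Select(person => person.LastName)
        .Distinct();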

Mixing dot notation and query syntax in a single query can look messy, as shown here:
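Something like this, using the same hypothetical collection as before:

    var distinctLastNamesMixed = (from person in people
                                  where person.BirthYear < 1980
                                  select person.LastName).Distinct();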

So, I prefer to settle on just one style of LINQ declaration for any particular query, or to use intermediate variables and separate the query into parts (this is especially useful on complex queries as it also provides clarity; being terse is cool, but it is unnecessary, and a great way to get people to hate you and your code).

The Distinct() method is not the only LINQ method that has no query syntax alternative; there are plenty of others, like Aggregate(), Except(), or Range(). This often means dot notation wins out, or is at least part of a query written in query syntax. So, thus far, dot notation seems to have the advantage in the battle against query syntax. It is starting to look like some of my colleagues are right: query syntax sucks. Even if we use ordering or grouping, dot notation seems to be our friend, or at least is no more painful than query syntax:
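Here are sketches of ordering and grouping over the same hypothetical collection:

    // Ordering: dot notation
    var ordered = people.OrderBy(person => person.LastName)
                        .ThenBy(person => person.BirthYear);

    // Ordering: query syntax
    var orderedFromQuery = from person in people
                           orderby person.LastName, person.BirthYear
                           select person;

    // Grouping: dot notation
    var byYear = people.GroupBy(person => person.BirthYear);

    // Grouping: query syntax
    var byYearFromQuery = from person in people
                          group person by person.BirthYear;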

However, it is not always so easy. What if we want to introduce variables, group something other than the original object, or use more than one source collection? It is in these scenarios that query syntax irons out a lot more of the complexity. Let's assume we have another collection containing newsletters that we need to send out to all our people. To generate the individual mailings, we would need to combine these two collections6.
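A sketch of that combination, assuming a hypothetical Newsletter type and a newsletters collection alongside the people collection:

    public class Newsletter
    {
        public string Title { get; set; }
        public string Topic { get; set; }
    }

    // Dot notation: SelectMany() pairs every person with every newsletter
    var mailings = people.SelectMany(
        person => newsletters,
        (person, newsletter) => new { person, newsletter });

    // Query syntax: a second from clause expresses the same pairing
    var mailingsFromQuery = from person in people
                            from newsletter in newsletters
                            select new { person, newsletter };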

I know which one is clearer to read and easier to remember when I need to write a similar query. The dot notation example makes me think for a minute about what it is doing: projecting each person to the newsletters collection and, using SelectMany(), flattening the result, then selecting one item per person/newsletter combination. Our query syntax example is doing the same thing, but I don't need to think too hard to see that. Query syntax is starting to look useful.

If we were to throw in some mid-query variables (useful to avoid calculating something multiple times or to improve clarity), or join collections, query syntax becomes really useful. What if each newsletter is on a different topic and we only want to send newsletters to people who are interested in that topic?
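A sketch of such a query, still using the hypothetical people and newsletters collections; the Interest and Topic properties, and the greeting variable, are likewise invented for illustration:

    // Query syntax: a let clause and a join keep the intent visible
    var targetedMailings = from person in people
                           let greeting = "Dear " + person.LastName
                           join newsletter in newsletters
                               on person.Interest equals newsletter.Topic
                           select new { greeting, newsletter };

    // Roughly equivalent dot notation: the intermediate values that query
    // syntax creates for us have to be written out by hand
    var targetedMailingsInDots = people
        .Select(person => new { person, greeting = "Dear " + person.LastName })
        .Join(newsletters,
              x => x.person.Interest,
              newsletter => newsletter.Topic,
              (x, newsletter) => new { x.greeting, newsletter });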

I know for sure I would need to go look up how to do that in dot notation7. Query syntax is an easier way to write more complex queries like this, and provided that you understand your query chain, you can write clear, performant queries.

 

In conclusion…

In this post, I have attempted to show how both dot notation (aka fluent notation) and query syntax have their vices and their virtues, and in turn, armed you with the knowledge to choose wisely.

So, think about whether someone can read and maintain what you have written. Break down complex queries into parts. Consider moving some things to lazily evaluated methods. Understand what you are writing; if you look at it and have to think about why it works, it probably needs reworking. Always favour clarity and simplicity over dogma and cleverness; to draw inspiration from Jurassic Park, even though you could, stop to think whether you should.

LINQ is a complex feature of C# and .NET (and all the other .NET languages) and there are many things I have not covered. So, if you have any questions, please leave a comment. If I can't answer it, I will hopefully be able to direct you to someone who can. Alternatively, check out Edulinq by the inimitable Jon Skeet, head over to StackOverflow where there is an Internet of people waiting to help (including Jon Skeet), or get binging (googling, yahooing, altavistaring, whatever…)8.

And that brings us to the end of this series on LINQ. From deferred execution and the query chain to dot notation versus query syntax, I hope that I have managed to paint a favourable picture of LINQ, and helped to clear up some of the prejudices and confusions that surround it. LINQ is a powerful weapon in the arsenal of a .NET programmer; to not use it, would be a waste.


  1. Just the thought of the nested method calls or high number of placeholder variables makes me shudder 

  2. I guess LIQ was too suggestive for Microsoft 

  3. That statement is an absolute, Obi Sith Kenobi 

  4. I am definitely leaving the nested static methods approach to you as an exercise (in futility)

  5. Though if you changed the person variable to p, there is less typing in the query syntax, if that is a metric you are concerned with

  6. Yes, a nested foreach could achieve this simple example, but this is just illustrative, and I'd argue it is cleaner than a foreach approach

  7. That's why I cheated and wrote it in query syntax, then used Resharper to change it to dot notation for me 

  8. Back in my day, it was called searching…grumble grumble