UserEcho is a service employed by the likes of OzCode and SublimeText for collecting and managing customer issues and suggestions, often regarding software features and bugs. It enables users and developers to discuss bugs and ideas, and to respond to frequently asked questions.
Recently, I signed into the OzCode UserEcho site using my Google credentials. UserEcho supports the OpenID identity system, providing a wide range of ways to authenticate. Upon logging in, I was immediately confused: where was the issue I had raised a week or two earlier? I was certain it should be there, but it was not. After a little thought, I realised I may have logged in with the wrong credentials, inadvertently creating a new account. I logged out and then, using my GitHub account to authenticate instead of Google, tried logging back in. Voila! My issue appeared.
For some, this would probably be the end of it, but it bugged me that I now had two accounts. You may think this is no big deal, and you are right, but it was bothering me[1].
Using the dropdown captioned with my name at the top-right of the UserEcho site, I chose User Profile. At the bottom of the subsequent page, I found a table of the OpenID logins used by the account, but no way to edit it. How could I merge my accounts or add new OpenID identities?
After searching around the UserEcho site a bit and trying a few Google searches[2], I was almost ready to contact UserEcho for some help (or just give up), but then I had an idea. If UserEcho was like most sites these days, it probably keyed accounts using a primary email address for the user. So, I checked the two UserEcho accounts I knew I had and confirmed they had different email addresses.
I edited the email address on one of the two accounts to match the other, which triggered UserEcho to send a verification email[3]. I followed its instructions and verified the email address change.
Then I returned to the User Profile screen on OzCode's UserEcho site. At the bottom, below the OpenID table, I was now presented with a message saying there were other accounts with the same email address, along with a Merge button. I clicked the button and, immediately, the table showed both the Google and GitHub logins.
So, there you go. If you have multiple accounts for a UserEcho product site, make sure the email addresses match and that you have verified the email address on each account, then view one and click Merge. Job done.
1. In writing this blog and generating the screenshots, I discovered I actually had three accounts! [↩]
This is just a quick post. There was news today about malicious ads in reputable[1] ad networks that can "surreptitiously hijack" computers. Though Google's ad network, which this site used, was not among those reported to have been exploited, I decided to pull all syndicated advertising from my blog. Google may never be affected by this issue, but I don't want to wait to find out.
I have also updated my cookie and privacy policies to reflect these changes, so please review them.
I did not want to take this action[2], but I feel it is warranted given the seriousness of the possible outcomes. The nature of advertising online needs to change; consumers need confidence that the sites they visit are safe, advertising networks need to vet the ads they syndicate, and browsers need to empower their users. For more information on the malicious ads, I recommend reading the article on Ars Technica; for more information about online advertising and what needs to change, I recommend reading "The ethics of modern web ad-blocking" on Marco.org.
Finally, if anyone out there is interested in sponsoring my humble blog, please let me know. All the best and safe browsing.
Hopefully, those who are regular visitors to this blog[1] have noticed a little speed boost of late. That is because I recently spent several days overhauling its appearance and performance with the intent of making the blog less frustrating and a little more professional. However, my effort turned out to have other pleasant side effects.
I approached the performance issues as I would when developing software: I used data. In fact, it was data that drove me to look at the problem in the first place. Like many websites, this site uses Google Analytics, which allows me to poke around the usage of my site, see which of the many topics I have covered are of interest to people, what search terms bring people here (assuming people allow their search terms to be shared), and how the site is performing on various platforms and browsers. One day, I happened to notice that my page load speeds, especially on mobile platforms, were pretty bad, and that there appeared to be a direct correlation between page load speed and the likelihood that a visitor would view more than one page before leaving[2]. Thankfully, Google provides tips on how to improve a site via its free PageSpeed Insights product. Armed with these tips, I set out to improve things.
Now, in hindsight, I wish I had been far more methodical and documented every step (it would have made for a great little series of blog entries, or at least improved this one), but I did not, so instead I want to summarise some of the tasks I undertook. Hopefully, this will be a useful overview for others who want to tackle performance on their own sites. The main changes I made can be organised into server configuration, site configuration, and content.
The simplest to resolve from a technical perspective was content, although it remains the last to be completed, mainly due to the time involved. It turns out that I got a little lazy when writing some of my original posts and did not compress images as much as I probably should have. The larger an image file is, the longer it takes to download, and this is only amplified on less powerful mobile devices. For new posts, I have been resolving this as I go, using a tool called PNGGauntlet to compress my images as either JPEG or PNG before uploading them to the site. Sadly, for images already uploaded, I could only find plugins that ran on Apache (my WordPress installation is on IIS, for reasons that I might go into another time), would cost a small fortune to process all the images, or had reviews implying the plugin might work great or might just corrupt my entire blog. I decided, for now, to leave things as they are and update images manually as I get the opportunity. Unfortunately, this means it will take a while. Thankfully, the server configuration options helped me out a little.
On the server side, there were two things that helped. The first was to ensure that the server compressed content before sending it to the web browser; this did not help with the images, but it greatly reduced the size of the various text files (HTML, CSS, and JavaScript) that get downloaded to render the site. The second change made a huge difference for repeat visitors: making sure that the server told the browser how long it could cache content before downloading it again. This ensured that repeat visitors to the site would not need to download all the CSS, JS, images, and other assets on every visit.
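For those on a similar IIS setup, both changes can be made in web.config. The following is a minimal sketch rather than my exact configuration; the seven-day cache lifetime is just an illustrative value.

```xml
<!-- web.config: enable response compression and client-side caching in IIS -->
<configuration>
  <system.webServer>
    <!-- Compress both dynamic (PHP/HTML) and static (CSS/JS) responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <staticContent>
      <!-- Let browsers cache static assets for seven days before re-requesting -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```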
With the content and the server configuration modified to improve performance, the next and most important focus was the WordPress site itself. The biggest change was to introduce caching. WordPress generates HTML from PHP code, which takes time, so caching the HTML it produces greatly increases the speed at which pages are available to visitors. A lot of caching solutions for WordPress are developed with Apache deployments in mind; thankfully, I found that with some IIS-specific tweaking, WP Super Cache works great[3].
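The plugin ships Apache rewrite rules for serving its pre-built pages, so the IIS-specific tweaking mostly means translating those into URL Rewrite rules in web.config. The sketch below is a rough translation, assuming the URL Rewrite module is installed; it is a starting point rather than the exact rules I ended up with, and WP Super Cache's own documentation covers the edge cases.

```xml
<rewrite>
  <rules>
    <!-- Serve the pre-generated supercache file directly, bypassing PHP,
         but only for anonymous GET requests with no query string -->
    <rule name="WP Super Cache" stopProcessing="true">
      <match url="^(.*)$" />
      <conditions>
        <add input="{REQUEST_METHOD}" pattern="^GET$" />
        <add input="{QUERY_STRING}" pattern="^$" />
        <add input="{HTTP_COOKIE}" pattern="wordpress_logged_in|comment_author" negate="true" />
        <add input="{DOCUMENT_ROOT}/wp-content/cache/supercache/{HTTP_HOST}/{R:1}/index.html" matchType="IsFile" />
      </conditions>
      <action type="Rewrite" url="wp-content/cache/supercache/{HTTP_HOST}/{R:1}/index.html" />
    </rule>
  </rules>
</rewrite>
```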
At this point, the site was noticeably quicker and almost all the PageSpeed issues were eliminated. To finish off the rest, I added a few plugins and got rid of one as well. I used the Autoptimize plugin to concatenate, minify, compress, and perform other magic on the HTML, CSS, and JS files, which improved download times just a touch more by reducing both the number of files the browser must request and the size of those files. I added JavaScript to Footer, a plugin that moves JavaScript to the page footer so that content appears before the scripts load. I updated the ad code (from Google) to use their latest asynchronous version, and I removed the social media plugin I was using, which was not only causing poor performance but also doing some nasty things with cookies.
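For reference, Google's asynchronous ad tag looks roughly like the following; the publisher and slot IDs here are placeholders, not my real ones.

```html
<!-- Load the ad script asynchronously so it does not block page rendering -->
<script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
<!-- The ad unit itself; data-ad-client and data-ad-slot are placeholder IDs -->
<ins class="adsbygoogle"
     style="display:block"
     data-ad-client="ca-pub-0000000000000000"
     data-ad-slot="0000000000"
     data-ad-format="auto"></ins>
<script>
  // Queue this unit for rendering once the ad script has loaded
  (adsbygoogle = window.adsbygoogle || []).push({});
</script>
```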
Along this journey of optimizing my site, I also took the opportunity to tidy up the layout, audit the cookies that are used, improve the way advertisers can target my ads, and add a sitemap generator to improve some of the ways Google (and other search engines) can crawl the site[4]. In all, it took about five days to get everything up and running in my spare time.
So, was it worth it?
From my perspective, it was definitely worth it (please let me know your perspective in the comments). The image above shows the average page load, server response, and page download times before the changes (January through April, top row) and after the changes (June, bottom row). While the page download time has only decreased slightly, the other measures show a large improvement. Though I cannot tell for certain which changes were specifically responsible (nor what role, if any, the posts I have been writing have played[5]), I have not only seen the speed improve but have also seen roughly a 50-70% increase in visitors (especially from Russia, for some reason), a three-fold increase in ad revenue[6], and a small decrease in Bounce Rate, among other changes.
I highly recommend taking the time to look at performance for your own blog. While there are still things that, if addressed, could improve mine (such as hosting on a dedicated server), and some of PageSpeed's suggestions are outside of my control, I am very pleased with where I am right now. As so often before in my life, this has led me to the inevitable thought, "what if I had done this sooner?"
2. The percentage of visitors who leave after viewing only one page is known as the Bounce Rate [↩]
3. Provided you don't do things like enabling compression in both WP Super Cache and IIS at the same time. This took me a while to understand: the browser will only strip away one layer of compression, so all it sees is garbled nonsense. [↩]
4. Some of these things I might blog about another time if there is interest (the cookie audit was an interesting journey of its own). [↩]
5. Though I possibly could with some deeper use of Google Analytics [↩]
6. If that is sustained, I will be able to pay for the hosting of my blog from ad revenue for the first time [↩]
Google provides some extremely useful online tools that many of us have come to rely on. From Sheets, Slides, and Docs, to Gmail, Maps, and Keep, the advertising giant tends to cover all the angles. The most recent Google tool that I have started using is an improvement over an old service called My Places that started out as a part of Google Maps. It is called My Maps and provides users with the means to build custom annotated maps. Any maps you create can then be embedded into websites or shared via email and social media[1].
When first visiting the site, you are offered an option to either create a new map or open an existing map. Any places you had in My Places are already transferred to My Maps and available to open if you wish.
Creating a new map presents you with a familiar Google Maps-style view, but with additional overlays for editing the map. These sit together at the top-left in two distinct groups. The first is the map structure, where you can view and edit the name of the map and its layers, add new layers, and adjust the appearance of the base map layer. When I tried this, there were nine different base maps available; from left to right, top to bottom, these are Map, Satellite, Terrain, Light Political, Mono City, Simple Atlas, Light Landmass, Dark Landmass, and Whitewater.
To edit the map name, map description, or layer name, just click the text.
Below the description are options to add a new layer and to share the map. There is also a dropdown that provides options to open or create a map, and to delete, export, embed, or print the current map. Below that, each layer is shown. The layer dropdown can be used to rename or delete the layer, as well as to view a data table of items on that layer. The data table allows you to add additional information about the various things that have been added to the map (two columns, for name and description, are provided by default, but more can be added).
Next to the map structure is the toolbox. The toolbox contains a search bar, allowing you to find the area of the map on which to base your customizations. Below the search bar are buttons to undo, redo, select and manipulate, add places, draw lines, add directions, and measure distances and areas. Using these tools, you can build up map layers. When building the Stonehenge map for my blog post on our trip there, I was able not only to search for and mark existing places, but also to add custom places. Each item added goes into the selected map layer, which is indicated by a colored bar on the left edge of the layer in the map structure. Clicking a layer changes which layer is being edited.
The appearance of each item in a layer can be modified, either as a group, by manipulating the styles option at the top of the layer, or individually, by clicking the paint can icon on a specific item. By editing the layer style, you can also choose which column in the data table for that layer provides the text label for items in that layer, and you can style items based on data in the data table (useful for representing data on the map). There is a lot of scope in this area, so I recommend playing around and seeing what works for your specific use case.
Once you are happy with the map you have created, you can share it, export it to KML (for use in Google Earth and other apps that support KML), or embed it in a website. The main share options will be familiar to anyone who has shared a document from Sheets, Slides, or Docs, allowing you to share a link to the map as well as control who can edit and view it. If you want to embed the map in a website, an embed code is provided via the map menu; however, as the site will tell you, you need to make the map public before you can embed it.
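The embed code itself is just an iframe pointing at the map's public ID. The snippet below shows the general shape; the mid value is a placeholder, not a real map ID.

```html
<!-- My Maps embed; replace the placeholder mid value with your map's ID -->
<iframe src="https://www.google.com/maps/d/embed?mid=YOUR_MAP_ID"
        width="640" height="480"></iframe>
```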
All in all, I found My Maps a pleasant discovery and really nice to use. The styling options and ability to add additional data allow for some impressive customization. I am certainly going to use this application more in the future. How about you? Leave a comment and let me know your experiences with this new addition to Google's collection of online applications, or perhaps add details of alternatives that are out there.
1. Those planning to use Google My Maps for commercial purposes should review Google's permissions and license terms before proceeding [↩]