πŸ— Creating An Express Server

Photo by Kelly Sikkema on Unsplash

This is part 4 of my series on server-side rendering (SSR):

  1. πŸ€·πŸ»β€β™‚οΈ What is server-side rendering (SSR)?
  2. ✨ Creating A React App
  3. 🎨 Architecting a privacy-aware render server
  4. [You are here] πŸ— Creating An Express Server
  5. πŸ–₯ Our first server-side render
  6. πŸ– Combining React Client and Render Server for SSR
  7. ⚑️ Static Router, Static Assets, Serving A Server-side Rendered Site
  8. πŸ’§ Hydration and Server-side Rendering
  9. 🦟 Debugging and fixing hydration issues
  10. πŸ›‘ React Hydration Error Indicator
  11. πŸ§‘πŸΎβ€πŸŽ¨ Render Gateway: A Multi-use Render Server

Over the previous three posts in this series we have described what server-side rendering (SSR) is, created a simple application using React, and discussed the architecture of a privacy-aware server to ensure we understand some of the sharp edges around SSR. Now, we will actually implement a basic server. Just as with the React application we created, the server we create will not be a complete solution, but it will provide a foundation from which we can continue to explore SSR.

✨ A New Project

Where do we start? Well, we need a server that can receive web requests and respond to them. For that, I am going to use Express[1], but first I need a project.

NOTE: Where you see yarn, know that you can use your own package manager as you see fit.

  1. Add a new repository on GitHub (or your source control platform of choice)
  2. Make a new folder locally for your code
  3. cd to your new folder and run git init
  4. git remote add origin <your github repo URL>
  5. git pull origin master
  6. git branch --set-upstream-to=origin/master
  7. Create and commit a .gitignore file
  8. Initialize the project for JavaScript package management with yarn init
  9. Run yarn install to generate the lock file
  10. Commit the yarn.lock and package.json to the git repository
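
For reference, steps 3 through 10 look something like this in a terminal (the folder name and commit message are just examples, and I assume you have created your .gitignore; substitute your own repository URL):

cd my-render-server
git init
git remote add origin <your github repo URL>
git pull origin master
git branch --set-upstream-to=origin/master
yarn init
yarn install
git add .gitignore package.json yarn.lock
git commit -m "Initial project setup"
git push origin master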

Great, so now we have a project we can start working on. Let's add Express.

yarn add express

This should update our package.json and yarn.lock, so don't forget to commit those changes. I also recommend pushing often to your remote repository; that way, your code is backed up online in case your computer suffers a nasty accident[2].

πŸ‘‹πŸ» Hello World!

At this point we need to write some code. We need to set up a route for our server that can provide a rendered result for any URL our application might have. There are a couple of ways we could do this:

  1. Assuming that our server is invoked by some intermediate layer, such as a cache, we could have the server implement a single route (e.g. /render) and pass the URL to be rendered as a query parameter.
  2. Our server could assume the URL is to be rendered by the client code and just accept any URL.

Option 1 gives us a great deal of flexibility in what our server can do, but it forces us to ensure that there is a layer between the original browser request and our server, as something has to be responsible for constructing the appropriate /render route request. Option 2 removes the need for an intermediate layer, but it perhaps restricts us from expanding server functionality. Of course, option 2 can be changed to option 1 if the need arises, so we can go with option 2 for now, knowing that it can be updated later to suit changing needs.
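
To illustrate, option 1 might look something like this sketch; the /render route and its url query parameter follow the idea described above, while the response body is just a placeholder:

const express = require("express");

const app = express();

// Option 1: an intermediate layer (such as a cache) requests
// /render?url=<path> and we render whatever URL it specifies.
app.get("/render", (req, res) => {
    const url = req.query.url;
    if (!url) {
        return res.status(400).send("Missing url query parameter");
    }

    // A real implementation would render the application for the given URL.
    res.send(`A render of ${url} would go here`);
});

app.listen(3000);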

Normally, I would add lots of other things to this server to improve development and runtime investigations, such as linters, testing, and logging, but for the sake of brevity, right now we will stick to the main functionality.

const express = require("express");

const port = 3000;
const app = express();

app.get("/*", (req, res) => res.send("Hello World!"));

app.listen(port, () => console.log(`Example app listening on port ${port}!`));

This is our index.js file. It is not doing a lot. On line 4, we create our express app. On line 6, we tell it that for any route matching /*, return Hello World!. On line 8, we tell it to listen for requests on port 3000[3].

If we run this app with node index.js, we can go to our browser, visit any route starting with localhost:3000 and see the text, Hello World!. This is fantastic. We have a server and it is responding as we hope. Since we are going to run this often as we make changes, I will add a script to our package.json to run node index.js for us.

{
  "name": "hello-react-world-ssr",
  "version": "0.0.1",
  "description": "A server-side rendering server",
  "main": "index.js",
  "license": "MIT",
  "dependencies": {
    "express": "^4.17.1"
  },
  "scripts": {
    "start": "node index.js"
  }
}

In the package.json file shown above, the scripts section is the part I added; it contains the new start command. From now on, we can start our app with yarn start. The next step is getting our server to render our React application. Before we do that, consider these questions:

  1. How does the server know about and load the code for our React application?
  2. How does the server get the rendered result to send back?
  3. How do we isolate render requests to avoid side-effects bleeding across requests?

πŸ€” The Hows

The answers to the first two questions have implications beyond the server itself, possibly influencing both our client application and any deployment process.

How our server knows about and loads our client application may affect how our server is deployed. Some server-side rendering solutions involve deploying the client-side code with the server so that it has direct access to the appropriate code; others use a mechanism such as a manifest lookup to identify the files to load from a separate location (such as a content delivery network (CDN)). Neither of these is necessarily a bad choice – they both have their advantages and disadvantages. For example, deploying the server with the right code means:

  • βœ…The server has fast access to the client application it is rendering
  • βœ…The server can integrate nicely with the client application
  • ❌The render server must be deployed every time the client application changes
  • ❌The server is closely coupled to the client application

Whereas looking up files in a manifest and loading them from elsewhere means:

  • βœ…The render server rarely requires updating
  • βœ…The server can render more than one application
  • ❌The server will probably need to cache JavaScript files locally or be at the mercy of latency when communicating with the CDN
  • ❌The client applications that the server renders likely need to include custom code to support being rendered by that server

Being aware of how these approaches differ (and they differ in more than just the ways I have suggested) is useful in understanding the trade-offs we must make when implementing our render server. Perhaps answering the second question will help us decide which route to take; consider: how will our server get a rendered result of the client application?

Our server is going to invoke a call from the React framework that renders our React application to a string, rather than mounting it inside the DOM of a browser. To do that, it needs a React component to render, so it must load our client application and get the root-level component. In addition, assuming our render server is rendering the entire page and not just the React component, the server is likely going to need to gather additional information, such as which files must be loaded in the page, the page title, etc.
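
The call in question is renderToString from React's react-dom/server package. A minimal sketch, assuming a hypothetical App root component exported by our client code:

const React = require("react");
const { renderToString } = require("react-dom/server");

// Hypothetical root component from our client application.
const App = require("./App");

// Renders the component tree to an HTML string; no browser DOM required.
const markup = renderToString(React.createElement(App));
console.log(markup);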

This whole process of capturing the application render and associated metadata requires interplay between the server code and the client code. Revisiting the first question and the two approaches I gave: if the server has the client code deployed with it, the server could know exactly which files to load to render the component, importing those directly and using them accordingly; if the server is less closely coupled, we likely need some mechanism whereby the client application itself does more of the heavy lifting by hooking into some framework provided by the server, even if that is just exporting a specific object so that the server can identify the appropriate things to coordinate rendering.
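
For example (and this shape is purely my assumption, not a real contract from this series), the client application might export a single object that the render server knows to look for:

// Hypothetical contract between a client application and the render server.
const App = require("./App"); // the client application's root component

module.exports = {
    component: App, // what the server should render
    pageTitle: "Hello React World",
    scripts: ["/static/main.js"], // files the rendered page must load in the browser
};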

Ultimately, either we have a server that is custom built to our application, or we have a server that is built to support many applications. What to do? I say, dive in and try them both. To that end, next time we will look at the first option where the server knows all about the client application (though we may cut some corners to get to the salient points), and we will answer that third question: how do we isolate our renders?

πŸ™‡πŸ»β€β™‚οΈ In Conclusion

Herein we have created our server, though it does not do much yet. We have also considered two different approaches to connect our server to our client application: closely-coupled or more open, and we have started to think about how the server will isolate and respond to render requests.

This week's entry turned out a little longer than I had intended, and covered fewer things than I had hoped. Sometimes that is the way it goes. One of the biggest reasons I write these blogs is to discover what I do and do not know about something. Often in the effort of explaining it to someone else, I identify a bias that I have without any supporting evidence, or a topic I grasp that is far harder to explain than I expected.

Until next time, when we start to implement our server-side rendering, please leave a comment. Perhaps you have a question, a personal experience writing a render server, or want to take umbrage at something I have stated. I look forward to learning with you as we continue this journey into the land of SSR. πŸ—Ί

  1. I find Express easy enough to use and well-supported, though there are other options that one could use instead if one were so inclined
  2. A lesson from bitter experience: hard drives die without warning (especially SSDs), drinks spill, laptops get dropped – keep your work backed up
  3. The port is currently hard-coded for simplicity, but we could make this configurable

Hackery: Line following bots at CodeMash

NodeBots @ CodeMash

CodeMash 2.0.1.5, the latest installment of the popular community-organised conference, is fast approaching. This time, I will be attending with several of my colleagues from CareEvolution, which is sponsoring the NodeBot precompiler sessions. One such colleague and good friend, Brian Genisio (also a co-organiser of the Southeast Michigan JavaScript group, more commonly known as SEMjs), has been working night and day for months to prepare for each of the two epic software and hardware hacking events that will be the NodeBot precompilers. Though I and a few other friends (many of whom you can meet in person at CodeMash) have assisted Brian over the last few weeks, the success of this event really is down to his vision and commitment. From creating documentation to submitting Johnny Five pull requests[1], ordering components to building kits, Brian's efforts have been considerable; if you join us to hack NodeBots (and you really should), be sure to take a moment and show him your gratitude.

My biggest contribution to the NodeBots preparation was to organize and take part in a hack day at work where Brian, a few colleagues (Brandon Charnesky, Greg Weaver, and Kyle Neumeier), John Chapman (another co-organizer of SEMjs and the NodeBots precompilers), and I could test and finalize kits and components, review and update documentation, and give some of the challenges and components a dry run in the process. Participants at CodeMash will be able to take part in one of two competitions with their NodeBots: a sumo-inspired Battle Bots competition where bots can compete for supremacy in the ring, or a line racing time trial where bots must follow a track in the fastest time[2]. My main efforts during the hack day were to create a sample line-following bot and provide some example code as a starting point for our precompiler hackers. The examples for both the basic line follower and basic sumo bot, as well as some other examples for specific components, can be found on GitHub in the CodeMash NodeBots Docs repository. Instructions on getting started are available on the official CodeMash NodeBots website.

Healthcare and NodeBots?

CareEvolution logo

Some of you may have been wondering: "why would a healthcare IT company like CareEvolution choose to sponsor an event hacking robots?" If you would like to know more, please come to our vendor session at CodeMash (2 p.m. on Thursday, January 8) where I will be presenting "We're Not All Robots: Hacking NodeBots, Healthcare, and the Workplace".

The Line-Following Hardware

Before hacking the code, I needed to work out how the hardware worked and build my bot. I started out with the IR (infrared) reflectance array component: an array of six IR emitters and corresponding receivers that will be the eyes that see the line.

IR array and cable

In the image above, you can see the front of the array as well as the cable to attach the array to the controller (we are using Arduino Uno clones for the precompilers). Using the pins already attached[3], I connected the array to the board.

Rear of array showing attached pins

Wiring diagram of reflectance array connected to the controller

In the wiring diagram above, you can see each of the six analog pins on the Arduino going to one of the output pins (labelled 1-6) on the reflectance array[4]. Pin 13 of the Arduino has been connected to the LED ON pin of the reflectance array, which is used to activate the infrared LEDs.

With everything connected, I used the usage code from the Johnny Five documentation to create a quick tester and verify that I was able to receive output from my reflectance array.

var five = require("johnny-five"),
    board = new five.Board();
 
board.on("ready", function() {
  
  var eyes = new five.IR.Reflect.Array({
    emitter: 13,
    pins: ["A0", "A1", "A2", "A3", "A4", "A5"]
  });
 
  eyes.on('line', function(err, lineValue) {
    console.log( "Line Position: ", lineValue);
  });
 
  eyes.enable();
});

After verifying the reflectance array was wired and working, I followed the reference kit build instructions to create a robot chassis on which I could mount the reflectance array.

Reference bot

I then mounted the array at the front, near the wheels, using some padded double-sided tape (the array must be within a quarter of an inch of the line, so a little padding was required). To avoid confusion, the array was oriented so that its left (pin 1, according to the documentation) was also the bot's left (assuming the wheels are the front of the bot).

Reflectance array mounted at the front of the bot. Pin 1 is on the right in this picture (the bot's left).

The Line-Following Software

With the bot constructed, I needed to tell it what to do. My aim was not to create the best line-following bot ever (that is a task that possibly awaits you at CodeMash), I merely wanted to make something that demonstrates the basic concepts.

The first thing that the bot needs to do is to "see". Although we had a little code to check that the array worked, we had not actually calibrated it. Calibration allows us to show the array the extremes it is to understand, i.e. the materials that represent the existence and non-existence of a line. Thankfully, the Johnny Five driver for the reflectance array makes calibration easy with the calibrateUntil function.

var five = require("johnny-five");
var board = new five.Board();

var readline = require("readline");

var stdin = process.stdin;
readline.emitKeypressEvents(stdin); // make stdin raise "keypress" events (required on newer Node versions)
stdin.setRawMode(true);
stdin.resume();

board.on("ready", function () {
    var eyes = new five.IR.Reflect.Array({
        emitter: 13,
        pins: ["A0", "A1", "A2", "A3", "A4", "A5"]
    });
    
    var calibrating = true;
    
    // Start calibration
    // All sensors need to see the extremes so they can understand what a line is,
    // so move the eyes over the materials that represent lines and not lines during calibration.
    eyes.calibrateUntil(function () { return !calibrating; });
    console.log("Press the spacebar to end calibration...");
    
    stdin.on("keypress", function(chunk, key) {
        if (!key || key.name !== 'space') return;
        
        calibrating = false;
    });

    eyes.on("line", function(err, line) {
        console.log(line);
    });
    
    eyes.enable();
});

In my updated code, I also added keyboard input capture so that the calibration mode could be exited via the space bar. Running this with my bot, I was able to drag a piece of paper with a thick black electrical tape line under the array and calibrate it. After calibration, I could see from the console output that my bot recognised the line and in which direction it had last seen it[5].

Next, I needed to be able to move the bot based on the line position. For this, I added some simple wheel commands and thresholds. The code is shown below.

var five = require("johnny-five");
var board = new five.Board();

var readline = require("readline");

var stdin = process.stdin;
readline.emitKeypressEvents(stdin); // make stdin raise "keypress" events (required on newer Node versions)
stdin.setRawMode(true);
stdin.resume();

board.on("ready", function () {
    var wheels = {
        left: new five.Servo({ pin: 9, type: 'continuous' }),
        right: new five.Servo({ pin: 10, type: 'continuous' }),
        stop: function () {
            wheels.left.center();
            wheels.right.center();
        },
        forward: function () {
            wheels.left.ccw();
            wheels.right.cw();
            console.log("goForward");
        },
        pivotLeft: function () {
            wheels.left.center();
            wheels.right.cw();
            console.log("turnLeft");
        },
        pivotRight: function () {
            wheels.left.ccw();
            wheels.right.center();
            console.log("turnRight");
        }
    };
    
    var eyes = new five.IR.Reflect.Array({
        emitter: 13,
        pins: ["A0", "A1", "A2", "A3", "A4", "A5"]
    });
    
    var calibrating = true;
    var running = false;

    wheels.stop();
    
    // Start calibration
    // All sensors need to see the extremes so they can understand what a line is,
    // so move the eyes over the materials that represent lines and not lines during calibration.
    eyes.calibrateUntil(function () { return !calibrating; });
    console.log("Press the spacebar to end calibration and start running...");
    
    stdin.on("keypress", function(chunk, key) {
        if (!key || key.name !== 'space') return;
        
        calibrating = false;
        running = !running;
        
        if (!running) {
            wheels.stop();
            console.log("Stopped running. Press the spacebar to start again...")
        }
    });

    eyes.on("line", function(err, line) {
        if(!running) return;
    
        if (line < 1000) {
            wheels.pivotLeft();
        } else if (line > 4000) {
            wheels.pivotRight();
        } else {
            wheels.forward();
        }
        console.log(line);
    });
    
    eyes.enable();
});

The first thing I added was a wheels object to encapsulate the motor controls. Movement is provided by two continuous servos attached to pins 9 and 10. After defining left and right servos, I created the following methods:

  • forward
    Both servos turning such that they rotate toward the front of the bot
  • pivotLeft
    The left servo rotates in reverse while the right servo rotates forward
  • pivotRight
    The right servo rotates in reverse while the left servo rotates forward
  • stop
    Both servos stop moving

Next, I made sure that stop() was called on startup to ensure the bot was not wandering around aimlessly. I then updated the space bar handling to act as a toggle that, on first use, stopped calibration and started the bot on its line following quest, but on subsequent uses merely stopped or started the line following. Finally, I added some thresholds to the line event handler to determine when the bot should drive forward and when it should pivot in either direction based on the value sent from the array.

And with that, my simple line-following robot was complete. It does a fair job at following a course, but it is in need of fine tuning if it is to win any races. Perhaps you will be up to the task when you take part in the CareEvolution-sponsored NodeBots precompilers at CodeMash 2.0.1.5. If you wish to take part in our hacking extravaganza, you will need to register, so be sure to reserve your spot.

  1. which earned Brian the privilege of becoming a core committer
  2. of course, you don't have to compete in either; you can just hack
  3. thanks to the efforts of John Chapman, no one will need to solder pins to the reflectance arrays
  4. pins 7 and 8 are unused as the reflectance sensors for those pins have been separated from the component
  5. the `line` event from the array uses 0 to mean the line was last seen to the left and 5001 to mean it was last seen to the right; any value between 1 and 5000 means the line is under the array, with the value indicating its position

Controlling a bot using node.js and express

Last week was our work hackathon. During these events we get to spend a day hacking around with something fun, whether it is work related or not. Thanks to my friend and colleague, Brian Genisio, this time around we got to tinker with hardware and build some bots.

Using node.js, johnny-five, an Arduino Uno board and a bunch of additional components, teams created their own sumo bots. At the end of the day, we competed to see who had the best bot. Ours was the only bot that walked instead of using wheels, and we were confident our design could have won. Unfortunately, we faced some technical difficulties and a couple of design issues that prevented us from achieving our full potential. You can see our bot (it's the large gold one that lumbers in from the bottom) take on all the others in this video and slowly start pushing them all out of the way.

http://youtu.be/pW6t5qfsc4g

As I am sure you can tell from the audio, this was a thoroughly enjoyable and highly competitive hackathon. There were a variety of problems to address as we developed our bots. Some of them were unique to the bot being created; others were common to all. One such problem was how to control the bot. Regardless of how the signal got to the Arduino board (Bluetooth, RF and USB were available), we had to command our bots to move forwards, backwards, left and right (and in some cases, to deploy an extensive range of weaponry and distractions).

After some trial and error, I settled on using a simple web server and web page front-end that made API calls to the server. The server would then map these API calls to bot controls. This provided a way for us to use mouse, keyboard and touch input to control our electronic sumo minion. You can see the very basic user interface[1] in this Vine that I took during our build.

https://vine.co/v/Ounjjiu6Br5

Using AngularJS, I connected the buttons in the web page to API calls. By clicking buttons in the web page, using the numpad or WASD keys, or touching the screen of my laptop, we could control the robot. The API itself was implemented using the Express package in node.
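
As a rough sketch (this is not our actual hackathon code; the module and controller names are made up), the AngularJS side might have looked something like this:

angular.module("botApp", []).controller("DriveCtrl", function ($scope, $http) {
    // Each button (or key handler) in the page binds to one of these methods,
    // which call the bot API exposed by our express server.
    $scope.forward = function () {
        $http.post("/move", null, { params: { direction: "forward", rate: 0.5 } });
    };

    $scope.rotate = function (direction) {
        $http.post("/rotate", null, { params: { direction: direction, rate: 0.5 } });
    };

    $scope.stop = function () {
        $http.post("/stop");
    };
});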

Express

I installed express into our node application, using npm:

npm install express

Then I added express to our bot code and defined a simple API to process web requests:

var express = require("express");

var app = express();

app.post('/move', doMove);
app.post('/rotate', doRotate);
app.post('/stop', doStop);

app.use(express.static(__dirname + '/public'));

app.listen(4242);

This snippet of code has been edited down to show the pertinent details; you can view the real code on GitHub. First, we require the express module, then we use it to create our server app. The three calls to post set up our three API methods and the handlers for those methods. Using the post method defines these as POST endpoints; we could have used put, get or delete if it were appropriate. The use call sets up static file serving so that requests for static pages are satisfied from our public directory. Finally, we tell the app to listen on port 4242.

Each request that matches one of the three calls I have set up will be sent to the appropriate handling method. These handlers each take a request object and a response object, which they can use to get additional information about the request and craft an appropriate response.

Here is an implementation of the doRotate method:

function doRotate(req, res) {
    // req.param() looks up a value from the route, body, or query string
    // (later deprecated in Express 4 in favor of req.query and req.body)
    var direction = req.param('direction');
    var rate = req.param('rate');
    drive.rotate(direction, rate);
    res.send();
}

In this handler, we get the direction and rate parameters from the request and pass them to the code that does the real work. At the end, we respond to the request. We could provide data in our response or even send an error if we wanted.
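
For example, a handler could validate its input and report a failure instead (a sketch; the accepted direction values and the 400 response are my assumptions, not our hackathon code):

function doRotate(req, res) {
    var direction = req.param('direction');
    var rate = Number(req.param('rate'));

    if (direction !== 'left' && direction !== 'right') {
        res.status(400).send({ error: "direction must be 'left' or 'right'" });
        return;
    }

    drive.rotate(direction, rate);
    res.send();
}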

This allowed me to host a local website and API for controlling our bot. It was that simple.

Conclusion

Hacking a robot using node.js was a great way to delve into a new facet of JavaScript programming: hacking hardware. Not only that, but it allowed me to discover some of the cooler things that can be done quickly and easily using node.js, such as setting up a web server using express.

Have you hacked a robot with node? How did you implement control? Please leave a comment with your experience or any questions you may have. And if you are interested in hacking a bot of your own, watch this space.

  1. and an early prototype of our robot