Technical

Now serving over http/2

http/2 is well and truly here, and is even partially supported by Internet Explorer 11. Check out the Can I Use page for http/2. It’s time to stop bundling your JavaScript and stylesheets into single files without consideration.

Web server support for http/2 is also good, with NGINX having supported it in its core install for a few years now.

A big http/2 advantage is that you will not run into http 1.1’s limits on the maximum number of parallel connections to a single domain. These vary between browsers, with Chrome supporting 6 connections by default (it’s worth noting that each browser install can be manually configured to change this number, although I doubt many people do).

The http 1.1 way

Let’s have a think about what happens when we request an imaginary webpage – edspencer.me.uk/test-page.html – over http 1.1.

Looking at a browser that supports a maximum of 6 connections at a time, imagine our test-page.html references 9 external assets in its head, all hosted on the same domain.

  • edspencer.me.uk/stylesheet1.css
  • edspencer.me.uk/stylesheet2.css
  • edspencer.me.uk/stylesheet3.css
  • edspencer.me.uk/scripts1.js
  • edspencer.me.uk/scripts2.js
  • edspencer.me.uk/scripts3.js
  • edspencer.me.uk/image1.png
  • edspencer.me.uk/image2.png
  • edspencer.me.uk/font1.woff2

What is going to happen here? Well, assuming that our cache is empty, the first 6 referenced files will be downloaded straight away. Anything after the first 6 will be queued until one of the 6 download slots frees up, which happens when a download completes.

A simplified analogy would be that you’re queuing at a checkout, and there are 6 tills staffed by 6 operators. A maximum of 6 people can be served at the same time.

This also applies to Ajax requests to the same domain, which must form an orderly queue of their own, with a maximum of 6 going over the wire at the same time.

There were a few workarounds for this in the http 1.1 world. One was to combine your assets into bundles, so in our example above, our 3 stylesheets become a single stylesheet and our 3 JavaScript files become a single .js file. This reduces the number of request slots needed by 4.

Another way would be to serve your assets from different domains in order to bypass the 6 connections per domain limit. For example, having your images served from images.edspencer.me.uk and your stylesheets from styles.edspencer.me.uk.

Both of these techniques worked well under http 1.1, but had their downsides.

Downside 1 – Serving styles and JavaScript for entire areas of a website that a client may never access

Imagine I have a whole section in my web application that is only for users with admin access. If I’m bundling all application styles and scripts into two respective files, I’m burdening the clients that will never access the admin tools with the code needed to run them. Their experience of my website would improve if I served them a smaller set of assets that excluded the admin code they will never use.

Downside 2 – Maintaining web infrastructure for serving from multiple subdomains

Setting up subdomains requires web server and DNS configuration. I also then need to work out how I’m going to get the web application’s static assets onto their relevant subdomains. It’s a lot of effort.

The http/2 way

With http/2, you don’t need to bundle any more; you can instead split and serve your web app as multiple files without worrying about blocking one of those limited download slots. This is largely because of improvements in the protocol’s transport: http/2 multiplexes requests as streams over a single connection, and the specification recommends that servers allow at least 100 concurrent streams.

Splitting your bundles more logically, instead of bundling everything into one file, will result in more, smaller bundles being sent to the client. For example, you could have a bundle that contains the JavaScript for the admin pages of a web app, and it’ll only get served to the client should they land on an admin page.

If someone visits your web app and only lands on the homepage, you don’t need to serve them with the code needed to run the admin pages of your web app.
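To make that concrete, here’s a minimal sketch of the idea using a dynamic import() – the ./admin.js module and its init function are hypothetical names:

// Only fetch the admin bundle when the client actually lands on an admin page
// (a sketch – './admin.js' and init() are hypothetical)
if (window.location.pathname.startsWith('/admin')) {
  import('./admin.js')
    .then(admin => admin.init())
    .catch(err => console.error('Failed to load admin bundle', err));
}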

Enabling http/2 on NGINX

Setting up http/2 on NGINX is trivial; the only change needed is the inclusion of “http2” in the listen directive for your site:

server {
  listen 443 ssl http2;
  ...
}

All you then need to do is restart NGINX, and you’re good to go. You can test this by taking a peek at the network tab in the developer tools of your preferred browser and noting the “h2” in the Protocol column.
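If you’d rather verify from the terminal, curl can report the negotiated protocol version – assuming your curl build has http/2 support:

$ curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://edspencer.me.uk

This prints 2 when the request went over http/2.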

The gains

This is a WordPress blog, and whilst I’ve taken steps to improve its performance, I have struggled to get down the raw number of assets the site needs. After enabling http/2, I saw an immediate and significant performance score improvement from Google Lighthouse.

I’ll be migrating this blog to a headless CMS soon which will give me much more control and will hopefully give me a nice score of 100! Watch this space.

Notes

  • http/2 is only supported by browsers over SSL/TLS
  • AWS S3 does not support http/2. You will need to put CloudFront in front of it in order to get http/2 support
  • Google Cloud CDN supports http/2 out of the box, without any additional services
Babel

What Babel 7 does and doesn’t do

Babel 7 will not transpile built-in objects and functions out of the box – for example Promise, Symbol, Map and Set, or methods such as Object.assign and Array.from.

If you want to use any of these, you should either bring in your own polyfills for what you need, or you should include core-js with Babel.

Babel 7 will transpile syntax and keyword features out of the box – for example arrow functions, classes, let and const, destructuring, template literals, and the object spread operator.

There are plugins for each of these, but they do not appear to be necessary. It looks like Babel is trying to steer towards transpiling syntax and keyword features, and not functions or objects, which need to be polyfilled.
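If you do need the polyfilled features, a minimal setup might look something like the sketch below – it assumes @babel/preset-env and core-js 3 are installed:

// babel.config.js – a minimal sketch, assuming @babel/preset-env with core-js 3
module.exports = {
  presets: [
    ['@babel/preset-env', {
      useBuiltIns: 'usage', // inject only the core-js polyfills the code actually uses
      corejs: 3
    }]
  ]
};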

Babel, JavaScript

Running Babel should be one of the last stages of your front end build

After recently upgrading from Babel 6 to 7.4 for a project, I decided to check over the outputted file to make sure all was well.

I started by looking for the basics – was const getting changed to var? Yes it was. Were ES6 classes being converted to functions? Yes they were. And were object spreads being transpiled down? Yes they were, but this was now being done in a different way to Babel 6.

In Babel 7.4, using an object spread operator will result in Babel injecting a polyfill function called _objectSpread at the top of the outputted JavaScript file. This function will then be called wherever you use an object spread, e.g.:

Input:

function someOtherTest () {
  const p1 = {
    name: 'p1'
  };
  
  const combinedP1 = {
    height: 100,
    ...p1
  }
}

Results in the following, after being run through Babel:

function ownKeys(object, enumerableOnly) { var keys = Object.keys(object); if (Object.getOwnPropertySymbols) { var symbols = Object.getOwnPropertySymbols(object); if (enumerableOnly) symbols = symbols.filter(function (sym) { return Object.getOwnPropertyDescriptor(object, sym).enumerable; }); keys.push.apply(keys, symbols); } return keys; }

function _objectSpread(target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i] != null ? arguments[i] : {}; if (i % 2) { ownKeys(source, true).forEach(function (key) { _defineProperty(target, key, source[key]); }); } else if (Object.getOwnPropertyDescriptors) { Object.defineProperties(target, Object.getOwnPropertyDescriptors(source)); } else { ownKeys(source).forEach(function (key) { Object.defineProperty(target, key, Object.getOwnPropertyDescriptor(source, key)); }); } } return target; }

function _defineProperty(obj, key, value) { if (key in obj) { Object.defineProperty(obj, key, { value: value, enumerable: true, configurable: true, writable: true }); } else { obj[key] = value; } return obj; }

function someOtherTest() {
  var p1 = {
    name: 'p1'
  };

  var combinedP1 = _objectSpread({
    height: 100
  }, p1);
}

You can see this in action in the Babel REPL.

As you can see, it’s a lot more code, but is a tax that some of us have to pay in order to support some older browsers.

And this is where we need to be careful. Most front end build processes follow this flow:

  • Gather the source js files
  • Combine them into one file
  • Minify the single combined file
  • Write the output to disk

What you do not want to do is run Babel before the output has been combined into one file. If you follow this pattern:

  • Gather the source js files
  • Transpile them using Babel
  • Combine them into one file
  • Minify the single combined file
  • Write the output to disk

You will end up with some serious bloat! Every file in which you have used the object spread operator will get its own locally scoped _objectSpread function injected by Babel, so you could easily end up with multiple copies of the exact same polyfill function.

This is the web, and we don’t like unnecessary bloat – so the correct workflow for your front end build is:

  • Gather the source js files
  • Combine them into one file
  • Transpile them using Babel
  • Minify the single combined file
  • Write the output to disk

This way, you’ll only get polyfill functions injected once into your outputted JS, even if the polyfill is used multiple times.
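As a rough illustration of that ordering, here’s a minimal build script sketch – the file paths are hypothetical, and it assumes @babel/core and terser are installed:

// build.js – a sketch of combine, then transpile, then minify
const fs = require('fs');
const babel = require('@babel/core');
const Terser = require('terser');

async function build() {
  // 1. Gather the source js files and combine them into one string
  const combined = ['src/app.js', 'src/admin.js'] // hypothetical paths
    .map(file => fs.readFileSync(file, 'utf8'))
    .join('\n');

  // 2. Transpile the combined source once – helpers like _objectSpread
  // are injected a single time
  const transpiled = await babel.transformAsync(combined, {
    presets: ['@babel/preset-env']
  });

  // 3. Minify the transpiled output and write it to disk
  // (Terser.minify is synchronous in terser 4; terser 5 returns a promise)
  const minified = Terser.minify(transpiled.code);
  fs.writeFileSync('dist/bundle.js', minified.code);
}

build();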

JavaScript, webpack

Should you use webpack for building SCSS?

In a previous post, I outlined how you could use webpack to have different build outputs for development and production.

If you read that post, it should be clear that bundling and uglifying your JavaScript with webpack is very straightforward. However, getting webpack to do other front end build operations, such as compiling SCSS before uglifying and bundling the generated CSS, is not a simple task. In fact, it’s something I feel I put too much time into.

Since that post, I’ve updated that project so that webpack does one thing – bundle JavaScript.

All of the other build operations (compile SCSS, etc) are now being done in the simplest way possible – Yarn / NPM scripts.

These jobs were previously left to a task runner, like gulp. That worked and was reliable, but it had its downsides:

  • More gulp-specific packages would be needed to act as glue between the task runner and the task (e.g. gulp-sass)
  • Debugging a massive set of piped streams was not easy, and often needed more packages installed to do so (see gulp-debug)
  • Managing the parallel nature of these tasks was often tricky, and sometimes required installing even more packages to help you orchestrate the order

You don’t need a task runner

Instead of using gulp or webpack for our entire front end build, we’re going to use yarn / NPM scripts. So, let’s start with our SCSS. We can use the node-sass package to compile our SCSS into CSS files:

yarn add node-sass --dev

Now we just need to add a command to the “scripts” section of our package.json file that will call node-sass:

...
  "scripts": {
    "build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css"
  }

It’s basically a macro command: calling “build-scss” is a shortcut for the longer command that we’ve entered into our package.json file. Here’s how we call it:

yarn run build-scss

Now let’s add another script to call webpack so that it can do what it’s good at – bundling our JavaScript modules:

...
  "scripts": {
    "build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css",
    "build-js": "webpack --mode=development"
  }

Which now means that we can run:

yarn run build-js

To build our JavaScript.

Bringing it all together

We’ve now got two different yarn script commands. I don’t want to run two commands every time I need to run a front-end build, so wouldn’t it be great if I could run these two commands with a single “build” command? Yes it would!

...
  "scripts": {
    "build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css",
    "build-js": "webpack --mode=development",
    "build": "yarn run build-scss && yarn run build-js"
  }

All we need to do now is run

yarn run build

And our CSS and JavaScript will be generated. You could add more steps and more yarn scripts as needed – for example, a step to minify your generated CSS.
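For example, a CSS minification step could look something like this – the clean-css-cli package is an assumption here, and any CSS minifier with a CLI would do:

...
  "scripts": {
    "build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css",
    "minify-css": "cleancss -o dist/css/appstyles.min.css dist/css/appstyles.css",
    "build-js": "webpack --mode=development",
    "build": "yarn run build-scss && yarn run minify-css && yarn run build-js"
  }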

Technical, webpack

webpack 4 – How to have separate build outputs for production and development

A pretty common use case is wanting different configs for development and production in front end builds. For example, when running in development mode, I want sourcemaps generated, and I don’t want any of my JavaScript or CSS to be minified, to help with my debugging.

Webpack 4 introduced a new build flag that is supposed to make this easier – “mode”. Here’s how it’s used:

webpack --mode production

This actually won’t be much help if you want different build outputs. Whilst this flag will be picked up by webpack’s internals and will produce minified JavaScript, it won’t help if you want to do something like conditionally include a plugin.

For example, say I have a plugin to build my SCSS. If I want to minify and bundle the generated CSS into a single file, the best way to do it is to use a plugin – OptimizeCssAssetsPlugin. It would be great if I could detect the mode flag at build time and conditionally include this plugin when building in production mode. The goal is that in production mode my generated CSS gets bundled and minified, and in development mode it doesn’t.

It’s not possible to detect this mode flag in your webpack.config.js file and conditionally add the plugin based on it. This is because the mode flag can only be used in the DefinePlugin phase, where it is mapped to the NODE_ENV variable. The configuration below will make a global variable “ENV” available to my JavaScript code, but not to any of my webpack configuration code:

module.exports = {
  ...
  plugins: [
    new webpack.DefinePlugin({
      ENV: JSON.stringify(process.env.NODE_ENV),
      VERSION: JSON.stringify('5fa3b9')
    })
  ]
}

Trying to access process.env.NODE_ENV outside of the DefinePlugin phase will return undefined, so we can’t use it. In my application JavaScript, I can use the “ENV” and “VERSION” global variables, but not in my webpack config files themselves.
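For example, in application code the globals defined above can be used directly, because DefinePlugin swaps them for literal values at build time:

// Application code – ENV and VERSION are replaced with literals when webpack builds
if (ENV === 'production') {
  console.log('Running build ' + VERSION);
}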

The Solution

The best solution, even with webpack 4, is to split your config files. I now have three:

  • webpack.common.js – contains common webpack configs for all environments
  • webpack.dev.js – contains only the dev config settings (e.g. source map generation)
  • webpack.prod.js – contains only prod config, like the OptimizeCssAssetsPlugin

The configs are merged together using the webpack-merge package. For example, here’s my webpack.prod.js file:

const merge = require('webpack-merge');
const common = require('./webpack.common.js');
const OptimizeCssAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = merge(common, {
  plugins: [
    new OptimizeCssAssetsPlugin({
      assetNameRegExp: /\.css$/g,
      cssProcessor: require('cssnano'),
      cssProcessorPluginOptions: {
        preset: ['default', { discardComments: { removeAll: true } }],
      },
      canPrint: true
    })
  ]
});
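For completeness, a webpack.dev.js counterpart could be as small as the sketch below – the devtool choice is illustrative:

const merge = require('webpack-merge');
const common = require('./webpack.common.js');

module.exports = merge(common, {
  // generate full source maps in development only
  devtool: 'inline-source-map'
});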

You can then specify which config file to use when you call webpack. You can do something like this in your package.json file:

{
  ...
  "scripts": {
    "build": "webpack --mode development --config webpack.dev.js",
    "build-dist": "webpack --mode production --config webpack.prod.js"
  }
}

Some further information on using the config splitting approach can be found on the production page in the webpack documentation.

JavaScript

JavaScript: Alias a function and call it with the correct scope

Today I needed to call one of two possible functions depending on a condition. The interface into these functions was the same (they accepted the same parameters) and they returned data in the same format.

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  this.dataService.getFullData(params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}

Essentially, what I want to do is call a leaner “getLeanData” function instead of “getFullData” if no siteId is provided.

There are a few approaches to this problem. One would be to change the way getFullData works, and move any switching logic into it. Another would be to break up the promise callback functionality, move it into a separate function, and just have an if block.

I didn’t really like either of those approaches, and knew that I could instead alias the function that I wanted to call. Here was my first, non-working attempt:

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  let dataFunction = this.dataService.getFullData;

  if (typeof params.siteId === 'undefined') {
    dataFunction = this.dataService.getLeanData;
  }

  dataFunction(params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}

The above looked like the right approach, and would have worked if:

  • This was not an ES6 class. Class code runs in strict mode, so the this of a function called without a receiver is undefined, rather than the global object.
  • The data functions were local to the ES6 class, and not on a depended-upon class

Basically, the function call worked, but dataFunction was not being executed with this set to the dataService instance – resulting in errors.

Why is this happening?

This happens because assigning a function to the local variable “dataFunction” copies only a reference to the function, not the object that it needs to be called on. If the getFullData and getLeanData functions contained no scope-specific code, such as a simple console log statement, the behaviour would have been as expected. However, in this case we need this to be the dataService instance.
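A stripped-down sketch of the behaviour (the service object here is hypothetical):

const service = {
  name: 'dataService',
  getName() {
    return this.name;
  }
};

service.getName();          // 'dataService' – called on the object, this is service
const fn = service.getName; // copies a reference to the function, not its receiver
fn();                       // TypeError in strict mode, because this is undefined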

The solution

The solution is to call the function with the scope (officially called the “thisArg”) explicitly set. This can be done using the Function.prototype.call() method.

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  let dataFunction = this.dataService.getFullData;

  if (typeof params.siteId === 'undefined') {
    dataFunction = this.dataService.getLeanData;
  }

  dataFunction.call(this.dataService, params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}

Calling .bind on the function will also work, but I think the use of .call is a little more readable.
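For reference, the equivalent using .bind would look like this – it produces a copy of the function that is permanently bound to dataService:

// Instead of dataFunction.call(this.dataService, params):
const dataFunction = this.dataService.getFullData.bind(this.dataService);

dataFunction(params)
  .then(result => {
    // ... do stuff
  })
  .catch(this.handleError);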

hardware

Surface Book keyboard review

I’ve said it before, and I’ll say it again – I prefer chiclet keyboards. In my opinion, they are more suited to a long day’s worth of typing and programming than a mechanical keyboard or a more conventional keyboard.

The Surface Book keyboard is the second chiclet keyboard that I’ve purchased in the last month, with the previous one being the Cherry KC 6000. I bought the Surface Book keyboard to be a replacement for my home setup, where I was previously using an ageing and horribly tatty Microsoft Ergonomic USB natural keyboard.

The requirements

My requirements for my home keyboard were slightly different to my workplace keyboard requirements:

  • Must have chiclet keys
  • Must not look too garish. Whilst this is for my home setup, I like having a neat desk, and a luminous keyboard with fancy lighting just wouldn’t fit into this
  • Must not be a natural keyboard. I had previously used an ergonomic natural keyboard for a good few years. It was good in the sense that I didn’t get any discomfort using it, but I don’t think my typing style was ever completely suited to it. My left hand in particular liked to travel into the right-hand section of the keyboard during some keypress combos. I also found some aspects of the key layout jarring – such as the double-spaced “n” key.
  • Must be bluetooth. Like I say, I keep a neat desk and the fewer wires, the better.
  • Must have a numeric keypad
  • Must not have any keys in strange places. As a developer, I use the ctrl, alt, super, home, end and paging keys a lot. Any re-arrangement of these keys would probably impact my productivity, so these must firmly stay where they would normally be

The matching keyboards

I found two keyboards that matched my needs. The first was the Logitech Craft, which also has the added bonuses of being able to pair with multiple devices, and a big wheel that can be used for additional control, although this sounds like it does not typically stretch beyond volume control.

However, there are two downsides to this keyboard:

1 – It’s expensive, priced at about £160. Whilst I think that as a developer I need the best tooling, this just feels like a stretch.

2 – It’s hard to get hold of. When I was trying to purchase this keyboard, I struggled to find anyone that actually had it in stock, including Amazon. I eventually found it listed as in stock on very.co.uk, but it wasn’t really, and they sat on my order for over a week before I lost my patience and cancelled.

This left one other keyboard – the Surface Book Keyboard.

Surface Book Keyboard Pros

The keyboard is aesthetically very pleasing, with a small bezel and a simple, tasteful grey and white colour scheme. The Bluetooth connection helps with the appearance, as it means you don’t have any wires to hide or tidy. I’d describe the footprint as low profile: with such small bezels, it looks discreet yet impressive in the middle of your desk, in complete isolation from any sort of cabling.

It is super comfortable to type on, with key presses feeling light yet satisfying, and a sufficient level of feedback delivered to your fingertips. Typing on it is pleasurable and fast.

The general typing comfort is helped along by a healthy slope of the keyboard towards the user, which is something that the Cherry KC 6000 fails at. The brilliant thing about this slope is that it’s a stroke of smart design – the angle towards the user comes from the battery compartment.

The keyboard layout is sensible, and doesn’t try to be too clever by re-stacking keys or jigging around the layout of anything. It is very much laid out like a laptop keyboard – with the Function lock key placed between the Ctrl and Super keys. This is probably a big plus for you if you tend to dock your laptop and work off of it, or if you’re used to working on a laptop keyboard. Plus points for me on both – when I’m at home, I work off of my laptop on a stand.

Availability wise, this keyboard is very easy to get hold of. I ordered this online at about 4pm through PC World’s website, and was able to collect it the next day at 11am from my local store. It’s also well stocked elsewhere around the web, which is a marked departure from my experience when trying to purchase the Logitech Craft.

Cost wise, the Surface Book keyboard can be yours for £79.99. This appears to be a fixed price, much in the way that Apple price their hardware – it’s the same price everywhere, unless you look at used items.

Surface Book keyboard cons

The only real downside for me (and this is a nitpick) is that the Surface Book keyboard can only be paired to one device at a time. I often switch between my desktop PC for gaming and my Xubuntu laptop for everything else, which means that every time I do, I need to pair the keyboard again. Luckily, this isn’t much more than a slight inconvenience, as pairing is a quick and painless experience on both Windows and Ubuntu-based operating systems from version 18 onwards.

Protip – don’t throw away your USB keyboard

USB keyboards have two big advantages: they don’t need batteries, and they work as soon as they have a physical connection, even whilst your machine is still booting up. If you need to do anything in the BIOS, for example, you will not be able to do it with a Bluetooth keyboard, as the drivers for it will not have been loaded. So, keep your dusty old USB keyboard for the day you run out of batteries, or for when you need to jump into your BIOS.

Conclusion

On the whole, this keyboard is fantastic, and I’d give it a 9 out of 10. I’d highly recommend it for general typing, programming, and some gaming.

hardware, Xubuntu

Does the Surface Keyboard work with Ubuntu? Yes, but only with Ubuntu 18 onwards

The other keyboard I’ve recently purchased, alongside the Cherry KC 6000, is the Microsoft Surface Keyboard. After reading a few reviews online and watching a few videos, I decided on this bluetooth keyboard for my home setup.

My main concern, however, was whether this keyboard would work with my Xubuntu laptop. Some googling revealed mixed answers – someone running Linux Mint, which is based on Ubuntu, had no luck, whereas a separate thread on Reddit seemed to indicate that it worked without any problems.

After purchasing the keyboard and using it, here is what I found. In order to pair successfully with any PC, the keyboard asks the PC to present the user with a passcode challenge. On Windows, a dialogue pops up containing a 6 digit number and prompts you to enter that passcode on the keyboard. Once this is entered on the Surface Keyboard, it is paired and works as expected. Pairing with my Xubuntu laptop was a different story:

The Surface Keyboard will not work with Ubuntu 16 or any derivatives

This is because of a few bugs in the bluetooth stack in Ubuntu 16. The passcode dialogue never appears, meaning that you will not be able to successfully pair the keyboard with a machine running Ubuntu 16 or below. The same problem occurs when attempting to pair in the terminal using bluetoothctl.

The Surface Keyboard will work with Ubuntu 18 upwards

Luckily, I have a spare laptop, and was able to use that to test out the pairing on Ubuntu 18 before committing myself to upgrading my main work laptop. Using the spare, I was able to pair successfully on Ubuntu 18.04 through the bluetooth GUI, as the passcode prompt now appeared. I took the plunge and updated my main Xubuntu laptop, and can confirm that pairing with the Surface Keyboard fully works on Ubuntu 18.04 and any derivatives.

Xubuntu

How to fix: Xubuntu update manager not showing new distro releases

I’d previously been running Xubuntu 16.04 on my main laptop without any issues, and had been waiting for a while before upgrading to Xubuntu 18.04.

Because I was having problems with bluetooth, and some furious googling had led me to conclude that a distribution upgrade would resolve my issues, I decided that now was a suitable time to update my operating system.

Xubuntu’s documentation states that when there is a major LTS release available for you to upgrade to, the update manager will pop up with a dialogue box informing you and giving you the option to update.

You will not get this prompt if you have broken PPA repository URLs configured. The best way to find out which PPA URLs are causing problems is to run the following from your terminal:

$ sudo apt-get update

The output will point to the offending repository:

Reading package lists... Done   
E: The repository 'http://ppa.launchpad.net/pinta-maintainers/pinta-stable/ubuntu xenial Release' does not have a Release file

In my case, my PPA configuration for the excellent image editor, Pinta, appeared to be broken. Simply disabling this PPA from the software updater by unchecking its checkbox allowed the OS to fully pull update information, and I was then prompted with the distribution update dialogue.