Technical, webpack

webpack 4 – How to have separate build outputs for production and development

A pretty common use case is wanting different configs for development and production in front end builds. For example, when running in development mode, I want source maps generated, and I don’t want any of my JavaScript or CSS to be minified, to help with my debugging.

Webpack 4 introduced a new build flag that is supposed to make this easier – “mode”. Here’s how it’s used:

webpack --mode production

This actually won’t be much help if you want to get different build outputs. Whilst this flag will be picked up by Webpack’s internals and will produce minified JavaScript, it won’t help if you want to do something like conditionally include a plugin.

For example, say I have a plugin to build my SCSS. If I want to minify and bundle my generated CSS into a single file, the best way to do it is with a plugin – OptimizeCssAssetsPlugin. It would be great if I could detect this mode flag at build time, and conditionally include this plugin if I’m building in production mode. The goal is that in production mode, my generated CSS gets bundled and minified, and in development mode, it doesn’t.

In your webpack.config.js file, it’s not possible to detect this mode flag and then conditionally add the plugin based on it. This is because the mode flag is only applied during the DefinePlugin phase, where it is mapped to the NODE_ENV variable. The configuration below will make a global variable “ENV” available to my JavaScript code, but not to any of my webpack configuration code:

module.exports = {
  ...
  plugins: [
    new webpack.DefinePlugin({
      ENV: JSON.stringify(process.env.NODE_ENV),
      VERSION: JSON.stringify('5fa3b9')
    })
  ]
}

Trying to access process.env.NODE_ENV outside of the DefinePlugin phase will return undefined, so we can’t use it. In my application JavaScript, I can use the “ENV” and “VERSION” global variables, but not in my webpack config files themselves.
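For example, somewhere in my application code (not in the webpack config), I could branch on these globals – a minimal sketch:

// Somewhere in the application JavaScript – ENV and VERSION are replaced
// with string literals at build time by DefinePlugin
if (ENV === 'production') {
  console.log('Running build ' + VERSION + ' in production mode');
} else {
  console.log('Running build ' + VERSION + ' with debug helpers enabled');
}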

The Solution

The best solution, even with webpack 4, is to split your config files. I now have three:

  • webpack.common.js – contains common webpack configs for all environments
  • webpack.dev.js – contains only the dev config settings (e.g. source map gens)
  • webpack.prod.js – contains only prod config, like the OptimizeCssAssetsPlugin

The configs are merged together using the webpack-merge package. For example, here’s my webpack.prod.js file:

const merge = require('webpack-merge');
const common = require('./webpack.common.js');
const OptimizeCssAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = merge(common, {
  plugins: [
    new OptimizeCssAssetsPlugin({
      assetNameRegExp: /\.css$/g,
      cssProcessor: require('cssnano'),
      cssProcessorPluginOptions: {
        preset: ['default', { discardComments: { removeAll: true } }],
      },
      canPrint: true
    })
  ]
});
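For comparison, a webpack.dev.js that only adds development concerns such as source map generation might look something like this – a minimal sketch, with inline-source-map being just one sensible devtool option:

const merge = require('webpack-merge');
const common = require('./webpack.common.js');

module.exports = merge(common, {
  // Generate source maps for easier debugging; no CSS minification here
  devtool: 'inline-source-map'
});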

You can then specify which config file to use when you call webpack. For example, in your package.json file:

{
  ...
  "scripts": {
    "build": "webpack --mode development --config webpack.dev.js",
    "build-dist": "webpack --mode production --config webpack.prod.js"
  }
}

Some further information on using the config splitting approach can be found on the production page in the webpack documentation.

JavaScript

JavaScript: Alias a function and call it with the correct scope

Today I needed to call one of two possible functions depending on a condition. The interface into these functions was the same (they accepted the same parameters) and they returned data in the same format.

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  this.dataService.getFullData(params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}

Essentially what I want to do is call a leaner “getLeanData” function instead of “getFullData”, if no siteId is provided.

There are a few approaches to this problem. One would be to change the way getFullData works and move any switching logic into it. Another would be to break the promise callback functionality out into a separate function and just have an if block.

I didn’t really like either of those approaches and knew that I could alias the function that I wanted to call instead. Here was my first, non-working attempt:

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  let dataFunction = this.dataService.getFullData;

  if (typeof params.siteId === 'undefined') {
    dataFunction = this.dataService.getLeanData;
  }

  dataFunction(params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}

The above looked like the right approach, and would have worked if:

  • This was not an ES6 class. ES6 classes operate in strict mode, which means that this inside a function called without a receiver is undefined, rather than defaulting to the global object.
  • The data functions were local to the ES6 class, and not on a separate class that it depends on

Basically, the function call worked, but dataFunction was not being executed within the scope of the dataService object – resulting in errors.

Why is this happening?

This happens because when assigning a function to the local variable “dataFunction”, only a reference to the function is copied – the object that it needs to be called on is not. If the getFullData and getLeanData functions contained no scope-specific code, such as a simple console log statement, the behaviour would have been as expected. However, in this case we need the functions to run in the scope of the dataService object.
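Here’s a minimal sketch of the same behaviour using a hypothetical Greeter class, which stands in for the dataService:

class Greeter {
  constructor(name) {
    this.name = name;
  }

  greet() {
    // Relies on `this` being the Greeter instance
    return 'Hello from ' + this.name;
  }
}

const greeter = new Greeter('dataService');
const aliased = greeter.greet;

greeter.greet();        // 'Hello from dataService'
aliased();              // TypeError – `this` is undefined, the receiver has been lost
aliased.call(greeter);  // 'Hello from dataService' – receiver supplied explicitly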

The solution

The solution is to call the function with the scope (officially called the “thisArg”) explicitly set. This can be done using the Function.prototype.call() method.

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  let dataFunction = this.dataService.getFullData;

  if (typeof params.siteId === 'undefined') {
    dataFunction = this.dataService.getLeanData;
  }

  dataFunction.call(this.dataService, params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}

Calling .bind on the function will also work for you, but I think the use of .call is a little more readable.
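For completeness, here’s roughly what the .bind version of the same getData function would look like – a sketch only, as my actual code uses .call:

getData() {
  const params = {
    orgId: this.orgId,
    siteId: this.siteId
  };

  let dataFunction = this.dataService.getFullData;

  if (typeof params.siteId === 'undefined') {
    dataFunction = this.dataService.getLeanData;
  }

  // Bind dataService as the thisArg, then call the bound function as normal
  const boundDataFunction = dataFunction.bind(this.dataService);

  boundDataFunction(params)
     .then(result => {
        // ... do stuff
      })
      .catch(this.handleError);
}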

hardware

Surface Book keyboard review

I’ve said it before, and I’ll say it again – I prefer chiclet keyboards. In my opinion, they are more suited to a long day’s worth of typing and programming than a mechanical keyboard, or a more conventional keyboard.

The Surface Book keyboard is the second chiclet keyboard that I’ve purchased in the last month, with the previous one being the Cherry KC 6000. I bought the Surface Book keyboard to be a replacement for my home setup, where I was previously using an ageing and horribly tatty Microsoft Ergonomic USB natural keyboard.

The requirements

My requirements for my home keyboard were slightly different to my workplace keyboard requirements:

  • Must have chiclet keys
  • Must not look too garish. Whilst this is for my home setup, I like having a neat desk, and a luminous keyboard with fancy lighting just wouldn’t fit into this
• Must not be a natural keyboard. I had previously used an ergonomic natural keyboard for a good few years. It was good in the sense that I didn’t get any discomfort using it, but I don’t think my typing style was ever completely suited to it. My left hand in particular liked to travel into the right hand section of the keyboard during some keypress combos. I also found some aspects of the key layout to be jarring – such as the double spaced “n” key.
  • Must be bluetooth. Like I say, I keep a neat desk and the fewer wires, the better.
  • Must have a numeric keypad
  • Must not have any keys in strange places. As a developer, I use the ctrl, alt, super, home, end and paging keys a lot. Any re-arrangement of these keys would probably impact my productivity, so these must firmly stay where they would normally be

The matching keyboards

I found two keyboards that matched my needs. The first was the Logitech Craft, which has the added bonuses of being able to pair with multiple devices, and a big wheel that can be used for additional control, although this sounds like it does not typically stretch beyond volume control.

However, there are two downsides to this keyboard:

1 – it’s expensive, priced at about £160. Whilst I think that as a developer I need the best tooling, this just feels like a stretch.

2 – It’s hard to get hold of. When I was trying to purchase this keyboard, I struggled to find anyone that actually had it in stock, including Amazon. I eventually found it in stock on very.co.uk, but it wasn’t really in stock and they sat on my order for over a week before I lost my patience and cancelled.

This left one other keyboard – the Surface Book Keyboard.

Surface Book Keyboard Pros

The keyboard is aesthetically very pleasing, with a small bezel and a simple and tasteful grey and white colour scheme used. The Bluetooth connection helps with the appearance of the keyboard as it means you don’t have any wires to try and hide or make neat. I’d describe the footprint of the keyboard as low profile, as it has such small bezels and looks discreet yet impressive in the middle of your desk in complete isolation from any sort of cabling.

It is super comfortable to type on, with the key presses feeling light yet satisfying to depress, with a sufficient level of feedback delivered to your finger tips. Typing on it is pleasurable and fast.

The general typing comfort is helped along by a healthy level of banking of the keyboard towards the user, which is something that the Cherry KC 6000 fails at. The brilliant thing about this banking is that it’s a stroke of smart design – the tilt towards the user comes from the battery compartment.

The keyboard layout is sensible, and doesn’t try to be too clever in this area by re-stacking keys or jigging around the layout of anything. It is very much laid out like a laptop keyboard – with the Function lock key placed between the Ctrl and Super keys. This is probably a big plus for you if you tend to dock your laptop and work off of it, or if you’re used to working on a laptop keyboard. Plus points for me on both – when I’m at home, I work off of my laptop on a stand.

Availability wise, this keyboard is very easy to get hold of. I ordered this online at about 4pm through PC World’s website, and was able to collect it the next day at 11am from my local store. It’s also well stocked elsewhere around the web, which is a marked departure from my experience when trying to purchase the Logitech Craft.

Cost wise, the Surface Book keyboard can be yours for £79.99. This appears to be a fixed price, much in the way that Apple price their hardware. It’s the same price everywhere, unless you look at used items.

Surface Book keyboard cons

The only real downside for me (and this is a nitpick) with the Surface Book keyboard is that it can only be paired to one device at a time. I often switch between my desktop PC for gaming, and my Xubuntu laptop for everything else. This means that every time I do this, I need to pair the keyboard again. Luckily, this isn’t much more than a slight inconvenience, as the pairing is a quick and painless experience on both Windows and Ubuntu based operating systems from version 18 onwards.

Protip – don’t throw away your USB keyboard

USB keyboards have two big advantages. They don’t need batteries, and they will work as soon as they have a physical connection, even whilst your machine is still booting up. If you need to do anything in the BIOS, for example, you will not be able to do this with a Bluetooth keyboard as the drivers for it will not have been loaded. So, keep your dusty old USB keyboard for the day when you run out of power and have no batteries, or for when you need to jump into your BIOS.

Conclusion

On the whole, this keyboard is fantastic, and I’d give it a 9 out of 10. I’d highly recommend this keyboard for general typing, programming, and some gaming.

hardware, Xubuntu

Does the Surface Keyboard work with Ubuntu? Yes, but only with Ubuntu 18 onwards

The other keyboard that I’ve recently purchased as well as the Cherry KC 6000 is the Microsoft Surface Keyboard. After reading a few reviews online and watching a few videos, I decided on getting this bluetooth keyboard for my home setup.

My main concern, however, was whether this keyboard would work with my Xubuntu laptop. Some googling revealed mixed answers – someone running Linux Mint, which is based on Ubuntu, had no luck, whereas a separate thread on Reddit seemed to indicate that it worked without any problems.

After purchasing the keyboard and using it, here is what I found. In order to successfully pair with any PC, the keyboard asks the PC to present the user with a passcode challenge. On Windows, a dialogue pops up containing a 6 digit number, and prompts you to enter that passcode on the keyboard. Once this is entered on the Surface Keyboard, it is paired and works as expected. When it came to pairing with my Xubuntu laptop, things were less straightforward:

The Surface Keyboard will not work with Ubuntu 16 or any derivatives

This is because of a few bugs in the bluetooth stack in Ubuntu 16. The passcode dialogue will never appear, meaning that you will not be able to successfully pair the keyboard to a machine running Ubuntu 16 or below. This problem also happens when attempting to pair in the terminal using bluetoothctl.

The Surface Keyboard will work with Ubuntu 18 upwards

Luckily, I have a spare laptop, and was able to use that to test out the pairing on Ubuntu 18 before committing myself to upgrading my main work laptop. Using the spare, I was able to successfully pair on Ubuntu 18.04 through the bluetooth gui as the passcode prompt was now appearing. I took the plunge and updated my main Xubuntu laptop and can confirm that pairing with the Surface Keyboard fully works on Ubuntu 18.04 and any derivatives.

Xubuntu

How to fix: Xubuntu update manager not showing new distro releases

I’d previously been running Xubuntu 16.04 on my main laptop without any issues, and had been waiting for a while before upgrading to Xubuntu 18.04.

Because I was having problems with bluetooth, and some furious googling had led me to conclude that a distribution upgrade would resolve my issues, I decided that now was a suitable time to update my operating system.

Xubuntu’s documentation states that when there is a major LTS release available for you to upgrade to, the update manager will pop up with a dialogue box informing you and giving you the option to update.

You will not get this prompt if you have broken PPA repository URLs configured. The best way to find out which PPA URLs are causing problems is to run the following from your terminal and observe the results:

$ sudo apt-get update

Reading package lists... Done   
E: The repository 'http://ppa.launchpad.net/pinta-maintainers/pinta-stable/ubuntu xenial Release' does not have a Release file

In my case, my PPA configuration for the excellent image editor, Pinta, appeared to be broken. Simply disabling this PPA from the software updater by unchecking the checkbox allowed the OS to fully pull update information, and then prompted me with the distribution update dialogue.

hardware, Technical

Keyboard: Cherry KC 6000 review


I prefer chiclet keyboards. I haven’t done any scientific analysis, but I’m confident that my words per minute typing is higher when I’m using a keyboard that has chiclet keys. Amongst developers this is an unpopular opinion, with many developers preferring mechanical keyboards (don’t be that guy smashing a mechanical keyboard in an open plan office!)

Chiclet keyboards have keys that do not need to depress as far in order to register. They also have an evenly sized gap between each key, making it more difficult for you to fumble the keyboard and hit the wrong key. They are typically found on laptop keyboards.

With that in mind, I needed a new keyboard to replace the one I take into client offices and leave there whilst on a contract. I was previously using an old, fairly standard Dell USB keyboard that was becoming embarrassingly tatty – most of the key letters were completely worn off. It also seemed to be forever caked in a layer of dirt that no amount of cleaning could remove.

My requirements were fairly simple:

  • It must be comfortable. This will be used for long periods of time (6+ hours a day) and I’ve found myself experiencing some discomfort in the later hours when using my bog standard keyboard.
  • It must be USB – I understand the need for a wireless mouse, but a wireless keyboard is an unnecessary luxury for my day to day desk based work. Also, the less interacting with bluetooth, the better.
  • It must not be garish – I don’t want to demo things to clients and my keyboard be a huge distraction because it is letting off a luminous glow.
  • It must have chiclet keys for the productivity and preferences outlined above.
  • It must have a numeric keypad (it does make me wonder how developers and other creatives work for large amounts of time without a numeric keypad)

The keyboard I found that fits the above is the Cherry KC 6000, which is priced nicely at £35 on Amazon (the linked product incorrectly claims that it is the Cherry KC 600, but this is just a typo – the Cherry KC 600 does not exist, and having taken delivery of this item, I can confirm that it is indeed the Cherry KC 6000).

On the whole, I am very happy with this keyboard, and would give it 4 out of 5 stars:

Pro: Super comfortable to type on

Cherry KC 6000 Keyboard

This is by far the most important factor on any keyboard! The keys have a really nice weight and feel to them, and typing for a long amount of time on this keyboard is comfortable and does not result in any straining pains that I would sometimes get on my previous bog standard keyboard.

Cherry KC 6000 Keyboard Chiclet keys

Pro: It is aesthetically pleasing and not too garish

This keyboard has no crazy backlighting and comes in two fairly neutral colours – a silver body with white keys, or a black body with black keys. Some may view the lack of backlighting as a negative, but this isn’t a problem for me. I don’t type in the dark as I don’t have the eyes for it, and I touch type.

Pro: It has a slim, low profile

This helps with having a general feel of neatness on your desk. The keyboard has only moderate bezels and has no ridges where dust and other crap can get stuck.

Con: The F11 and F12 keys are not directly above the backspace button

Cherry KC 6000 Keyboard function keys

This is a small irritation as it just takes some getting used to. On most keyboards, the function keys are laid out in banks of 4, with a bigger space between every 4th function key. This space is gone on the Cherry KC 6000, and the saved space is given to two additional buttons – one to open your default browser, and another to lock your machine. I don’t mind having these extra buttons, but annoyingly they are right above the backspace key, so it will take some getting used to not being able to naturally travel to the F11 key to go fullscreen, or the F12 key to open my Guake terminal.

Con: There is a backspace key in the numeric keypad

Cherry KC 6000 Keyboard numeric pad

Again, this is another one of those small things that will take you a day or so to get used to. You’d normally only find one backspace key on a keyboard and would not expect to have one on the numeric pad. This one is positioned where the minus key normally is, so I’ve found myself accidentally deleting characters rather than putting in the minus character a few times.

Other reviews online think that the keyboard is not banked enough towards the user (in the way most keyboards have legs that you can flip up or down). The keyboard did initially look a little flat on my desk when I first set it up, but I’ve found that it has not impacted my typing at all.

Conclusion – a productivity win!

To conclude, I’m happy with the Cherry KC 6000 keyboard. It has made me more productive, and is comfortable to use for long typing stints (think 6+ hours of programming!)

Books

2018 in books

2018 is the year that I got back into reading. Here is a list of some of the non fiction books that I have read throughout the year. I’ve started the list with the tech books, then put the social reads toward the end of the list.

American Kingpin: Catching the Billion-Dollar Baron of the Dark Web by Nick Bilton

Fascinating re-telling of the story of the infamous dark web black market website, the Silk Road. It covers the development of the Silk Road, how it came to exist, who was behind it, how it ran, and how it got taken down. The amount of money the Silk Road made at its peak was incredible!

How Music Got Free: What happens when an entire generation commits the same crime by Stephen Witt

This book is awesome! A great retelling of the late 90s and early 2000s music piracy crews, who were behind them, how they came to prominence and how they ultimately got caught. It also covers the format wars of digital music, and how the MP3 came to dominate. This is well worth a read for all of you techies and will take you back to the Napster days!

Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou

You might have heard of Theranos and its strange CEO, Elizabeth Holmes. The now defunct startup was built on lies, and ended up collapsing like a house of cards, having burned $750 million from duped investors. Theranos claimed to have ground breaking technology that could run hundreds of blood tests on a single pin prick of blood. In reality, they could only produce unreliable results from a vial of blood. The facade continued for years with Elizabeth Holmes managing to persuade several investors to value Theranos at over a billion dollars. A truly 5 star read.

Flash Boys by Michael Lewis

The author behind The Big Short took his investigative skills to look into the murky world of high frequency trading. The rise of high frequency trading ties up with the rise of the internet age, and led to stock trading companies spending millions on faster connections, and even locating their servers physically closer to certain machines in data centres in order to make a connection a fraction faster. A brilliant read, with a great mix of scandal and technology.

Disrupted: Ludicrous Misadventures in the Tech Start-up Bubble by Dan Lyons

Hilarious and alarming account of a respected journalist who spent over a year working at a startup, HubSpot. HubSpot sounds like a toxic place to work, with regular firings (known within the cult of HubSpot as “Graduations”), non-existent onboarding, and a self cultivated cult of personality around its leaders. Oh, and its product really isn’t innovative. HubSpot is a real company and is still trading.

Outliers: The Story of Success by Malcolm Gladwell

Is Bill Gates really an extraordinary individual, or did his circumstances make him an extraordinary individual? Are elite sports professionals really the best in their peer group, or is it just that the month of the year that they were born in meant that they were physically more developed than their peers? Gladwell does a great job of exploring the above and several other outliers, and this is a great read for the curious mind. I’d highly recommend picking up this book, and it’ll probably be cheap as it was first published a decade ago!

PostCapitalism: A Guide to Our Future by Paul Mason

I didn’t really enjoy this read. Let me prefix this by stating that I like Paul Mason and that I find him insightful. This book starts off well with an analysis of previous industrial and technological eras, how they’ve progressed, and where we are now – genuinely interesting stuff. What I found tricky to follow was just how heavy this book got when it descended into economic theory, backed up with not so brilliantly annotated graphs. Anyway, the tl;dr is that we’re all a bit fucked because globalisation.

Enough by John Naish

A book in part about minimalism and in part about consumerism and when to stop. The book does make you think about where your constant drive to own more stuff comes from, and also covers how technology might be playing a big role in pushing our desire for more. A pretty good read but could have been a little more concise.

Natives: Race and Class in the Ruins of Empire by Akala

Excellent, well worth a read. A comprehensive look at our mindset towards race and class in Britain and where it comes from, and what the future might hold. This is one of a few intellectual commentaries on race in the UK that I have read this year.

The Good Immigrant by Nikesh Shukla

Another great read. This isn’t really by Nikesh Shukla, but is instead a collection of essays by several well known minorities in the UK. My favourite essay in the book was by Riz Ahmed, where he discusses how his career as an actor and going to auditions helped him deal with the special attention he gets at airport security.

Brit(ish): On Race, Identity and Belonging by Afua Hirsch

Another great intellectual commentary on how we look at race and identity in modern Britain. It also captures a lot of the identity self questioning that many mixed race people experience. This is one of my personal favourites of 2018.

This is Going to Hurt: Secret Diaries of a Junior Doctor by Adam Kay

This first hand account of the first 5 years of a junior doctor’s career in the NHS is a gripping read. I ploughed through this book in about 3 days. It’s very easy to read, but is also compelling with its mixture of funny and incredibly sad accounts of the experiences of a junior doctor.

Mental – Bad Behaviour, Ugly Truths and the Beautiful Game by Jermaine Pennant

This is the only sports related biography I read in 2018. I picked it up because of genuine curiosity about the famous story of footballer Jermaine Pennant forgetting that he had left a car at a train station in Spain, and it staying there, running, in the car park, for a week before running out of fuel. Predictably, Pennant plays this whole episode down and claims that he did forget that the car was there as he was rushing back to the UK. However he does strongly claim that he did not leave the engine running. Immediately after this assertion, in the next paragraph, is a statement from Pennant’s agent that contradicts this: “I know for a fact that he left that car running”.

Grime Kids: The Inside Story of the Global Grime Takeover by DJ Target

One of several books covering the rise of Grime music released in 2018. This is a good coverage of some of the early artists in grime music emerging out of East London. Whilst it does not cover all grime artists, it gives you a good overview of some of the original members of grime collectives Pay As You Go and Roll Deep. A good read if you want to know more about Grime music.

Azure, DevOps, Node.Js, Technical

Hosting personal projects on low cost dedicated servers, not the cloud

For personal projects, the cloud is probably too expensive.

I say this as a person that previously hosted all of their personal projects on the Azure cloud platform. I had multiple app engines inside a single Azure App service, which worked pretty well. The main advantage of running anything in the cloud is that you don’t have to worry about server maintenance and that, if used correctly, it can be cheaper for projects over a certain size. I had a few reasons for getting off the cloud:

Restricted Tech stack

Generally, a cloud service will restrict you to running some specific technologies. For example, a specific version of Node.Js. Even if your cloud service supports the technology you want to use, finding out if it supports the version you want to use is not easy. Right now, if you want to know which versions of Node.Js Azure supports, your best place to get that answer is StackOverflow, even though this information should be prominently displayed in the documentation.

The cloud offer is fast moving and hard to keep up with

The offer from cloud service providers changes a lot. I actually had to look back over my previous blog post detailing how I hosted my projects in the cloud to remind myself how it all worked. The problem is more acute on AWS, where the naming is so whacky that someone maintains a page where the names of AWS products are translated into useful plain English.  Elastic Beanstalk, anyone?

There is also a danger of just choosing the wrong cloud product – like the guy that used Amazon Glacier (real name) storage and ended up racking up a $150 bill for a 60Gb download.

You need to learn the platform and not the technology

The argument put forward by the cloud guys often is that you get to focus on building your app and not deploying it or worrying about the infrastructure. This is absolutely true for more trivial apps.

However we all know that it is rare for a useful application to be this flat. If you want to connect your application into another service to store data or even just files, you’ll need to learn what that service might be on that platform, and then you’ll need to learn that platform’s API in order to program against it. Generally, once you’ve got that service wired in, you’ll get some benefits, but the upfront effort is a cost that should be taken into account.

Run projects for less on physical servers

Newsflash – physical servers can be very cheap, especially used ones. Helpfully someone on reddit put together a megalist of cheap server providers. Below is a list of the main providers that are worth checking out in Europe:

  • Kimsufi – based in France. Cheap used servers. Limited availability.  No gigabit connections
  • Online.net – based in France, cheap line of “dediboxes”
  • OneProvider – based in France, but have servers located globally

As of writing, you can get a gigabit connected, 4 core, 4GB RAM server with OneProvider for €8 a month. Whilst comparing the cost of this hardware on a cheap provider to a cloud provider would be silly (you’re only supposed to deploy the hardware you need in the cloud), as soon as you run two web apps on an €8 box, you’re saving money.

I’ve got one server with Kimsufi, and one with Online.net. I’ve had one very small outage on Kimsufi in the 2 years that I’ve had a box there. The box was unavailable for around 10 minutes before Kimsufi’s auto monitoring rebooted the box.

I’ve had a single lengthier outage in the 18 months I’ve used Online.net, which led to a server being unavailable for about 5 hours. This was a violation of their SLA and meant that I got some money back. I detailed this outage in a previous post – “Lessons Learned from a server outage“.

Run whatever you want on a dedicated server

When you aren’t running inside of a neatly packaged cloud product, you have full ownership of the server and can run whatever you want. The downside of this is that there is more responsibility, but in my opinion the pros outweigh the cons, such as not having to wait for some service to become available (example – GCS doesn’t support http to https redirection; I’m guessing you need to buy another service for that, whereas this is a few simple lines of config in nginx).

Being able to run whatever you want also opens the door for you to play around with more technology without having to worry about an increase in cost. Running my own dedicated boxes has been fun, and I’ve learned a lot by doing so.

React

Adding React to an existing web application and build pipeline

In order to brush up on my React knowledge, in this post I’m going to outline how I added it into the admin tools of links.edspencer.me.uk.

This walk through will have an emphasis on not just being a hello world application, but will also focus on integrating everything into your existing build pipeline – so you’ll actually be able to run your real application in a real environment that isn’t your localhost.

I’m writing this up because I felt the setup of React was a bit confusing. You don’t just add a reference to a minified file and off you go – the React setup is a lot more sophisticated and is so broken down and modular that you can swap out just about any tool for another tool. You need a guide just to get started.

We’ll be going along the path of least resistance by using the toolchain set that most React developers seem to be using – Webpack for building, and Babel for transpiling.

The current application

The current application works on purely server side technology – there is no front end framework currently being used. The server runs using:

  • NodeJs
  • ExpressJs
  • The Pug view engine

The build pipeline consists of the following:

  • A docker image built on top of the node 6.9.4 image, with some additionally installed global packages
  • An npm install
  • A bower install for the front end packages
  • A gulp task build (to compile the SCSS, and minify and combine the JS and CSS)

Step 1 – moving to Yarn

You can get your React dependencies through npm, but I figured that moving to Yarn for my dependency management was a good move. Aside from it being suggested in most tutorials (which isn’t a good enough reason alone to move):

  1. Bower is not deprecated, but the Bower devs recommend moving to Yarn
  2. The current LTS version of Node (8.9.4) ships with npm 5, which has some checksum problems that might cause your builds to fail
  3. Yarn is included in the Node 8.9.4 docker image

Hello node 8.9.4

While we’re here we may as well update the version of node that the app runs under to 8.9.4, which is the current LTS version.

In dockerized applications, this is as easy as changing a single line of code in your docker image:

FROM node:6.9.4

Becomes:

FROM node:8.9.4

Goodbye Bower

Removing Bower was easy enough. I just went through the bower.json file, and ran yarn add for each item in there. This added the dependencies into the package.json file.

The next step was then to update my gulp build tasks to pull the front end dependencies out of the node_modules folder instead of the bower_components folder.
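As a rough illustration, this mostly meant swapping source paths in the gulp tasks – something along these lines, where the task name and package paths are just hypothetical examples:

const gulp = require('gulp');
const concat = require('gulp-concat');

// Previously these paths pointed at ./bower_components/...
gulp.task('vendor-js', () =>
  gulp.src([
    './node_modules/jquery/dist/jquery.js',
    './node_modules/bootstrap/dist/js/bootstrap.js'
  ])
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('./dist'))
);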

I then updated my build pipeline to not run bower install or npm install, and to instead run yarn install, and deleted the bower.json file. In my case, this was just some minor tweaks to my dockerfile.

Build pipeline updates

The next thing to do was to remove any calls to npm install from the build, and to instead call yarn install.

Yarn follows npm’s package file name and folder install conventions, so this was a very smooth change.

Step 2 – Bringing in Webpack

You have to use a bundler to enable you to write modular React code. Most people tend to use Webpack for this, so that’s the way we’re going to go. You can actually use Webpack to do all of your front end building (bundling, minifying, etc), but I’m not going to do this.

I have a working set of gulp jobs to do my front end building, so we’re going to integrate Webpack and give it one job only – bundling the modules.

Firstly, let’s add webpack as a dev dependency:

yarn add webpack --dev

Now we need to add an empty configuration file for Webpack:

touch webpack.config.js

We’ll fill out this file later.

Lastly, let’s add a yarn task to actually run webpack, which we’ll need to run when we make any React changes. Add this to the scripts section of your package.json file:

{
  "scripts": {
    "build-react": "webpack"
  }
}

You may be thinking that we could just run the webpack command directly, but that would push you down the path of globally installed packages. Yarn steers you away from doing this, so by having the script in your package.json, you know that your script is running within the context of the packages available in your node_modules folder.

Step 3 – Bringing in Babel

Babel is a JavaScript transpiler that will let us write some ES6 goodness without worrying too much about browser support. Babel will dumb our JavaScript code down into a more browser-ready ES5 flavour.
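As a rough example of what that means in practice, Babel will take ES6 input like the first snippet below and produce something close to the second:

// ES6 input
const add = (a, b) => a + b;

// Roughly what the es2015 preset outputs
var add = function add(a, b) {
  return a + b;
};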

Here’s the thing – every tutorial I’ve seen involves installing these 4 Babel packages as a minimum. This must be because Babel has been broken down into many smaller packages, and I do wonder whether this is a bit excessive:

yarn add babel-core babel-loader babel-preset-es2015 babel-preset-react

Once the above Babel packages have been installed, we need to wire them up with webpack, so that webpack knows it needs to run our React-specific JavaScript and JSX files through Babel.

Update your webpack.config.js to include the following.

module.exports = {
  module: {
    loaders: [
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
      { test: /\.jsx$/, loader: 'babel-loader', exclude: /node_modules/ }
    ]
  }
}

Note that the webpack.config.js file is not yet complete – we’ll be adding more to this shortly.
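One thing worth flagging: the two presets we installed (babel-preset-es2015 and babel-preset-react) also need to be enabled, either in a .babelrc file or directly against the loader – otherwise babel-loader won’t actually transform anything. Here’s a sketch of the inline approach, assuming you’d rather not add a separate .babelrc file:

module.exports = {
  module: {
    loaders: [
      {
        test: /\.jsx?$/,          // covers both .js and .jsx
        loader: 'babel-loader',
        exclude: /node_modules/,
        query: {
          // Enable the presets installed earlier
          presets: ['es2015', 'react']
        }
      }
    ]
  }
}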

Step 4 – Bringing in React

We’re nearly ready to write some React. We just need to add a few more packages first, and that’ll be it, I promise.

yarn add react react-dom

This will add the core of React, and another React module that will allow us to do some DOM manipulation for our first bit of react coding. This does feel a bit daft to me. Webpack is sophisticated enough to run through our modules and output exactly what is needed into a single JS file. Why, therefore, do we need to break the install of a server side package down so much, if Webpack is going to grab what we need?

Step 5 – Actually writing some React code

We’re now at the point where we can write some react. So in my application, I’m going to have a small SPA (small is how SPAs should be – if you need to go big, build a hybrid) that will be the admin tool of my web application.

So, in the root of our web app, let’s add a folder named client, and in this folder, let’s add the following two files:

client/admin.js

import React from 'react';
import ReactDOM from 'react-dom';
import App from './admin-app.jsx';

ReactDOM.render(<App />, document.getElementById('react-app'));

client/admin-app.jsx

import React from 'react';

export default class App extends React.Component {
  render() {
    return (
      <div style={{textAlign: 'center'}}>
        <h1>Hello World - this is rendered by react</h1>
      </div>
    );
  }
}

The above two files aren’t doing a lot. The JSX file declares our “App” component, and the JS file tells ReactDOM to render our app into an html element with the id “react-app”.

Updating our webpack config

Now we should complete our webpack config to reflect the location of our React files. Update webpack.config.js so that the entire file looks like this:

const path = require('path');
module.exports = {
  entry:'./client/admin.js',
  output: {
    path:path.resolve('dist'),
    filename:'admin_bundle.js'
  },
  module: {
    loaders: [
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
      { test: /\.jsx$/, loader: 'babel-loader', exclude: /node_modules/ }
    ]
  }
}

We’re telling webpack where the entry point of our React application is, and where to output the bundled code (dist/admin_bundle.js).

Note we’re also leaning on Node’s built-in path module to help us resolve the output directory – it doesn’t need to be installed separately.

Now, let’s go ahead and ask webpack to bundle our React app:


yarn run build-react

Now, if everything has worked as expected, webpack will have generated a bundle for us in dist/admin_bundle.js. Go ahead and take a peek at this file – it contains our code as well as all of the various library code from React that is needed to actually run our application.

Step 6 – Plugging the bundled react code into our application

As this SPA is only going to run in the admin section, we need to add two things to the main page in the admin tool that we want this to run on.

In my case, I’m using the awesome pug view engine to render my server side html. We need to add a reference to our bundled Javascript, and a div with the id “react-app”, which is what we coded our react app to look for.

Here’s a snippet of the updated admin pug view:

div.container
  div.row
    div.col-xs-12
      h1 Welcome to the admin site. This is rendered by node
  #react-app

script(type="text/javascript", src="/dist/admin_bundle.js")

Alternatively, here’s how it looks in plain old html:

<div class="container">
  <div class="row">
    <div class="col-xs-12">
        <h1>Welcome to the admin site. This is rendered by node</h1>
    </div>
  </div>
  <div id="react-app"></div>
</div>

<script type="text/javascript" src="/dist/admin_bundle.js"></script>

Now all we need to do is to run our application and see if it works:

Et voila!

Step 7 – A little more build cleanup

As previously stated, I’ve got an existing set of gulp tasks for building my application, that I know work and that I’d like to not have to re-write.

We’ve kept our webpack build separate, so let’s now tie it up with our existing gulp task.

Previously, I would run a gulp task that I had written that would bundle and minify all Js and SCSS:

gulp build

But now we also need to run webpack. So, let’s update our package.json file to include some more custom scripts:

"scripts": {
  "build-react": "webpack",
  "build": "gulp build && webpack"
}

Now, we can simply run:

yarn run build

Now we just need to update our build process to run the above yarn command instead of gulp build. In my case, this was a simple update to a dockerfile.

Conclusion

The above setup has got us to a place where we can actually start learning and writing some React code.

This setup is a lot more involved than with other frameworks I’ve worked with. This is because React is a lot less opinionated about how you do things and the tooling that you use. The disadvantage to this is that it makes the setup a bit trickier, but the advantage is that you have more freedom elsewhere in the stack.

I do however think that there may be a bit too much freedom, and a little more convention and opinion wouldn’t hurt. For example, some sensible install defaults that could be changed later would cover this off. Whilst there may be some Yeoman generators out there, they won’t help with integration into existing web applications.

What is interesting is noting the differences between how you build your React applications and how you build your AngularJs applications. With AngularJs, you reference one lib and off you go. You may only use a small part of AngularJs, but you’ve referenced the whole thing, whereas with React (and newer versions of Angular), you have a build process which outputs a file containing exactly what you need.

I think this overall is a good thing, but we should still put this into context and be careful to not quibble over kilobytes.

We also saw how we can integrate the React toolchain into our existing toolchain without throwing everything out.

Next up, I’ll be writing a blog post on what you should do in order to run React in production.

DevOps

Lessons learned from a server outage

At just gone midnight this morning, when I was nicely tucked up in bed, Uptime Robot sent me an email telling me that this very website was not reachable.

I was fast asleep, and my phone was on do not disturb, so that situation stayed as it was until I woke up.

After waking up and seeing the message, I assumed that the docker container running the UI bit of this website had stopped running. Easy, I thought, I’ll just start up the container again, or I’ll just trigger another deployment and let my awesome GitLab CI pipeline do its thing.

Except this wasn’t the case. After realising, in a mild panic, that I could not even SSH onto the server that hosts this site, I got into contact with the hosting company (OneProvider) for some support.

I sent off a support message and sat back in my chair and reflected for a minute. Had I been a fool for rejecting the cloud? If this website was being hosted in a cloud service somewhere, would this have happened? Maybe I was stupid to dare to run my own server.

But as I gathered my thoughts, I calmed down and told myself I was being unreasonable. Over the last few years, I’ve seen cloud based solutions in various formats, also fail. One of the worst I experienced was with Amazon Redshift, when Amazon changed an obscure setup requirement meaning that you essentially needed some other Amazon service in order to be able to use Redshift. I’ve also been using a paid BitBucket service for a while with a client, which has suffered from around 5 outages of varying severity in the last 12 months. In a weird coincidence, one of the worst outages was today. In comparison, my self hosted GitLab instance has never gone down outside of running updates in the year and a half that I’ve had it.

Cloud based or on my own server, if an application went down I would still go through the same support process:

  • Do some investigation and try to fix
  • Send off a support request if I determined the problem was to do with the service or hosting provider

Check your SLA

An SLA, or Service Level Agreement, basically outlines a service provider’s responsibilities to you. OneProvider’s SLA states that they will aim to resolve network issues within an hour, and hardware issues within two hours for their Paris data centre. Incidentally, other data centres don’t have any agreed times because of their location – like the Cairo datacenter. If they miss these SLAs, they have self imposed compensation penalties.

Two hours had long gone, so whatever the problem was, I was going to get some money back.

Getting back online

I have two main services running off of this box:

  • This website
  • My link archive

I could live with the link archive going offline for a few hours, but I really didn’t want my website going down. It has content that I’ve been putting on here for years, and believe it or not, it makes me a little beer money each month though some carefully selected and relevant affiliate links.

Here’s where docker helped. I got the site back online pretty quickly simply by starting the containers up on one of my other servers. Luckily the nginx instance on that webserver was already configured to handle any requests for edspencer.me.uk, so once the containers were started, I just needed to update the A record to point at the other server (short TTL FTW).

Within about 20 minutes, I got another email from Uptime Robot telling me that my website was back online, and that it had been down for 9 hours (yikes).

Check your backups

I use a WordPress plugin (updraft) to automate backups of this website. It works great, but the only problem is that I had set this up to take a backup weekly. Frustratingly, the last backup cycle had happened before I had penned two of my most recent posts.

I started to panic again a little. What if the hard drive in my server had died and I had lost that data forever? I was a fool for not reviewing my backup policy more. What if I lost all of the data in my link archive? I was angry at myself.

Back from the dead

At about 2pm, I got an email response from OneProvider. This was about 4 hours after I created the support ticket.

The email stated that a router in the Paris data centre that this server lives in, had died, and that they were replacing it and it would be done in 30 minutes.

This raises some questions about OneProvider’s ability to provide hosting.

  • Was the problem  highlighted by their monitoring, or did they need me to point it out?
  • Why did it take nearly 14 hours from the problem occurring to it being resolved?

I’ll be keeping my eyes on their service for the next two months.

Sure enough, the server was back and available, so I switched the DNS records back over to point to this server.

Broken backups

Now it was time to sort out all of those backups. I logged into all of my servers and did a quick audit of my backups. It turns out there were numerous problems.

Firstly, my GitLab instance was being backed up, but the backup files were not being shipped off of the server. Casting my memory back, I can blame this on my Raspberry Pi that corrupted itself a few months ago, which was previously responsible for pulling backups into my home network. I’ve now setup Rsync to pull the backups onto my RAIDed NAS.

Secondly, as previously mentioned, my WordPress backups were happening too infrequently. This has now been changed to a daily backup, and Updraft is shunting the backup files into Dropbox.

Thirdly – my link archive backups. These weren’t even running! The database is now backed up daily using Postgres’ awesome pg_dump feature in a cron job. These backups are then also pulled using Rsync down to my NAS.

It didn’t take long to audit and fix the backups – it’s something I should have done a while ago.

Lessons Learned

  • Build some resilience in. Deploy your applications to multiple servers as part of your CI pipeline. You could even have a load balancer do the legwork of directing your traffic to servers that it knows are responding correctly.
  • Those servers should be in different data centres, possibly with different hosting providers.
  • Make sure your backups are working. You should check these regularly or even setup an alerting system for when they fail.
  • Make sure your backups are sensible. Should they be running more frequently?
  • Pull your backups onto different servers so that you can get at them if the server goes offline