hardware, Technical

Keyboard: Cherry KC 6000 review

Cherry KC 6000 Keyboard

I prefer chiclet keyboards. I haven't done any scientific analysis, but I'm confident that my words per minute typing is higher when I'm using a keyboard that has chiclet keys. Amongst developers this is an unpopular opinion, with many developers preferring mechanical keyboards (don't be that guy smashing a mechanical keyboard in an open plan office!).

Chiclet keyboards have keys that do not need to depress as far in order to register. They also have an evenly sized gap between each key, making it more difficult for you to fumble the keyboard and hit the wrong key. They are typically found on laptop keyboards.

With that in mind, I needed a new keyboard to replace the one I take into client offices and leave there whilst on a contract. I was previously using an old, fairly standard Dell USB keyboard that was becoming embarrassingly tatty – most of the key letters were completely worn off. It also seemed to be forever caked in a layer of dirt that no amount of cleaning could remove.

My requirements were fairly simple:

  • It must be comfortable. This will be used for long periods of time (6+ hours a day) and I’ve found myself experiencing some discomfort in the later hours when using my bog standard keyboard.
  • It must be USB – I understand the need for a wireless mouse, but a wireless keyboard is an unnecessary luxury for my day to day desk based work. Also, the less interaction with Bluetooth, the better.
  • It must not be garish – I don't want to demo things to clients and have my keyboard be a huge distraction because it is giving off a luminous glow.
  • It must have chiclet keys for the productivity and preferences outlined above.
  • It must have a numeric keypad (it does make me wonder how developers and other creatives work for large amounts of time without a numeric keypad)

The keyboard I found that fits the above is the Cherry KC 6000, which is priced nicely at £35 on Amazon (the linked product incorrectly claims to be the Cherry KC 600, but this is just a typo – the Cherry KC 600 does not exist, and having taken delivery of this item, I can confirm that it is indeed the Cherry KC 6000).

On the whole, I am very happy with this keyboard, and would give it 4 out of 5 stars:

Pro: Super comfortable to type on

Cherry KC 6000 Keyboard

This is by far the most important factor on any keyboard! The keys have a really nice weight and feel to them, and typing for a long amount of time on this keyboard is comfortable and does not result in any straining pains that I would sometimes get on my previous bog standard keyboard.

Cherry KC 6000 Keyboard Chiclet keys

Pro: It is aesthetically pleasing and not too garish

This keyboard has no crazy backlighting and comes in two fairly neutral colours – a silver body with white keys, or a black body with black keys. Some may view the lack of backlighting as a negative, but this isn't a problem for me. I don't type in the dark as I don't have the eyes for it, and I touch-type.

Pro: It has a slim, low profile

This helps with having a general feel of neatness on your desk. The keyboard has only moderate bezels and has no ridges where dust and other crap can get stuck.

Con: The F11 and F12 keys are not directly above the backspace button

Cherry KC 6000 Keyboard function keys

This is a small irritation as it just takes some getting used to. On most keyboards, the function keys are laid out in banks of 4, with a bigger space between every 4th function key. This space is gone on the Cherry KC 6000, and the saved space is given to two additional buttons – one to open your default browser, and another to lock your machine. I don’t mind having these extra buttons, but annoyingly they are right above the backspace key, so it will take some getting used to not being able to naturally travel to the F11 key to go fullscreen, or the F12 key to open my Guake terminal.

Con: There is a backspace key in the numeric keypad

Cherry KC 6000 Keyboard numeric pad

Again, this is another one of those small things that will take you a day or so to get used to. You’d normally only find one backspace key on a keyboard and would not expect to have one on the numeric pad. This one is positioned where the minus key normally is, so I’ve found myself accidentally deleting characters rather than putting in the minus character a few times.

Other reviews online think that the keyboard is not banked enough towards the user (most keyboards have legs that let you angle them up or down). The keyboard did initially look a little flat on my desk when I first set it up, but I've found that it has not impacted my typing at all.

Conclusion – a productivity win!

To conclude, I'm happy with the Cherry KC 6000 keyboard. It has made me more productive, and is comfortable to use for long typing stints (think 6+ hours of programming!).

Books

2018 in books

2018 is the year that I got back into reading. Here is a list of some of the non-fiction books that I have read throughout the year. I've started the list with the tech books, then put the social reads toward the end of the list.

American Kingpin: Catching the Billion-Dollar Baron of the Dark Web by Nick Bilton

Fascinating re-telling of the story of the infamous dark web black market website, the Silk Road. It covers the development of the Silk Road, how it came to exist, who was behind it, how it ran, and how it got taken down. The amount of money the Silk Road made at its peak was incredible!

How Music Got Free: What happens when an entire generation commits the same crime by Stephen Witt

This book is awesome! A great retelling of late 90s and early 2000s music piracy crews, who was behind them, how they came to prominence and how they ultimately got caught. It also covers the format wars of digital music, and how the MP3 came to dominate. This is well worth a read for all of you techies and will take you back to the Napster days!

Bad Blood: Secrets and Lies in a Silicon Valley Startup by John Carreyrou

You might have heard of Theranos and its strange CEO, Elizabeth Holmes. The now defunct startup was built on lies, and ended up collapsing like a house of cards, having burned through $750 million from duped investors. Theranos claimed to have groundbreaking technology that could run hundreds of blood tests on a single pin prick of blood. In reality, they could only produce unreliable results from a vial of blood. The facade continued for years, with Elizabeth Holmes managing to persuade several investors to value Theranos at over a billion dollars. A truly 5 star read.

Flash Boys by Michael Lewis

The author behind The Big Short took his investigative skills to the murky world of high frequency trading. The rise of high frequency trading ties in with the rise of the internet age, and led to stock trading companies spending millions on faster connections, and even on having their servers physically located closer to certain machines in data centres in order to make a connection a fraction faster. A brilliant read, with a great mix of scandal and technology.

Disrupted: Ludicrous Misadventures in the Tech Start-up Bubble by Dan Lyons

Hilarious and alarming account of a respected journalist who spent over a year working at a startup, HubSpot. HubSpot sounds like a toxic place to work, with regular firings (known within the cult of HubSpot as “Graduations”), non-existent onboarding, and a self-cultivated cult of personality around its leaders. Oh, and its product really isn't innovative. HubSpot is a real company and is still trading.

Outliers: The Story of Success by Malcolm Gladwell

Is Bill Gates really an extraordinary individual, or did his circumstances make him an extraordinary individual? Are elite sports professionals really the best in their peer group, or is it just that the month of the year that they were born in meant that they were physically more developed than their peers? Gladwell does a great job of exploring the above and several other outliers, and this is a great read for the curious mind. I’d highly recommend picking up this book, and it’ll probably be cheap as it was first published a decade ago!

PostCapitalism: A Guide to Our Future by Paul Mason

I didn't really enjoy this read. Let me preface this by stating that I like Paul Mason and that I find him insightful. This book starts off well with an analysis of previous industrial and technological eras, how they've progressed, and where we are now – genuinely interesting stuff. What I found tricky to follow was just how heavy this book got when it descended into economic theory, backed up with not so brilliantly annotated graphs. Anyway, the tl;dr is that we're all a bit fucked because globalisation.

Enough by John Naish

A book in part about minimalism and in part about consumerism and when to stop. The book does make you think about where your constant drive to own more stuff comes from, and also covers how technology might be playing a big role in pushing our desire for more. A pretty good read but could have been a little more concise.

Natives: Race and Class in the Ruins of Empire by Akala

Excellent, well worth a read. A comprehensive look at our mindset towards race and class in Britain and where it comes from, and what the future might hold. This is one of a few intellectual commentaries on race in the UK that I have read this year.

The Good Immigrant by Nikesh Shukla

Another great read. This isn’t really by Nikesh Shukla, but is instead a collection of essays by several well known minorities in the UK. My favourite essay in the book was by Riz Ahmed, where he discusses how his career as an actor and going to auditions helped him deal with the special attention he gets at airport security.

Brit(ish): On Race, Identity and Belonging by Afua Hirsch

Another great intellectual commentary on how we look at race and identity in modern Britain. It also captures a lot of the identity self questioning that many mixed race people experience. This is one of my personal favourites of 2018.

This is Going to Hurt: Secret Diaries of a Junior Doctor by Adam Kay

This first hand account of the first 5 years of a junior doctor’s career in the NHS is a gripping read. I ploughed through this book in about 3 days. It’s very easy to read, but is also compelling with its mixture of funny and incredibly sad accounts of the experiences of a junior doctor.

Mental – Bad Behaviour, Ugly Truths and the Beautiful Game by Jermaine Pennant

This is the only sports related biography I read in 2018. I picked it up because of genuine curiosity about the famous story of footballer Jermaine Pennant forgetting that he had left a car at a train station in Spain, and it staying there, running, in the car park, for a week before running out of fuel. Predictably, Pennant plays this whole episode down and claims that he did forget that the car was there as he was rushing back to the UK. However he does strongly claim that he did not leave the engine running. Immediately after this assertion, in the next paragraph, is a statement from Pennant’s agent that contradicts this: “I know for a fact that he left that car running”.

Grime Kids: The Inside Story of the Global Grime Takeover by DJ Target

One of several books covering the rise of Grime music that were released in 2018. This is a good coverage of some of the early artists in grime music emerging out of East London. Whilst it does not cover all grime artists, it gives you a good overview of some of the original members of grime collectives Pay As You Go and Roll Deep. A good read if you want to know more about Grime music.

Azure, DevOps, Node.Js, Technical

Hosting personal projects on low cost dedicated servers, not the cloud

For personal projects, the cloud is probably too expensive.

I say this as a person that previously hosted all of their personal projects on the Azure cloud platform. I had multiple web apps running inside a single Azure App Service plan, which worked pretty well. The main advantage of running anything in the cloud is that you don't have to worry about server maintenance and, if used correctly, it can be cheaper for projects over a certain size. I had a few reasons for getting off the cloud:

Restricted Tech stack

Generally, a cloud service will restrict you to running some specific technologies, for example a specific version of Node.js. Even if your cloud service supports the technology you want to use, finding out if it supports the version you want to use is not easy. Right now, if you want to know which versions of Node.js Azure supports, your best place to get that answer is StackOverflow, even though this information should be prominently displayed in the documentation.
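For what it's worth, the usual way to declare which Node version your app expects is the engines field in package.json, which hosted platforms generally key off – a minimal sketch (the version number is just an example; at the time of writing Azure App Service also exposed a WEBSITE_NODE_DEFAULT_VERSION app setting for the same job):

{
  "engines": {
    "node": "8.9.4"
  }
}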

The cloud offer is fast moving and hard to keep up with

The offer from cloud service providers changes a lot. I actually had to look back over my previous blog post detailing how I hosted my projects in the cloud to remind myself how it all worked. The problem is more acute on AWS, where the naming is so whacky that someone maintains a page where the names of AWS products are translated into useful plain English.  Elastic Beanstalk, anyone?

There is also a danger of just choosing the wrong cloud product, like the guy that used Amazon Glacier (real name) storage and ended up racking up a $150 bill for a 60GB download.

You need to learn the platform and not the technology

The argument often put forward by the cloud guys is that you get to focus on building your app and not on deploying it or worrying about the infrastructure. This is absolutely true for more trivial apps.

However we all know that it is rare for a useful application to be this flat. If you want to connect your application into another service to store data or even just files, you’ll need to learn what that service might be on that platform, and then you’ll need to learn that platform’s API in order to program against it. Generally, once you’ve got that service wired in, you’ll get some benefits, but the upfront effort is a cost that should be taken into account.

Run projects for less on physical servers

Newsflash – physical servers can be very cheap, especially used ones. Helpfully, someone on Reddit put together a megalist of cheap server providers. Below is a list of the main providers that are worth checking out in Europe:

  • Kimsufi – based in France. Cheap used servers. Limited availability.  No gigabit connections
  • Online.net – based in France, cheap line of “dediboxes”
  • OneProvider – based in France, but have servers located globally

As of writing, you can get a gigabit-connected, 4-core, 4GB RAM server with OneProvider for €8 a month. Whilst comparing the cost of this hardware on a cheap provider to a cloud provider would be silly (you're only supposed to deploy the hardware you need in the cloud), as soon as you run two web apps on an €8 box, you're saving money.

I’ve got one server with Kimsufi, and one with Online.net. I’ve had one very small outage on Kimsufi in the 2 years that I’ve had a box there. The box was unavailable for around 10 minutes before Kimsufi’s auto monitoring rebooted the box.

I've had a single lengthier outage in the 18 months I've used Online.net, which led to a server being unavailable for about 5 hours. This was a violation of their SLA and meant that I got some money back. I detailed this outage in a previous post – “Lessons learned from a server outage”.

Run whatever you want on a dedicated server

When you aren't running inside of a neatly packaged cloud product, you have full ownership of the server and can run whatever you want. The downside of this is that there is more responsibility, but in my opinion the pros outweigh the cons, such as not having to wait for some service to become available (example – GCS doesn't support http to https redirection, and I'm guessing you need to buy another service for that, yet it is just a few simple lines of config in nginx).
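For context, the sort of nginx config I mean is just this (a minimal sketch – the server name is a placeholder):

server {
    listen 80;
    server_name example.com;

    # Send all plain http traffic over to https
    return 301 https://$host$request_uri;
}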

Being able to run whatever you want also opens the door for you to be able to play around with more technology without having to worry about an increase in cost. Running my own dedicated boxes has been fun, and I've learned a lot by doing so.

React

Adding React to an existing web application and build pipeline

In order to brush up on my React knowledge, in this post I’m going to outline how I added it into the admin tools of links.edspencer.me.uk.

This walk through will have an emphasis on not just being a hello world application, but will also focus on integrating everything into your existing build pipeline – so you’ll actually be able to run your real application in a real environment that isn’t your localhost.

I’m writing this up because I felt the setup of React was a bit confusing. You don’t just add a reference to a minified file and off you go – the React setup is a lot more sophisticated and is so broken down and modular that you can swap out just about any tool for another tool. You need a guide just to get started.

We’ll be going along the path of least resistance by using the toolchain set that most React developers seem to be using – Webpack for building, and Babel for transpiling.

The current application

The current application works on purely server side technology – there is no front end framework currently being used. The server runs using:

  • NodeJs
  • ExpressJs
  • The Pug view engine

The build pipeline consists of the following:

  • A docker image built on top of the node 6.9.4 image, with some additionally installed global packages
  • An npm install
  • A bower install for the front end packages
  • A gulp task build (to compile the SCSS, and minify and combine the JS and CSS)

Step 1 – moving to Yarn

You can get your React dependencies through npm, but I figured that moving to Yarn for my dependency management was a good move. Aside from it being suggested in most tutorials (which isn't a good enough reason alone to move):

  1. Bower is not deprecated, but the Bower devs recommend moving to Yarn
  2. The current LTS version of Node (8.9.4) ships with npm 5, which has some checksum problems that might cause your builds to fail
  3. Yarn is included in the Node 8.9.4 docker image

Hello node 8.9.4

While we’re here we may as well update the version of node that the app runs under to 8.9.4, which is the current LTS version.

In dockerized applications, this is as easy as changing a single line of code in your docker image:

FROM node:6.9.4

Becomes:

FROM node:8.9.4

Goodbye Bower

Removing Bower was easy enough. I just went through the bower.json file, and ran yarn add for each item in there. This added the dependencies into the package.json file.

The next step was then to update my gulp build tasks to pull the front end dependencies out of the node_modules folder instead of the bower_components folder.
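To illustrate the sort of change involved (the package and paths here are hypothetical, not my actual build), it was mostly a matter of swapping the source globs in the gulp task:

// Before: front end libraries came from bower_components
// gulp.src(['bower_components/jquery/dist/jquery.min.js'])

// After: the same libraries now live in node_modules
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('vendor-scripts', function () {
  return gulp.src(['node_modules/jquery/dist/jquery.min.js'])
    .pipe(concat('vendor.js'))
    .pipe(gulp.dest('public/dist'));
});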

I then updated my build pipeline to not run bower install or npm install, and to instead run yarn install, and deleted the bower.json file. In my case, this was just some minor tweaks to my dockerfile.

Build pipeline updates

The next thing to do was to remove any calls to npm install from the build, and to instead call yarn install.

Yarn follows npm’s package file name and folder install conventions, so this was a very smooth change.

Step 2 – Bringing in Webpack

You have to use a bundler to enable you to write modular React code. Most people tend to use Webpack for this, so that’s the way we’re going to go. You can actually use Webpack to do all of your front end building (bundling, minifying, etc), but I’m not going to do this.

I have a working set of gulp jobs to do my front end building, so we’re going to integrate Webpack and give it one job only – bundling the modules.

Firstly, let’s add the dependency to webpack.

yarn add webpack --dev

Now we need to add an empty configuration file for Webpack:

touch webpack.config.js

We’ll fill out this file later.

Lastly, let's add a yarn task to actually run webpack, which we'll need to run when we make any React changes. Add this to the scripts section of your package.json file:

{
  "scripts": {
    "build-react": "webpack"
  }
}

You may be thinking that we could just run the webpack command directly, but that will push you down the path of global installations of packages. Yarn steers you away from doing this, so by having the script in your package.json, you know that your script is running within the context of your packages available in your node_modules folder.

Step 3 – Bringing in Babel

Babel is a Javascript transpiler that will let us write some ES6 goodness without worrying too much about the browser support. Babel will dumb our javascript code down into a more browser ready ES5 flavour.

Here’s the thing – every tutorial I’ve seen involves installing these 4 babel packages as a minimum. This must be because Babel has been broken down into many smaller packages, and I do wonder if this was a bit excessive or not:

yarn add babel-core babel-loader babel-preset-es2015 babel-preset-react

Once the above babel packages have been installed, we need to wire it up with webpack, so that webpack knows that it needs to run our react specific javascript and jsx files through Babel.

Update your webpack.config.js to include the following.

module.exports = {
 module: {
   loaders: [
     { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
     { test: /\.jsx$/, loader: 'babel-loader', exclude: /node_modules/ }
   ]
 }
}

Note that the webpack.config.js file is not yet complete – we’ll be adding more to this shortly.
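One thing most tutorials gloss over: the loaders above only tell webpack to use Babel – Babel itself still needs to know which presets to apply. That usually lives in a .babelrc file in the project root; a minimal sketch matching the two presets installed earlier:

{
  "presets": ["es2015", "react"]
}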

Step 4 – Bringing in React

We’re nearly ready to write some React. We just need to add a few more packages first, and that’ll be it, I promise.

yarn add react react-dom

This will add the core of React, and another React module that will allow us to do some DOM manipulation for our first bit of react coding. This does feel a bit daft to me. Webpack is sophisticated enough to run through our modules and output exactly what is needed into a single JS file. Why, therefore, do we need to break the install of a server side package down so much, if Webpack is going to grab what we need?

Step 5 – Actually writing some React code

We’re now at the point where we can write some react. So in my application, I’m going to have a small SPA (small is how SPAs should be – if you need to go big, build a hybrid) that will be the admin tool of my web application.

So, in the root of our web app, let’s add a folder named client, and in this folder, let’s add the following two files:

client/admin.js

import React from 'react';
import ReactDOM from 'react-dom';
import App from './admin-app.jsx';

ReactDOM.render(<App />, document.getElementById('react-app'));

client/admin-app.jsx

import React from 'react';

export default class App extends React.Component {
  render() {
    return (
      <div style={{textAlign: 'center'}}>
        <h1>Hello World - this is rendered by react</h1>
      </div>
    );
  }
}

The above two files aren't doing a lot. The JSX file is declaring our App component, and the JS is telling ReactDOM to render it into an html element with the id “react-app”.

Updating our webpack config

Now we should complete our webpack config to reflect the location of our React files. Update webpack.config.js so that the entire file looks like this:

const path = require('path');
module.exports = {
  entry:'./client/admin.js',
  output: {
    path:path.resolve('dist'),
    filename:'admin_bundle.js'
  },
  module: {
    loaders: [
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
      { test: /\.jsx$/, loader: 'babel-loader', exclude: /node_modules/ }
    ]
  }
}

We're telling webpack where the entry point of our React application is, and where to output the bundled code (dist/admin_bundle.js).

Note we're also leaning on the path module to help us with some directory walking; path is built into Node, so there is nothing extra to install.

Now, let’s go ahead and ask webpack to bundle our React app:


yarn run build-react

Now, if everything has worked as expected, webpack will have generated a bundle for us in dist/admin_bundle.js. Go ahead and take a peek at this file – it contains our code as well as all of the various library code from React that is needed to actually run our application.

Step 6 – Plugging the bundled react code into our application

As this SPA is only going to run in the admin section, we need to add two things to the main page in the admin tool that we want this to run on.

In my case, I’m using the awesome pug view engine to render my server side html. We need to add a reference to our bundled Javascript, and a div with the id “react-app”, which is what we coded our react app to look for.

Here’s a snippet of the updated admin pug view:

div.container
  div.row
    div.col-xs-12
      h1 Welcome to the admin site. This is rendered by node
  #react-app

script(type="text/javascript", src="/dist/admin_bundle.js")

Alternatively, here’s how it looks in plain old html:

<div class="container">
  <div class="row">
    <div class="col-xs-12">
      <h1>Welcome to the admin site. This is rendered by node</h1>
    </div>
  </div>
  <div id="react-app"></div>
</div>

<script type="text/javascript" src="/dist/admin_bundle.js"></script>

Now all we need to do is to run our application and see if it works:

Et voila!

Step 7 – A little more build cleanup

As previously stated, I've got an existing set of gulp tasks for building my application that I know work and that I'd like to not have to re-write.

We've kept our webpack build separate, so let's now tie it up with our existing gulp task.

Previously, I would run a gulp task that I had written that would bundle and minify all Js and SCSS:

gulp build

But now we also need to run webpack. So, let’s update our package.json file to include some more custom scripts:

"scripts": {
  "build-react": "webpack",
  "build": "gulp build && webpack"
}

Now, we can simply run:

yarn run build

Now we just need to update our build process to run the above yarn command instead of gulp build. In my case, this was a simple update to a dockerfile.

Conclusion

The above setup has got us to a place where we can actually start learning and writing some React code.

This setup is a lot more involved than with other frameworks I've worked with. This is because React is a lot less opinionated about how you do things and about the tooling that you use. The disadvantage to this is that it makes the setup a bit trickier, but the advantage is that you have more freedom elsewhere in the stack.

I do however think that there may be a bit too much freedom, and a little more convention and opinion wouldn't hurt. For example, some sensible install defaults that could be changed later would cover this off. Whilst there may be some Yeoman generators out there, they won't help with integration into existing web applications.

What is interesting is noting the differences between how you build your React applications compared to your AngularJs applications. With AngularJs, you reference one lib and off you go. You may only use a small part of AngularJs, but you've referenced the whole thing, whereas with React (and modern Angular) you have a build process which outputs a file containing exactly what you need.

I think this overall is a good thing, but we should still put this into context and be careful to not quibble over kilobytes.

We also saw how we can integrate the React toolchain into our existing toolchain without throwing everything out.

Next up, I’ll be writing a blog post on what you should do in order to run React in production.

DevOps

Lessons learned from a server outage

At just gone midnight this morning, when I was nicely tucked up in bed, Uptime Robot sent me an email telling me that this very website was not reachable.

I was fast asleep, and my phone was on do not disturb, so that situation stayed as it was until I woke up.

After waking up and seeing the message, I assumed that the docker container running the UI bit of this website had stopped running. Easy, I thought, I'll just start up the container again, or I'll just trigger another deployment and let my awesome GitLab CI pipeline do its thing.

Except this wasn’t the case. After realising, in a mild panic, that I could not even SSH onto the server that hosts this site, I got into contact with the hosting company (OneProvider) for some support.

I sent off a support message and sat back in my chair and reflected for a minute. Had I been a fool for rejecting the cloud? If this website was being hosted in a cloud service somewhere, would this have happened? Maybe I was stupid to dare to run my own server.

But as I gathered my thoughts, I calmed down and told myself I was being unreasonable. Over the last few years, I've seen cloud based solutions in various formats also fail. One of the worst I experienced was with Amazon Redshift, when Amazon changed an obscure setup requirement, meaning that you essentially needed some other Amazon service in order to be able to use Redshift. I've also been using a paid BitBucket service for a while with a client, which has suffered from around 5 outages of varying severity in the last 12 months. In a weird coincidence, one of the worst outages was today. In comparison, my self hosted GitLab instance has never gone down outside of running updates in the year and a half that I've had it.

Cloud based or on my own server, if an application went down I would still go through the same support process:

  • Do some investigation and try to fix
  • Send off a support request if I determined the problem was to do with the service or hosting provider

Check your SLA

An SLA, or Service Level Agreement, basically outlines a service provider's responsibilities to you. OneProvider's SLA states that they will aim to resolve network issues within an hour, and hardware issues within two hours, for their Paris data centre. Incidentally, other data centres don't have any agreed times because of their location – like the Cairo data centre. If they miss these SLAs, they have self imposed compensation penalties.

Two hours had long gone, so whatever the problem was, I was going to get some money back.

Getting back online

I have two main services running off of this box:

  • This website
  • My link archive (links.edspencer.me.uk)

I could live with the link archive going offline for a few hours, but I really didn't want my website going down. It has content that I've been putting on here for years, and believe it or not, it makes me a little beer money each month through some carefully selected and relevant affiliate links.

Here’s where docker helped. I got the site back online pretty quickly simply by starting the containers up on one of my other servers. Luckily the nginx instance on that webserver was already configured to handle any requests for edspencer.me.uk, so once the containers were started, I just needed to update the A record to point at the other server (short TTL FTW).

Within about 20 minutes, I got another email from Uptime Robot telling me that my website was back online, and that it had been down for 9 hours (yikes).

Check your backups

I use a WordPress plugin (updraft) to automate backups of this website. It works great, but the only problem is that I had set this up to take a backup weekly. Frustratingly, the last backup cycle had happened before I had penned two of my most recent posts.

I started to panic again a little. What if the hard drive in my server had died and I had lost that data forever? I was a fool for not reviewing my backup policy more. What if I lost all of the data in my link archive? I was angry at myself.

Back from the dead

At about 2pm, I got an email response from OneProvider. This was about 4 hours after I created the support ticket.

The email stated that a router in the Paris data centre that this server lives in had died, that they were replacing it, and that it would be done in 30 minutes.

This raises some questions about OneProvider’s ability to provide hosting.

  • Was the problem highlighted by their monitoring, or did they need me to point it out?
  • Why did it take nearly 14 hours from the problem occurring to it being resolved?

I’ll be keeping my eyes on their service for the next two months.

Sure enough, the server was back and available, so I switched the DNS records back over to point to this server.

Broken backups

Now it was time to sort out all of those backups. I logged into all of my servers and did a quick audit of my backups. It turns out there were numerous problems.

Firstly, my GitLab instance was being backed up, but the backup files were not being shipped off of the server. Casting my memory back, I can blame this on my Raspberry Pi, which corrupted itself a few months ago and was previously responsible for pulling backups into my home network. I've now set up Rsync to pull the backups onto my RAIDed NAS.

Secondly, as previously mentioned, my WordPress backups were happening too infrequently. This has now been changed to a daily backup, and Updraft is shunting the backup files into Dropbox.

Thirdly – my link archive backups. These weren’t even running! The database is now backed up daily using Postgres’ awesome pg_dump feature in a cron job. These backups are then also pulled using Rsync down to my NAS.
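For illustration, the sort of cron entries involved look something like this (database names, paths and hosts are placeholders):

# On the server: dump the link archive database at 2am every night
0 2 * * * pg_dump -U linkarchive linkarchive | gzip > /var/backups/linkarchive-$(date +\%F).sql.gz

# On the NAS: pull the latest backups down over SSH
0 4 * * * rsync -a backupuser@server:/var/backups/ /volume1/backups/linkarchive/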

It didn’t take long to audit and fix the backups – it’s something I should have done a while ago.

Lessons Learned

  • Build some resilience in. Deploy your applications to multiple servers as part of your CI pipeline. You could even have a load balancer do the legwork of directing your traffic to servers that it knows are responding correctly.
  • Those servers should be in different data centres, possibly with different hosting providers.
  • Make sure your backups are working. You should check these regularly or even setup an alerting system for when they fail.
  • Make sure your backups are sensible. Should they be running more frequently?
  • Pull your backups onto different servers so that you can get at them if the server goes offline.

JavaScript, Performance

Improving performance through function caching in JavaScript

I was recently profiling a single page application using Chrome’s dev tools looking for areas of slowness. This particular app did lots of work with moment.js as it had some complex custom calendar logic. The profiling revealed that the application was spending a lot of time in moment.js, and that the calls to moment.js were coming from the same few functions.

After debugging the functions that were calling into moment.js, it became apparent that:

  • These functions were getting called a lot
  • They were frequently getting called with the same parameters

So with that in mind, we really shouldn't be asking moment.js (or any function) to do the same calculations over and over again – instead we should hold onto the results of those function calls and store them in a cache. We can then hit our cache first before doing the calculation, which will be much cheaper than running the calculation again.

How

So, here is the function that we're going to optimise by introducing some caching logic. All code in this post is written in ES5 style JavaScript and leans on underscore for some utility functions.


function calendarService() {
  function getCalendarSettings(month, year) {

    var calendar = getCurrentCalendar();

    // Just a call to underscore to do some filtering
    var year = _.findWhere(calendar.years, { year: year});

    var month = _.findWhere(year.months, { month: month});

    return month;
  }
}

The above function calls out to another function to get some calendar settings (which was itself fairly expensive) before doing some filtering on the returned object to return something useful.

Creating the cache

Firstly, we need to have a place to store our cache. In our case, storing the results of the functions in memory was sufficient – so let's initialise an empty, service wide object to store our cached data:


function calendarService() {
  var cache = {};

  function getCalendarSettings() {

     ...

  }
}

 

Pushing items into the cache

When we add an item into the cache, we need a way of uniquely identifying it. This is called a cache key, and in our situation there will be two things that will uniquely identify an item in our cache:

  1. The name of the function that pushed the item into the cache
  2. The parameters that the function was called with

With the above in mind, let’s build a function that will generate some cache keys for us:


function getCacheKey(functionName, params) {
  var cacheKey = functionName;

  _.each(params, function(param) {

    cacheKey = cacheKey + '.' + param.toString();

  });

  return cacheKey;
}

 

The above function loops through each parameter passed in as part of the params array, and joins them into a single string, with each part separated by a full stop. This will currently only work with parameters that are primitive types, but you could add your own logic to handle objects that are passed as parameters.

So, if we were to call getCacheKey like this:


getCacheKey('getCalendarSettings', [0, 2017]);

It would return:


'getCalendarSettings.0.2017'

Which is a string, and will be used as a cache key as it uniquely identifies the function called and the parameters passed to it.
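As an aside, if you ever needed to support object parameters (the caveat mentioned above), one option is to serialise them when building the key – a rough sketch:

function getCacheKey(functionName, params) {
  var cacheKey = functionName;

  _.each(params, function(param) {
    // Objects get serialised so they can form part of the key.
    // Note this relies on the object's properties being in a consistent order.
    var keyPart = _.isObject(param) ? JSON.stringify(param) : param.toString();

    cacheKey = cacheKey + '.' + keyPart;
  });

  return cacheKey;
}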

We now have our in memory cache object, and a function that will create us cache keys – so we next need to glue them together so that we can populate the cache and check the cache before running any functions. Let’s create a single function to have this job:


function getResult(functionName, params, functionToRun) {
  var cacheKey = getCacheKey(functionName, params);

  var result = cache[cacheKey];

  if (!_.isUndefined(cache[cacheKey])) {
    // Successful cache hit! Return what we've got
    return result;
  }

  result = functionToRun.apply(this, params);

  cache[cacheKey] = result;

  return result;
}

Our getResult function does the job of checking the cache, and only actually executing our function if nothing is found in the cache. If it has to execute our function, it stores the result in the cache.

Its parameters are:

  • functionName – just a string which is the function name
  • params – an array of parameters that will be used to build the cache key, as well as being passed to the function that may need to be run. The order of these parameters matters and should match the order in which the function that we're trying to cache consumes them
  • functionToRun – this is the actual function that needs to be run.

Our getResult function is now in place. So let’s wire up getCalendarSettings with it:


function getCalendarSettings(month, year) {
  return getResult('getCalendarSettings', [month, year], runGetCalendarSettings);

  function runGetCalendarSettings(month, year) {
    var calendar = getCurrentCalendar();

    // Just a call to underscore to do some filtering
    var year = _.findWhere(calendar.years, { year: year});

    var month = _.findWhere(year.months, { month: month});

    return month;
   }
 }

 

We've now updated getCalendarSettings to call getResult and instead return the result of that function. We're also exploiting JavaScript's function hoisting to use the runGetCalendarSettings function before it has been declared. Our function is now fully wired up with our in memory cache, and we'll avoid repeating computation that has already been completed.

Further improvements

This code could be improved upon by:

  • Only storing copies of results in the cache. If a function returns an object and that gets stored in the cache, we risk mutating the object as we're storing a reference to it. This can be done using underscore's clone function (see the sketch after this list).
  • Having the code evaluate what the calling function’s name is. This would get rid of the need for the functionName parameter.
  • Storing the cache elsewhere. As it’s being held in memory, it’ll get lost on the client as soon as the site is unloaded. The only real option for this is to use local storage, but even then I’d only recommend writing and reading from local storage when the application is loaded and unloaded. If this code is being used on the server, there are a lot more options for storing the cache.
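For the first of those points, a rough sketch of what getResult could do instead (underscore's clone is shallow, so nested objects would still be shared):

function getResult(functionName, params, functionToRun) {
  var cacheKey = getCacheKey(functionName, params);

  var result = cache[cacheKey];

  if (!_.isUndefined(result)) {
    // Hand back a copy so callers can't mutate what's sat in the cache
    return _.clone(result);
  }

  result = functionToRun.apply(this, params);

  cache[cacheKey] = result;

  return _.clone(result);
}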

Full code listing:

 


function calendarService() {
  var cache = {};

  function getCalendarSettings(month, year) {
    return getResult('getCalendarSettings', [month, year], runGetCalendarSettings);

    function runGetCalendarSettings(month, year) {
      var calendar = getCurrentCalendar();

      // Just a call to underscore to do some filtering
      var year = _.findWhere(calendar.years, { year: year});

      var month = _.findWhere(year.months, { month: month});

      return month;
    }
  }

  function getResult(functionName, params, functionToRun) {
    var cacheKey = getCacheKey(functionName, params);

    var result = cache[cacheKey];

    if (!_.isUndefined(cache[cacheKey])) {
      // Successful cache hit! Return what we've got
      return result;
    }

    result = functionToRun.apply(this, params);

    cache[cacheKey] = result;

    return result;
  }

  function getCacheKey(functionName, params) {
    var cacheKey = functionName;
 
    _.each(params, function(param) {
      cacheKey = cacheKey + '.' + param.toString();
    });
 
    return cacheKey;
  }
}

Node.Js, Performance

Running Node.js in production using all of the cores

Did you know that the JavaScript environment is single threaded? This means that in the Node.js world, the main event loop runs on a single thread. IO operations are pushed out to their own thread, but if you are doing a CPU intensive operation on the main thread (try not to do this), you can get into problems where your server stops responding to requests.

The proper way to get around this is by programming in a way that is considerate of how the Node.js / JavaScript runtime works.

However, you should also be running your Node.js application in production with this in mind to get the absolute best performance that you can. To get around this limitation, you need to do some form of clustering.
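To make this concrete, here is roughly what clustering by hand looks like with Node's built in cluster module (illustrative only – PM2, covered below, does the equivalent for you):

// cluster-sketch.js – fork one worker per CPU core
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(() => cluster.fork());

  // Replace any worker that dies so we keep using every core
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, starting another`);
    cluster.fork();
  });
} else {
  // Each worker runs its own copy of the server
  http.createServer((req, res) => res.end('hello')).listen(3000);
}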

Production Clustering

You can run your Node.js app in production by using the “node” command, but I wouldn’t recommend it. There are several Node.js wrappers out there that will manage the running of your Node.js app in a much nicer, production grade manner.

One such wrapper is PM2.

In its most basic form, PM2 (short for Process Manager 2) will keep an eye on your Node.js app for any crashes, and will attempt to restart the process should it crash. Crucially, it can also be configured to run your Node.js app in a clustered mode, which will enable us to take advantage of all of the cores that are available to us.

PM2 can be installed globally via npm:


npm install pm2 -g

Helpfully, we don’t need to change any of our application code in order to have PM2 cluster it.

How many Workers?

PM2 has an optional argument, -i, which is the “number of workers” argument. You can use this argument to instruct PM2 to run your Node.js app in an explicit number of workers:


pm2 start app.js -i 4

Where 4 is the number of workers that we want to launch our app in.

However, I prefer to set the “number of workers” argument to 0, which tells PM2 to run your app in as many workers as there are CPU cores:


pm2 start app.js -i 0

Et voilà!
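As an aside, the same settings can live in a PM2 ecosystem file, which keeps the cluster configuration in version control – a minimal sketch (the app name and script are placeholders):

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'app',
    script: 'app.js',
    instances: 0,         // 0 = one worker per CPU core, the same as -i 0
    exec_mode: 'cluster'  // run the workers behind PM2's built in load balancer
  }]
};

You'd then start it with pm2 start ecosystem.config.js.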

Links

Lets Encrypt

Migrating letsencrypt SSL certificates to another server

If you're seeing this post, you're viewing my website from its new home.

Moving the code and the data needed to run this website was made easy by docker and a WordPress plugin called UpdraftPlus. For additional testing, I just tweaked my local hosts file to simulate the DNS change.

I also needed to move my SSL certs. I could have dropped the site back to plain http and requested the certs again using certbot, but I decided against this as it would be more of a hassle.

You need to move the contents of three directories and one file in order to keep your site running SSL, and so that certbot on the new server is aware of how to renew your website's SSL certs.

1: Copy the folder:

/etc/letsencrypt/live/yoursitedomain.com

2: Copy the folder:

/etc/letsencrypt/archive/yoursitedomain.com

3: Copy the file:

/etc/letsencrypt/renewal/yoursitedomain.com.conf

4: Copy the folder:

/etc/letsencrypt/accounts/acme-v01.api.letsencrypt.org/directory/someguiddirectoryname
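If it helps, here's roughly how the copy can be done over SSH with rsync (hostnames are placeholders – the -a flag keeps permissions and the symlinks that the live folder uses to point into archive):

rsync -a /etc/letsencrypt/live/yoursitedomain.com root@newserver:/etc/letsencrypt/live/
rsync -a /etc/letsencrypt/archive/yoursitedomain.com root@newserver:/etc/letsencrypt/archive/
rsync -a /etc/letsencrypt/renewal/yoursitedomain.com.conf root@newserver:/etc/letsencrypt/renewal/
rsync -a /etc/letsencrypt/accounts/ root@newserver:/etc/letsencrypt/accounts/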

LetsEncrypt community discussion thread:

Moving and merging certs from server A to server B

Technical, Wordpress

Running WordPress in production – Security and Speed

WordPress gets a fair share of bad press.

Most of this bad press is centred around security concerns. Many of these concerns are valid, but they need not worry you if you are intending to run WordPress in production. You just need to be a responsible webmaster. In this post I'll list out some tips that will make your WordPress install robust and fast.

1. Keep your WordPress Install up to date

This is the most important security concern that you need to have. WordPress even makes updating your install super easy. You don't need to log into any servers, you just need to log in to the admin tool and head over to the dedicated updates page. From there you can simply press a button to get all of your plugins and WordPress itself updated. Once your install is fully updated, you'll see a nice clean page like this, telling you that there is nothing to update:

WordPress update screen telling me I’m fully updated

In the same way that you keep your laptop or PC up to date, you should be keeping your WordPress install up to date.

2. Install WordFence

WordFence is a popular security plugin that will offer you some protection against trending attacks. It’s a plugin that you should install, but it does not absolve you of all of your security responsibilities. You should still be regularly updating your server’s OS and any libraries installed on it.

3. Use Akismet

Akismet is a very popular WordPress plugin that is essential if you allow commenting on your website through WordPress. Akismet will block shedloads of spam posts to your site, and you won't even need to look at them. I don't trust 3rd party services like Disqus, so this was an essential plugin for me.

4. Back your shit up

WordPress has a few moving parts. Some of those parts are held in files, others in a MySQL database. You could periodically back these two up manually, or you could take advantage of one of the great WordPress plugins that you can use to automate your backups and make it super easy. An excellent one is Updraft Plus. This plugin can be set to regularly backup your entire WordPress site and can even store the backups in a cloud file service like Dropbox.

5. Install a caching plugin

A cache plugin will improve the load speed of your site. It will save database calls, and will instead pull data directly from memory. A popular plugin is WP Super Cache. And remember, a quick load time can mean that search engines rank you higher, and your visitors will love you.

6. Install an image compressing plugin

Again, this will give you a speed advantage and will save on your bandwidth use. A popular plugin is WP Smush. This plugin can be set to batch compress all images in your site, and can be used to compress images as and when they are added to your site.

7. Minify your JavaScript and CSS

How you've built your WordPress site will affect how you go about this. If you have customised your WordPress templates or made your own theme, you should introduce a step in your build process to bundle and minify your JS and CSS.
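As a rough illustration of what that build step can look like with gulp (theme paths are placeholders; the CSS side would follow the same pattern with a minifier such as gulp-clean-css):

// gulpfile.js – illustrative only
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('theme-js', function () {
  return gulp.src('wp-content/themes/my-theme/js/*.js')
    .pipe(concat('theme.min.js'))   // bundle into one file
    .pipe(uglify())                 // minify it
    .pipe(gulp.dest('wp-content/themes/my-theme/dist'));
});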

If you are just using a 3rd party theme that you haven't customised a lot, you should grab a plugin to bundle and minify your JavaScript and CSS. A plugin that I've had some success with is Better WordPress Minify. You may have to tweak its settings slightly to make sure it doesn't break any of your other plugins that are rendered out on the UI (e.g. a source code highlighter plugin).

8. Use the latest version of jQuery

The standard install of WordPress doesn't use the latest version of jQuery. Depending on the users that you'd like to support, you may want to update to the latest version of jQuery. You can do this in your build process, or you can do this with a plugin, like jQuery Updater.

 

Broadband, Consumer

The current state of broadband and mobile data in the UK

I've always watched the broadband and mobile markets in the UK, largely from a consumer point of view. This has mainly been about getting the fastest internet access at the lowest price.

Market Competition

Over the last decade or so, we saw a worrying trend in the Broadband market – we lost a lot of competition. This happened as a few bigger corporations entered the broadband market and consolidated their market share by buying up and closing smaller and often very good broadband operators.

Remember Bulldog internet? Well, they got eaten by TalkTalk. Remember BE internet? They got eaten by O2. Who then got eaten by Sky.

Bulldog and BE internet were once very well regarded and popular internet providers. I'll let you do your own research on what TalkTalk and Sky's customers currently think of them.

Over the last year or so, this trend has reversed a bit, and we’ve had a few of the newer entrants trying to push themselves in, for example, EE and Vodafone.

Want a Broadband and Mobile combo? Get stuffed.

EE and Vodafone are mobile network operators, and that is where they do the majority of their business. Both offer some fairly competitive broadband packages, but for some odd reason choose not to bundle anything else with them. So two massive companies that offer mobile phone and internet services don't offer any packages that link the two. Huh?

I cannot understand why they would not do this. Consumers would benefit from getting better deals, and EE and Vodafone would benefit by getting customers that were more embedded into their services. The broadband and mobile services offered by these businesses are essentially treated as two separate entities. When I couldn’t find any combined broadband and mobile deals online, I reached out to their online sales staff. Both EE’s and Vodafone’s sales responded with “You’re talking to broadband sales, I can’t help you with mobile sales”.

I eventually reached out to both companies on Twitter – EE actually will throw 5 gigs of data onto your phone package, but that isn't great for someone like me – and they don't actually shout about that offer anywhere.

EE – you are missing a trick. Vodafone – you are missing a trick. Get some packages that link the two and train your staff on all consumer products. Don’t treat your broadband and mobile offer as two totally different things. As a potential customer, don’t bounce me between departments if I want to talk about buying broadband and mobile.

In Europe, many people use the same provider for their TV, broadband, and family mobile packages. There is no reason why this sort of offer wouldn’t be as popular in the UK.

So what about mobile data?

So, we’ve now got a pretty good 4G network up and down the country – however unlimited mobile data packages have become rare and expensive.

I'm currently on an old Three unlimited data package. It costs me £23 a month. If I wanted to take out that package now, it would cost me £30. When I took my package out, it was one of the most expensive. It's now one of the cheapest.

Worryingly, Three are now traffic shaping and chipping away at net neutrality by offering up packages that have data limits, but let you access some services in an unlimited fashion. They call it “Go Binge”, and claim that it offers you access to Netflix and some other smaller TV streaming services. It is treated as an option on mobile packages:

I'd rather they were just in the business of offering up data, not offering up *some* data. Also, this is starting to look like some of the mobile phone contracts offered up in countries where there are no net neutrality laws.

Facebook, Twitter and WhatsApp are only unlimited on certain packages

Currently no one offers unlimited data except for Three – and that'll cost you £33 a month.

To conclude

Data has gotten more expensive on mobiles. We've got more big companies offering broadband, but they aren't using their significant market presence in other areas to offer up better deals.