Azure, DevOps, Node.Js, Technical

Hosting personal projects on low-cost dedicated servers, not the cloud

For personal projects, the cloud is probably too expensive.

I say this as a person who previously hosted all of their personal projects on the Azure cloud platform. I had multiple app engines inside a single Azure App Service, which worked pretty well. The main advantage of running anything in the cloud is that you don’t have to worry about server maintenance, and, if used correctly, it can be cheaper for projects over a certain size. I had a few reasons for getting off the cloud:

Restricted Tech stack

Generally, a cloud service will restrict you to running some specific technologies. For example, a specific version of Node.js. Even if your cloud service supports the technology you want to use, finding out if it supports the version you want to use is not easy. Right now, if you want to know which versions of Node.js Azure supports, your best place to get that answer is Stack Overflow, even though this information should be prominently displayed in the documentation.

The cloud offer is fast moving and hard to keep up with

The offer from cloud service providers changes a lot. I actually had to look back over my previous blog post detailing how I hosted my projects in the cloud to remind myself how it all worked. The problem is more acute on AWS, where the naming is so wacky that someone maintains a page where the names of AWS products are translated into useful plain English. Elastic Beanstalk, anyone?

There is also a danger of just choosing the wrong cloud product, like the guy who used Amazon Glacier (real name) storage and ended up racking up a $150 bill for a 60 GB download.

You need to learn the platform and not the technology

The argument often put forward by cloud advocates is that you get to focus on building your app rather than deploying it or worrying about the infrastructure. This is absolutely true for more trivial apps.

However, we all know that it is rare for a useful application to be this flat. If you want to connect your application to another service to store data or even just files, you’ll need to learn what that service might be on that platform, and then you’ll need to learn that platform’s API in order to program against it. Generally, once you’ve got that service wired in, you’ll get some benefits, but the upfront effort is a cost that should be taken into account.

Run projects for less on physical servers

Newsflash – physical servers can be very cheap, especially used ones. Helpfully someone on reddit put together a megalist of cheap server providers. Below is a list of the main providers that are worth checking out in Europe:

  • Kimsufi – based in France. Cheap used servers. Limited availability. No gigabit connections
  • – based in France, cheap line of “dediboxes”
  • OneProvider – based in France, but have servers located globally

As of writing, you can get a gigabit-connected, 4 core, 4 GB RAM server with OneProvider for €8 a month. Whilst comparing the cost of this hardware on a cheap provider to a cloud provider would be silly (you’re only supposed to deploy the hardware you need in the cloud), as soon as you run two web apps on an €8 box, you’re saving money.

I’ve got one server with Kimsufi, and one with I’ve had one very small outage on Kimsufi in the 2 years that I’ve had a box there. The box was unavailable for around 10 minutes before Kimsufi’s auto monitoring rebooted the box.

I’ve had a single lengthier outage in the 18 months I’ve used which led to a server being unavailable for about 5 hours. This was a violation of their SLA and meant that I got some money back. I detailed this outage in a previous post – “Lessons Learned from a server outage”.

Run whatever you want on a dedicated server

When you aren’t running inside of a neatly packaged cloud product, you have full ownership of the server and can run whatever you want. The downside of this is that there is more responsibility, but in my opinion the pros outweigh the cons, such as not having to wait for some service to become available. (Example – GCS doesn’t support HTTP to HTTPS redirection; I’m guessing you need to buy another service for that, whereas on your own box it’s a few simple lines of config in nginx.)
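For comparison, here’s roughly what that redirect looks like in nginx – a sketch assuming a site called example.com (a placeholder) with HTTPS already set up on a separate server block:

```nginx
# Redirect all plain-HTTP traffic to HTTPS.
# "example.com" is a placeholder domain for this sketch.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```

A permanent (301) redirect like this is the usual choice so that browsers and search engines remember the HTTPS address.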

Being able to run whatever you want also opens the door for you to be able to play around with more technology without having to worry about an increase in cost. Running my own dedicated boxes has been fun, and I’ve learned a lot by doing so.

Node.js, Performance

Running Node.js in production using all of the cores

Did you know that the JavaScript environment is single threaded? This means that in the Node.js world, the main event loop runs on a single thread. I/O operations are handled off the main thread, but if you are doing a CPU intensive operation on the main thread (try not to do this), you can get into problems where your server stops responding to requests.
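A minimal sketch of this problem: `busyWait` below is a hypothetical helper that spins the CPU synchronously, and while it runs, a timer that is already due cannot fire.

```javascript
// Demonstrates how synchronous CPU work blocks Node's single-threaded event loop.
// busyWait is a hypothetical helper that spins the CPU for `ms` milliseconds.
function busyWait(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // Synchronous work: the event loop cannot process anything else here.
  }
}

let timerFired = false;
setTimeout(() => { timerFired = true; }, 10); // due in 10 ms

busyWait(100); // blocks the main thread for ~100 ms

// The timer was due 90 ms ago, but its callback still hasn't run,
// because the event loop never got a chance to process it.
console.log(timerFired); // prints "false"
```

The same thing happens to incoming HTTP requests: they queue up behind whatever synchronous work the main thread is doing.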

The proper way to get around this is by programming in a way that is considerate of how the Node.js/JavaScript runtime works.

However, you should also be running your Node.js application in production with this in mind to get the absolute best performance that you can. To get around this limitation, you need to do some form of clustering.

Production Clustering

You can run your Node.js app in production by using the “node” command, but I wouldn’t recommend it. There are several Node.js wrappers out there that will manage the running of your Node.js app in a much nicer, production grade manner.

One such wrapper is PM2.

In its most basic form, PM2 (which stands for Process Manager 2) will keep an eye on your Node.js app for any crashes, and will attempt to restart the process should it crash. Crucially, it can also be configured to run your Node.js app in a clustered mode, which will enable us to take advantage of all of the cores that are available to us.

PM2 can be installed globally via npm:

npm install pm2 -g

Helpfully, we don’t need to change any of our application code in order to have PM2 cluster it.
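Instead of passing flags on the command line every time, PM2 can also read this configuration from an ecosystem file. A minimal sketch, where the app name and script path are assumptions to adjust for your own project:

```javascript
// ecosystem.config.js — a minimal PM2 configuration sketch.
// The app name and script path below are placeholders.
const config = {
  apps: [
    {
      name: "my-app",       // hypothetical app name
      script: "./app.js",   // hypothetical entry point
      instances: 0,         // 0 = one worker per CPU core
      exec_mode: "cluster", // run in PM2's clustered mode
    },
  ],
};

module.exports = config;
```

With this file in place, `pm2 start ecosystem.config.js` launches the app with the same clustering behaviour as the command-line flags described below.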

How many Workers?

PM2 has an optional argument, -i, which is the “number of workers” argument. You can use this argument to instruct PM2 to run your Node.js app in an explicit number of workers:

pm2 start app.js -i 4

Here, 4 is the number of worker processes that we want to launch our app in.

However, I prefer to set the “number of workers” argument to 0, which tells PM2 to run your app in as many workers as there are CPU cores:

pm2 start app.js -i 0

Et voilà!