Technical

Now serving over http/2

http/2 is well and truly here, and is even partially supported by Internet Explorer 11. Check out the Can I Use page for http/2. It’s time to stop bundling your JavaScript and stylesheets into single files without consideration.

Web server support for http/2 is also good, with NGINX having supported it in its core install for a few years now.

A big http/2 advantage is that you will not run into http 1.1’s limits on the maximum number of parallel connections to a single domain. These vary between browsers, with Chrome supporting 6 connections by default (each browser install can be manually configured to change this number, although I doubt many people do).

The http 1.1 way

Let’s have a think about what happens when we request an imaginary webpage – edspencer.me.uk/test-page.html – over http 1.1.

Take a browser that supports a maximum of 6 connections at a time, and imagine our test-page.html references 9 external assets in its header, all hosted on the same domain.

  • edspencer.me.uk/stylesheet1.css
  • edspencer.me.uk/stylesheet2.css
  • edspencer.me.uk/stylesheet3.css
  • edspencer.me.uk/scripts1.js
  • edspencer.me.uk/scripts2.js
  • edspencer.me.uk/scripts3.js
  • edspencer.me.uk/image1.png
  • edspencer.me.uk/image2.png
  • edspencer.me.uk/font1.woff2

What is going to happen here? Well, assuming that our cache is empty, the first 6 referenced files will start downloading immediately. Anything after the first 6 will be queued until one of the 6 download slots frees up, which happens whenever a download completes.

A simplified analogy would be that you’re queuing at a checkout, and there are 6 tills staffed by 6 operators. A maximum of 6 people can be served at the same time.

This also happens for Ajax requests to the same domain, which must also form an orderly queue, with a maximum of 6 going over the wire at the same time.

There were a few workarounds for this in the http 1.1 world. One was to combine your assets into bundles, so in our above example, our 3 stylesheets would become a single stylesheet, and our 3 JavaScript files would become a single .js file. This reduces the number of request slots needed by 4.

Another way would be to serve your assets from different domains in order to bypass the 6 connections per domain limit. For example, having your images served from images.edspencer.me.uk and your stylesheets from styles.edspencer.me.uk.

Both of these techniques worked well under http 1.1, but had their downsides.

Downside 1 – Serving styles and JavaScript for entire areas of a website that a client may never access

Imagine I have a whole section in my web application that is only for users with admin access. If I bundle all application styles and scripts into two respective files, I’m burdening clients that will never access the admin tools with the code needed to run them. Their experience of my website would improve if I served them a smaller set of assets that did not include the admin-only code.

Downside 2 – Maintaining web infrastructure for serving from multiple subdomains

Setting up subdomains requires webserver and DNS configuration. I also then need to work out how I’m going to get the web application’s static assets onto their relevant subdomains. It’s a lot of effort.

The http/2 way

With http/2, you don’t need to bundle anymore; you can instead split and serve your web app as multiple files without worrying about blocking one of those limited download slots. This is largely down to improvements in the protocol’s transport: requests are multiplexed over a single connection, and the spec recommends that implementations allow at least 100 concurrent streams.

Splitting your bundles more logically, instead of bundling everything into one file, will result in more, smaller bundles being sent to the client. For example, you could have a bundle that contains the JavaScript for the admin pages of a web app, and it will only get served to the client should they land on an admin page.

If someone visits your web app and only lands on the homepage, you don’t need to serve them with the code needed to run the admin pages of your web app.

Enabling http/2 on NGINX

Setting up http/2 on NGINX is trivial, with the only change needed being the addition of “http2” to the listen directive for your site:

server {
  listen 443 ssl http2;
  ...
}

All you then need to do is restart NGINX, and you’re good to go. You can test this out by taking a peek at the network tab in the developer tools of your preferred browser. Note the “h2” in the Protocol column:
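
You can also check from the command line with curl, assuming your build of curl includes http/2 support:

curl -sI --http2 https://edspencer.me.uk | head -n 1

If http/2 was negotiated, the first line of the response will read “HTTP/2 200”.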

The gains

This is a WordPress blog, and whilst I’ve taken steps to improve its performance, I have struggled to get the raw number of assets needed by the site down. After enabling http/2, I got an immediate and significant performance score improvement from Google Lighthouse:

I’ll be migrating this blog to a headless CMS soon which will give me much more control and will hopefully give me a nice score of 100! Watch this space.

Notes

  • http/2 is only supported by browsers over SSL
  • AWS S3 does not support http/2. You will need to put CloudFront in front of it in order to get http/2 support
  • Google Cloud CDN supports http/2 out of the box, without any additional services
Technical, webpack

webpack 4 – How to have separate build outputs for production and development

A pretty common use case is wanting to have different configs for development and production for front end builds. For example, when running in development mode, I want sourcemaps generated, and I don’t want any of my JavaScript or CSS to be minified to help with my debugging.

Webpack 4 introduced a new build flag that is supposed to make this easier – “mode”. Here’s how it’s used:

webpack --mode production

This actually won’t be much help if you want different build outputs. Whilst this flag will be picked up by Webpack’s internals and will produce minified JavaScript, it won’t help if you want to do something like conditionally include a plugin.

For example, say I have a plugin that builds my SCSS. If I want to minify and bundle my generated CSS into a single file, the best way to do it is to use another plugin – OptimizeCssAssetsPlugin. It would be great if I could detect the mode flag at build time, and conditionally include this plugin if I’m building in production mode. The goal is that in production mode, my generated CSS gets bundled and minified, and in development mode, it doesn’t.

In your webpack.config.js file, it’s not possible to detect this mode flag and then conditionally add the plugin based on it. This is because the mode flag can only be used in the DefinePlugin phase, where it is mapped to the NODE_ENV variable. The configuration below will make a global variable “ENV” available to my JavaScript code, but not to any of my Webpack configuration code:

module.exports = {
  ...
  plugins: [
    // Exposes ENV and VERSION as compile-time globals to application code,
    // but not to this webpack config itself
    new webpack.DefinePlugin({
      ENV: JSON.stringify(process.env.NODE_ENV),
      VERSION: JSON.stringify('5fa3b9')
    })
  ]
}
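
To illustrate, application code can then use those globals like this (a sketch – the debug tweak is just an example):

// Anywhere in the application JavaScript bundle:
if (ENV === 'production') {
  // e.g. silence verbose debug output in production builds
  console.debug = function () {};
}
console.log('Build version: ' + VERSION);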

Trying to access process.env.NODE_ENV outside of the DefinePlugin phase will return undefined, so we can’t use it. In my application JavaScript, I can use the “ENV” and “VERSION” global variables, but not in my webpack config files themselves.

The Solution

The best solution, even with webpack 4, is to split your config files. I now have 3:

  • webpack.common.js – contains common webpack configs for all environments
  • webpack.dev.js – contains only the dev config settings (e.g. source map generation)
  • webpack.prod.js – contains only prod config, like the OptimizeCssAssetsPlugin

The configs are merged together using the webpack-merge package. For example, here’s my webpack.prod.js file:

const merge = require('webpack-merge');
const common = require('./webpack.common.js');
const OptimizeCssAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = merge(common, {
  plugins: [
    new OptimizeCssAssetsPlugin({
      assetNameRegExp: /\.css$/g,
      cssProcessor: require('cssnano'),
      cssProcessorPluginOptions: {
        preset: ['default', { discardComments: { removeAll: true } }],
      },
      canPrint: true
    })
  ]
});
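
For comparison, webpack.dev.js can be as small as something like this (a sketch – the real file will have whatever dev-only settings you need):

const merge = require('webpack-merge');
const common = require('./webpack.common.js');

module.exports = merge(common, {
  // Full sourcemaps, and no minification plugins, to help debugging
  devtool: 'inline-source-map'
});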

You can then specify which config file to use when you call webpack. You can do something like this in your package.json file:

{
  ...
  "scripts": {
    "build": "webpack --mode development --config webpack.dev.js",
    "build-dist": "webpack --mode production --config webpack.prod.js"
  }
}

Some further information on using the config splitting approach can be found on the production page in the webpack documentation.

hardware, Technical

Keyboard: Cherry KC 6000 review

Cherry KC 6000 Keyboard

I prefer chiclet keyboards. I haven’t done any scientific analysis, but I’m confident that my words per minute typing is higher when I’m using a keyboard that has chiclet keys. Amongst developers this is an unpopular opinion, with many developers preferring mechanical keyboards (don’t be that guy smashing a mechanical keyboard in an open plan office!).

Chiclet keyboards have keys that do not need to depress as far in order to register. They also have an evenly sized gap between each key, making it more difficult for you to fumble the keyboard and hit the wrong key. They are typically found on laptop keyboards.

With that in mind, I needed a new keyboard to replace the one I take into client offices and leave there whilst on a contract. I was previously using an old, fairly standard Dell USB keyboard that was becoming embarrassingly tatty – most of the key letters were completely worn off. It also seemed to be forever caked in a layer of dirt that no amount of cleaning could remove.

My requirements were fairly simple:

  • It must be comfortable. This will be used for long periods of time (6+ hours a day) and I’ve found myself experiencing some discomfort in the later hours when using my bog standard keyboard.
  • It must be USB – I understand the need for a wireless mouse, but a wireless keyboard is an unnecessary luxury for my day to day desk based work. Also, the less interaction with Bluetooth, the better.
  • It must not be garish – I don’t want my keyboard to be a huge distraction when I’m demoing things to clients because it is letting off a luminous glow.
  • It must have chiclet keys for the productivity and preferences outlined above.
  • It must have a numeric keypad (it does make me wonder how developers and other creatives work for large amounts of time without a numeric keypad)

The keyboard I found that fits the above is the Cherry KC 6000, which is priced nicely at £35 on Amazon. (The linked product listing incorrectly calls it the Cherry KC 600, but this is just a typo – the Cherry KC 600 does not exist, and having taken delivery of this item, I can confirm that it is indeed the Cherry KC 6000.)

On the whole, I am very happy with this keyboard, and would give it 4 out of 5 stars:

Pro: Super comfortable to type on

Cherry KC 6000 Keyboard

This is by far the most important factor on any keyboard! The keys have a really nice weight and feel to them, and typing for a long amount of time on this keyboard is comfortable and does not result in any straining pains that I would sometimes get on my previous bog standard keyboard.

Cherry KC 6000 Keyboard chiclet keys

Pro: It is aesthetically pleasing and not too garish

This keyboard has no crazy backlighting and comes in two fairly neutral colours – a silver body with white keys, or a black body with black keys. Some may view the lack of backlighting as a negative, but this isn’t a problem for me. I don’t type in the dark as I don’t have the eyes for it, and I touch-type.

Pro: It has a slim, low profile

This helps with having a general feel of neatness on your desk. The keyboard has only moderate bezels and has no ridges where dust and other crap can get stuck.

Con: The F11 and F12 keys are not directly above the backspace button

Cherry KC 6000 Keyboard function keys

This is a small irritation as it just takes some getting used to. On most keyboards, the function keys are laid out in banks of 4, with a bigger space between every 4th function key. This space is gone on the Cherry KC 6000, and the saved space is given to two additional buttons – one to open your default browser, and another to lock your machine. I don’t mind having these extra buttons, but annoyingly they are right above the backspace key, so it will take some getting used to not being able to naturally travel to the F11 key to go fullscreen, or the F12 key to open my Guake terminal.

Con: There is a backspace key in the numeric keypad

Cherry KC 6000 Keyboard numeric pad

Again, this is another one of those small things that will take you a day or so to get used to. You’d normally only find one backspace key on a keyboard and would not expect to have one on the numeric pad. This one is positioned where the minus key normally is, so I’ve found myself accidentally deleting characters rather than putting in the minus character a few times.

Other reviews online think that the keyboard is not banked enough towards the user (the way most keyboards have legs that you can flip up or down). The keyboard did initially look a little flat on my desk when I first set it up, but I’ve found that it has not impacted my typing at all.

Conclusion – a productivity win!

To conclude, I’m happy with the Cherry KC 6000 keyboard. It has made me more productive, and is comfortable to use for long typing stints (think 6+ hours of programming!).

Azure, DevOps, Node.Js, Technical

Hosting personal projects on low cost dedicated servers, not the cloud

For personal projects, the cloud is probably too expensive.

I say this as a person that previously hosted all of their personal projects on the Azure cloud platform. I had multiple app engines inside a single Azure App Service, which worked pretty well. The main advantage of running anything in the cloud is that you don’t have to worry about server maintenance, and, if used correctly, it can be cheaper for projects over a certain size. I had a few reasons for getting off the cloud:

Restricted Tech stack

Generally, a cloud service will restrict you to running some specific technologies. For example, a specific version of Node.js. Even if your cloud service supports the technology you want to use, finding out if it supports the version you want is not easy. Right now, if you want to know which versions of Node.js Azure supports, your best place to get that answer is StackOverflow, even though this information should be prominently displayed in the documentation.

The cloud offer is fast moving and hard to keep up with

The offer from cloud service providers changes a lot. I actually had to look back over my previous blog post detailing how I hosted my projects in the cloud to remind myself how it all worked. The problem is more acute on AWS, where the naming is so wacky that someone maintains a page where the names of AWS products are translated into useful plain English. Elastic Beanstalk, anyone?

There is also a danger of just choosing the wrong cloud product, like the guy that used Amazon Glacier (real name) storage and ended up racking up a $150 bill for a 60GB download.

You need to learn the platform and not the technology

The argument often put forward by the cloud guys is that you get to focus on building your app rather than deploying it or worrying about the infrastructure. This is absolutely true for more trivial apps.

However, we all know that it is rare for a useful application to be this flat. If you want to connect your application to another service to store data or even just files, you’ll need to learn what that service might be on that platform, and then you’ll need to learn that platform’s API in order to program against it. Generally, once you’ve got that service wired in, you’ll get some benefits, but the upfront effort is a cost that should be taken into account.

Run projects for less on physical servers

Newsflash – physical servers can be very cheap, especially used ones. Helpfully, someone on reddit put together a megalist of cheap server providers. Below is a list of the main providers that are worth checking out in Europe:

  • Kimsufi – based in France. Cheap used servers. Limited availability.  No gigabit connections
  • Online.net – based in France, cheap line of “dediboxes”
  • OneProvider – based in France, but have servers located globally

As of writing, you can get a gigabit-connected, 4-core, 4GB RAM server with OneProvider for €8 a month. Whilst comparing the cost of this hardware on a cheap provider to a cloud provider would be silly (you’re only supposed to deploy the hardware you need in the cloud), as soon as you run two web apps on an €8 box, you’re saving money.

I’ve got one server with Kimsufi, and one with Online.net. I’ve had one very small outage on Kimsufi in the 2 years that I’ve had a box there. The box was unavailable for around 10 minutes before Kimsufi’s auto monitoring rebooted the box.

I’ve had a single lengthier outage in the 18 months I’ve used Online.net, which led to a server being unavailable for about 5 hours. This was a violation of their SLA and meant that I got some money back. I detailed this outage in a previous post – “Lessons Learned from a server outage“.

Run whatever you want on a dedicated server

When you aren’t running inside of a neatly packaged cloud product, you have full ownership of the server and can run whatever you want. The downside of this is that there is more responsibility, but in my opinion, the pros outweigh the cons, such as not having to wait for some service to become available (example – GCS doesn’t support http to https redirection, and I’m guessing you need to buy another service for that – it’s a few simple lines of config in nginx, as shown below).
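
For illustration, that nginx config is just a server block along these lines (a sketch, with a placeholder domain):

server {
  listen 80;
  server_name example.com;
  # Permanently redirect all plain http traffic to https
  return 301 https://$host$request_uri;
}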

Being able to run whatever you want also opens the door to playing around with more technology without having to worry about an increase in cost. Running my own dedicated boxes has been fun, and I’ve learned a lot by doing so.

Technical, Wordpress

Running WordPress in production – Security and Speed

WordPress gets a fair share of bad press.

Most of this bad press is centred around security concerns. Many of these concerns are valid, but they need not be a concern of yours if you are intending to run WordPress in production. You just need to be a responsible webmaster. In this post I’ll list out some tips that will make your WordPress install robust and fast.

1. Keep your WordPress Install up to date

This is the most important security concern that you need to have. WordPress even makes updating your install super easy. You don’t need to log into any servers; you just need to log in to the admin tool and head over to the dedicated updates page. From there you can simply press a button to get all of your plugins and WordPress itself updated. Once your install is fully updated, you’ll see a nice clean page like this, telling you that there is nothing to update:

WordPress update screen telling me I’m fully updated

In the same way that you keep your laptop or PC up to date, you should be keeping your WordPress install up to date.

2. Install WordFence

WordFence is a popular security plugin that will offer you some protection against trending attacks. It’s a plugin that you should install, but it does not absolve you of all of your security responsibilities. You should still be regularly updating your server’s OS and any libraries installed on it.

3. Use Akismet

Akismet is a very popular WordPress plugin that is essential if you allow commenting on your website through WordPress. Akismet will block shedloads of spam posts to your site, and you won’t even need to look at them. I don’t trust 3rd party services like Disqus, so this was an essential plugin for me.

4. Back your shit up

WordPress has a few moving parts. Some of those parts are held in files, others in a MySQL database. You could periodically back these two up manually, or you could take advantage of one of the great WordPress plugins that you can use to automate your backups and make it super easy. An excellent one is Updraft Plus. This plugin can be set to regularly backup your entire WordPress site and can even store the backups in a cloud file service like Dropbox.

5. Install a caching plugin

A cache plugin will improve the load speed of your site. It will save database calls, and will instead pull data directly from memory. A popular plugin is WP Super Cache. And remember, a quick load time can mean that search engines rank you higher, and your visitors will love you.

6. Install an image compressing plugin

Again, this will give you a speed advantage and will save on your bandwidth use. A popular plugin is WP Smush. This plugin can be set to batch compress all images in your site, and can be used to compress images as and when they are added to your site.

7. Minify your JavaScript and CSS

How you’ve built your WordPress site will affect how you do this. If you have customised your WordPress templates or made your own theme, you should introduce a step in your build process to bundle and minify your JS and CSS.

If you are just using a 3rd party theme that you haven’t customised a lot, you should grab a plugin to bundle and minify your JavaScript and CSS. A plugin that I’ve had some success with is Better WordPress Minify. You may have to tweak its settings slightly to make sure it doesn’t break any of your other plugins that render output on the UI (e.g. a source code highlighter plugin).

8. Use the latest version of jQuery

The standard install of WordPress doesn’t use the latest version of jQuery. Depending on the users that you’d like to support, you may want to update to the latest version of jQuery. You can do this in your build process, or you can do this with a plugin, like jQuery Updater.

gitlab, Technical

Reducing the amount of memory used by gitlab

Gitlab is a fantastic tool. Rather than going with a SaaS solution for source control and for continuous integration, I’d thoroughly recommend hosting your own gitlab instance. Don’t be scared!

Anyway, I run my own gitlab instance on a box that only has 4 gigs of RAM. Gitlab also has to share these limited resources with a few other webapps.

I noticed that gitlab was one of the biggest consumers of the RAM on my box, and did some research into reducing its memory footprint.

Open the gitlab config file, which should be located at /etc/gitlab/gitlab.rb.

Reduce the postgres database cache

##! **recommended value is 1/4 of total RAM, up to 14GB.**
postgresql['shared_buffers'] = "256MB"

Reduce the concurrency level in sidekiq

I set this at 15 instead of 25 as I don’t have that many commits going on.

sidekiq['concurrency'] = 15 #25 is the default 

Disable prometheus monitoring

prometheus_monitoring['enable'] = false

Restart gitlab and test it out:

Run:

gitlab-ctl reconfigure

You should then run through a few commits and check gitlab is running smoothly.
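
To gauge the before-and-after effect on memory, standard Linux tools are enough (an illustration, nothing gitlab-specific):

free -m
ps aux --sort=-rss | head -n 10

The second command lists the ten processes using the most resident memory, which makes it easy to see how much the gitlab processes are consuming.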

Technical, Wordpress

Running WordPress behind a reverse SSL proxy

Newer versions of WordPress really don’t need much to get working behind an SSL proxy.

I currently have an NGINX webserver running in front of this blog. The job of NGINX here is to handle the SSL traffic, decrypt it, and forward it on to the Docker container that runs this blog in plain old http.

If you’re going to do this, you need to make sure your NGINX config is set up to send the right headers through to WordPress, so that WordPress knows about the scheme the traffic came in on. So, in your NGINX config file, you’ll need the following:

 location / {
   # Forward traffic to the container running WordPress
   proxy_pass http://127.0.0.1:5030;
   proxy_http_version 1.1;
   proxy_set_header X-Forwarded-Host $host;
   proxy_set_header X-Forwarded-Server $host;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   # Tells WordPress whether the original request came in over http or https
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header Host $host;
 }

That should be all you need. WordPress has been around a while, and older blog posts seem to indicate that you may need some additional plugins. I didn’t find that this was the case. Hope this helps.

AngularJs, Technical

Performance Tuning AngularJS Apps

For existing AngularJS apps, there are a few things that you can do in order to try and improve performance.

Destroy no longer needed JavaScript plugin elements

This should help prevent your Angular App from running into memory problems in the browser. There are basically two approaches to this.

In directives that wrap some sort of plugin (e.g. Slick Slider), you need to listen out for the “$destroy” event and call that particular plugin’s cleanup methods. In the case of Slick Slider, it’s the unslick() method, but it could simply be a call to jQuery’s remove() method, or you could just set the html element’s content to an empty string:


$scope.$on('$destroy', function() {

 // Call the plugin's own api
 $('.slick-slider').unslick();

 // or call jQuery's remove function
 $(element).remove();

 // or, if you aren't using jQuery
 element.html("");
});

Unbind any watches when the current scope is destroyed

When you create a watch on a scoped variable, or on an event in Angular, the $watch function returns a function that, when called, will remove the watch. Call this returned function when your scope is destroyed, as the watch is no longer needed:

var unbindCompanyIdWatch = $scope.$watch('companyId',() => {
 // Do Something...
});

$scope.$on('$destroy', function() {
 unbindCompanyIdWatch();
});

Use One-Time Binding where possible

The less binding going on, the less watchers there are.

If you render values in your DOM using Angular that you know are only going to be set once and will never change during the lifecycle of the page, they are candidates for one-time binding. The one-time binding syntax is basically two colons – “::” – and can be used a few ways in your html templates:

<!-- Basic One-Time Binding -->
<p>{{::SomeText}}</p>

<!-- Within ng-repeat -->
<ul>
 <li ng-repeat="item in ::items">
 {{::item.name}}
 </li>
</ul>

<!-- Within ng-show -->
<p ng-show="::showContent">
 Some Content
</p>

<!-- Within ng-if -->
<p ng-if="::showContent">
 Some Content
</p>

Use “track by” when using ng-repeat where possible

By specifying a property for Angular to track an item within a collection by, you will prevent Angular from rebuilding entire chunks of the DOM unnecessarily. This will give you a performance boost, which will be noticeable when dealing with large collections:

<ul>
 <li ng-repeat="item in items track by item.itemId">{{item.name}} </li>
</ul>

Ben Nadel has an excellent post on track by that you should check out.

Of course, you shouldn’t need to pair this with one-time binding, as track by would be pointless with a collection that does not need to change.

Cancel no longer required Http requests

If some action happens that means that data that is loading is no longer needed (e.g. a navigation change), you should cancel the no longer required http requests. Most browsers limit the number of concurrent requests to a single domain. So, if your requests are no longer required, cancel them and free up those request slots.

You can cancel an in-flight request by passing a promise in as the request’s timeout, and resolving that promise when the response is no longer needed. Your requirements for when this cancellation needs to happen will be different for every app, so I would recommend that you write a httpRequestManagerService, and marshal any http requests through it that you deem necessary. You can then resolve your promises based on some event – e.g. a navigation change event. Ben Nadel has a good post on cancelling angular requests.
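
Here’s a minimal sketch of that pattern (the URL and the trigger are placeholders):

// Create a deferred whose promise acts as the cancellation signal
var canceller = $q.defer();

$http.get('/api/items', { timeout: canceller.promise })
  .then(function (response) {
    // use response.data ...
  });

// Later, e.g. in a navigation change event handler:
canceller.resolve(); // aborts the request if it is still in flight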

Interchange ng-if and ng-show where appropriate

On the surface, ng-show and ng-if produce the same result. However, under the hood, they behave slightly differently.

ng-show still renders your html no matter what. If it does not need to be shown, the html element with the ng-show directive will simply be marked with a display:none css style.

ng-if will completely remove the html and all children that contain the directive.
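
As a quick illustration of the difference:

<!-- Stays in the DOM, hidden with display:none, when showContent is false -->
<p ng-show="showContent">Some Content</p>

<!-- Removed from the DOM entirely when showContent is false -->
<p ng-if="showContent">Some Content</p>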

There is obviously a cost in completely removing and adding entire chunks of html. However, if you are dealing with a lot of html, or if the html within your ng-if contains a lot of angular expressions, I have found it to be more performant than ng-show.

My advice is to evaluate both in each case before making a decision.

Please feel free to throw in any more performance tips in the comments.

Technical

The 2015 PC build for Gaming and Programming

Having built my last desktop in 2011 and noticing that some things were starting to run a little slowly, I’ve gone for a desktop refresh. Here is what I have gone for:

CPU:

Intel Core i5-4690K
This is one of the best “bang for buck” CPUs that you can get at the moment. I was previously running an i7, but this new Haswell-architecture i5 beats my old i7 comfortably across the board, and it also runs cooler.

RAM:

Corsair CMY16GX3M2A1866C9R Vengeance Pro Series 16GB (2x8GB) DDR3 1866MHz CL9 XMP Performance Desktop Memory
I’ve been burnt in the past by cheaper RAM becoming unstable, so now I will never scrimp on RAM. This RAM supports Intel’s XMP for overclocking, which has been enabled since day one without any issues.

Motherboard:

MSI Z97 Gaming 5 Intel LGA1150 Z97 ATX Motherboard
One of the cheapest parts of this build. I was very skeptical about getting a mainboard that does not have an integrated Intel NIC (this board instead opts for a Killer Networks NIC). My last mainboard had a Bigfoot Networks E2100 NIC, which out of the box was incredibly buggy and unstable. It was actually so unusable that I ended up disabling the TCP/IP capabilities of the card and letting the tested and reliable TCP/IP stack in Windows do its thing. The Killer Networks E2100 card is now basically abandonware; the card does not work with newer games online, and until recently wasn’t compatible with the iTunes store. However, the E2200 is current and is still getting plenty of attention from Killer Networks, and I haven’t had any issues with it online so far. My advice would still be to go for a tried and tested Intel NIC if you can, although I’m yet to experience any problems with the E2200 Killer Networks card on this mainboard.

One of the best things about this mainboard is the BIOS, which has a fantastic user interface and gives you plenty of control over overclocking features, both simple and advanced. This piece of kit was fantastic value for money.

Graphics:

MSI NVIDIA GTX 970 Gaming Twin Frozr HDMI DVI-I DP Graphics Card
The more graphics memory, the better. This card lets me comfortably play the newest games (including GTA5) with the graphics settings all maxed out. It also runs quietly.

Cooling:

Corsair Hydro Series H55 All-In-One Liquid Cooler for CPU
This was a surprise win for me. I previously used a Be Quiet CPU fan, which was nice and silent and kept my CPU nice and cool. However, this ready-to-rock water cooler from Corsair really impressed me, not just on the noise levels, but also on the cooling capabilities. For the first time in years, my CPU will happily idle at 25°C.

Main OS Hard Drive:

Crucial CT512MX100SSD1 2.5 inch 512GB SATA III Solid State Drive
The OS hard drive caused me great pain originally. I started this build off running the OCZ Arc 100, a 480 gigabyte SSD priced very cheaply at £120. However, this was simply too good to be true, and within a week of the new build, this SSD suffered some serious file corruption and required a reinstall of Windows, which would only go on after a hard SSD wipe (a Windows installer format was not enough). I decided not to proceed with the OCZ Arc 100, as a quick bit of research revealed that it was too unreliable a drive and had a few problems. Have a read about all of the other problems that other people had with this drive over at newegg.com. You pay for what you get; I have sent the defective OCZ Arc 100 back for a refund, and am instead running a more highly rated but more costly Crucial SSD.

Having a hard drive fail on you is bad enough, but it’s that much more hassle when it’s the hard drive that contains the operating system for your battlestation on it. The Arc 100 was the only let down of this build, and it did come as a surprise as I had previously run a smaller OCZ SSD without any problems.

Operating System:

I’m running Windows 8.1, which on the 31st of July will become Windows 10 🙂

I also run a Xubuntu VM within VMware Player for my golang playtime. If you’re an Ubuntu user, I recommend that you give Xubuntu a try. You might just prefer XFCE, like I do.

Technical

Stop Bashing Angular

I appreciate that I’m a little late to this discussion. I’m not sure if you’ve noticed, but I don’t blog as much as some of the other better known developers out there. Why? I’m too busy working contracts and building real world applications that have real world problems and real world requirements.

So I’ve spoken about this several times in the past, and I’ll speak about it again, but I firmly believe that software development suffers from trend hype lifecycles in a massive way:

Technology Trend Hype Lifecycles

I do however think that there is one key thing missing from the above diagram – the “Anti Trend”. Sometimes a technology will come along that is genuinely popular and useful for a real reason – it actually does a job, is well received and supported, and gets pretty popular. In software development, it’s something that can make our difficult job easier. The “Anti Trend” refers to the detractors, who I suspect want to be the first to “bail out” on a technology.

I’m all for critique of a technology or a way of solving a problem, but your arguments need to stand up.

I had a look into this in my post “That CSS / Javascript library isn’t as big as you think“, where I pointed out that it was odd that those criticising jQuery and Twitter Bootstrap complained about the size of these libraries, but seemed to be ignorant of the basics – free CDNs and gzipping.

I also had a look into the Anti Trend in my post “In defence of the RDBMS: Development is not slow. You just need better tooling“, where I pointed out that one of the key criticisms of relational databases was that development against them was slow. This is only the case if you don’t bother using any tooling.

Different sections of the development community run through the technology trend hype lifecycle at different speeds, and the Javascript community runs through the trend lifecycle at breakneck speed.

So, right now, the Anti Trenders are having a pop at Angular (a few months ago it was jQuery). The arguments are usually the same and genuinely do not stand up in a real world situation. I’ve seen blog posts from people that I had previously respected quite a lot, critiquing Angular in a seriously misplaced manner:

Misplaced critique 1 – Blaming the framework when you should be blaming the problem that the framework is trying to solve

If you don’t like how Angular can be used, you probably won’t like how Ember, React, and Knockout can be used either. These front end frameworks exist and are used for a reason – to solve the problem of getting a client and their data around an application nicely and seamlessly.

What shocked me about the Anti Trend blog posts was that they revealed a level of ignorance on the side of the authors. For me, someone with over a decade’s worth of experience of publishing material on the web and developing real web applications (you know, ones that have requirements, costs and deadlines, and need to actually work and do something), front end frameworks like Angular and Knockout solved a very real problem. Both technologies have helped me to provide a richer client experience that were more testable, and they helped me get there quickly and keep the customers and the users happy.

Misplaced critique 2 – Getting tribal and then blaming the other tribe

It’s an age old technique that can be applied to just about any argument: “I’m over here with the cool kids, you’re over there with the weirdos”. You might be wondering what I’m on about, but it’s an argument that I’ve actually seen in an anti Angular blog post:

I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them

The above is 100% verbatim from an anti Angular blog post. “People that like Angular must be uncool enterprise developers”. Sorry dude, some of us work in the enterprise and build real world line of business applications. We have bills to pay and don’t live rent free working on speculative startup ideas.

The LifeInvader social network HQ from GTA5. Your speculative start up could also get this big

Misplaced critique 3 – I don’t like where it could potentially go, therefore it’s wrong now

If you hadn’t heard, Angular 2 will bring in some changes. If you didn’t like Angular in the first place, why are you crying about the loss of version 1? And why is this such a big issue in the Javascript community? Did you know that ASP.Net will undergo significant changes with its next release (called vNext)? The .net community is generally excited for this release, and isn’t mourning the loss of the old versions.

This reddit user summed up this argument nicely:


Misplaced critique 4 – Pointing out the problems but offering no solutions

One of the best things about being a developer is getting challenges thrown at you every day, and thinking of solutions to those problems. Can you imagine having someone on your team that was 100% negative, and constantly stopped their work to call you over to tell you that they had discovered some problem, and that there was therefore no way around it without throwing the whole thing out and starting again? It would be pretty annoying, right?

Well, if you’re going to criticise front end frameworks and offer no alternatives or other solutions, I’m going to assume that you are advocating the use of A LOT of jQuery instead (which I’m guessing you’d also think is too bloated, and that you’d tell me to write bare metal javascript).

It’s silly isn’t it? I’m not saying it couldn’t be done, I’m saying it would be hard, your code would suck, would be difficult to test, and would take an eternity to deliver.

Conclusion

Make up your own mind. Talk to devs in your network that may have used the technology in a real world situation. If you don’t know any, find your local web developer meetup and get talking to people. Build a small prototype and form your own opinion. Don’t just follow the trend, or the anti trend. What is your project priority? Delivery? Or something else?

It’s not unreasonable to consider blog posts on the subject, but please consider whether the author has a valid opinion. Do they actually build and deliver real world apps, or do they now make their money from blogging, podcasting and running a training company on the side? Some good places to go for some real world insight (as in real actual code problems) into AngularJS are:

In the above links you will quickly discover real world Angular challenges and how they were overcome. It will also give you an indication of how well trodden the road is before you decide to set off down it.

TL;DR

If you’re going to bash Angular, think:

  • Is what I dislike about Angular a fault of the framework, or of the problem that I am trying to solve?
  • Does my criticism apply to all other front end frameworks?
  • Am I criticising an Angular antipattern that could be resolved by coding a little more sensibly?
  • Can I offer a better alternative solution?
  • Am I being an “Anti-Trender”? If you’re not sure, recall if you denounced one of the following on Facebook: Kony 2012, the no makeup selfie, the Ice Bucket Challenge.

Form your own opinion from real world experiences.