Broadband, Consumer

The current state of broadband and mobile data in the UK

I’ve always watched the broadband and mobile markets in the UK, largely from a consumer point of view. This has mainly been about getting the fastest internet access at the lowest price.

Market Competition

Over the last decade or so, we saw a worrying trend in the broadband market – we lost a lot of competition. This happened as a few bigger corporations entered the broadband market and consolidated their market share by buying up and shutting down smaller, and often very good, broadband operators.

Remember Bulldog internet? Well, they got eaten by TalkTalk. Remember Be internet? They got eaten by O2, who then got eaten by Sky.

Bulldog and Be internet were once very well regarded and popular internet providers. I’ll let you do your own research on what TalkTalk and Sky’s customers currently think of them.

Over the last year or so, this trend has reversed a bit, and we’ve had a few newer entrants trying to push their way in – EE and Vodafone, for example.

Want a Broadband and Mobile combo? Get stuffed.

EE and Vodafone are mobile network operators, and that is where they do the majority of their business. Both offer some fairly competitive broadband packages but, for some odd reason, choose not to bundle anything else with them. So two massive companies that offer mobile phone and internet services don’t offer any packages that link the two. Huh?

I cannot understand why they would not do this. Consumers would benefit from getting better deals, and EE and Vodafone would benefit from customers who were more embedded in their services. The broadband and mobile services offered by these businesses are essentially treated as two separate entities. When I couldn’t find any combined broadband and mobile deals online, I reached out to their online sales staff. Both EE’s and Vodafone’s sales teams responded with “You’re talking to broadband sales, I can’t help you with mobile sales”.

I eventually reached out to both companies on Twitter. EE will actually throw 5 gigs of data onto your phone package – but that isn’t great for someone like me, and they don’t shout about that offer anywhere.

EE – you are missing a trick. Vodafone – you are missing a trick. Get some packages that link the two, and train your staff on all consumer products. Don’t treat your broadband and mobile offerings as two totally different things. As a potential customer, don’t bounce me between departments when I want to talk about buying broadband and mobile.

In Europe, many people use the same provider for their TV, broadband, and family mobile packages. There is no reason why this sort of offer wouldn’t be as popular in the UK.

So what about mobile data?

So, we’ve now got a pretty good 4G network up and down the country. However, unlimited mobile data packages have become rare and expensive.

I’m currently on an old Three unlimited data package. It costs me £23 a month; if I wanted to take out that package now, it would cost me £30. When I took my package out, it was one of the most expensive. It’s now one of the cheapest.

Worryingly, Three are now traffic shaping and chipping away at net neutrality by offering packages that have data limits, but let you access some services in an unlimited fashion. They call it “Go Binge”, and claim that it offers you access to Netflix and some other smaller TV streaming services. It is treated as an option on mobile packages.

I’d rather they were just in the business of offering up data, not offering up *some* data. Also, this is starting to look like some of the mobile phone contracts offered in countries that have no net neutrality laws.

Facebook, Twitter and WhatsApp: unlimited only on certain packages.

Currently, no one offers unlimited data except for Three – and that’ll cost you £33 a month.

To conclude

Data has gotten more expensive on mobiles. We’ve got more big companies offering broadband, but they aren’t using their significant market presence in other areas to offer better deals.


gitlab, Technical

Reducing the amount of memory used by GitLab

GitLab is a fantastic tool. Rather than going with a SaaS solution for source control and continuous integration, I’d thoroughly recommend hosting your own GitLab instance. Don’t be scared!

Anyway, I run my own GitLab instance on a box that only has 4GB of RAM. GitLab also has to share these limited resources with a few other webapps.

I noticed that GitLab was one of the biggest consumers of the RAM on my box, so I did some research into reducing its memory footprint.

Open the GitLab config file, which should be located at /etc/gitlab/gitlab.rb.

Reduce the Postgres database cache

##! recommended value is 1/4 of total RAM, up to 14GB.
postgresql['shared_buffers'] = "256MB"

Reduce the concurrency level in Sidekiq

I set this at 15 instead of 25 as I don’t have that many commits going on.

sidekiq['concurrency'] = 15 # 25 is the default

Disable Prometheus monitoring

prometheus_monitoring['enable'] = false

Restart GitLab and test it out

Run:

gitlab-ctl reconfigure

You should then run through a few commits and check GitLab is running smoothly.
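
To confirm that GitLab’s services have all come back up, and to see where memory usage settles afterwards, a couple of standard commands help (free assumes a Linux host):

sudo gitlab-ctl status

free -h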

Wordpress

Self-hosted WordPress vs free WordPress

I’ve maintained this blog since 2008. Until recently, it was hosted on wordpress.com, and I was paying around £12 a year for domain mapping, which allowed me to point my domain (edspencer.me.uk) at my wordpress.com-hosted site.

I was reasonably happy with the service I got:

  1. It was cheap
  2. I didn’t have to worry about hosting (backups, uptime)
  3. It was quick to get going

However, there are some downsides when you don’t host yourself:

No full administrative control over WordPress

One of the awesome things about WordPress is the number of themes and plugins that are out there. When using the hosted platform at wordpress.com, you do not have full administrative control over WordPress, so you can’t just install plugins as you wish. And those who use WordPress a lot know that there are some essential plugins, like WP Smush.

Additional features that are free when you self-host cost money on wordpress.com

If you want to install a non-standard theme on a hosted wordpress.com site, you can’t. You can, however, pay for the option to install one of their premium themes. So you can’t really style your site the way you want without getting your wallet out.

Also – ads. wordpress.com-hosted sites “occasionally” show ads to users. Here’s the thing – I really, really distrust ad networks. Aside from opening your site up to becoming a vector for malvertising attacks and the creepy level of ubiquitous tracking, I also really dislike just how invasive ads on the web have become. I understand the need to monetise content on the web, but there are better ways of doing it than just indiscriminately littering ads around content.

In fact, this site is itself monetised where appropriate. Some articles contain useful and relevant affiliate links – but this may actually have contravened wordpress.com’s terms and conditions. So I was also risking my site randomly getting yanked offline.

Performance on wordpress.com isn’t great

I’m a web developer. It’s what I do, day in, day out. I want everything that I do to follow web best practices – and a site hosted on wordpress.com will not. Opening up the network tab in Chrome’s developer tools and hitting a wordpress.com-hosted site will reveal a few things: aside from a LOT of requests for tracking assets, there are several requests for unminified JavaScript files.

The alternatives

Other WordPress hosts

There are a few of these about, but I’ve really gone off cloud-based solutions and didn’t want to spend hours researching other providers.

Other blogging engines

I looked at a few, but saw that the migration path would be painful, especially if self-hosted.

medium.com isn’t self-hosted. Ghost can be self-hosted, but isn’t anywhere near as easy to self-host as WordPress. It’s also funny that the Ghost vs WordPress page says “Ghost is simple!”, and the Ghost vs Medium page says “Ghost is powerful!”.

I do not trust a paid blogging site to keep its pricing structure as-is. I really don’t want to be in a position where I suddenly need to pay more to stay hosted, or have to frantically migrate because some company decided to change its pricing structure.

So here we are, still running on WordPress, but this time we’re self-hosted. The migration was easy, and took me about 2 hours.

But WordPress isn’t secure!

I hear you, along with everyone else that has been sucked up by the technology hype cycle. WordPress does indeed get bashed a bit because of an unfair perception of security problems around it. There are some things you should be doing if you are running a WordPress site in production to make it more secure. I’ll address these in a later blog post, but many of them are just standard web security best practices.

docker, postgres

Docker Postgres cheatsheet

Connecting to a Postgres database running inside a container:

From the box running the Postgres container, you first need to get into a terminal inside the container:

docker exec -i -t running_container_name /bin/bash
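
If you’re not sure of the container’s name, list the running containers first:

docker ps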

Opening the PSQL terminal once connected:

psql -U database_username_here

Listing all databases in the Postgres instance:

\list

Connect to a database:

\connect database_name

List all tables in a database:

\dt

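As a shortcut, you can also skip the interactive bash step and jump straight into psql in one go (same placeholder names as above):

docker exec -it running_container_name psql -U database_username_here -d database_name
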
Technical, Wordpress

Running WordPress behind a reverse SSL proxy

Newer versions of WordPress really don’t need much to get working behind an SSL proxy.

I currently have an NGINX webserver running in front of this blog. The job of NGINX here is to handle the SSL traffic, decrypt it, and forward it on to the docker container that runs this blog over plain old HTTP.

If you’re going to do this, you need to make sure your NGINX config is set up to send the right headers through to WordPress, so that WordPress knows about the scheme the traffic came in on. So, in your NGINX config file, you’ll need the following:

 location / {
   proxy_pass http://127.0.0.1:5030;
   proxy_http_version 1.1;
   proxy_set_header X-Forwarded-Host $host;
   proxy_set_header X-Forwarded-Server $host;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header Host $host;
 }

That should be all you need. WordPress has been around for a long time, and older blog posts seem to indicate that you may need some additional plugins. I didn’t find that to be the case. Hope this helps.
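
If you do find WordPress still generating plain-http URLs behind the proxy, a common fallback – I didn’t need it, but it’s a standard, documented technique for reverse proxy setups – is a small addition to wp-config.php that trusts the X-Forwarded-Proto header set by the NGINX config above. A minimal sketch:

/* In wp-config.php, above the "stop editing" comment.
   Trust the X-Forwarded-Proto header set by the NGINX proxy. */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}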

Surface Book

Adjusting screen brightness on the Surface Book from the Keyboard

I’ve purchased a Microsoft Surface Book to replace my MacBook Pro. I didn’t get on very well with the MacBook Pro, for reasons that I will list out in a future blog post, but so far I am very happy with the Surface Book. Its build quality feels fantastic and it is a lovely machine to use.

It did, however, take me a while to work out how to adjust the screen brightness from the keyboard, after noticing that none of the function keys double up as a screen brightness adjustment.

To make your screen brighter:

Fn + Del

To make your screen darker:

Fn + Backspace

Enjoy!

Azure

Mapping naked domains and www. domains to Azure web apps

Azure web apps can be mapped to multiple domains, as well as naked domains.

To do this, you will need access to your domain name’s DNS settings.

Go Naked

A naked domain is the domain without the “www.” that you often see on websites. There are various reasons for using a naked domain that I won’t go into in this post.

Jump into your domain name’s DNS settings. Create a CNAME entry for awverify.yourwebsite.com, and point it to your Azure domain (e.g. awverify.yourwebsite.azurewebsites.net). This tells Azure that you are the owner of the domain.
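
In zone-file terms, that verification record looks something like this (the domain names are placeholders):

awverify.yourwebsite.com.  IN  CNAME  awverify.yourwebsite.azurewebsites.net.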

Now, go into your Azure control panel and locate your web app.

Select “Buy Domains” and then “Bring External Domains”.

You will then be shown a dialogue on the right with a text box where you can enter your naked domain name (e.g. yoursite.com – no www).

After you enter the naked domain, Azure will load for a minute whilst it checks for your awverify CNAME DNS entry.

Once verified, you can then point your actual domain to your Azure website.

Note: You can use a CNAME or an A record DNS entry to resolve the naked domain of your site. Both methods are listed below:

Method 1. Using an A record DNS entry pointed to the IP shown in the Azure portal

Once verified, Azure will reveal an IP address. This should show up just below the text box where you entered the domain name. If it doesn’t show, wait a few minutes and refresh the entire page. The IP address should then be displayed.

Head over to your DNS settings and enter an A record for “@” (the apex of your domain) resolving to the IP address listed in Azure. You should now have a working naked domain name.

Method 2. Using a CNAME DNS entry pointed to the Azure alias

Head over to your DNS settings and enter a CNAME record for “@” resolving to yourwebsite.azurewebsites.net (note that not all DNS providers allow a CNAME at the apex). You should now have a working naked domain name.
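
As zone-file entries, the two methods look roughly like this – pick one, not both; 203.0.113.10 is a placeholder for the IP Azure shows you:

; Method 1 – A record at the apex ("@" in most DNS control panels)
yourwebsite.com.  IN  A      203.0.113.10

; Method 2 – CNAME at the apex, where your provider supports it
yourwebsite.com.  IN  CNAME  yourwebsite.azurewebsites.net.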

The merits of using an A record vs a CNAME entry are not something that I will go into in this post. You can read more about the two DNS entry types here.

Pointing a www. to your Azure application as well (or any other subdomain)

As well as having a naked domain work, you will probably want your www to work too. This can be done using the same methods as above, but crucially, you will need to tell Azure that you have ownership of the subdomain as well:

e.g. In order to verify www.yourwebsite.com, you need to create a CNAME DNS entry for awverify.www.yourwebsite.com that resolves to awverify.yourwebsite.azurewebsites.net.

In order to verify blog.yourwebsite.com, you need to create a CNAME DNS entry for awverify.blog.yourwebsite.com that resolves to awverify.yourwebsite.azurewebsites.net.

Again, once verified, you are free to set up an A record or CNAME record DNS entry to point to your Azure Web App.
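
For example, bringing www online ends up with records like these (hypothetical names again):

; Verification record for the www subdomain
awverify.www.yourwebsite.com.  IN  CNAME  awverify.yourwebsite.azurewebsites.net.

; Live record pointing www at the web app
www.yourwebsite.com.  IN  CNAME  yourwebsite.azurewebsites.net.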


Azure

Hosting multiple websites inside an Azure App Service plan

This information is up to date as of November 2015. The Azure offer changes a lot, so this information may become quickly out of date.

A few years back, Scott Hanselman wrote a blog post on how you could utilize the “Standard” tier of Azure Websites to save money by hosting multiple sites. Well, that was back in 2013, and the Azure offer has significantly changed.

Firstly, Azure Websites has now been merged into Azure App Service, along with a few other services. Here is a 5 minute video from Channel 9 explaining what exactly is in Azure App Service.

So, how do I get that multiple-websites setup that Scott Hanselman originally blogged about? Well, the Azure App Service pricing details page suggests that you can get there with the “Basic” tier, which is cheaper than “Standard”:

[Screenshot: Azure App Service pricing tiers, grabbed in November 2015]

And how do I actually set this up in my Azure portal?

Confusingly, what is priced as an Azure App Service basically covers everything under the “Web + Mobile” category within the “New” option in the Azure portal:

[Screenshot: the “Web + Mobile” category in the Azure portal]

Why? Because:

An App Service plan is the container for your app. The App Service plan settings will determine the location, features, cost and compute resources associated with your app.

More info here.

You get told this information when setting up your app service plan and location (not sure why it defaults to Brazil…)

[Screenshot: setting up an App Service plan in the Azure portal]

So, an App Service Plan is basically the billable container for your apps.

So if you want to create a new web app under the same App Service plan, simply select that plan when setting up the new web app.

AngularJs, Technical

Performance Tuning AngularJS Apps

For existing AngularJS apps, there are a few things that you can do to try and improve performance.

Destroy JavaScript plugin elements that are no longer needed

This should help prevent your Angular app from running into memory problems in the browser. There are a few ways to do this.

In directives that wrap some sort of plugin (e.g. Slick Slider), you need to listen out for the “$destroy” event and call that particular plugin’s cleanup methods. In the case of Slick Slider, it’s the unslick() method, but it could simply be a call to jQuery’s remove() method, or you could just set the contents of the HTML element to an empty string:


$scope.$on('$destroy', function() {
  // Call the plugin's own cleanup API
  $('.slick-slider').unslick();

  // or call jQuery's remove function
  $(element).remove();

  // or, if you aren't using jQuery
  element.html("");
});

Unbind any watches when the current scope is destroyed

When you create a watch on a scoped variable, or on an event in Angular, the $watch function returns a function that, when called, will remove the watch. Call this returned function when your scope is destroyed, as the watch is no longer needed:

var unbindCompanyIdWatch = $scope.$watch('companyId',() => {
 // Do Something...
});

$scope.$on('$destroy', function() {
 unbindCompanyIdWatch();
});

Use One-Time Binding where possible

The less binding going on, the fewer watchers there are.

If you render values in your DOM using Angular that you know will only be set once and will never change during the lifecycle of the page, they are candidates for one-time binding. The one-time binding syntax is basically two colons – “::” – and it can be used in a few ways in your HTML templates:

<!-- Basic One-Time Binding -->
<p>{{::SomeText}}</p>

<!-- Within ng-repeat -->
<ul>
 <li ng-repeat="item in ::items">
 {{::item.name}}
 </li>
</ul>

<!-- Within ng-show -->
<p ng-show="::showContent">
 Some Content
</p>

<!-- Within ng-if -->
<p ng-if="::showContent">
 Some Content
</p>


Use “track by” when using ng-repeat where possible

By specifying a property for Angular to track each item within a collection by, you will prevent Angular from rebuilding entire chunks of the DOM unnecessarily. This will give you a performance boost that will be noticeable when dealing with large collections:

<ul>
 <li ng-repeat="item in items track by item.itemId">{{item.name}} </li>
</ul>

Ben Nadel has an excellent post on track by that you should check out.

Of course, you shouldn’t need to pair this with one-time binding, as track by would be pointless with a collection that does not need to change.

Cancel no longer required Http requests

If some action happens that means the data being loaded is no longer needed (e.g. a navigation change), you should cancel the no-longer-required HTTP requests. Most browsers limit the number of concurrent requests to a single domain, so if your requests are no longer required, cancel them and free up those request slots.

You can do this by resolving the promise that you pass to $http as the request’s timeout. Your requirements for when this cancellation needs to happen will be different for every app, so I would recommend that you write an httpRequestManagerService and marshal any HTTP requests you deem necessary through it. You can then resolve your promises based on some event – e.g. a navigation change event. Ben Nadel has a good post on cancelling Angular requests.
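
As a starting point, here is a minimal sketch of such a service. It leans on $http’s documented timeout option, which accepts a promise that aborts the request when resolved. The service name comes from the suggestion above; the app module, endpoint URL and the ngRoute $routeChangeStart event are assumptions for illustration:

// Assumes an existing module: var app = angular.module('myApp', ['ngRoute']);
app.factory('httpRequestManagerService', ['$http', '$q', function ($http, $q) {
  var cancellers = [];

  return {
    // Issue a GET whose underlying XHR is aborted if we later resolve its canceller
    get: function (url) {
      var canceller = $q.defer();
      cancellers.push(canceller);
      return $http.get(url, { timeout: canceller.promise });
    },
    // Abort every request still in flight, e.g. on a navigation change
    cancelAll: function () {
      cancellers.forEach(function (canceller) {
        canceller.resolve('cancelled by navigation');
      });
      cancellers = [];
    }
  };
}]);

// Hypothetical wiring: abort pending requests when the route starts to change
app.run(['$rootScope', 'httpRequestManagerService', function ($rootScope, manager) {
  $rootScope.$on('$routeChangeStart', function () {
    manager.cancelAll();
  });
}]);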

Interchange ng-if and ng-show where appropriate

On the surface, ng-show and ng-if produce the same result. However, under the hood, they behave slightly differently.

ng-show still renders your HTML no matter what. If it does not need to be shown, the element with the ng-show directive will simply be given a display:none CSS style.

ng-if will completely remove the element and all of its children from the DOM.

There is obviously a cost in completely removing and re-adding entire chunks of the DOM. However, if you are dealing with a lot of HTML, or if the content inside your ng-if contains a lot of Angular expressions, I have found it to be more performant than ng-show.
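
To make the difference concrete, here’s a minimal pair (panel.isOpen is a hypothetical scope property):

<!-- ng-show: the element stays in the DOM and keeps its watchers; it is simply given display:none -->
<div ng-show="panel.isOpen">Some content</div>

<!-- ng-if: the element and its watchers are destroyed when falsy, and rebuilt when truthy -->
<div ng-if="panel.isOpen">Some content</div>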

My advice is to evaluate both in each case before making a decision.

Please feel free to throw in any more performance tips in the comments

Technical

The 2015 PC build for Gaming and Programming

Having built my last desktop in 2011, and noticing that some things were starting to run a little slowly, I’ve gone for a desktop refresh. Here is what I have gone for:

CPU:

Intel Core i5-4690K
This is one of the best “bang for buck” CPUs that you can get at the moment. I was previously running an i7, but this new Haswell-architecture i5 beats my old CPU comfortably across the board, and it also runs cooler than the old i7.

RAM:

Corsair CMY16GX3M2A1866C9R Vengeance Pro Series 16GB (2x8GB) DDR3 1866Mhz CL9 XMP Performance Desktop Memory
I’ve been burnt in the past by cheaper RAM becoming unstable, so now I will never scrimp on RAM. This RAM supports Intel’s XMP for overclocking, which has been enabled since day one without any issues.

Motherboard:

MSI Z97 Gaming 5 Intel LGA1150 Z97 ATX Motherboard
One of the cheapest parts of this build. I was very skeptical about getting a mainboard that does not have an integrated Intel NIC (this board instead opts for a Killer Networks NIC). My last mainboard had a Bigfoot Networks E2100 NIC, which out of the box was incredibly buggy and unstable. It was actually so unusable that I ended up disabling the TCP/IP capabilities of the card and letting the tested and reliable TCP/IP stack in Windows do its thing. The Killer Networks E2100 card is now basically abandonware; it does not work with newer games online, and until recently wasn’t compatible with the iTunes store. However, the E2200 is current and is still getting plenty of attention from Killer Networks, and I haven’t had any issues with it online so far. My advice would still be to go for a tried and tested Intel NIC if you can, although I’m yet to experience any problems with the E2200 Killer Networks card on this mainboard.

One of the best things about this mainboard is the BIOS, which has a fantastic user interface and gives you plenty of control over overclocking features, both simple and advanced. This piece of kit was fantastic value for money.

Graphics:

MSI NVIDIA GTX 970 Gaming Twin Frozr HDMI DVI-I DP Graphics Card
The more graphics memory, the better. This card lets me comfortably play the newest games (including GTA5) with the graphics settings all maxed out. It also runs quietly.

Cooling:

Corsair Hydro Series H55 All-In-One Liquid Cooler for CPU
This was a surprise win for me. I previously used a Be Quiet CPU fan, which was nice and silent and kept my CPU nice and cool. However, this ready-to-rock water cooler from Corsair really impressed me, not just on the noise levels, but also on the cooling capabilities. For the first time in years, my CPU will happily idle at 25°C.

Main OS Hard Drive:

Crucial CT512MX100SSD1 2.5 inch 512GB SATA III Solid State Drive
The OS hard drive caused me great pain originally. I started this build off running the OCZ Arc 100, a 480GB SSD priced very cheaply at £120. However, it was simply too good to be true: within a week of the new build, the SSD suffered some serious file corruption and required a reinstall of Windows, which would only go on after a hard SSD wipe (a format from the Windows installer was not enough). A quick bit of research revealed that the Arc 100 was simply too unreliable, so I decided not to proceed with it. Have a read about all of the problems that other people had with this drive over at newegg.com. You get what you pay for, and I have sent the defective OCZ Arc 100 back for a refund, and am instead running a more highly rated but more costly Crucial SSD.

Having a hard drive fail on you is bad enough, but it’s that much more hassle when it contains the operating system for your battlestation. The Arc 100 was the only letdown of this build, and it came as a surprise, as I had previously run a smaller OCZ SSD without any problems.

Operating System:

I’m running Windows 8.1, which on the 31st of July will become Windows 10 🙂

I also run a Xubuntu VM within VM Ware Player for my golang playtime. If you’re an Ubuntu user, I recommend that you give Xubuntu a try. You might just prefer XFCE, like I do.