Technical, Wordpress

Running WordPress in production – Security and Speed

WordPress gets a fair share of bad press.

Most of this bad press is centred around security concerns. Many of these concerns are valid, but they needn’t be a problem for you if you intend to run WordPress in production. You just need to be a responsible webmaster. In this post I’ll list out some tips that will make your WordPress install robust and fast.

1. Keep your WordPress Install up to date

This is the single most important thing you can do for security, and WordPress makes updating your install super easy. You don’t need to log into any servers; you just need to log in to the admin tool and head over to the dedicated updates page. From there you can simply press a button to get all of your plugins and WordPress itself updated. Once your install is fully updated, you’ll see a nice clean page like this, telling you that there is nothing to update:

WordPress update screen telling me I’m fully updated

In the same way that you keep your laptop or PC up to date, you should be keeping your WordPress install up to date.
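
If you prefer the command line, the same job can be done with WP-CLI (not covered in this post, and assuming you have it installed on the server). The standard commands, run from the root of your install, are:

# Update core, plugins and themes from the command line
wp core update
wp plugin update --all
wp theme update --all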

2. Install WordFence

WordFence is a popular security plugin that will offer you some protection against trending attacks. It’s a plugin that you should install, but it does not absolve you of all of your security responsibilities. You should still be regularly updating your server’s OS and any libraries installed on it.

3. Use Akismet

Akismet is a very popular WordPress plugin that is essential if you allow commenting on your website through WordPress. Akismet will block shedloads of spam posts to your site, and you won’t even need to look at them. I don’t trust 3rd party services like Disqus, so this was an essential plugin for me.

4. Back your shit up

WordPress has a few moving parts. Some of those parts are held in files, others in a MySQL database. You could periodically back these two up manually, or you could take advantage of one of the great WordPress plugins that you can use to automate your backups and make it super easy. An excellent one is Updraft Plus. This plugin can be set to regularly backup your entire WordPress site and can even store the backups in a cloud file service like Dropbox.
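
For reference, doing it manually really only boils down to dumping the database and archiving the files. A rough sketch (the database name, user and paths are placeholders for your own values):

# Dump the MySQL database and archive the WordPress files
mysqldump -u wp_user -p wordpress_db > wordpress-db-$(date +%F).sql
tar -czf wordpress-files-$(date +%F).tar.gz /var/www/html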

5. Install a caching plugin

A cache plugin will improve the load speed of your site. It saves database calls by serving pages from a cache instead of rebuilding them on every request. A popular plugin is WP Super Cache. And remember, a quick load time can mean that search engines rank you higher, and your visitors will love you.

6. Install an image compressing plugin

Again, this will give you a speed advantage and will save on your bandwidth use. A popular plugin is WP Smush. This plugin can be set to batch compress all images in your site, and can be used to compress images as and when they are added to your site.

7. Minify your JavaScript and CSS

How you’ve built your WordPress site will affect how you do this. If you have customised your WordPress templates or made your own theme, you should introduce a step in your build process to bundle and minify your JS and CSS.
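
As a rough sketch, assuming a Node.js toolchain with the terser and clean-css-cli packages installed, that build step can be as simple as:

# Bundle and minify the JavaScript, then minify the CSS
cat js/vendor.js js/app.js > dist/bundle.js
terser dist/bundle.js --compress --mangle -o dist/bundle.min.js
cleancss -o dist/styles.min.css css/styles.css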

If you are just using a 3rd party theme that you haven’t customised a lot, you should grab a plugin to bundle and minify your JavaScript and CSS. A plugin that I’ve had some success with is Better WordPress Minify. You may have to tweak its settings slightly to make sure it doesn’t break any of your other plugins that render out on the UI (e.g. a source code highlighter plugin).

8. Use the latest version of jQuery

The standard install of WordPress doesn’t use the latest version of jQuery. Depending on the users that you’d like to support, you may want to update to the latest version of jQuery. You can do this in your build process, or you can do this with a plugin, like jQuery Updater.

 

gitlab, Technical

Reducing the amount of memory used by GitLab

GitLab is a fantastic tool. Rather than going with a SaaS solution for source control and continuous integration, I’d thoroughly recommend hosting your own GitLab instance. Don’t be scared!

Anyway, I run my own GitLab instance on a box that only has 4GB of RAM. GitLab also has to share these limited resources with a few other web apps.

I noticed that GitLab was one of the biggest consumers of the RAM on my box, and did some research into reducing its memory footprint.

Open the GitLab config file, which should be located at /etc/gitlab/gitlab.rb.

Reduce the postgres database cache

##! **recommend value is 1/4 of total RAM, up to 14GB.**
postgresql['shared_buffers'] = "256MB"

Reduce the concurrency level in sidekiq

I set this at 15 instead of 25 as I don’t have that many commits going on.

sidekiq['concurrency'] = 15 #25 is the default 

Disable prometheus monitoring

prometheus_monitoring['enable'] = false

Restart GitLab and test it out:

Run:

gitlab-ctl reconfigure

You should then run through a few commits and check that GitLab is running smoothly.
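
To see whether the changes have actually made a dent, it’s worth comparing memory usage before and after the reconfigure, with something like:

free -m                             # overall memory usage on the box
ps aux --sort=-%mem | head -n 15    # the biggest memory consumers
gitlab-ctl status                   # check that all GitLab services came back up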

Technical, Wordpress

Running WordPress behind a reverse SSL proxy

Newer versions of WordPress really don’t need much to get working behind an SSL proxy.

I currently have an NGINX web server running in front of this blog. The job of NGINX here is to handle the SSL traffic, decrypt it, and forward it on to the Docker container that runs this blog in plain old HTTP.

If you’re going to do this, you need to make sure your NGINX config is set up to send the right headers through to WordPress, so that WordPress knows about the scheme the traffic came in on. So, in your NGINX config file, you’ll need the following:

 location / {
   proxy_pass http://127.0.0.1:5030;
   proxy_http_version 1.1;
   proxy_set_header X-Forwarded-Host $host;
   proxy_set_header X-Forwarded-Server $host;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header Host $host;
 }
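
For context, that location block lives inside a server block that terminates the SSL connection. A minimal sketch (the domain and certificate paths are placeholders for your own):

 server {
   listen 443 ssl;
   server_name example.com;

   ssl_certificate /etc/nginx/ssl/example.com.crt;
   ssl_certificate_key /etc/nginx/ssl/example.com.key;

   location / {
     # proxy configuration as above
   }
 }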

That should be all you need. WordPress has been around for a long time, and older blog posts seem to indicate that you may need some additional plugins. I didn’t find that this was the case. Hope this helps.

AngularJs, Technical

Performance Tuning AngularJS Apps

For existing AngularJS apps, there are a few things that you can do in order to try and improve performance.

Destroy no longer needed JavaScript plugin elements

This should help prevent your Angular App from running into memory problems in the browser. There are basically two approaches to this.

In directives that wrap some sort of plugin (e.g. Slick Slider), you need to listen out for the “$destroy” event and call that particular plugin’s cleanup methods. In the case of Slick Slider, it’s the unslick() method, but it could simply be a call to jQuery’s remove() method, or you could just set the value of the html element to an empty string:


$scope.$on('$destroy', function() {

 // Call the plugin's own api
 $('.slick-slider').unslick();

 // or call jQuery's remove function
 $(element).remove();

 // or, if you aren't using jQuery

 element.html("");
});

Unbind any watches when the current scope is destroyed

When you create a watch on a scoped variable, or subscribe to an event in Angular, the $watch (or $on) function returns a function that, when called, will remove the watch. Call this returned function when your scope is destroyed, as it is no longer needed:

var unbindCompanyIdWatch = $scope.$watch('companyId',() => {
 // Do Something...
});

$scope.$on('$destroy', function() {
 unbindCompanyIdWatch();
});

Use One-Time Binding where possible

The less binding going on, the fewer watchers there are.

If you render values in your DOM using Angular that you know are only going to be set once and will never change during the lifecycle of the page, they are candidates for one-time binding. The one-time binding syntax is basically two colons – “::”, and can be used in a few ways in your HTML templates:

<!-- Basic One-Time Binding -->
<p>{{::SomeText}}</p>

<!-- Within ng-repeat -->
<ul>
 <li ng-repeat="item in ::items">
 {{::item.name}}
 </li>
</ul>

<!-- Within ng-show -->
<p ng-show="::showContent">
 Some Content
</p>

<!-- Within ng-if -->
<p ng-if="::showContent">
 Some Content
</p>

 

Use “track by” when using ng-repeat where possible

By specifying a property for Angular to track an item within a collection by, you will prevent Angular from rebuilding entire chunks of the DOM unnecessarily. This will give you a performance boost which will be noticeable when dealing with large collections:

<ul>
 <li ng-repeat="item in items track by item.itemId">{{item.name}} </li>
</ul>

Ben Nadel has an excellent post on track by that you should check out.

Of course, you shouldn’t need to combine this with one-time binding, as track by would be pointless on a collection that never changes.

Cancel no longer required Http requests

If something happens that means data currently being loaded is no longer needed (e.g. a navigation change), you should cancel the outstanding HTTP requests. Most browsers limit the number of concurrent requests to a single domain. So, if your requests are no longer required, cancel them and free up those request slots.

You can do this by resolving the promise that you pass to $http as its timeout option. Your requirements for when this cancellation needs to happen will be different for every app, so I would recommend that you write an httpRequestManagerService and marshal any HTTP requests that you deem necessary through it. You can then resolve your promises based on some event – e.g. a navigation change event. Ben Nadel has a good post on cancelling Angular requests.
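
As a rough sketch of the idea (app stands for your Angular module, httpRequestManagerService is just the hypothetical service name mentioned above, and the navigation event assumes ngRoute), the key part is passing a promise as the timeout option and resolving it when you want to abort:

app.factory('httpRequestManagerService', function ($http, $q) {
  var cancellers = [];

  return {
    // Issue a GET whose lifetime is tied to a canceller promise
    get: function (url) {
      var canceller = $q.defer();
      cancellers.push(canceller);
      return $http.get(url, { timeout: canceller.promise });
    },

    // Resolve every outstanding canceller, aborting any in-flight requests
    cancelAll: function () {
      angular.forEach(cancellers, function (canceller) {
        canceller.resolve();
      });
      cancellers = [];
    }
  };
});

// e.g. abort anything still loading whenever the route changes
app.run(function ($rootScope, httpRequestManagerService) {
  $rootScope.$on('$routeChangeStart', function () {
    httpRequestManagerService.cancelAll();
  });
});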

Interchange ng-if and ng-show where appropriate

On the surface, ng-show and ng-if produce the same result. However, under the hood, they behave slightly differently.

ng-show still renders your HTML no matter what. If it does not need to be shown, the element with the ng-show directive will simply be marked with a display:none CSS style.

ng-if will completely remove the element that carries the directive, along with all of its children, from the DOM.
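
The two are interchangeable in the markup, which is what makes them easy to evaluate against each other (isLoaded is just an example scope property):

<!-- Kept in the DOM, just hidden with display:none when false -->
<div ng-show="isLoaded">Some Content</div>

<!-- Removed from the DOM entirely when false -->
<div ng-if="isLoaded">Some Content</div>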

There is obviously a cost to completely removing and re-adding entire chunks of HTML. However, if you are dealing with a lot of markup, or the markup within your ng-if contains a lot of Angular expressions, I have found it to be more performant than ng-show.

My advice is to evaluate both in each case before making a decision.

Please feel free to throw in any more performance tips in the comments

Technical

The 2015 PC build for Gaming and Programming

Having built my last desktop in 2011 and noticing that some things were starting to run a little slowly, I’ve gone for a desktop refresh. Here is what I have gone for:

CPU:

Intel Core i5 i5-4690K
This is one of the best “bang for buck” CPUs that you can get at the moment. I was previously running an i7, but this new Haswell architecture i5 beats my old i7 comfortably across the board, and it also runs cooler.

RAM:

Corsair CMY16GX3M2A1866C9R Vengeance Pro Series 16GB (2x8GB) DDR3 1866Mhz CL9 XMP Performance Desktop Memory
I’ve been burnt in the past by cheaper RAM becoming unstable, so now I will never scrimp on RAM. This RAM supports Intel’s XMP for overclocking, which has been enabled since day one without any issues.

Motherboard:

MSI Z97 Gaming 5 Intel LGA1150 Z97 ATX Motherboard
One of the cheapest parts of this build. I was very skeptical about getting a mainboard that does not have an integrated Intel NIC (this board instead opts for a Killer Networks NIC). My last mainboard had a Bigfoot Networks E2100 NIC, which out of the box was incredibly buggy and unstable. It was actually so unusable that I ended up disabling the TCP/IP capabilities of the card and letting the tested and reliable TCP/IP stack in Windows do its thing. The Killer Networks E2100 card is now basically abandonware, and the card does not work with newer games online, and until recently wasn’t compatible with the iTunes store. However, the E2200 is current and is still getting plenty of attention from Killer Networks, and I haven’t had any issues with it online so far. My advice would still be to go for a tried and tested Intel NIC if you can, although I’m yet to experience any problems with the E2200 Killer Networks card on this mainboard.

One of the best things about this mainboard is the BIOS, which has a fantastic user interface and gives you plenty of control over overclocking features, both simple and advanced. This piece of kit was fantastic value for money.

Graphics:

MSI NVIDIA GTX 970 Gaming Twin Frozr HDMI DVI-I DP Graphics Card
The more graphics memory, the better. This card lets me comfortably play the newest games (including GTA5) with the graphics settings all maxed out. It also runs quietly.

Cooling:

Corsair Hydro Series H55 All-In-One Liquid Cooler for CPU
This was a surprise win for me. I previously used a Be Quiet CPU fan, which was nice and silent and kept my CPU nice and cool. However, this ready-to-rock water cooler from Corsair really impressed me, not just on the noise levels, but also on the cooling capabilities. For the first time in years, my CPU will happily idle at 25°C.

Main OS Hard Drive:

Crucial CT512MX100SSD1 2.5 inch 512GB SATA III Solid State Drive
The OS hard drive caused me great pain originally. I started this build off running the OCZ Arc 100, a 480 Gigabyte SSD priced very cheaply at £120. However, this was simply too good to be true, and within a week of the new build, this SSD suffered some serious file corruption and required a reinstall of Windows, which would only go on after a hard SSD wipe (a Windows installer format was not enough). I decided not to proceed with the OCZ Arc 100, as a quick bit of research revealed that it was too unreliable a drive and had a few known problems. Have a read about all of the other problems that other people had with this drive over at newegg.com. You get what you pay for, so I have sent the defective OCZ Arc 100 back for a refund, and am instead running a more highly rated but more costly Crucial SSD.

Having a hard drive fail on you is bad enough, but it’s that much more hassle when it’s the drive that contains the operating system for your battlestation. The Arc 100 was the only let down of this build, and it did come as a surprise, as I had previously run a smaller OCZ SSD without any problems.

Operating System:

I’m running Windows 8.1, which on the 31st of July will become Windows 10 🙂

I also run a Xubuntu VM within VMware Player for my golang playtime. If you’re an Ubuntu user, I recommend that you give Xubuntu a try. You might just prefer XFCE, like I do.

Technical

Stop Bashing Angular

I appreciate that I’m a little late to this discussion. I’m not sure if you may have noticed, but I don’t blog as much as some of the other better known developers out there. Why? I’m too busy working contracts and building real world applications that have real world problems and real world requirements.

So I’ve spoken about this several times in the past, and I’ll speak about it again, but I firmly believe that software development suffers from trend hype lifecycles in a massive way:

Technology Trend Hype Lifecycles

I do however think that there is one key thing missing from the above diagram – the “Anti Trend”. Sometimes a technology will come along that is genuinely popular and useful for a real reason – it actually does a job, is well received and supported, and gets pretty popular. In software development, it’s something that can make our difficult job easier. The “Anti Trend” refers to the detractors, who I suspect want to be the first to “bail out” on a technology.

I’m all for critique of a technology or a way of solving a problem, but your arguments need to stand up.

I had a look into this in my post “That CSS / Javascript library isn’t as big as you think“, where I pointed out that it was odd that those criticising jQuery and Twitter Bootstrap complained about the size of these libraries, but seemed to be ignorant of the basics – free CDNs and gzipping.

I also had a look into the Anti Trend in my post “In defence of the RDBMS: Development is not slow. You just need better tooling“, where I pointed out that one of the key criticisms of relational databases was that development against them was slow. This is only the case if you don’t bother using any tooling.

Different sections of the development community run through the technology trend hype lifecycle at different speeds, and the Javascript community runs through the trend lifecycle at breakneck speed.

So, right now, the Anti Trenders are having a pop at Angular (a few months ago it was jQuery). The arguments are usually the same and genuinely do not stand up in a real world situation. I’ve seen blog posts from people that I previously had respected quite a lot, critiquing Angular in a seriously misplaced manner:

Misplaced critique 1 – Blaming the framework when you should be blaming the problem that the framework is trying to solve

If you don’t like how Angular can be used, you probably won’t like how Ember, React, and Knockout can be used either. These front end frameworks exist and are used for a reason – to solve the problem of getting a client and their data around an application nicely and seamlessly.

What shocked me about the Anti Trend blog posts was that they revealed a level of ignorance on the part of the authors. For me, someone with over a decade’s worth of experience of publishing material on the web and developing real web applications (you know, ones that have requirements, costs and deadlines, and need to actually work and do something), front end frameworks like Angular and Knockout solved a very real problem. Both technologies have helped me to provide richer client experiences that were more testable, and they helped me get there quickly and keep the customers and the users happy.

Misplaced critique 2 – Getting tribal and then blaming the other tribe

It’s an age old technique that can be applied to just about any argument. “I’m over here with the cool kids, you’re over there with the weirdos”. You might be wondering what I’m on about, but it’s actually an argument that I’ve seen in an anti Angular blog post:

I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them

The above is 100% verbatim from an anti Angular blog post. “People that like Angular must be uncool enterprise developers”. Sorry dude, some of us work in the enterprise and build real world line of business applications. We have bills to pay and don’t live rent free working on speculative startup ideas.

The LifeInvader social network HQ from GTA5. Your speculative start up could also get this big

Misplaced criticism 3 – I don’t like where it could potentially go, therefore it’s wrong now

If you hadn’t heard, Angular 2 will bring in some changes. If you didn’t like Angular in the first place, why are you crying over the loss of version 1? And why is this such a big issue in the Javascript community? Did you know that ASP.NET will undergo significant changes with the next release (called vNext)? The .NET community is generally excited for this release, and isn’t mourning the loss of the old versions.

This reddit user summed up this argument nicely:

[Screenshot of a reddit comment]

Misplaced criticism 4 – Pointing out the problems but offering no solutions

One of the best things about being a developer is getting challenges thrown at you every day, and thinking of solutions to those problems. Can you imagine having someone on your team that was 100% negative, and constantly stopped their work to call you over to tell you that they had discovered some problem, and that there was therefore no way around it without throwing the whole thing out and starting again? It would be pretty annoying, right?

Well, if you’re going to criticise front end frameworks and offer no alternatives or other solutions, I’m going to assume that you are advocating the use of A LOT of jQuery instead (which I’m guessing you think is too bloated, and that you’d tell me to write bare metal JavaScript).

It’s silly isn’t it? I’m not saying it couldn’t be done, I’m saying it would be hard, your code would suck, would be difficult to test, and would take an eternity to deliver.

Conclusion

Make your own decision. Talk to devs that you know in your network that may have used the technology in a real world situation. If you don’t know any, find your local Web Developer meetup and get talking to people. Build a small prototype and form your own opinion. Don’t just follow the trend, or the anti trend. What is your project priority? Delivery? Or something else?

It’s not unreasonable to consider blog posts on the subject, but please consider whether the author has a valid opinion. Do they actually build and deliver real world apps, or do they now make their money from blogging, podcasting and running a training company on the side? Some good places to go for some real world insight (as in real actual code problems) into AngularJS are:

In the above links you will quickly discover real world Angular challenges and how they were overcome. It will also give you an indication of how well trodden the road is before you decide to set off down it.

TL;DR

If you’re going to bash Angular, think:

  • Is what I dislike about Angular a fault of the framework, or of the problem that I am trying to solve?
  • Does my criticism apply to all other front end frameworks?
  • Am I criticising an Angular antipattern that could be resolved by coding a little more sensibly?
  • Can I offer a better alternative solution?
  • Am I being an “Anti-Trender”? If you’re not sure, recall if you denounced one of the following on Facebook: Kony 2012, the no makeup selfie, the Ice Bucket Challenge.

Form your own opinion from real world experiences.

Technical

Using your own router with BT Infinity

You don’t need to use BT’s supplied home hub with BT Internet.

BT used to supply Infinity customers with two pieces of equipment.

  1. A BT Home Hub (router)
  2. A VDSL (Very-high-bit-rate digital subscriber line) Modem

However, as the technology has improved, BT now supply Infinity customers with a single combined BT Home Hub (5 onwards), which contains both a router and a modem.

The nice thing about the old approach was that it was easier to use your own hardware, which means more control.

Well, the good news is that you can still ditch the BT Home Hub, and use whatever router and modem combo that you like.

You essentially have two options:

Option 1 – A separate VDSL router and modem

This is the option I’ve gone for, and here’s what I did.

Search on ebay for one of these:


A BT Openreach VDSL modem. This link should get you what you are after.

Then, get yourself the best router that you can afford that supports PPPoE. I’ve gone for the Netgear R6300-100UKS AC1750 Dual Band Wireless Cable Router. It’s awesomely powerful and has killer WiFi. In fact, the WiFi is so good that I’ve ditched my homeplug powerline WiFi boosters (5GHz ftw).

To get your internet working, you then just need to do the following.

  1. Connect your VDSL modem into your phone line, via a filter of course
  2. Connect your router to your modem using an ethernet cable
  3. Jump onto your router’s admin page. Enable PPPoE, and set your username to “bthomehub@btbroadband.com”
  4. Browse the internet!

It’s essentially the setup that is described in this diagram, just replace the BT Home Hub with a PPPoE router of your choice:

BT Infinity Setup

Option 2 – A combined VDSL router and modem

You can now get combined VDSL routers and modems. In fact, they are easier to buy than standalone VDSL modems. Here’s a good search listing. The setup will be the same as the above, just without steps 1 & 2, and you will obviously connect the router directly to the phone line.

Conclusion

I would personally go with option 1.

Option 1 gives you much more flexibility in terms of the physical location of your router. So, rather than it being stuck near your master socket, you can have the modem next to your master socket, and instead have your router somewhere more useful, like right in the middle of the house. All you then need to do is connect the router to the modem using an ethernet cable.

If you have any other tips, please post them in the comments.

Technical

Not doing code reviews? NDepend can help you

In a previous post about NDepend, I looked at how it can be used to clean up codebases by easily and quickly isolating blocks of code or classes that violate a set of rules. Helpfully, NDepend ships with a sensible set of default rules, so if we like we can stick to NDepend’s defaults, or tweak them or add our own as we see fit.
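
As an example of the sort of custom rule you can add, rules are written in CQLinq, which is essentially LINQ over your codebase. Something along these lines flags overly long methods (treat this as a sketch; the exact property and domain names may differ between NDepend versions):

// <Name>Methods too long</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }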

Having just finished off a sprint with a fairly large team, I decided to install the NDepend 5 Visual Studio 2013 extension and do some digging around the solution. Whilst the information that NDepend 5 has shown me has been a little depressing, it’s also compulsive reading and does bring out the “engineer” within you, as you find yourself questioning the point of classes or methods, and thinking how you could better refactor NDepend code violations. It’s a great exercise to pair up with another dev on your team and run through the code violations together – I found I could bounce code refactoring ideas off a colleague in my team. NDepend could even be used here to help run code reviews, as many of its out of the box rules follow the SOLID principles, rather than some team member’s opinion.

NDepend 5… What’s new?

Once you’ve installed NDepend, get ready to let it loose on your codebase. Brace yourself for the results – remember, it isn’t personal…

Dashboards, Dashboards, Dashboards

Dashboards add huge value to any application (I know because I have built a few), and what can be particularly valuable is a set of visual indicators that quickly summarise some metric. Aside from NDepend’s red/green light that appears in the bottom right of Visual Studio (which is a very high level indicator of the state of your codebase and a nagging reminder that you’ve got some critical code violations to sort out), the dashboard gives you more detailed information. You can get to the dashboard by clicking on NDepend’s indicator icon, or by selecting “Dashboard” from the window that is shown after NDepend has had a good old inspection of your codebase.

[Screenshot: the NDepend dashboard]

The top half of the new Dashboard shows you some cool high level stats about the DLLs that you just analysed. If you are feeling brave, you could even share some of these stats with the project sponsors, but naturally only after you’ve got your code coverage up. The dashboard also shows you the current state of the codebase against NDepend’s code rules, and any custom rules that you may have added, with the most critical violations shown at the top.

Analyse regularly to get the most out of NDepend 5

Before I continue looking into the new features of NDepend 5, it’s worth noting one of the key things that you need to do in order to unlock the full power of NDepend. Each time you analyse your codebase, NDepend captures information about the current state of it. This capture is a time capsule of how it looked, and can be used as a “baseline” for comparative analysis. The more you analyse, the more baselines you will have.

By default, the data gathered from each analysis is stored in an “NDependOut” folder that lives at the same level as your solution file. I would recommend committing this folder to source control so that other members of your team can make use of these files, and so that they are treated with the same importance as your actual codebase.

[Screenshot: the NDependOut folder sitting alongside the solution file]

Is my team getting better or worse at writing code?

A new addition to NDepend 5 is the ability to restrict code analysis to recent code only. This will help you to hide the noise and focus on the code that your team is writing right now. On a codebase that has got a few years behind it, and is of a significant size, it would be a little unrealistic to refactor all of the older code. Even if you are using other coding standards add-ins such as Style Cop, it’s easy to get lost in a sea of broken rules when you are looking at a codebase that has existed for several years.

[Screenshot: restricting analysis to recent code only]

In order to define what your recent code actually is, you will need to provide NDepend with a “baseline”, if it cannot find the data on its own. This will tell NDepend about the state of the code at a previous time. The state of the codebase will be captured by NDepend every time you press the big “Analyse” button, so remember, the more you analyse, the more data you will have. You will be prompted for this information, if needed, when you check the “Recent Violations Only” checkbox.

In my case, I found that it was best to analyse the codebase at the start of a sprint, and then again as the sprint approached its final days, in order for there to be enough time for the team to do any refactoring that we felt was necessary on any code written during the sprint.

You could even analyse after each feature request or bug was closed off in order to get more finely grained data.

Is the new code that is being written of a good standard?

As the baselines offer an insight into the state of the code, NDepend 5 comes bundled with a set of graphs that detail how your codebase is performing between analyses. The default graphs will give you an insight into how your codebase is changing in terms of the number of lines of code (I personally enjoy simplifying and deleting code, so this metric is quite an important one for me), the number of rules violated, code complexity, and reliance on third party code. You ideally want to see your codebase quality getting better – these graphs give you an instant insight into this.

[Screenshot: NDepend trend charts]

It’s time to pay down your technical debt…

Or at least stop it from growing.

The best way to get a feel for NDepend 5 is to have a play with the 14 day trial version. On a team of any size, NDepend really does show its value, and will push you and your team into writing better code.

Technical

That CSS / Javascript library isn’t as big as you think

I take some pride in going against the current tide of thought in most things. This is especially true when it comes to such a trend based industry as software development. We suffer from Hype Cycles in a big way.

Currently, in the world of Web Development, the community seems to be uber conscious about the potential over use of Javascript and CSS libraries, with the fear that this overuse is bloating the web. I naturally agree with the core sentiment – having a page that requires several different Javascript libraries makes me wince a little, and other things would need to be considered (what clients will be using this web app, etc) before a decision could be made about these libraries.

However, lots of developers in the community are taking this Kilobyte conscious attitude and using it to put other devs off using popular and well established libraries.

jQuery

A few months ago it was jQuery, as someone developed a tool that attempted to evaluate whether your use of jQuery was necessary – youmightnotneedjquery.com. Unfortunately, this view caught on, and presumably a few poor souls got sucked in and attempted to re-invent a heavily proven and well tested wheel, only to hit those quirky cross browser potholes that jQuery normally lets you ignore.

Right now, it’s Twitter Bootstrap that is getting the attention of the devs that claim to be Kilobyte conscious.

Here’s the problem. The size argument doesn’t stand up in the case of jQuery, or in the case of Twitter Bootstrap. Nearly all arguments that you will see complaining about the size of these two libraries are bogus.

gzip and essential web optimisation

Well, one of the best trends to have ever swept the world of Web Development was the front end optimisation push a few years ago, driven largely by Steve Souders’ fantastic book, High Performance Websites.

Did we forget what we learnt in this book? Remember gzip? The bogus size arguments will always look at the size of the raw minified files. If you are sending these files over the wire uncompressed in a production environment, you’re doing it wrong. All production web servers should be sending as much content as possible to the client gzipped. This will significantly reduce the amount of data that needs to go between the server and the client (or “over the wire”):

[Screengrab: Chrome developer tools network tab, requesting jQuery 2.1.1]

The above screengrab is from the network tab in Chrome developer tools, when accessing jQuery 2.1.1. The darker number in the “Size” column is what actually came down over the wire – 34KB of gzipped data. The grey number (82.3KB) is the size of the uncompressed file. Gzipping has saved the server and the client nearly 50 Kilobytes in data transfer.
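
Enabling gzip is usually trivial. On NGINX, for example, it boils down to a handful of directives (a minimal sketch; tune the gzip_types list to whatever your site actually serves, as text/html is compressed by default):

 gzip on;
 gzip_comp_level 5;
 gzip_min_length 256;
 gzip_types text/css application/javascript application/json image/svg+xml;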

If, for whatever reason, you can’t use gzipping in your production environment, then use the CDN. This will make your site even quicker as visitors to your site will likely have their cache already primed, saving you even more of those precious Kilobytes. And, it will also be gzipped.

Twitter Bootstrap

What’s worse is when the size argument is incorrectly used to pit one library against another. I’ve even seen blog posts blindly claiming that Web Developers should use Foundation over Twitter Bootstrap, because “Foundation is more lightweight”. In fact, Foundation’s core CSS file is 120KB gzipped, whilst Twitter Bootstrap’s is 98KB.

Given that it is so easy to debunk any size arguments, I actually think that our old friend, the Hype Cycle, is playing a part. Could it be that the popularity of jQuery and Twitter Bootstrap has made them uncool? I think it’s very possible.

“I don’t use Bootstrap. I use Foundation”.

[Image: Fudd beer on tap]

Well, in Shelbyville they drink Fudd.

Straplines

My point here is that we seem to have forgotten the point of these libraries – to help us reduce our biggest cost – development.

Just take the jQuery strapline into consideration:

“Write less, do more”.

And one of the Twitter Bootstrap straplines:

“Bootstrap makes front-end web development faster and easier. It’s made for folks of all skill levels, devices of all shapes, and projects of all sizes”.

Everyone is correct to consider the impact of library sizes, but please take a measured approach in a world where bandwidth is increasing for most clients, and development costs remain high. Real world projects (that are my bread and butter) have deadlines and costs.

Like Hanselman said – “The Internet is not a closed box. Look inside.” Look inside and make your own decision, don’t just follow the sentiment.

Entity Framework, Technical

In defence of the RDBMS: Development is not slow. You just need better tooling.

I pride myself on being a real world software developer.

Anything that I publish in this blog is the result of some real world coding. I don’t just play around with small, unrealistic demoware applications. I write real world applications that get used by lots of users.

With experience comes wisdom, and in software part of that wisdom is knowing when to employ a certain technology. There is almost never a right or wrong answer on what technologies you should be using on a project, as there are plenty of factors to consider. They usually include:

  • Is technology x supposed to be the right tool for this job?
  • Do we have the knowledge to work with and become productive in technology x within the scope of the budget?
  • How mature is technology x?

The above questions will even have varying importance depending on where you are currently working. If you are working at an agency, where budgets must be stuck to in order for the company to make a profit, and long term support must be considered, then the last two points from the above have a greater importance.

If you are working in house for a company that has its own product, then the last two points become less important.

Just about the only wrong reason would be something like:

  • Is technology x the latest and greatest thing, getting plenty of buzz on podcasts, blogs and twitter?

This doesn’t mean you should be using it on all projects right now. This just means that technology x is something that you ought to look into and make your own assessment (I’ll let you guess where it might be on the technology hype life cycle). But is this project the right place to make that assessment? Perhaps, but it will certainly increase your project’s risk if you think that it is.

One of the technologies that we are hearing more and more about are NoSQL databases. I won’t go into explanation details here, but you should be able to get a good background from this Wikipedia article.

Whilst I have no issues with NoSQL databases, I do take issue with one of the arguments against RDBMSs – that development against them is slow. I have now seen several blog posts arguing that developing with a NoSQL database makes your development faster, which would imply that developing against a traditional RDBMS is slow. This isn’t true. You just need to improve your tooling.

Here’s a snippet from the mongo db “NoSQL Databases Explained” page:

NoSQL databases are built to allow the insertion of data without a predefined schema. That makes it easy to make significant application changes in real-time, without worrying about service interruptions – which means development is faster, code integration is more reliable, and less database administrator time is needed.

Well I don’t know about you guys, but I haven’t had to call upon the services of a DBA for a long time. And nearly all of my applications are backed by a SQL Server database. Snippets like the above totally miss a key thing out – the tooling.

Looking specifically at .NET development, the tooling for database development has advanced massively in the last 6 years or so. And that is largely down to the plethora of ORMs that are available. We now no longer need to spend time writing stored procedures and data access code. We now no longer need to manually put together SQL scripts to handle schema changes (migrations). Those are huge real world development time savers – and they come at a much smaller cost, as at their core is a well understood technology (unless your development team is packed with NoSQL experts).

Let’s look specifically at the real world use. Most new applications that I create will be built using Entity Framework code first. This gives me object mapping and schema migration with no difficulty.

It also gives me total control over the migration of my application’s database:

  1. I can ask Entity Framework to generate a script of the migrations that need to be run on the database. This can then be run manually against the database, and the application will even warn you at runtime if the schema is missing a migration
  2. I can have my deployment process migrate the database. The Entity Framework team bundle a console app that can be packaged up and called from another process – migrate.exe
  3. I can have my application migrate itself. That’s right – Entity Framework even allows me to run migrations programmatically. It’s not exactly hard either (see the sketch below).
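
As a rough sketch of option 3 (EF6 style; MyAppDbContext and the configuration class are placeholders for your own types), running the migrations from code looks something like this:

using System.Data.Entity;
using System.Data.Entity.Migrations;

// Placeholder for your application's DbContext
public class MyAppDbContext : DbContext { }

// Normally generated for you by the Enable-Migrations command
internal sealed class Configuration : DbMigrationsConfiguration<MyAppDbContext>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }
}

public static class DatabaseBootstrapper
{
    // Call this from your application's startup code
    public static void MigrateToLatest()
    {
        var migrator = new DbMigrator(new Configuration());
        migrator.Update(); // applies any pending migrations to the database
    }
}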

My point is this: whilst migrating a schema may not be something that you would need to do in a NoSQL database (although you would need to handle these changes further up your application’s stack), making changes to an RDBMS’s schema just isn’t as costly or painful as is being made out.

Think again before you dismiss good ol’ RDBMS development as slow – because it hasn’t been slow for me for a long time.