gitlab, Technical

Reducing the amount of memory used by GitLab

GitLab is a fantastic tool. Rather than going with a SaaS solution for source control and continuous integration, I’d thoroughly recommend hosting your own GitLab instance. Don’t be scared!

Anyway, I run my own GitLab instance on a box that only has 4GB of RAM. GitLab also has to share these limited resources with a few other web apps.

I noticed that GitLab was one of the biggest consumers of RAM on my box, so I did some research into reducing its memory footprint.

Open the GitLab config file, which should be located at /etc/gitlab/gitlab.rb.

Reduce the postgres database cache

##! Recommended value is 1/4 of total RAM, up to 14GB.
postgresql['shared_buffers'] = "256MB"

Reduce the concurrency level in Sidekiq

I set this to 15 instead of the default 25, as I don’t have that many commits going on.

sidekiq['concurrency'] = 15 # 25 is the default

Disable Prometheus monitoring

prometheus_monitoring['enable'] = false

Reconfigure GitLab so the changes take effect:


gitlab-ctl reconfigure

You should then run through a few commits and check that GitLab is running smoothly.
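To see whether the changes actually helped, a rough before/after comparison from a shell on the box will do (Linux assumed):

```shell
# Total and available memory on the box
grep -E 'MemTotal|MemAvailable' /proc/meminfo

# The ten biggest resident-memory processes
ps -eo rss,comm --sort=-rss | head -n 10
```

Run it before editing gitlab.rb and again after the reconfigure, and compare the numbers.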

Technical, WordPress

Running WordPress behind a reverse SSL proxy

Newer versions of WordPress really don’t need much to get working behind an SSL proxy.

I currently have an NGINX web server running in front of this blog. The job of NGINX here is to handle the SSL traffic, decrypt it, and forward it on to the Docker container that runs this blog over plain old HTTP.

If you’re going to do this, you need to make sure your NGINX config is set up to send the right headers through to WordPress, so that WordPress knows about the scheme the traffic came in on. So, in your NGINX config file, you’ll need the following:

 location / {
   proxy_http_version 1.1;
   proxy_set_header X-Forwarded-Host $host;
   proxy_set_header X-Forwarded-Server $host;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_set_header X-Forwarded-Proto $scheme;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header Host $host;

   # The upstream address is an example - point it at your own container
   proxy_pass http://127.0.0.1:8080;
 }

That should be all you need. WordPress has been around a long time, and older blog posts seem to indicate that you may need some additional plugins. I didn’t find that this was the case. Hope this helps.
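If you do run into redirect loops or mixed-content warnings despite the headers above, the snippet commonly added to wp-config.php (not something I needed here) tells WordPress to trust the X-Forwarded-Proto header:

```php
/* Treat the request as HTTPS when the proxy says it terminated SSL. */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
    && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
```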

AngularJS, Technical

Performance Tuning AngularJS Apps

For existing AngularJS apps, there are a few things that you can do to try and improve performance.

Destroy JavaScript plugin elements that are no longer needed

This should help prevent your Angular App from running into memory problems in the browser. There are basically two approaches to this.

In directives that wrap some sort of plugin (e.g. Slick Slider), you need to listen out for the “$destroy” event and call that particular plugin’s cleanup methods. In the case of Slick Slider, it’s the unslick() method, but it could simply be a call to jQuery’s remove() method, or you could just set the contents of the HTML element to an empty string:

$scope.$on('$destroy', function() {

 // Call the plugin's own API (selector is illustrative)
 $('.slider').unslick();

 // or call jQuery's remove function
 $('.slider').remove();

 // or, if you aren't using jQuery
 element.innerHTML = '';
});

Unbind any watches when the current scope is destroyed

When you create a watch on a scoped variable, or on an event in Angular, the $watch function returns a function that, when called, will remove the watch. Call this returned function when your scope is destroyed, as the watch is no longer needed:

var unbindCompanyIdWatch = $scope.$watch('companyId',() => {
 // Do Something...
});

$scope.$on('$destroy', function() {
 unbindCompanyIdWatch();
});

Use One-Time Binding where possible

The less binding going on, the fewer watchers there are.

If you render values in your DOM using Angular that you know are only going to be set once and will never change during the lifecycle of the page, they are candidates for one-time binding. The one-time binding syntax is basically two colons – “::” – and can be used in a few ways in your HTML templates:

<!-- Basic one-time binding -->
<p>{{ ::vm.title }}</p>

<!-- Within ng-repeat -->
<ul>
 <li ng-repeat="item in ::items">{{ ::item.name }}</li>
</ul>

<!-- Within ng-show -->
<p ng-show="::showContent">
 Some Content
</p>

<!-- Within ng-if -->
<p ng-if="::showContent">
 Some Content
</p>


Use “track by” when using ng-repeat where possible

By specifying a property for Angular to track each item within a collection by, you will prevent Angular from rebuilding entire chunks of the DOM unnecessarily. This will give you a performance boost that will be noticeable when dealing with large collections:

 <li ng-repeat="item in items track by item.itemId">{{ item.name }}</li>

Ben Nadel has an excellent post on track by that you should check out.

Of course, you shouldn’t need to pair this with one-time binding, as track by would be pointless on a collection that never changes.

Cancel no longer required Http requests

If some action happens that means that data that is loading is no longer needed (e.g. a navigation change), you should cancel the now-unneeded HTTP requests. Most browsers limit the number of concurrent requests to a single domain, so if your requests are no longer required, cancel them and free up those request slots.

You can do this by resolving a promise that you pass into the request’s configuration. Your requirements for when this cancellation needs to happen will be different for every app, so I would recommend that you write a httpRequestManagerService and marshal any HTTP requests you deem necessary through it. You can then resolve your promises based on some event – e.g. a navigation change event. Ben Nadel has a good post on cancelling Angular requests.
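As a sketch of what such a httpRequestManagerService might look like (all names here are illustrative; in AngularJS you would build each canceller with $q.defer() and pass canceller.promise as the request’s timeout config):

```javascript
// Tracks a canceller per in-flight request. In AngularJS:
//   $http.get(url, { timeout: mgr.register() })
// Resolving the timeout promise aborts the request.
function createRequestManager() {
  var cancellers = [];
  return {
    // Returns a promise to hand to $http's `timeout` config.
    register: function () {
      var cancel;
      var canceller = new Promise(function (resolve) { cancel = resolve; });
      cancellers.push(cancel);
      return canceller;
    },
    // Call on e.g. a navigation change to abort everything still pending.
    cancelAll: function () {
      var pending = cancellers;
      cancellers = [];
      pending.forEach(function (cancel) { cancel(); });
    }
  };
}
```

You would then call cancelAll() from, say, a $routeChangeStart handler.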

Interchange ng-if and ng-show where appropriate

On the surface, ng-show and ng-if produce the same result. However, under the hood, they behave slightly differently.

ng-show still renders your HTML no matter what. If it does not need to be shown, the element with the ng-show directive will simply be marked with a display:none CSS style.

ng-if will completely remove the element that contains the directive, along with all of its children, from the DOM.
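Side by side (the expression name here is made up):

```html
<!-- Stays in the DOM, hidden with display:none when false -->
<div ng-show="vm.showDetails">...</div>

<!-- Removed from the DOM entirely when false -->
<div ng-if="vm.showDetails">...</div>
```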

There is obviously a cost in completely removing and adding entire chunks of HTML; however, if you are dealing with a lot of HTML, or if the contents of your ng-if contain a lot of Angular expressions, I have found it to be more performant than ng-show.

My advice is to evaluate both in each case before making a decision.

Please feel free to throw any more performance tips into the comments.


The 2015 PC build for Gaming and Programming

Having built my last desktop in 2011 and noticing that some things were starting to run a little slowly, I’ve gone for a desktop refresh. Here is what I have gone for:


Intel Core i5 i5-4690K
This is one of the best “bang for buck” CPUs that you can get at the moment. I was previously running an i7 but this new Haswell architecture i5 beats my old i5 comfortably across the board, and it also runs cooler than my old i7.


Corsair CMY16GX3M2A1866C9R Vengeance Pro Series 16GB (2x8GB) DDR3 1866Mhz CL9 XMP Performance Desktop Memory
I’ve been burnt in the past by cheaper RAM becoming unstable, so now I will never scrimp on RAM. This RAM supports Intel’s XMP for overclocking, and has been enabled since day one without any issues.


MSI Z97 Gaming 5 Intel LGA1150 Z97 ATX Motherboard
One of the cheapest parts of this build. I was very skeptical about getting a mainboard that does not have an integrated Intel NIC (this board instead opts for a Killer Networks NIC). My last mainboard had a Bigfoot Networks E2100 NIC, which out of the box was incredibly buggy and unstable. It was actually so unusable that I ended up disabling the TCP/IP capabilities of the card and letting the tested and reliable TCP/IP stack in Windows do its thing. The Killer Networks E2100 card is now basically abandonware: it does not work with newer games online, and until recently wasn’t compatible with the iTunes store. However, the E2200 is current and is still getting plenty of attention from Killer Networks, and I haven’t had any issues with it online so far. My advice would still be to go for a tried and tested Intel NIC if you can, although I’m yet to experience any problems with the E2200 Killer Networks card on this mainboard.

One of the best things about this mainboard is the BIOS, which has a fantastic user interface and gives you plenty of control over overclocking features, both simple and advanced. This piece of kit was fantastic value for money.


MSI NVIDIA GTX 970 Gaming Twin Frozr HDMI DVI-I DP Graphics Card
The more graphics memory, the better. This card lets me comfortably play the newest games (including GTA5) with the graphics settings all maxed out. It also runs quietly.


Corsair Hydro Series H55 All-In-One Liquid Cooler for CPU
This was a surprise win for me. I previously used a Be Quiet CPU fan, which was nice and silent and kept my CPU nice and cool. However, this ready-to-rock water cooler from Corsair really impressed me, not just on the noise levels, but also on the cooling capabilities. For the first time in years, my CPU will happily idle at 25°C.

Main OS Hard Drive:

Crucial CT512MX100SSD1 2.5 inch 512GB SATA III Solid State Drive
The OS hard drive caused me great pain originally. I started this build off running the OCZ Arc 100, a 480 gigabyte SSD priced very cheaply at £120. However, this was simply too good to be true, and within a week of the new build this SSD suffered some serious file corruption and required a reinstall of Windows, which would only go on after a hard SSD wipe (a Windows installer format was not enough). I decided not to proceed with the OCZ Arc 100, as a quick bit of research revealed that it was an unreliable drive with a number of known problems that plenty of other people have run into. You pay for what you get: I sent the defective OCZ Arc 100 back for a refund, and am instead running a more highly rated but more costly Crucial SSD.

Having a hard drive fail on you is bad enough, but it’s that much more hassle when it’s the hard drive that contains the operating system for your battlestation on it. The Arc 100 was the only let down of this build, and it did come as a surprise as I had previously run a smaller OCZ SSD without any problems.

Operating System:

I’m running Windows 8.1, which on the 31st of July will become Windows 10 🙂

I also run a Xubuntu VM within VMware Player for my golang playtime. If you’re an Ubuntu user, I recommend that you give Xubuntu a try. You might just prefer XFCE, like I do.


Stop Bashing Angular

I appreciate that I’m a little late to this discussion. I’m not sure if you’ve noticed, but I don’t blog as much as some of the other better known developers out there. Why? I’m too busy working contracts and building real world applications that have real world problems and real world requirements.

So I’ve spoken about this several times in the past, and I’ll say it again: I firmly believe that software development suffers from trend hype lifecycles in a massive way:

Technology Trend Hype Lifecycles

I do however think that there is one key thing missing from the above diagram – the “Anti Trend”. Sometimes a technology will come along that is genuinely popular and useful for a real reason – it actually does a job, is well received and supported, and gets pretty popular. In software development, it’s something that can make our difficult job easier. The “Anti Trend” refers to the detractors, who, I suspect, want to be the first to “bail out” on a technology.

I’m all for critique of a technology or a way of solving a problem, but your arguments need to stand up.

I had a look into this in my post “That CSS / Javascript library isn’t as big as you think”, where I pointed out that it was odd that those criticising jQuery and Twitter Bootstrap complained about the size of these libraries, but seemed to be ignorant of the basics – free CDNs and gzipping.

I also had a look into the Anti Trend in my post “In defence of the RDBMS: Development is not slow. You just need better tooling”, where I pointed out that one of the key criticisms of relational databases was that development against them was slow. This is only the case if you don’t bother using any tooling.

Different sections of the development community run through the technology trend hype lifecycle at different speeds, and the Javascript community runs through it at breakneck speed.

So, right now, the Anti Trenders are having a pop at Angular (a few months ago it was jQuery). The arguments are usually the same and genuinely do not stand up in a real world situation. I’ve seen blog posts from people that I previously had respected quite a lot, critiquing Angular in a seriously misplaced manner:

Misplaced criticism 1 – Blaming the framework when you should be blaming the problem that the framework is trying to solve

If you don’t like how Angular can be used, you probably won’t like how Ember, React, and Knockout can be used either. These front end frameworks exist and are used for a reason – to solve the problem of getting a client and their data around an application nicely and seamlessly.

What shocked me about the Anti Trend blog posts was that they revealed a level of ignorance on the side of the authors. For me, someone with over a decade’s worth of experience of publishing material on the web and developing real web applications (you know, ones that have requirements, costs and deadlines, and need to actually work and do something), front end frameworks like Angular and Knockout solve a very real problem. Both technologies have helped me to provide richer client experiences that were more testable, and they helped me get there quickly and keep the customers and the users happy.

Misplaced criticism 2 – Getting tribal and then blaming the other tribe

It’s an age old technique that can be applied to just about any argument. “I’m over here with the cool kids, you’re over there with the weirdos”. You might be wondering what I’m on about, but it’s actually an argument that I’ve seen in an anti Angular blog post:

I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them

The above is 100% verbatim from an anti Angular blog post. “People that like Angular must be uncool enterprise developers”. Sorry dude, some of us work in the enterprise and build real world line of business applications. We have bills to pay and don’t live rent free working on speculative startup ideas.

The LifeInvader social network HQ from GTA5. Your speculative startup could also get this big.

Misplaced criticism 3 – I don’t like where it could potentially go, therefore it’s wrong now

If you hadn’t heard, Angular 2 will bring in some changes. If you didn’t like Angular in the first place, why are you crying over the loss of version 1? And why is this such a big issue in the Javascript community? Did you know that ASP.NET will undergo significant changes with its next release (called vNext)? The .NET community is generally excited for this release, and isn’t mourning the loss of the old versions.

This reddit user summed up this argument nicely:

(screenshot of a reddit comment)

Misplaced criticism 4 – Pointing out the problems but offering no solutions

One of the best things about being a developer is getting challenges thrown at you every day, and thinking of solutions to those problems. Can you imagine having someone on your team who was 100% negative, and who constantly stopped their work to call you over, tell you that they had discovered some problem, and declare that there was no way around it without throwing the whole thing out and starting again? It would be pretty annoying, right?

Well, if you’re going to criticise front end frameworks and offer no alternatives or other solutions, I’m going to assume that you are advocating the use of A LOT of jQuery instead (which I’m guessing you also think is too bloated, and that you’d tell me to write bare metal Javascript).

It’s silly, isn’t it? I’m not saying it couldn’t be done. I’m saying it would be hard, your code would suck, it would be difficult to test, and it would take an eternity to deliver.


Make up your own mind. Talk to devs in your network that may have used the technology in a real world situation. If you don’t know any, find your local web developer meetup and get talking to people. Build a small prototype and form your own opinion. Don’t just follow the trend, or the anti trend. What is your project priority? Delivery? Or something else?

It’s not unreasonable to consider blog posts on the subject, but please consider whether the author has a valid opinion. Do they actually build and deliver real world apps, or do they now make their money from blogging, podcasting and running a training company on the side? Some good places to go for some real world insight (as in real actual code problems) into AngularJS are:

In the above links you will quickly discover real world Angular challenges and how they were overcome. It will also give you an indication of how well trodden the road is before you decide to set off down it.


If you’re going to bash Angular, think:

  • Is what I dislike about Angular a fault of the framework, or of the problem that I am trying to solve?
  • Does my criticism apply to all other front end frameworks?
  • Am I criticising an Angular antipattern that could be resolved by coding a little more sensibly?
  • Can I offer a better alternative solution?
  • Am I being an “Anti-Trender”? If you’re not sure, recall if you denounced one of the following on Facebook: Kony 2012, the no makeup selfie, the Ice Bucket Challenge.

Form your own opinion from real world experiences.


Using your own router with BT Infinity

You don’t need to use BT’s supplied Home Hub with BT Infinity.

BT used to supply Infinity customers with two pieces of equipment.

  1. A BT Home Hub (router)
  2. A VDSL (Very-high-bit-rate digital subscriber line) Modem

However, as the technology has improved, BT now supply Infinity customers with a single combined BT Home Hub (5 onwards), which contains both a router and a modem.

The nice thing about the old approach was that it was easier to use your own hardware, which means more control.

Well, the good news is that you can still ditch the BT Home Hub and use whatever router and modem combo you like.

You essentially have two options:

Option 1 – A separate VDSL router and modem

This is the option I’ve gone for, and here’s what I did.

Search on ebay for one of these:


A BT Openreach VDSL modem. This link should get you what you are after.

Then, get yourself the best router that you can afford that supports PPPoE. I’ve gone for the Netgear R6300-100UKS AC1750 Dual Band Wireless Cable Router. It’s awesomely powerful and has killer WiFi. In fact, the WiFi is so good that I’ve ditched my homeplug powerline WiFi boosters (5GHz ftw).

To get your internet working, you then just need to do the following:

  1. Connect your VDSL modem into your phone line, via a filter of course
  2. Connect your router to your modem using an ethernet cable
  3. Jump onto your router’s admin page. Enable PPPoE, and set your username to “”
  4. Browse the internet!

It’s essentially the setup that is described in this diagram, just replace the BT Home Hub with a PPPoE router of your choice:

BT Infinity Setup

Option 2 – A combined VDSL router and modem

You can now get combined VDSL routers and modems. In fact, they are easier to buy than standalone VDSL modems. Here’s a good search listing. The setup is the same as the above, just without steps 1 and 2, and you will obviously connect the router directly to the phone line (via a filter).


I would personally go with option 1.

Option 1 gives you much more flexibility in terms of the physical location of your router. Rather than it being stuck near your master socket, you can have the modem next to the master socket and instead put your router somewhere more useful, like right in the middle of the house. All you then need to do is connect the router to the modem using an ethernet cable.

If you have any other tips, please post them in the comments.


Not doing code reviews? NDepend can help you

In a previous post about NDepend, I looked at how it can be used to clean up codebases by easily and quickly isolating blocks of code or classes that violate a set of rules. Helpfully, NDepend ships with a sensible set of default rules, so if we like we can stick to NDepend’s defaults, or tweak them or add our own as we see fit.

Having just finished off a sprint with a fairly large team, I decided to install the NDepend 5 Visual Studio 2013 extension and do some digging around the solution. Whilst the information that NDepend 5 has shown me has been a little depressing, it’s also compulsive reading and does bring out the “engineer” within you, as you find yourself questioning the point of classes or methods, and thinking about how you could better refactor NDepend code violations. It’s a great exercise to pair up with another dev on your team and run through the code violations together – I found I could bounce code refactoring ideas off a colleague in my team. NDepend could even be used here to help run code reviews, as many of its out-of-the-box rules follow the SOLID principles rather than some team member’s opinion.
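To give a flavour, NDepend rules are written in CQLinq. A rule along these lines (the 30-line threshold is illustrative, not NDepend’s default) flags methods that are getting too big:

```csharp
// <Name>Avoid methods that are too big</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```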

NDepend 5… What’s new?

Once you’ve installed NDepend, get ready to let it loose on your codebase. Brace yourself for the results – remember, it isn’t personal…

Dashboards, Dashboards, Dashboards

Dashboards add huge value to any application (I know because I have built a few), and what can be particularly valuable is a set of visual indicators that quickly summarise some metric. Aside from NDepend’s red/green light that appears in the bottom right of Visual Studio (which is a very high level indicator of the state of your codebase and a nagging reminder that you’ve got some critical code violations to sort out), the dashboard gives you more detailed information. You can get to the dashboard by clicking on NDepend’s indicator icon, or by selecting “Dashboard” from the window that is shown after NDepend has had a good old inspection of your codebase.



The top half of the new dashboard shows you some cool high level stats about the DLLs that you just analysed. If you are feeling brave, you could even share some of these stats with the project sponsors, but naturally only after you’ve got your code coverage up. The dashboard also shows you the current state of the codebase against NDepend’s code rules, and any custom rules that you may have added, with the most critical violations shown at the top.

Analyse regularly to get the most out of NDepend 5

Before I continue looking into the new features of NDepend 5, it’s worth noting one of the key things that you need to do in order to unlock the full power of NDepend. Each time you analyse your codebase, NDepend captures information about its current state. This capture is a time capsule of how the codebase looked, and can be used as a “baseline” for comparative analysis. The more you analyse, the more baselines you will have.

By default, the data gathered from each analysis is stored in an “NDependOut” folder that lives at the same level as your solution file. I would recommend committing this folder to source control so that other members of your team can make use of these files, and so that they are treated with the same importance as your actual codebase.


Is my team getting better or worse at writing code?

A new addition to NDepend 5 is the ability to restrict code analysis to recent code only. This will help you to hide the noise and focus on the code that your team is writing right now. On a codebase that has got a few years behind it, and is of a significant size, it would be a little unrealistic to refactor all of the older code. Even if you are using other coding standards add-ins such as Style Cop, it’s easy to get lost in a sea of broken rules when you are looking at a codebase that has existed for several years.



In order to define what your recent code actually is, you will need to provide NDepend with a “baseline”, if it cannot find the data on its own. This will tell NDepend about the state of the code at a previous time. The state of the codebase will be captured by NDepend every time you press the big “Analyse” button, so remember, the more you analyse, the more data you will have. You will be prompted for this information, if needed, when you check the “Recent Violations Only” checkbox.

In my case, I found that it was best to analyse the codebase at the start of a sprint, and then again as the sprint approached its final days, in order for there to be enough time for the team to do any refactoring that we felt was necessary on any code written during the sprint.

You could even analyse after each feature request or bug was closed off in order to get more finely grained data.

Is the new code that is being written of a good standard?

As the baselines offer an insight into the state of the code, NDepend 5 comes bundled with a set of graphs that detail how your codebase is performing between analyses. The default graphs will give you an insight into how your codebase is changing in terms of the number of lines of code (I personally enjoy simplifying and deleting code, so this metric is quite an important one for me), the number of rules violated, code complexity, and reliance on third party code. You ideally want to see your codebase quality getting better – these graphs give you an instant insight into this.


It’s time to pay down your technical debt…

Or at least stop it from growing.

The best way to get a feel for NDepend 5 is to have a play with the 14 day trial version. On a team of any size, NDepend really does show its value, and will push you and your team into writing better code.


That CSS / Javascript library isn’t as big as you think

I take some pride in going against the current tide of thought in most things. This is especially true in such a trend-based industry as software development. We suffer from Hype Cycles in a big way.

Currently, in the world of web development, the community seems to be uber conscious about the potential overuse of Javascript and CSS libraries, with the fear that this overuse is bloating the web. I naturally agree with the core sentiment – having a page that requires several different Javascript libraries makes me wince a little, and other things would need to be considered (what clients will be using this web app, etc.) before a decision could be made about these libraries.

However, lots of developers in the community are taking this Kilobyte conscious attitude and using it to put off other devs from using popular and well established libraries.


A few months ago it was jQuery, when someone developed a tool that attempted to evaluate whether your use of jQuery was necessary. Unfortunately, this view caught on, and presumably a few poor souls got sucked in and attempted to re-invent a heavily proved and well tested wheel, only to hit those quirky cross browser pot holes that jQuery normally lets you ignore.

Right now, it’s Twitter Bootstrap that is getting the attention of the devs who claim to be Kilobyte conscious.

Here’s the problem. The size argument doesn’t stand up in the case of jQuery, or in the case of Twitter Bootstrap. Nearly all arguments that you will see complaining about the size of these two libraries are bogus.

gzip and essential web optimisation

Well, one of the best trends to have ever swept the world of web development was the front end optimisation push a few years ago, driven largely by Steve Souders’ fantastic book, High Performance Websites.

Did we forget what we learnt from this book? Remember gzip? The bogus size arguments always look at the size of the raw minified files. If you are sending these files over the wire uncompressed in a production environment, you’re doing it wrong. All production web servers should be sending as much content as possible to the client gzipped. This will significantly reduce the amount of data that needs to go between the server and the client (or “over the wire”):




The above screengrab is from the network tab in Chrome developer tools, when accessing jQuery 2.1.1. The darker number in the “Size” column is what actually came down over the wire – 34KB of gzipped data. The grey number (82.3KB) is the size of the uncompressed file. Gzipping has saved the server and the client nearly 50 kilobytes in data transfer.

If, for whatever reason, you can’t use gzipping in your production environment, then use the library’s public CDN. This will make your site even quicker, as visitors to your site will likely have their cache already primed, saving you even more of those precious kilobytes. And it will also be gzipped.
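Turning gzip on in nginx, for example, takes only a handful of directives (the values shown are common starting points, not prescriptive):

```nginx
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied any;
gzip_types text/css application/javascript application/json image/svg+xml;
```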

Twitter Bootstrap

What’s worse is when the size argument is incorrectly used to pit one library against another. I’ve even seen blog posts blindly claiming that web developers should use Foundation over Twitter Bootstrap because “Foundation is more lightweight”. In fact, Foundation’s core CSS file is 120KB gzipped, whilst Twitter Bootstrap’s is 98KB.

Given that it is so easy to debunk any size arguments, I actually think that our old friend, the Hype Cycle, is playing a part. Could it be that the popularity of jQuery and Twitter Bootstrap has made them uncool? I think it’s very possible.

“I don’t use Bootstrap. I use Foundation”.


Well, in Shelbyville they drink Fudd.


My point here is that we seem to have forgotten the point of these libraries – to help us reduce our biggest cost: development.

Just take the jQuery strapline into consideration:

“Write less, do more”.

And one of the Twitter Bootstrap straplines:

“Bootstrap makes front-end web development faster and easier. It’s made for folks of all skill levels, devices of all shapes, and projects of all sizes”.

Everyone is correct to consider the impact of library sizes, but please take a measured approach in a world where bandwidth is increasing for most clients and development costs remain high. Real world projects (which are my bread and butter) have deadlines and costs.

Like Hanselman said – “The Internet is not a closed box. Look inside.” Look inside and make your own decision, don’t just follow the sentiment.

Entity Framework, Technical

In defence of the RDBMS: Development is not slow. You just need better tooling.

I pride myself on being a real world software developer.

Anything that I publish in this blog is a result of some real world coding. I don’t just play around with small, unrealistic demoware applications. I write real world applications that get used by lots of users.

With experience comes wisdom, and in software part of that wisdom is knowing when to employ a certain technology. There is almost never a right or wrong answer on what technologies you should be using on a project, as there are plenty of factors to consider. They usually include:

  • Is technology x the right tool for this job?
  • Do we have the knowledge to work with and become productive in technology x within the scope of the budget?
  • How mature is technology x?

The above questions will even have varying importance depending on where you are currently working. If you are working at an agency, where budgets must be stuck to in order for the company to make a profit, and long term support must be considered, then the last two points from the above have a greater importance.

If you are working in house for a company that has its own product, then the last two points become less important.

Just about the only wrong reason would be something like:

  • Is technology x the latest and greatest thing, getting plenty of buzz on podcasts, blogs and twitter?

This doesn’t mean you should be using it on all projects right now. This just means that technology x is something that you ought to look into and make your own assessment (I’ll let you guess where it might be on the technology hype life cycle). But is this project the right place to make that assessment? Perhaps, but it will certainly increase your project’s risk if you think that it is.

One of the technologies that we are hearing more and more about is NoSQL databases. I won’t go into detail here, but you should be able to get a good background from this Wikipedia article.

Whilst I have no issues with NoSQL databases, I do take issue with one of the arguments against RDBMSs – that development against them is slow. I have now seen several blog posts arguing that developing with a NoSQL database makes your development faster, which implies that developing against a traditional RDBMS is slow. This isn’t true. You just need to improve your tooling.

Here’s a snippet from the MongoDB “NoSQL Databases Explained” page:

NoSQL databases are built to allow the insertion of data without a predefined schema. That makes it easy to make significant application changes in real-time, without worrying about service interruptions – which means development is faster, code integration is more reliable, and less database administrator time is needed.

Well I don’t know about you guys, but I haven’t had to call upon the services of a DBA for a long time. And nearly all of my applications are backed by a SQL Server database. Snippets like the above totally miss a key thing out – the tooling.

Looking specifically at .NET development, the tooling for database development has advanced massively in the last six years or so, largely thanks to the plethora of ORMs that are available. We no longer need to spend time writing stored procedures and data access code. We no longer need to manually put together SQL scripts to handle schema changes (migrations). Those are huge real world development time savers – and they come at a much smaller cost, because at the core is a well understood technology (unless your development team is packed with NoSQL experts).

Let’s look specifically at real world use. Most new applications that I create are built using Entity Framework code first. This gives me object mapping and schema migrations with no difficulty.
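To illustrate, here’s a minimal sketch of what code first looks like. The Dinner entity is hypothetical, just for illustration – the point is that the classes are the schema:

```csharp
using System;
using System.Data.Entity;

// A plain entity class - Entity Framework maps it to a table by convention.
public class Dinner
{
    public int Id { get; set; }          // becomes the primary key by convention
    public string Title { get; set; }
    public DateTime EventDate { get; set; }
}

// The context defines which entities make up the model.
public class DinnersContext : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
}
```

Add a property to Dinner, run Add-Migration in the Package Manager Console, and the schema change is scripted for you.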

It also gives me total control over the migration of my application’s database:

  1. I can ask Entity Framework to generate a script of the migrations that need to be run on the database. This can then be run manually against the database, and the application will even warn you at runtime if the schema is missing a migration.
  2. I can have my deployment process migrate the database. The Entity Framework team bundles a console app, migrate.exe, that can be packaged up and called from another process.
  3. I can have my application migrate itself. That’s right – Entity Framework even allows me to run migrations programmatically. It’s not exactly hard either.
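Option 3 really is only a few lines of code. Here’s a sketch, assuming the Configuration class that Enable-Migrations generates already exists in your project:

```csharp
using System.Data.Entity.Migrations;

public static class DatabaseBootstrapper
{
    public static void MigrateToLatest()
    {
        // Configuration is the migrations configuration class that
        // Enable-Migrations generates for your DbContext.
        var migrator = new DbMigrator(new Configuration());

        // Applies any pending migrations to the target database.
        migrator.Update();
    }
}
```

Call this once at application start-up and the database will always be at the latest schema version.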

My point is this: whilst migrating a schema may not be something you need to do in a NoSQL database (although you would need to handle those changes further up your application’s stack), making changes to an RDBMS schema just isn’t as costly or painful as is being made out.

Think again before you dismiss good ol’ RDBMS development as slow – because it hasn’t been slow for me for a long time.

Technical, Visual Studio

Using NDepend to clean up code and remove smells

At some point in your development career, you will have had an existing project dumped on you, and found it hard to understand and to generally get around the code. Those difficulties can be the result of undocumented domain reasons, but they can also be caused by code smells – and the smells, in turn, make the domain harder to understand. Nearly every developer will have experienced this.

When this happens, the project that you are working on contains a large amount of technical debt. Every new developer on the project loses time trying to navigate their way around the confusing and smelly code. The project becomes infamous within your team and nobody wants to work on it. The code becomes unloved with no real owner. You need to repay some of the technical debt.

In the agile world, we should be refactoring and reviewing code bravely and regularly to improve its quality and to reduce the number of code smells. This, however, can be difficult for a number of reasons:

  • Confidence – “can I really change this code without breaking xy and z section of this application?”
  • Reasoning – “Is renaming this method from xyz to Xyz really the correct thing to do?”

It’s safe to assume that nobody is going to be completely sure of the above two questions in all circumstances. This is why it’s becoming more and more common to use a code analysis tool to help you find potential code smells, and advise you on how to fix them. A code analysis tool can be that advisor telling you what you can do to reduce your technical debt. It can also stop you from racking up technical debt in the first place.

In this post, I’ll be exploring NDepend, a powerful static code analysis tool that can highlight any smells in your code and give you some good metrics about it. I’ll be running it against the latest version of NerdDinner, which can be downloaded from here.

You can run through this walk-through as you are reading this post with NDepend. You can find installation instructions here.

NDepend examines compiled code to help you find any potential issues in the generated IL code – so if you are running NDepend outside of Visual Studio, make sure you build your project first.

NDepend 101 – Red / Yellow / Green code

So let’s start with the basics. One of the coolest things about NDepend is the metrics that you can get at so quickly, without really doing much. After downloading and installing NDepend, you will see a new icon in your Visual Studio notification area indicating that NDepend is installed:


Now, we can fire up NDepend simply by clicking on this new icon and selecting the assemblies that we want to analyse:


We want to see some results straight away, so let’s check “Run Analysis Now!”. Go ahead and click OK when you are ready. This will generate an HTML page with detailed results of the code analysis. The first time you run NDepend you will be presented with a dialogue advising you to view the NDepend Interactive UI Graph. We’ll get to that in a moment – but first let’s just see what NDepend’s default rules thought of NerdDinner:


Yellow! This means that NerdDinner has actually done OK – we have some warnings but no critical code rule violations. If we had some serious code issues, this icon would turn red. These rules can be customised and new rules can be added, but we’ll cover that later. So we now have a nice, quick to view metric about the current state of our code.

This is a really basic measure, but it lets us know whether our code, in its current state, passes or fails analysis by NDepend. You may question the usefulness of this, but if your team knows that their code must pass NDepend’s analysis, a little red / yellow / green icon becomes a useful, at-a-glance signal: are my changes good or bad?

Dependency Graph

The dependency graph allows you to see, visually, which libraries rely on each other. This is useful if you want to know what will be affected if you change a library or swap it out for something else (you should be programming against interfaces anyway!):


By default, the graph also shows you which library has the most lines of code. The bigger the box, the greater the number of lines of code. This sizing can also be changed to represent other factors of a library, such as the number of intermediate language (IL) instructions. This lets you easily visualise information about your codebase.

Queries and Rules

Out of the box, NDepend will check all of your code against a set of pre-defined rules which can be customised. Violations of these rules can be treated as a warning, or as a critical error.

So NerdDinner has thrown up a few warnings from NDepend. Let’s have a look at what these potential code smells are, and see how they can be actioned:



So, within our Code Quality rule group, NerdDinner has thrown up warnings against three rules. NDepend’s rules are defined using LINQ queries (a dialect called CQLinq) that run over your code. Let’s take a look at the query that finds any methods that are too big:



It’s quite self explanatory. We can easily alter this LINQ query if we want to change our rules – e.g. alter the number of lines of code needed to trigger the warning. Looking at the results from the query:
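If the screenshot isn’t clear, a CQLinq rule of this shape looks roughly like the following (the threshold of 30 lines is just an illustrative value – tune it to taste):

```csharp
// warnif turns the query into a rule: any match produces a warning.
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }
```

Because it’s just a query, changing the rule is as simple as editing the where clause.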



NDepend has directed us to two methods that violate this rule. We can get to them by double clicking on them, ready to start refactoring. It’s worth stating here that a method that is too big potentially violates the single responsibility principle, as it must be doing too much. It needs breaking up.
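The fix is usually a series of extract-method refactorings. As a contrived sketch (the method names here are hypothetical):

```csharp
// Before: one long method doing validation, persistence and
// notification in a single body, tripping the lines-of-code rule.
// After: each concern extracted into a small, well-named method.
public void RegisterDinner(Dinner dinner)
{
    ValidateDinner(dinner);   // throws if the dinner is invalid
    SaveDinner(dinner);       // persists via the data layer
    NotifyHost(dinner);       // sends the confirmation email
}
```

The outer method now reads like a summary of what happens, and none of the pieces are big enough to trigger the rule.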

Using NDepend to enforce future change rules

A stand out feature of NDepend that I haven’t seen anywhere else is its ability to execute rules against changes. That is, you can tell NDepend to look at two different versions of the same dll, and check that the changes made between them meet a set of rules. This can be handy in situations where you cannot realistically go back and fix all of the previous warnings and critical violations. Whilst you can’t expect your team to fix the old code smells, you can at least expect them to put good, clean code into the application from now on.

Again, NDepend comes with some really good sensible rules for this kind of situation:




Whilst there are plenty of tools out there to help you write clean, maintainable, smell-free code, NDepend strikes me as a very powerful and highly customisable one. It also stands out because it can be run within Visual Studio itself or as a standalone executable, which opens up the potential for running it on a build server as part of a build process. I certainly have not done NDepend justice in this post as there is heaps more to it, so I would recommend downloading it and running it against that huge scary project that you hate making changes to.