Adjusting screen brightness on the Surface Book from the Keyboard

I’ve purchased a Microsoft Surface Book to replace my MacBook Pro. I didn’t get on very well with the MacBook Pro for reasons that I will list out in a future blog post, but so far I am very happy with the Surface Book. Its build quality feels fantastic and it is a lovely machine to use.

It did however take me a while to work out how to adjust the screen brightness from the keyboard, after noticing that none of the function keys double up as screen brightness controls.

To make your screen brighter:

Fn + Del

To make your screen darker:

Fn + Backspace

Enjoy!

Mapping naked domains and www. domains to Azure web apps

Azure web apps can be mapped to multiple domains, as well as naked domains.

To do this, you will need access to your domain name’s DNS settings.

Go Naked

A naked domain is the domain without the “www.” that you often see on websites. There are various reasons for using a naked URL that I won’t go into in this post.

Jump into your domain name’s DNS settings. Create a CNAME entry for awverify.yourwebsite.com and point it to your Azure domain (e.g. awverify.yourwebsite.azurewebsites.net). This tells Azure that you are the owner of the domain.
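For example, with hypothetical values, the verification record would look something like this in most DNS control panels:

Type: CNAME   Host: awverify   Value: awverify.yourwebsite.azurewebsites.net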

Now, go into your Azure control panel and locate your web app.

Select “Buy Domains” and then “Bring External Domains”:

You will then be shown a dialogue on the right with a text box where you can enter your naked domain name (e.g. yoursite.com – no www):

After you enter the naked domain, Azure will load for a minute whilst it checks for your awverify CNAME DNS entry.

Once verified, you can then point your actual domain to your Azure website.

Note: You can use a CNAME or an A record DNS entry to resolve the naked domain of your site. Both methods are listed below:

Method 1. Using an A record DNS entry pointed to the IP shown in the Azure portal

Once verified, Azure will reveal an IP address. This should show up just below the text box where you entered the domain name. If it doesn’t show, wait a few minutes and refresh the entire page. The IP address should then be displayed.

Head over to your DNS settings and create an A record for the root of your domain (usually entered as “@” in DNS control panels) resolving to the IP address listed in Azure. You should now have a working naked domain name.

Method 2. Using a CNAME DNS entry pointed to the Azure alias

Head over to your DNS settings and create a CNAME record for the root of your domain resolving to yourwebsite.azurewebsites.net. Note that not all DNS providers allow a CNAME at the domain root, so if yours doesn’t, use Method 1 instead. You should now have a working naked domain name.
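As a rough illustration of the two methods (the values are placeholders):

Method 1:  Type: A       Host: @   Value: the IP address shown in the Azure portal
Method 2:  Type: CNAME   Host: @   Value: yourwebsite.azurewebsites.net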

The merits of using an A record vs a CNAME entry are not something that I will go into in this post. You can read more about the two DNS entry types here.

Pointing a www. to your Azure application as well (or any other subdomain)

As well as the naked domain, you will probably want your www subdomain to work too. This can be done using the same methods as above, but crucially, you will need to tell Azure that you also have ownership of the subdomain:

e.g. In order to verify http://www.yourwebsite.com, you need to create a CNAME DNS entry for awverify.www.yourwebsite.com that resolves to awverify.yourwebsite.azurewebsites.net

In order to verify blog.yourwebsite.com, you need to create a CNAME DNS entry for awverify.blog.yourwebsite.com that resolves to awverify.yourwebsite.azurewebsites.net

Again, once verified, you are free to set up an A record or CNAME record DNS entry to point to your Azure Web App.
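For example, to point www at your web app with a CNAME (hypothetical values again):

Type: CNAME   Host: www   Value: yourwebsite.azurewebsites.net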


Hosting multiple websites inside an Azure App Service

This information is up to date as of November 2015. The Azure offering changes a lot, so this information may quickly become out of date.

A few years back, Scott Hanselman wrote a blog post on how you could utilize the “Standard” tier of Azure Websites to save money by hosting multiple sites. Well, that was back in 2013 and the Azure offering has changed significantly.

Firstly, Azure Websites has now been merged into Azure App Service, along with a few other services. Here is a five-minute video from Channel 9 explaining what exactly is in Azure App Service.

So, how do I get that “Shared” tier multiple-websites setup that Scott Hanselman originally blogged about? Well, the Azure App Service pricing details page suggests that you can get there with the “Basic” tier, which is cheaper than “Standard”:

[Screenshot: the Azure App Service offer, grabbed in November 2015]

And how do I actually set this up in my Azure portal?

Confusingly, what is priced as an Azure App Service basically covers everything under the “Web + Mobile” category of the “New” option in the Azure portal:

[Screenshot: the “Web + Mobile” category in the Azure portal]

Why? Because:

An App Service plan is the container for your app. The App Service plan settings will determine the location, features, cost and compute resources associated with your app.

More info here.

You get told this information when setting up your app service plan and location (not sure why it defaults to Brazil…)

[Screenshot: setting up an App Service plan in the Azure portal]

So, an App Service Plan is basically the billable container for your apps.

So, if you want to create a new web app under the same App Service plan, simply select the existing plan when setting up the new Web App.

Performance Tuning AngularJS Apps

For existing AngularJS apps, there are a few things that you can do to try and improve performance.

Destroy JavaScript plugin elements that are no longer needed

This should help prevent your Angular app from running into memory problems in the browser. There are a couple of ways to do this.

In directives that wrap some sort of plugin (e.g. Slick Slider), you need to listen out for the “$destroy” event and call that particular plugin’s cleanup methods. In the case of Slick Slider, it’s the unslick() method, but it could simply be a call to jQuery’s remove() method, or you could just set the element’s HTML to an empty string:


$scope.$on('$destroy', function() {
    // Call the plugin's own API
    $('.slick-slider').unslick();

    // or call jQuery's remove function
    $(element).remove();

    // or, if you aren't using jQuery
    element.html("");
});

Unbind any watches when the current scope is destroyed

When you create a watch on a scoped variable, or on an event in Angular, the $watch function returns a function that, when called, will remove the watch. Call this returned function when your scope is destroyed, as the watch is no longer needed:

var unbindCompanyIdWatch = $scope.$watch('companyId', () => {
    // Do Something...
});

$scope.$on('$destroy', function() {
    unbindCompanyIdWatch();
});

Use One-Time Binding where possible

The less binding going on, the fewer watchers there are.

If you render values in your DOM using Angular that you know are only going to be set once and will never change during the lifecycle of the page, they are candidates for one-time binding. The one-time binding syntax is basically two colons – “::” – and can be used in a few ways in your HTML templates:

<!-- Basic One-Time Binding -->
<p>{{::SomeText}}</p>

<!-- Within ng-repeat -->
<ul>
    <li ng-repeat="item in ::items">
        {{::item.name}}
    </li>
</ul>

<!-- Within ng-show -->
<p ng-show="::showContent">
    Some Content
</p>

<!-- Within ng-if -->
<p ng-if="::showContent">
    Some Content
</p>

Use “track by” when using ng-repeat where possible

By specifying a property for Angular to track an item within a collection by, you will prevent Angular from rebuilding entire chunks of the DOM unnecessarily. This will give you a performance boost that will be noticeable when dealing with large collections:

<ul>
    <li ng-repeat="item in items track by item.itemId">{{item.name}}</li>
</ul>

Ben Nadel has an excellent post on track by that you should check out.

Of course, you shouldn’t need to combine this with one-time binding, as track by would be pointless on a collection that never changes.

Cancel HTTP requests that are no longer required

If something happens that means the data currently being loaded is no longer needed (e.g. a navigation change), you should cancel the corresponding HTTP requests. Most browsers limit the number of concurrent requests to a single domain, so if your requests are no longer required, cancel them and free up those request slots.

You can do this by resolving the promise that you pass to the request as its timeout. Your requirements for when this cancellation needs to happen will be different for every app, so I would recommend that you write a httpRequestManagerService and marshal any HTTP requests you deem necessary through it. You can then resolve your promises based on some event – e.g. a navigation change event. Ben Nadel has a good post on cancelling Angular requests.
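As a rough sketch of the idea (the module name, service name and route-change event here are illustrative – adapt them to your own app), resolving a deferred that was passed in as the request’s timeout aborts the underlying request:

angular.module('app').factory('httpRequestManagerService', ['$http', '$q', function ($http, $q) {
    var cancellers = [];

    return {
        // Issue a GET whose underlying request is aborted if its canceller is resolved
        get: function (url) {
            var canceller = $q.defer();
            cancellers.push(canceller);
            return $http.get(url, { timeout: canceller.promise });
        },

        // Resolve every outstanding canceller, aborting any in-flight requests
        cancelAll: function () {
            cancellers.forEach(function (canceller) {
                canceller.resolve();
            });
            cancellers = [];
        }
    };
}]);

// e.g. cancel everything when the route is about to change
// $rootScope.$on('$routeChangeStart', httpRequestManagerService.cancelAll);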

Interchange ng-if and ng-show where appropriate

On the surface, ng-show and ng-if produce the same result. However, under the hood, they behave slightly differently.

ng-show still renders your HTML no matter what. If it does not need to be shown, the HTML element with the ng-show directive will simply be marked with a display:none CSS style.

ng-if will completely remove the element that contains the directive, along with all of its children, from the DOM.
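To illustrate the difference (showContent here is just an example scope variable):

<!-- Rendered but hidden with display:none; its watchers stay active -->
<div ng-show="showContent">...</div>

<!-- Removed from the DOM entirely, along with its watchers, when showContent is false -->
<div ng-if="showContent">...</div>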

There is obviously a cost in completely removing and adding entire chunks of HTML. However, if you are dealing with a lot of HTML, or the HTML within your ng-if contains a lot of Angular expressions, I have found it to be more performant than ng-show.

My advice is to evaluate both in each case before making a decision.

Please feel free to throw in any more performance tips in the comments

The 2015 PC build for Gaming and Programming

Having built my last desktop in 2011 and noticing that some things were starting to run a little slowly, I’ve gone for a desktop refresh. Here is what I have gone for:

CPU:

Intel Core i5 i5-4690K
This is one of the best “bang for buck” CPUs that you can get at the moment. I was previously running an i7, but this new Haswell-architecture i5 beats my old i7 comfortably across the board, and it also runs cooler.

RAM:

Corsair CMY16GX3M2A1866C9R Vengeance Pro Series 16GB (2x8GB) DDR3 1866Mhz CL9 XMP Performance Desktop Memory
I’ve been burnt in the past by cheaper RAM becoming unstable, so I will never scrimp on RAM again. This RAM supports Intel’s XMP for overclocking, which has been enabled since day one without any issues.

Motherboard:

MSI Z97 Gaming 5 Intel LGA1150 Z97 ATX Motherboard
One of the cheapest parts of this build. I was very skeptical about getting a mainboard that does not have an integrated Intel NIC (this board instead opts for a Killer Networks NIC). My last mainboard had a Bigfoot Networks E2100 NIC, which out of the box was incredibly buggy and unstable. It was actually so unusable that I ended up disabling the TCP/IP capabilities of the card and letting the tested and reliable TCP/IP stack in Windows do its thing. The Killer Networks E2100 card is now basically abandonware: it does not work with newer games online, and until recently wasn’t compatible with the iTunes store. However, the E2200 is current and is still getting plenty of attention from Killer Networks, and I haven’t had any issues with it online so far. My advice would still be to go for a tried and tested Intel NIC if you can, although I’m yet to experience any problems with the E2200 Killer Networks card on this mainboard.

One of the best things about this mainboard is the BIOS, which has a fantastic user interface and gives you plenty of control over overclocking features, both simple and advanced. This piece of kit was fantastic value for money.

Graphics:

MSI NVIDIA GTX 970 Gaming Twin Frozr HDMI DVI-I DP Graphics Card
The more graphics memory, the better. This card lets me comfortably play the newest games (including GTA5) with the graphics settings all maxed out. It also runs quietly.

Cooling:

Corsair Hydro Series H55 All-In-One Liquid Cooler for CPU
This was a surprise win for me. I previously used a Be Quiet CPU fan, which was nice and silent and kept my CPU nice and cool. However, this ready-to-rock water cooler from Corsair really impressed me, not just on the noise levels, but also on the cooling capabilities. For the first time in years, my CPU will happily idle at 25°C.

Main OS Hard Drive:

Crucial CT512MX100SSD1 2.5 inch 512GB SATA III Solid State Drive
The OS hard drive caused me great pain originally. I started this build off running the OCZ Arc 100, a 480 gigabyte SSD priced very cheaply at £120. However, this was simply too good to be true, and within a week of the new build the SSD suffered some serious file corruption and required a reinstall of Windows, which would only go on after a hard SSD wipe (a Windows installer format was not enough). I decided not to persevere with the OCZ Arc 100, as a quick bit of research revealed that it is an unreliable drive with a number of known problems. Have a read about the problems that other people have had with this drive over at newegg.com. You get what you pay for, so I sent the defective OCZ Arc 100 back for a refund and am instead running a more highly rated, but more costly, Crucial SSD.

Having a hard drive fail on you is bad enough, but it’s that much more hassle when it’s the drive that contains your battlestation’s operating system. The Arc 100 was the only letdown of this build, and it came as a surprise, as I had previously run a smaller OCZ SSD without any problems.

Operating System:

I’m running Windows 8.1, which on the 31st of July will become Windows 10 :-)

I also run a Xubuntu VM within VMware Player for my golang playtime. If you’re an Ubuntu user, I recommend that you give Xubuntu a try. You might just prefer XFCE, like I do.

Stop Bashing Angular

I appreciate that I’m a little late to this discussion. I’m not sure if you may have noticed, but I don’t blog as much as some of the other better known developers out there. Why? I’m too busy working contracts and building real world applications that have real world problems and real world requirements.

So I’ve spoken about this several times in the past, and I’ll speak about it again, but I firmly believe that software development suffers from trend hype lifecycles in a massive way:

[Image: the technology trend hype lifecycle]

I do however think that there is one key thing missing from the above diagram – the “Anti Trend”. Sometimes a technology will come along that is genuinely popular and useful for a real reason – it actually does a job, is well received and supported, and gets pretty popular. In software development, it’s something that can make our difficult job easier. The “Anti Trend” refers to the detractors who, I suspect, want to be the first to “bail out” on a technology.

I’m all for critique of a technology or a way of solving a problem, but your arguments need to stand up.

I had a look into this in my post “That CSS / Javascript library isn’t as big as you think”, where I pointed out that it was odd that those criticising jQuery and Twitter Bootstrap complained about the size of these libraries, but seemed to be ignorant of the basics – free CDNs and gzipping.

I also had a look into the Anti Trend in my post “In defence of the RDBMS: Development is not slow. You just need better tooling”, where I pointed out that one of the key criticisms of relational databases was that development against them was slow. This is only the case if you don’t bother using any tooling.

Different sections of the development community run through the technology trend hype lifecycle at different speeds, and the Javascript community runs through it at breakneck speed.

So, right now, the Anti Trenders are having a pop at Angular (a few months ago it was jQuery). The arguments are usually the same and genuinely do not stand up in a real world situation. I’ve seen blog posts from people that I previously had respected quite a lot, critiquing Angular in a seriously misplaced manner:

Misplaced critique 1 – Blaming the framework when you should be blaming the problem that the framework is trying to solve

If you don’t like how Angular can be used, you probably won’t like how Ember, React, and Knockout can be used either. These front end frameworks exist and are used for a reason – to solve the problem of getting a client and their data around an application nicely and seamlessly.

What shocked me about the Anti Trend blog posts was that they revealed a level of ignorance on the side of the authors. For me, someone with over a decade’s worth of experience of publishing material on the web and developing real web applications (you know, ones that have requirements, costs and deadlines, and need to actually work and do something), front end frameworks like Angular and Knockout solved a very real problem. Both technologies have helped me to provide richer client experiences that were more testable, and they helped me get there quickly and keep the customers and the users happy.

Misplaced critique 2 – Getting tribal and then blaming the other tribe

It’s an age-old technique that can be applied to just about any argument: “I’m over here with the cool kids, you’re over there with the weirdos”. You might be wondering what I’m on about, but it’s actually an argument that I’ve seen in an anti-Angular blog post:

I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them

The above is 100% verbatim from an anti-Angular blog post. “People that like Angular must be uncool enterprise developers”. Sorry dude, some of us work in the enterprise and build real world line of business applications. We have bills to pay and don’t live rent free working on speculative startup ideas.

[Image: the LifeInvader social network HQ from GTA5 – your speculative start-up could also get this big]

Misplaced criticism 3 – I don’t like where it could potentially go, therefore it’s wrong now

If you hadn’t heard, Angular 2 will bring in some changes. If you didn’t like Angular in the first place, why are you crying over the loss of version 1? And why is this such a big issue in the Javascript community? Did you know that ASP.NET will undergo significant changes with the next release (called vNext)? The .NET community is generally excited for this release, and isn’t mourning the loss of the old versions.

This reddit user summed up this argument nicely:

[Screenshot: the reddit comment]

Misplaced criticism 4 – Pointing out the problems but offering no solutions

One of the best things about being a developer is getting challenges thrown at you every day, and thinking of solutions to those problems. Can you imagine having someone on your team who was 100% negative, and constantly stopped their work to call you over and tell you that they had discovered some problem, and that there was no way around it other than throwing the whole thing out and starting again? It would be pretty annoying, right?

Well, if you’re going to criticise front end frameworks and offer no alternatives or other solutions, I’m going to assume that you are advocating the use of A LOT of jQuery instead (which I’m guessing you think is too bloated, and that you’d tell me to write bare metal Javascript).

It’s silly, isn’t it? I’m not saying it couldn’t be done; I’m saying it would be hard, your code would suck, it would be difficult to test, and it would take an eternity to deliver.

Conclusion

Make up your own mind. Talk to devs in your network who may have used the technology in a real world situation. If you don’t know any, find your local web developer meetup and get talking to people. Build a small prototype and form your own opinion. Don’t just follow the trend, or the anti trend. What is your project priority? Delivery? Or something else?

It’s not unreasonable to consider blog posts on the subject, but please consider whether the author has a valid opinion. Do they actually build and deliver real world apps, or do they now make their money from blogging, podcasting and running a training company on the side? Some good places to go for some real world insight (as in real, actual code problems) into AngularJS are:

In the above links you will quickly discover real world Angular challenges and how they were overcome. It will also give you an indication of how well trodden the road is before you decide to set off down it.

TL;DR

If you’re going to bash Angular, think:

  • Is what I dislike about Angular a fault of the framework, or of the problem that I am trying to solve?
  • Does my criticism apply to all other front end frameworks?
  • Am I criticising an Angular antipattern that could be resolved by coding a little more sensibly?
  • Can I offer a better alternative solution?
  • Am I being an “Anti-Trender”? If you’re not sure, recall if you denounced one of the following on Facebook: Kony 2012, the no makeup selfie, the Ice Bucket Challenge.

Form your own opinion from real world experiences.

Using your own router with BT Infinity

You don’t need to use BT’s supplied Home Hub with BT Infinity.

BT used to supply Infinity customers with two pieces of equipment.

  1. A BT Home Hub (router)
  2. A VDSL (Very-high-bit-rate digital subscriber line) Modem

However, as the technology has improved, BT now supply Infinity customers with a single combined BT Home Hub (5 onwards), which contains both a router and a modem.

The nice thing about the old approach was that it was easier to use your own hardware, which means more control.

Well, the good news is that you can still ditch the BT Home Hub and use whatever router and modem combo you like.

You essentially have two options:

Option 1 – A separate VDSL router and modem

This is the option I’ve gone for, and here’s what I did.

Search on ebay for one of these:

[Image: a BT Openreach VDSL modem]

A BT Openreach VDSL modem. This link should get you what you are after.

Then, get yourself the best router that you can afford that supports PPPoE. I’ve gone for the Netgear R6300-100UKS AC1750 Dual Band Wireless Cable Router. It’s awesomely powerful and has killer WiFi. In fact, the WiFi is so good that I’ve ditched my homeplug powerline WiFi boosters (5GHz ftw).

To get your internet working, you then just need to do the following.

  1. Connect your VDSL modem into your phone line, via a filter of course
  2. Connect your router to your modem using an ethernet cable
  3. Jump onto your router’s admin page. Enable PPPoE, and set your username to “bthomehub@btbroadband.com”
  4. Browse the internet!

It’s essentially the setup that is described in this diagram, just replace the BT Home Hub with a PPPoE router of your choice:

[Diagram: BT Infinity setup]

Option 2 – A combined VDSL router and modem

You can now get combined VDSL routers and modems. In fact, they are easier to buy than standalone VDSL modems. Here’s a good search listing. The setup will be the same as the above, just without steps 1 and 2, and you will obviously connect the router directly to the phone line.

Conclusion

I would personally go with option 1.

Option 1 gives you much more flexibility in terms of the physical location of your router. Rather than it being stuck near your master socket, you can have the modem next to the master socket and instead put the router somewhere more useful, like right in the middle of the house. All you then need to do is connect the router to the modem using an ethernet cable.

If you have any other tips, please post them in the comments.

Upvotes and Downvotes don’t work in big internet discussions

A few years back when I first discovered Reddit, I found it to be a place full of insight and interesting content.

As someone who was used to traditional bulletin boards, the idea that the most valuable content appeared at the top of a comment thread actually blew my mind.

Rather than the comments all being treated as equal, and just being displayed in chronological order, they were instead shown in the order that the community decided.

I loved the fact that someone might create a thread that linked to some content, and then the top voted comment was usually someone who was an expert in that field and could shed more light on the content. Users could then collaborate with them and garner more information.

The community decides a comment’s value through a system of upvoting and downvoting.

Here’s the problem – what one user thinks is upvote worthy, another user may not. Now, this isn’t a big problem for smaller communities – e.g. a subreddit with a few hundred subscribers about something specific, like a particular model of car. This is because its users all have a shared interest and will generally agree on what should be upvoted and what shouldn’t.

When a community has a broader appeal (e.g. news, or funny images), this model is flipped on its head. There is no longer a united community present; the community has fewer common interests. So what content will appeal to most users? Something simple, usually. Something that is quick to read and easy to understand. This is usually an attempt at being funny or witty, and might make you chuckle for a moment, but will not add any more insight into the subject in question. These comments get the highest number of upvotes, leaving anything of genuine insight or value to sink to the bottom and drown in a sea of pointless remarks.

This can be illustrated very easily. Any thread on a popular subreddit will generally take the following format:

1. Thread posted linking to an image or article

2. Top voted comment is a statement of under 20 words attempting to be funny.

3. A deep thread of unfunny attempts at being witty is anchored to the top voted comment, as other redditors eagerly jump in, hoping to grab a few upvotes.

This breaks the upvote and downvote model. The top voted comment no longer adds any real value to the discussion; instead, it distracts you. Comment voting systems are now bringing the noise back, and in a much worse way than chronological comments ever did. In wide-appealing communities, they are a race to the bottom. The user that can write the most unintelligent, slapstick comment in the fewest words wins the fake internet points.

You now need to scroll down through the comments and hunt for the points of genuine insight that actually add value to the discussion.

As the community collectively gets more and more unintelligent, this problem gets worse. Users will even downvote comments or linked articles that are factually accurate simply because they dislike it. This community is now a horrible place to be.

It isn’t just a Reddit problem. Upvoting systems have been bolted onto many discussion mediums as the internet woke up to their automatic noise-filtering benefits.

Have a look at this comment thread on a Guardian article about the ongoing shitstorm at Tesco PLC. The TL;DR is that Tesco have managed to overstate their profits by £250 million, causing £2 billion to be wiped off the company’s value, and resulting in many senior managers getting suspended.

The comment thread is a festival of eye-rolling idiocy, as the idiots take over and have an idiotic circlejerk free-for-all of bullshit, probably whilst on their lunch breaks at work. We can easily put the comments into four categories:

1. Slapstick attempts at being funny in as few words as possible

[Screenshot: the comment]

Thanks – your comment is funny but adds no value. 40 Upvotes at the time of writing.

2. Idiotically wild conspiracies that miss the point by a few hundred planets

[Screenshot: the comment]

This comment is very misinformed, claiming that the whole episode is an attempt by Tesco to get their corporation tax bill down. Let’s just go over that for a minute – Tesco deliberately wiped £6 billion off their share price and suspended lots of senior managers, so that they could avoid tax? Oh, and by the way, if you make more profit you pay more corporation tax.

101 upvotes at the time of writing. So that’s at least 101 people that have read the comment and had some sort of belief in it. The upvote count is telling you that this comment has some sort of value. It doesn’t. It’s worthless.

3. Comments that have nothing to do with the current discussion

[Screenshot: the comment]

76 upvotes for the person telling us about their food shopping bill. Thanks. This comment reminds me of the “I brought my own mic!” line from a classic Simpsons episode.

Sitting there, smugly telling us irrelevant information that we don’t need to know. It’s almost spam.

4. Comments that actually contribute to the current discussion

[Screenshot: the comment]

And there we have it – this comment spurred a valuable discussion thread related to the content of the topic. 18 measly upvotes.

How do we fix this?

1 – Turn off comments

One of the fixes is to remove comments. On a news article that states factual information about something that has happened, how much value can the community really add? Well, I think the community can always add value and insight, especially for those that will want to dig deeper into a particular subject or story, so I wouldn’t like to see this happen.

2 – Filter the noise in a better way

I’ve got a few ideas about how you could filter the noise out of discussion threads.

Rather than simply upvoting or downvoting, why don’t we apply a tag instead? Upvoting just feels too broad – if you upvote something it may be because you find it funny, because you agree with it, or because you found it insightful. So what if you could drag an icon onto a comment that represented how you felt about it instead?

So looking at our first comment from above:

[Screenshot: the first comment from above]

This would have 40 “funny” tags instead of 40 upvotes.

And our final comment:

[Screenshot: the insightful comment from above]

Would have 18 “insightful” tags.

You could then even put comment threads into “funny” mode, where the comments are sorted by the highest number of “funny” tags. Likewise with “insightful” mode, where the comments would be sorted by the highest number of “insightful” tags. This is similar to how Canvas used to work, before Christopher Poole pulled it.
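As a rough sketch of how that sorting could work (the data shape here is purely illustrative):

// Each comment carries per-tag counts rather than a single score
var comments = [
    { text: "Quick one-liner", tags: { funny: 40, insightful: 2 } },
    { text: "Detailed analysis", tags: { funny: 1, insightful: 18 } }
];

// Sort by whichever tag the reader has chosen as the current "mode"
function sortByTag(commentList, tag) {
    return commentList.slice().sort(function (a, b) {
        return (b.tags[tag] || 0) - (a.tags[tag] || 0);
    });
}

sortByTag(comments, 'insightful'); // "insightful" mode
sortByTag(comments, 'funny');      // "funny" mode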

I think this could work, so I’m going to see if I can create something that will use this commenting system. Watch this space.

Not doing code reviews? NDepend can help you

In a previous post about NDepend, I looked at how it can be used to clean up codebases by easily and quickly isolating blocks of code or classes that violate a set of rules. Helpfully, NDepend ships with a sensible set of default rules, so we can stick to NDepend’s defaults, tweak them, or add our own as we see fit.

Having just finished off a sprint with a fairly large team, I decided to install the NDepend 5 Visual Studio 2013 extension and do some digging around the solution. Whilst the information that NDepend 5 has shown me has been a little depressing, it’s also compulsive reading and does bring out the “engineer” within you, as you find yourself questioning the point of classes or methods, and thinking how you could better refactor NDepend code violations. It’s a great exercise to pair up with another dev on your team and run through the code violations together – I found I could bounce code refactoring ideas off a colleague in my team. NDepend could even be used here to help run code reviews, as many of its out-of-the-box rules follow the SOLID principles, rather than the opinion of some member of your team.

NDepend 5… What’s new?

Once you’ve installed NDepend, get ready to let it loose on your codebase. Brace yourself for the results – remember, it isn’t personal…

Dashboards, Dashboards, Dashboards

Dashboards add huge value to any application (I know because I have built a few), and what can be particularly valuable is a set of visual indicators that quickly summarise some metric. Aside from NDepend’s red/green light that appears in the bottom right of Visual Studio (which is a very high level indicator of the state of your codebase and a nagging reminder that you’ve got some critical code violations to sort out), the dashboard gives you more detailed information. You can get to the dashboard by clicking on NDepend’s indicator icon, or by selecting “Dashboard” from the window that is shown after NDepend has had a good old inspection of your codebase.

[Screenshot: the NDepend dashboard]

The top half of the new dashboard shows you some cool high level stats about the DLLs that you have just analysed. If you are feeling brave, you could even share some of these stats with the project sponsors, but naturally only after you’ve got your code coverage up. The dashboard also shows you the current state of your codebase against NDepend’s code rules, and any custom rules that you may have added, with the most critical violations shown at the top.

Analyse regularly to get the most out of NDepend 5

Before I continue looking into the new features of NDepend 5, it’s worth noting one of the key things that you need to do in order to unlock the full power of NDepend. Each time you analyse your codebase, NDepend captures information about its current state. This capture is a time capsule of how the codebase looked, and can be used as a “baseline” for comparative analysis. The more you analyse, the more baselines you will have.

By default, the data gathered from each analysis is stored in an “NDependOut” folder that lives at the same level as your solution file. I would recommend committing this folder to source control so that other members of your team can make use of these files, and so that they are treated with the same importance as your actual codebase.

[Screenshot: the NDependOut folder next to the solution file]

Is my team getting better or worse at writing code?

A new addition to NDepend 5 is the ability to restrict code analysis to recent code only. This will help you to hide the noise and focus on the code that your team is writing right now. On a codebase that has a few years behind it, and is of a significant size, it would be a little unrealistic to refactor all of the older code. Even if you are using other coding standards add-ins such as StyleCop, it’s easy to get lost in a sea of broken rules when you are looking at a codebase that has existed for several years.

[Screenshot: the option to restrict analysis to recent code]

In order to define what your recent code actually is, you will need to provide NDepend with a “baseline”, if it cannot find the data on its own. This will tell NDepend about the state of the code at a previous time. The state of the codebase will be captured by NDepend every time you press the big “Analyse” button, so remember, the more you analyse, the more data you will have. You will be prompted for this information, if needed, when you check the “Recent Violations Only” checkbox.

In my case, I found that it was best to analyse the codebase at the start of a sprint, and then again as the sprint approached its final days, in order for there to be enough time for the team to do any refactoring that we felt was necessary on any code written during the sprint.

You could even analyse after each feature request or bug was closed off in order to get more finely grained data.

Is the new code that is being written of a good standard?

As the baselines offer an insight into the state of the code, NDepend 5 comes bundled with a set of graphs that detail how your codebase is performing between analyses. The default graphs will give you an insight into how your codebase is changing in terms of the number of lines of code (I personally enjoy simplifying and deleting code, so this metric is quite an important one for me), the number of rules violated, code complexity, and reliance on third party code. You ideally want to see your codebase quality getting better – these graphs give you an instant insight into this.

[Screenshot: NDepend trend charts]

It’s time to pay down your technical debt…

Or at least stop it from growing.

The best way to get a feel for NDepend 5 is to have a play with the 14-day trial version. On a team of any size, NDepend really does show its value, and will push you and your team into writing better code.