Technical

Stop Bashing Angular

I appreciate that I’m a little late to this discussion. You may not have noticed, but I don’t blog as much as some of the other better known developers out there. Why? I’m too busy working contracts and building real world applications that have real world problems and real world requirements.

So I’ve spoken about this several times in the past, and I’ll speak about it again, but I firmly believe that software development suffers from trend hype lifecycles in a massive way:

Technology Trend Hype Lifecycles

I do however think that there is one key thing missing from the above diagram – the “Anti Trend”. Sometimes a technology will come along that is genuinely popular and useful for a real reason – it actually does a job, is well received and supported, and gets pretty popular. In software development, it’s something that can make our difficult job easier. The “Anti Trend” refers to the detractors who, I suspect, want to be the first to “bail out” on a technology.

I’m all for critique of a technology or a way of solving a problem, but your arguments need to stand up.

I had a look into this in my post “That CSS / Javascript library isn’t as big as you think”, where I pointed out that it was odd that those criticising jQuery and Twitter Bootstrap complained about the size of these libraries, but seemed to be ignorant of the basics – free CDNs and gzipping.

I also had a look into the Anti Trend in my post “In defence of the RDBMS: Development is not slow. You just need better tooling”, where I pointed out that one of the key criticisms of relational databases was that development against them was slow. This is only the case if you don’t bother using any tooling.

Different sections of the development community run through the technology trend hype lifecycle at different speeds, and the Javascript community runs through it at breakneck speed.

So, right now, the Anti Trenders are having a pop at Angular (a few months ago it was jQuery). The arguments are usually the same and genuinely do not stand up in a real world situation. I’ve seen blog posts from people that I previously respected quite a lot, critiquing Angular in a seriously misplaced manner:

Misplaced criticism 1 – Blaming the framework when you should be blaming the problem that the framework is trying to solve

If you don’t like how Angular can be used, you probably won’t like how Ember, React, and Knockout can be used either. These front end frameworks exist and are used for a reason – to solve the problem of getting a client and their data around an application nicely and seamlessly.

What shocked me about the Anti Trend blog posts was that they revealed a level of ignorance on the side of the authors. For me, someone with over a decade’s worth of experience of publishing material on the web and developing real web applications (you know, ones that have requirements, costs and deadlines, and need to actually work and do something), front end frameworks like Angular and Knockout solved a very real problem. Both technologies have helped me to provide richer client experiences that were more testable, and they helped me get there quickly and keep the customers and the users happy.

Misplaced criticism 2 – Getting tribal and then blaming the other tribe

It’s an age-old technique that can be applied to just about any argument. “I’m over here with the cool kids, you’re over there with the weirdos”. You might be wondering what I’m on about, but it’s actually an argument that I’ve seen in an anti Angular blog post:

I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them

The above is 100% verbatim from an anti Angular blog post. “People that like Angular must be uncool enterprise developers”. Sorry dude, some of us work in the enterprise and build real world line of business applications. We have bills to pay and don’t live rent free working on speculative startup ideas.

The LifeInvader social network HQ from GTA5. Your speculative start up could also get this big

Misplaced criticism 3 – I don’t like where it could potentially go, therefore it’s wrong now

If you hadn’t heard, Angular 2 will bring in some changes. If you didn’t like Angular in the first place, why are you crying over the loss of version 1? And why is this such a big issue in the Javascript community? Did you know that ASP.NET will undergo significant changes with the next release (called vNext)? The .NET community is generally excited for this release, and isn’t mourning the loss of the old versions.

This reddit user summed up this argument nicely:

[Screenshot: the reddit comment]

Misplaced criticism 4 – Pointing out the problems but offering no solutions

One of the best things about being a developer is getting challenges thrown at you every day, and thinking of solutions to those problems. Can you imagine having someone on your team that was 100% negative, and constantly stopped their work to call you over and tell you that they had discovered some problem, and that there was therefore no way around it without throwing the whole thing out and starting again? It would be pretty annoying, right?

Well, if you’re going to criticise front end frameworks and offer no alternatives or other solutions, I’m going to assume that you are advocating the use of A LOT of jQuery instead (which I’m guessing you think is too bloated, and that you’d tell me to write bare metal Javascript).

It’s silly, isn’t it? I’m not saying it couldn’t be done, I’m saying it would be hard, your code would suck, it would be difficult to test, and it would take an eternity to deliver.

Conclusion

Make your own decision. Talk to devs in your network that may have used the technology in a real world situation. If you don’t know any, find your local Web Developer meetup and get talking to people. Build a small prototype and form your own opinion. Don’t just follow the trend, or the anti trend. What is your project priority? Delivery? Or something else?

It’s not unreasonable to consider blog posts on the subject, but please consider whether the author has a valid opinion. Do they actually build and deliver real world apps, or do they now make their money from blogging, podcasting and running a training company on the side? Some good places to go for some real world insight (as in real actual code problems) into AngularJS are:

In the above links you will quickly discover real world Angular challenges and how they were overcome. They will also give you an indication of how well trodden the road is before you decide to set off down it.

TL;DR

If you’re going to bash Angular, think:

  • Is what I dislike about Angular a fault of the framework, or of the problem that I am trying to solve?
  • Does my criticism apply to all other front end frameworks?
  • Am I criticising an Angular antipattern that could be resolved by coding a little more sensibly?
  • Can I offer a better alternative solution?
  • Am I being an “Anti-Trender”? If you’re not sure, recall if you denounced one of the following on Facebook: Kony 2012, the no makeup selfie, the Ice Bucket Challenge.

Form your own opinion from real world experiences.

Technical

Using your own router with BT Infinity

You don’t need to use BT’s supplied home hub with BT Internet.

BT used to supply Infinity customers with two pieces of equipment.

  1. A BT Home Hub (router)
  2. A VDSL (Very-high-bit-rate digital subscriber line) Modem

However, as the technology has improved, BT now supply Infinity customers with a single combined BT Home Hub (5 onwards), which contains both a router and a modem.

The nice thing about the old approach was that it made it easier to use your own hardware, which means more control.

Well, the good news is that you can still ditch the BT Home Hub, and use whatever router and modem combo that you like.

You essentially have two options:

Option 1 – A separate VDSL router and modem

This is the option I’ve gone for, and here’s what I did.

Search on eBay for one of these:


A BT Openreach VDSL modem. This link should get you what you are after.

Then, get yourself the best router that you can afford that supports PPPoE. I’ve gone for the Netgear R6300-100UKS AC1750 Dual Band Wireless Cable Router. It’s awesomely powerful and has killer WiFi. In fact, the WiFi is so good that I’ve ditched my homeplug powerline WiFi boosters (5G ftw).

To get your internet working, you then just need to do the following:

  1. Connect your VDSL modem into your phone line, via a filter of course
  2. Connect your router to your modem using an ethernet cable
  3. Jump onto your router’s admin page. Enable PPPoE, and set your username to “bthomehub@btbroadband.com”
  4. Browse the internet!

It’s essentially the setup that is described in this diagram, just replace the BT Home Hub with a PPPoE router of your choice:

[Diagram: BT Infinity setup]

Option 2 – A combined VDSL router and modem

You can now get combined VDSL routers and modems. In fact, they are easier to buy than standalone VDSL modems. Here’s a good search listing. The setup is the same as the above, just without steps 1 and 2 – you will obviously connect the router directly to the phone line.

Conclusion

I would personally go with option 1.

Option 1 gives you much more flexibility in terms of the physical location of your router. Rather than everything being stuck near your master socket, you can have the modem next to the master socket, and instead put your router somewhere more useful, like right in the middle of the house. All you then need to do is connect the router to the modem using an ethernet cable.

If you have any other tips, please post them in the comments.

Uncategorized

Upvotes and Downvotes don’t work in big internet discussions

A few years back when I first discovered Reddit, I found it to be a place full of insight and interesting content.

As someone that was used to traditional bulletin boards, the idea that the most valuable content appeared at the top of a comment thread actually blew my mind.

Rather than the comments all being treated as equal, and just being displayed in chronological order, they were instead shown in the order that the community decided.

I loved the fact that someone might create a thread that linked to some content, and then the top voted comment was usually someone who was an expert in that field and could shed more light on the content. Users could then collaborate with them and garner more information.

The community decides a comment’s value through a system of upvoting and downvoting.

Here’s the problem – what one user thinks is upvote worthy, another user may not. Now this isn’t a big problem for smaller communities – e.g. a subreddit with a few hundred subscribers about something specific, like a particular model of car. This is because its users all have a shared interest and will generally agree on what should be upvoted and what shouldn’t.

When a community has a broader appeal (e.g. news, or funny images), this model is flipped on its head. There is no longer a united community present. The community has fewer common interests. So what content will appeal to most users? Something simple, usually. Something that is quick to read and easy to understand. This is usually an attempt at being funny or witty, and might make you chuckle for a moment, but will not add any more insight into the subject in question. These comments get the highest number of upvotes, leaving anything of genuine insight or value to sink to the bottom and drown in a sea of pointless remarks.

This can be illustrated very easily. Any thread on a popular subreddit will generally take the following format:

1. Thread posted linking to an image or article

2. The top voted comment is a statement of under 20 words attempting to be funny.

3. A deep thread of unfunny attempts at being witty is anchored to the top voted comment, as other redditors eagerly jump in, hoping to grab a few upvotes of their own.

This breaks the upvote and downvote model. The top voted comment no longer adds any real value to the discussion, but instead distracts you. Comment voting systems are now bringing the noise back, and in a much worse way than chronological comments ever did. In communities with wide appeal, they are a race to the bottom. The user that can write the most unintelligent, slapstick comment in the fewest words wins the fake internet points.

You now need to scroll down through the comments and hunt for the points of genuine insight that actually add value to the discussion.

As the community collectively gets more and more unintelligent, this problem gets worse. Users will even downvote comments or linked articles that are factually accurate simply because they dislike it. This community is now a horrible place to be.

It isn’t just a Reddit problem. Upvoting systems have been bolted onto many discussion mediums as the internet woke up to their auto noise filtering benefits.

Have a look at this comment thread on a Guardian article about the ongoing shitstorm at Tesco PLC. The TL;DR is that Tesco have managed to overstate their profits by £250 million, causing £2 billion to be wiped off the company’s value, and resulting in many senior managers being suspended.

The comment thread is a festival of eye rolling idiocy, as the idiots take over and have an idiotic circlejerk free-for-all of bullshit, probably whilst on their lunch breaks at work. We can easily put the comments into 4 categories:

1. Slapstick attempts at being funny in as few words as possible

[Screenshot: a one-line joke comment]

Thanks – your comment is funny but adds no value. 40 Upvotes at the time of writing.

2. Idiotically wild conspiracies that miss the point by a few hundred planets

[Screenshot: a wild conspiracy theory comment]

This comment is very misinformed, claiming that the whole episode is an attempt by Tesco to get their corporation tax bill down. Let’s just go over that for a minute – Tesco deliberately wiped £6 billion off their share price and suspended lots of senior managers, so that they could avoid tax? Oh, and by the way, if you make more profit you pay more corporation tax.

101 upvotes at the time of writing. So that’s at least 101 people that have read the comment and had some sort of belief in it. The upvote count is telling you that this comment has some sort of value. It doesn’t. It’s worthless.

3. Comments that have nothing to do with the current discussion

[Screenshot: an off-topic comment about a food shopping bill]

76 upvotes for the person telling us about their food shopping bill. Thanks. This comment reminds me of the “I brought my own mic!” line from a classic Simpsons episode.

Sitting there, smugly telling us irrelevant information that we don’t need to know. It’s almost spam.

4. Comments that actually contribute to the current discussion

[Screenshot: a comment that spurred genuine discussion]

And there we have it – this comment spurred a valuable discussion thread related to the content of the topic. 18 measly upvotes.

How do we fix this?

1 – Turn off comments

One of the fixes is to remove comments. On a news article that states factual information about something that has happened, how much value can the community really add? Well, I think the community can always add value and insight, especially for those that will want to dig deeper into a particular subject or story, so I wouldn’t like to see this happen.

2 – Filter the noise in a better way

I’ve got a few ideas how you could filter the noise out of discussion threads.

Rather than simply upvoting or downvoting, why don’t we apply a tag instead? Upvoting just feels too broad – if you upvote something it may be because you find it funny, because you agree with it, or because you found it insightful. So what if you could drag an icon onto a comment that represented how you felt about it instead?

So looking at our first comment from above:

[Screenshot: the joke comment from earlier]

This would have 40 “funny” tags instead of 40 upvotes.

And our final comment:

[Screenshot: the insightful comment from earlier]

Would have 18 “insightful” tags.

You could then even put comment threads into “funny” mode, where the comments are sorted by the highest number of “funny” tags. Likewise with “insightful” mode, where the comments would be sorted by the highest number of “insightful” tags. This is similar to how Canvas used to work, before Christopher Poole pulled it.
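To make the idea concrete, here’s a minimal sketch in C# (the type and member names are all hypothetical – it just illustrates that each “mode” is nothing more than a different sort key):

using System.Collections.Generic;
using System.Linq;

// A comment holds a count per reaction tag, rather than one broad score
public class Comment
{
    public string Text { get; set; }
    public Dictionary<string, int> TagCounts { get; } = new Dictionary<string, int>();

    public int CountFor(string tag)
    {
        int count;
        return TagCounts.TryGetValue(tag, out count) ? count : 0;
    }
}

public static class ThreadSorter
{
    // "Funny" mode, "insightful" mode etc. just sort by a different tag
    public static IEnumerable<Comment> InMode(IEnumerable<Comment> comments, string tag)
    {
        return comments.OrderByDescending(c => c.CountFor(tag));
    }
}

Switching the same thread from “funny” mode to “insightful” mode then needs no extra data – just a different tag passed to the sort.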

I think this could work, so I’m going to see if I can create something that will use this commenting system. Watch this space.

Technical

Not doing code reviews? NDepend can help you

In a previous post about NDepend, I looked at how it can be used to clean up codebases by easily and quickly isolating blocks of code or classes that violate a set of rules. Helpfully, NDepend ships with a sensible set of default rules, so we can stick to those defaults, tweak them, or add our own as we see fit.

Having just finished off a sprint with a fairly large team, I decided to install the NDepend 5 Visual Studio 2013 extension and do some digging around the solution. Whilst the information that NDepend 5 has shown me has been a little depressing, it’s also compulsive reading and does bring out the “engineer” within you, as you find yourself questioning the point of classes or methods, and thinking about how you could better refactor NDepend code violations. It’s a great exercise to pair up with another dev on your team and run through the code violations together – I found I could bounce code refactoring ideas off a colleague. NDepend could even be used here to help run code reviews, as many of its out of the box rules follow the SOLID principles, rather than one team member’s opinion.

NDepend 5… What’s new?

Once you’ve installed NDepend, get ready to let it loose on your codebase. Brace yourself for the results – remember, it isn’t personal…

Dashboards, Dashboards, Dashboards

Dashboards add huge value to any application (I know because I have built a few), and what can be particularly valuable is a set of visual indicators that quickly summarise some metric. Aside from NDepend’s red / green light that appears in the bottom right of Visual Studio (which is a very high level indicator of the state of your codebase and a nagging reminder that you’ve got some critical code violations to sort out), the dashboard gives you more detailed information. You can get to the dashboard by clicking on NDepend’s indicator icon, or by selecting “Dashboard” from the window that is shown after NDepend has had a good old inspection of your codebase.

 

[Screenshot: the NDepend dashboard]

The top half of the new dashboard shows you some cool high level stats about the DLLs that you just analysed. If you are feeling brave, you could even share some of these stats with the project sponsors, but naturally only after you’ve got your code coverage up. The dashboard also shows you the current state of your codebase against NDepend’s code rules, and any custom rules that you may have added, with the most critical violations shown at the top.

Analyse regularly to get the most out of NDepend 5

Before I continue looking into the new features of NDepend 5, it’s worth noting one of the key things that you need to do in order to unlock the full power of NDepend. Each time you analyse your codebase, NDepend captures information about the current state of it. This capture is a time capsule of how the codebase looked, and can be used as a “baseline” for comparative analysis. The more you analyse, the more baselines you will have.

By default, the data gathered from each analysis is stored in an “NDependOut” folder that lives at the same level as your solution file. I would recommend committing this folder to source control so that other members of your team can make use of these files, and so that they are treated with the same importance as your actual codebase.

[Screenshot: the NDependOut folder]

Is my team getting better or worse at writing code?

A new addition to NDepend 5 is the ability to restrict code analysis to recent code only. This will help you to hide the noise and focus on the code that your team is writing right now. On a codebase that has a few years behind it, and is of a significant size, it would be a little unrealistic to refactor all of the older code. Even if you are using other coding standards add-ins such as StyleCop, it’s easy to get lost in a sea of broken rules when you are looking at a codebase that has existed for several years.

[Screenshot: the “Recent Violations Only” option]

 

In order to define what your recent code actually is, you will need to provide NDepend with a “baseline”, if it cannot find the data on its own. This will tell NDepend about the state of the code at a previous time. The state of the codebase will be captured by NDepend every time you press the big “Analyse” button, so remember, the more you analyse, the more data you will have. You will be prompted for this information, if needed, when you check the “Recent Violations Only” checkbox.

In my case, I found that it was best to analyse the codebase at the start of a sprint, and then again as the sprint approached its final days, in order for there to be enough time for the team to do any refactoring that we felt was necessary on any code written during the sprint.

You could even analyse after each feature request or bug was closed off in order to get more finely grained data.

Is the new code that is being written of a good standard?

As the baselines offer an insight into the state of the code, NDepend 5 comes bundled with a set of graphs that detail how your codebase is performing between analyses. The default graphs will give you an insight into how your codebase is changing in terms of the number of lines of code (I personally enjoy simplifying and deleting code, so this metric is quite an important one for me), the number of rules violated, code complexity, and reliance on third party code. You ideally want to see your codebase quality getting better – these graphs give you an instant insight into this.

[Chart: codebase trend graphs between analyses]

It’s time to pay down your technical debt…

Or at least stop it from growing.

The best way to get a feel for NDepend 5 is to have a play with the 14 day trial version. On a team of any size, NDepend really does show its value, and will push you and your team into writing better code.

Technical

That CSS / Javascript library isn’t as big as you think

I take some pride in going against the current tide of thought in most things. This is especially true in such a trend based industry as software development. We suffer from hype cycles in a big way.

Currently, in the world of Web Development, the community seems to be uber conscious about the potential overuse of Javascript and CSS libraries, with the fear that this overuse is bloating the web. I naturally agree with the core sentiment – having a page that requires several different Javascript libraries makes me wince a little, and other things would need to be considered (which clients will be using this web app, etc.) before a decision could be made about these libraries.

However, lots of developers in the community are taking this Kilobyte conscious attitude and using it to put off other devs from using popular and well established libraries.

jQuery

A few months ago it was jQuery, when someone developed a tool that attempted to evaluate whether your use of jQuery was necessary – youmightnotneedjquery.com. Unfortunately, this view caught on, and presumably a few poor souls got sucked in and attempted to re-invent a heavily proven and well tested wheel, only to hit those quirky cross browser potholes that jQuery normally lets you ignore.

Right now, it’s Twitter Bootstrap that is getting the attention of the devs that claim to be Kilobyte conscious.

Here’s the problem. The size argument doesn’t stand up in the case of jQuery, or in the case of Twitter Bootstrap. Nearly all arguments that you will see complaining about the size of these two libraries are bogus.

gzip and essential web optimisation

Well, one of the best trends to have ever swept the world of Web Development was the front end optimisation push a few years ago, driven largely by Steve Souders’ fantastic book, High Performance Websites.

Did we forget what we learnt in this book? Remember gzip? The bogus size arguments will always look at the size of the raw minified files. If you are sending these files over the wire in a production environment, you’re doing it wrong. All production web servers should be sending as much content as possible to the client gzipped. This will significantly reduce the amount of data that needs to go between the server and the client (or “over the wire”):

 

[Screenshot: Chrome network tab loading jQuery 2.1.1]

 

The above screengrab is from the network tab in Chrome developer tools, when accessing jQuery 2.1.1. The darker number in the “Size” column is what actually came down over the wire – 34KB of gzipped data. The grey number (82.3KB) is the size of the uncompressed file. Gzipping has saved the server and the client nearly 50 Kilobytes in data transfer.
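If you want to verify this for yourself outside of the browser, here’s a rough C# sketch (an illustration only – it assumes the server honours the Accept-Encoding header, and the exact numbers will vary by jQuery version):

using System;
using System.IO;
using System.IO.Compression;
using System.Net.Http;

class GzipCheck
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Ask for gzip, but don't auto-decompress - we want the wire size
            client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip");
            var wireBytes = client
                .GetByteArrayAsync("https://code.jquery.com/jquery-2.1.1.min.js")
                .Result;

            // Decompress to see how big the file really is
            using (var gzip = new GZipStream(new MemoryStream(wireBytes), CompressionMode.Decompress))
            using (var raw = new MemoryStream())
            {
                gzip.CopyTo(raw);
                Console.WriteLine("Over the wire: {0:N1} KB", wireBytes.Length / 1024.0);
                Console.WriteLine("Uncompressed:  {0:N1} KB", raw.Length / 1024.0);
            }
        }
    }
}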

If, for whatever reason, you can’t use gzipping in your production environment, then use the CDN. This will make your site even quicker as visitors to your site will likely have their cache already primed, saving you even more of those precious Kilobytes. And, it will also be gzipped.

Twitter Bootstrap

What’s worse is when the size argument is incorrectly used to pit one library against another. I’ve even seen blog posts blindly claiming that Web Developers should use Foundation over Twitter Bootstrap, because “Foundation is more lightweight”. In fact, Foundation’s core CSS file is 120KB gzipped, whilst Twitter Bootstrap’s is 98KB.

Given that it is so easy to debunk any size arguments, I actually think that our old friend, the Hype Cycle, is playing a part. Could it be that the popularity of jQuery and Twitter Bootstrap has made them uncool? I think it’s very possible.

“I don’t use Bootstrap. I use Foundation”.


Well, in Shelbyville they drink Fudd.

Straplines

My point here is that we seem to have forgotten the point of these libraries – to help us reduce our biggest cost – development.

Just take the jQuery strapline into consideration:

“Write less, do more”.

And one of the Twitter Bootstrap straplines:

“Bootstrap makes front-end web development faster and easier. It’s made for folks of all skill levels, devices of all shapes, and projects of all sizes”.

Everyone is correct to consider the impact of library sizes, but please take a measured approach in a world where bandwidth is increasing for most clients, and development costs remain high. Real world projects (that are my bread and butter) have deadlines and costs.

Like Hanselman said – “The Internet is not a closed box. Look inside.” Look inside and make your own decision, don’t just follow the sentiment.

Entity Framework, Technical

In defence of the RDBMS: Development is not slow. You just need better tooling.

I pride myself on being a real world software developer.

Anything that I publish in this blog is a result of some real world coding. I don’t just play around with small, unrealistic demoware applications. I write real world applications that get used by lots of users.

With experience comes wisdom, and in software part of that wisdom is knowing when to employ a certain technology. There is almost never a right or wrong answer on what technologies you should be using on a project, as there are plenty of factors to consider. They usually include:

  • Is technology x supposed to be the right tool for this job?
  • Do we have the knowledge to work with and become productive in technology x within the scope of the budget?
  • How mature is technology x?

The above questions will even have varying importance depending on where you are currently working. If you are working at an agency – where budgets must be stuck to in order for the company to make a profit, and long term support must be considered – then the last two points above have greater importance.

If you are working in house for a company that has its own product, then the last two points become less important.

Just about the only wrong reason would be something like:

  • Is technology x the latest and greatest thing, getting plenty of buzz on podcasts, blogs and twitter?

This doesn’t mean you should be using it on all projects right now. This just means that technology x is something that you ought to look into and make your own assessment (I’ll let you guess where it might be on the technology hype life cycle). But is this project the right place to make that assessment? Perhaps, but it will certainly increase your project’s risk if you think that it is.

One of the technologies that we are hearing more and more about is NoSQL databases. I won’t go into detail here, but you should be able to get a good background from this Wikipedia article.

Whilst I have no issues with NoSQL databases, I do take issue with one of the arguments against RDBMSs – that development against them is slow. I have now seen several blog posts arguing that developing with a NoSQL database makes your development faster, which would imply that developing against a traditional RDBMS is slow. This isn’t true. You just need to improve your tooling.

Here’s a snippet from the MongoDB “NoSQL Databases Explained” page:

NoSQL databases are built to allow the insertion of data without a predefined schema. That makes it easy to make significant application changes in real-time, without worrying about service interruptions – which means development is faster, code integration is more reliable, and less database administrator time is needed.

Well I don’t know about you guys, but I haven’t had to call upon the services of a DBA for a long time. And nearly all of my applications are backed by a SQL Server database. Snippets like the above totally miss a key thing out – the tooling.

Looking specifically at .NET development, the tooling for database development has advanced massively in the last 6 years or so, largely thanks to the plethora of ORMs that are available. We no longer need to spend time writing stored procedures and data access code. We no longer need to manually put together SQL scripts to handle schema changes (migrations). Those are huge real world development time savers – and they come at a much smaller cost, as at the core is a well understood technology (unless your development team is packed with NoSQL experts).

Let’s look specifically at real world use. Most new applications that I create will be built using Entity Framework code first. This gives me object mapping and schema migration with no difficulty.
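For anyone who hasn’t seen it, a code first model is nothing more than plain classes plus a context – a bare-bones sketch (the entity and property names here are made up):

using System;
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class AppContext : DbContext
{
    public IDbSet<Customer> Customers { get; set; }
}

From those classes alone, Entity Framework can build the schema, and with migrations enabled it will script every subsequent change to it.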

It also gives me total control over the migration of my application’s database:

  1. I can ask Entity Framework to generate a script of the migrations that need to be run on the database. This can then be run manually against the database, and the application will even warn you at runtime if the schema is missing a migration
  2. I can have my deployment process migrate the database. The Entity Framework team bundle a console app that can be packaged up and called from another process – migrate.exe
  3. I can have my application migrate itself. That’s right – Entity Framework even allows me to run migrations programmatically. It’s not exactly hard either (see the sketch below).
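As a rough illustration of point 3, running any pending migrations from code looks something like this (a sketch assuming an EF 6 code first project, where Configuration is the migrations class that Enable-Migrations generated for you):

using System;
using System.Data.Entity.Migrations;

public static class DatabaseMigrator
{
    public static void MigrateToLatest()
    {
        // DbMigrator works against the same Configuration class
        // that the Package Manager Console commands use
        var migrator = new DbMigrator(new Configuration());

        foreach (var pending in migrator.GetPendingMigrations())
        {
            Console.WriteLine("Applying pending migration: " + pending);
        }

        // Brings the database up to date with the latest migration
        migrator.Update();
    }
}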

My point is this: whilst migrating a schema may not be something that you would need to do in a NoSQL database (although you would still need to handle these changes further up your application’s stack), making changes to an RDBMS schema just isn’t as costly or painful as is being made out.

Think again before you dismiss good ol’ RDBMS development as slow – because it hasn’t been slow for me for a long time.

Android

Using a Nexus 4 in x64 Windows Land

I’ve had a horrible morning.

I’ve been dealing with shitty, unpolished crappy software all damned morning.

All because people that are too intelligent aren’t stepping back from what they are doing and running a “real world” acceptance test.

I’m now the owner of a Nexus 4. The phone is blazingly fast, has fantastic battery life and is largely free of bloat. Today I decided to try and put a few mp3s on my phone, so I connected my phone to my desktop running Windows 7 x64.

Problem 1 – No drivers

I can accept that every now and then, a device will not interface with my desktop PC. Usually the device vendors have tested this out and provided a CD or a link to download the drivers. Unfortunately, between LG and Google, no one bothered to test this out. Some intense googling should send you to the Google USB driver download page. Unfortunately these are x86 drivers only.

Yeah, you heard me. x86 only. Let’s just have a think about that. 3 years ago, in 2010, Microsoft announced that 50% of all Windows 7 installs were 64 bit. Which way would that number have gone in the 3 years since 2010? Yet Google only supplies 32 bit USB drivers.

Luckily for all of us 64 bit users out there, someone else (not Google or any of the huge corporations behind this phone) has compiled the drivers for 64 bit Windows. You can download them here.

Problem 2 – Device not showing in My Computer

It took me a while to get to the bottom of this, but it is because of the protocol that the phone has been set up to use – Media Transfer Protocol.

In order to use this on Windows, you need to have Windows Media Player installed. You might laugh and think that everyone has Media Player installed, but actually many European Windows installs don’t.

To download Windows Media Player you need to go to this MS download page in x86 Internet Explorer. I’m not joking. It must be x86 Internet Explorer so that Microsoft can verify that your version of Windows is genuine before letting you have the download. Going to the page in anything but x86 Internet Explorer will give you a whole host of pain when trying to validate your install.

Edit:

Incredibly, you may still get issues connecting the device to your computer after all of the above. If so, you will need to force Windows to use Microsoft’s generic driver, and not Google’s driver.

To do so:

  1. Locate the device in Device Manager. Right click and select “Update driver”.
  2. Click “Browse my computer for driver software”
  3. Click “Let me pick from a list of device drivers on my computer”
  4. In the dialogue that shows up, select “MTP USB Device”
  5. This will install the generic MTP driver that will let the device be used in Windows for file transfers etc. It’s worth noting that with this driver, the phone will not be visible over adb.

Conclusion

Whilst it’s great that this phone is vanilla Android and is largely free of bloat, I couldn’t gift the phone to anyone non-technical. To expect a normal everyday user to go through any of the above is utterly ridiculous.

Technical, Visual Studio

Using NDepend to clean up code and remove smells

At some point in your development career, you will have had an existing project dumped on you that you have problems understanding and generally navigating. Those difficulties can be the result of some undocumented domain reasons, but could also be because of code smells. The code smells will also make the domain difficult to understand. This, I’m sure, will have been experienced by nearly every developer.

When this happens, the project that you are working on contains a large amount of technical debt. Every new developer on the project loses time trying to navigate their way around the confusing and smelly code. The project becomes infamous within your team and nobody wants to work on it. The code becomes unloved with no real owner. You need to repay some of the technical debt.

In the agile world, we should be refactoring and reviewing code bravely and regularly to improve its quality and to reduce the number of code smells. This however can be difficult for a number of reasons:

  • Confidence – “can I really change this code without breaking x, y and z sections of this application?”
  • Reasoning – “Is renaming this method from ‘xyz’ to ‘Xyz’ really the correct thing to do?”

It’s safe to assume that nobody is going to be completely sure of the above two questions in all circumstances. This is why it’s becoming more and more common to use a code analysis tool to help you find any potential code smells, and advise you on how to fix them. A code analysis tool can be that advisor, telling you what you can do to reduce your technical debt. It can also stop you from racking up technical debt in the first place.

In this post, I’ll be exploring NDepend, a powerful static code analysis tool that can highlight any smells in your code, and give you some good metrics about your code. I’ll be running it against the latest version of NerdDinner, which can be downloaded from here.

You can run through this walk-through with NDepend as you read this post. You can find installation instructions here.

NDepend examines compiled code to help you find any potential issues in the generated IL code – so if you are running NDepend outside of Visual Studio, make sure you build your project first.

NDepend 101 – Red / Yellow / Green code

So let’s start with the basics. One of the coolest things about NDepend is the metrics that you can get at so quickly, without really doing much. After downloading and installing NDepend, you will see a new icon in your Visual Studio notification area indicating that NDepend is installed:

[Screenshot: the NDepend icon in the Visual Studio notification area]

Now, we can fire up NDepend simply by clicking on this new icon and selecting the assemblies that we want to analyse:

[Screenshot: selecting assemblies to analyse]

We want to see some results straight away, so let’s check “Run Analysis Now!”. Go ahead and click OK when you are ready. This will then generate an HTML page with detailed results of the code analysis. The first time you run NDepend, you will be presented with a dialogue advising you to view the NDepend Interactive UI Graph. We’ll get to that in a moment – but first let’s just see what NDepend’s default rules thought of NerdDinner:

[Screenshot: the yellow NDepend status icon]

Yellow! This means that NerdDinner has actually done ok – we have some warnings but no critical code rule violations. If we had some serious code issues, this icon would turn red. These rules can be customised and new rules can be added, but we’ll cover this later. So we now have a nice quick to view metric about the current state of our code.

This is a really basic measure, but it lets us know that our code, in its current state, either passes or fails analysis by NDepend. You may be questioning the usefulness of this, but if your team knows that their code must pass analysis by NDepend, a little red / yellow / green icon becomes a useful and quick to see signal. Are my changes good or bad?

Dependency Graph

The dependency graph allows you to visually see which libraries are reliant on each other. Useful if you want to know what will be affected if you change a library or swap it out for something else (you should be programming against interfaces anyway!):

[Screenshot: the dependency graph]

By default, the graph also shows you which library has the most lines of code. The bigger the box, the more lines of code. This sizing can also be changed to represent other factors of a library, such as the number of intermediate instructions. This lets you easily visualise information about your codebase.

Queries and Rules

Out of the box, NDepend will check all of your code against a set of pre-defined rules which can be customised. Violations of these rules can be treated as a warning, or as a critical error.

So NerdDinner has thrown up a few warnings from NDepend. Let’s have a look at what these potential code smells are, and see how they can be actioned:

[Screenshot: the Queries and Rules explorer]

 

So, within our Code Quality rule group, NerdDinner has thrown up warnings against 3 rules. NDepend’s rules are defined using a LINQ query to analyse your code. Let’s take a look at the query to find any methods that are too big:

[Screenshot: the “Methods too big” rule query]
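For those reading along without the screenshot, the rule is written in NDepend’s CQLinq and looks roughly like this (a sketch from memory – the exact default query may differ between NDepend versions):

// <Name>Methods too big</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }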

 

It’s quite self explanatory. We can easily alter this LINQ query if we want to change our rules – e.g. alter the number of lines of code necessary to trigger the warning. Looking at the results from the query:

[Screenshot: the query results listing 2 methods]

 

NDepend has directed us to 2 methods that violate this rule. We can get to them by double clicking on them, so that we can start refactoring them. It’s worth stating here that a method that is too big potentially violates the single responsibility principle, as it must be doing too much. It needs breaking up.

Using NDepend to enforce future change rules

A stand out feature of NDepend that I haven’t seen anywhere else before is its ability to execute rules against changes. That is, you can tell NDepend to look at two different versions of the same dll, and check that the changes made meet a set of rules. This can be handy in situations where you cannot realistically go back and fix all of the previous warnings and critical violations. Whilst you can’t expect your team to fix the old code smells, you can at least expect them to be putting good clean code into the application from now onwards.

Again, NDepend comes with some really good sensible rules for this kind of situation:

[Screenshot: the code quality regression rules]

 

Conclusion

Whilst there are plenty of tools out there to help you write clean, maintainable, non smelly code, NDepend strikes me as a very powerful and highly customisable tool. It also stands out as it can be run within Visual Studio itself or as a standalone executable. This opens up the potential for it to be run on a build server as part of a build process. I certainly have not done NDepend justice in this post as there is heaps more to it, so I would recommend downloading it and running it against that huge scary project that you hate making changes to.

 

Entity Framework, Technical

Running SQL commands with EF Code First

Before ORMs we used to write SQL code.

Yes – real, “bare metal” SQL. We used it for our CRUD operations, and to perform other larger data manipulation tasks. The database server should be the quickest way to find, remove and join data – provided you know what you are doing.

Then we started using ORMs and stopped writing SQL. The advantages of this were reduced development time, fewer developers needing a good knowledge of SQL programming, and no more lengthy, repetitive SQL statements (anyone who has worked on or built a data warehouse will fully agree).

But with this, we sacrificed control over what SQL was run against our database server, leaving it to the ORM to decide what to run.

Looking specifically at Entity Framework code first, let’s take a look at how you can run into problems with a delete.

So here’s the scenario. I have a task that pulls in data from an external source every hour and needs to be “mirrored” into a table in my application’s database.  Let’s call the table BatchImportData.

As I do not own the external data and have absolutely no control over it, I need to do the following to mirror it into my application’s database:

  • Delete all of the data in the BatchImportData table
  • Grab the data from the external resource
  • Insert all of the grabbed data into BatchImportData

Using EF code first, I would normally expect to delete all records from the BatchImportData table with the following code:

foreach (var batchImportDataItem in Db.BatchImportData.ToList())
{
    // ToList() materialises the records first, so we aren't removing
    // items from the set whilst still enumerating it
    Db.BatchImportData.Remove(batchImportDataItem);
}
Db.SaveChanges(); // the deletes are only sent to the database here

This will work, but it will be slow to execute. At the very least, EF will run a delete statement for every single record that exists in BatchImportData.

If we were writing bare metal SQL, we would write either a single delete statement, or a single truncate statement:

DELETE FROM BatchImportData

--OR

TRUNCATE TABLE BatchImportData

We can still do this through EF Code First simply by opening up our DbContext a bit more. Currently, our DbContext will look something like this:

public class DbContext : System.Data.Entity.DbContext, IDbContext
{
    public IDbSet<BatchImportData> BatchImportData { get; set; }
}

Let’s add a public method in our DbContext that exposes System.Data.Entity.DbContext.Database.ExecuteSqlCommand:

public class DbContext : System.Data.Entity.DbContext, IDbContext
{
    public int ExecuteSqlCommand(string sql)
    {
        return base.Database.ExecuteSqlCommand(sql);
    }

    public IDbSet<BatchImportData> BatchImportData { get; set; }
}

This method will take in a SQL statement and will run it against the database.

You can then call the new ExecuteSqlCommand method that you have just added:

   Db.ExecuteSqlCommand("TRUNCATE TABLE BatchImportData");

We now have a much quicker way of removing all records from a table.

Use with caution!

Do not use this if you are going to build up a SQL statement based on user input. You will make yourself susceptible to an injection attack.
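If you do need a value in the command, note that the underlying ExecuteSqlCommand also accepts parameters, so a safer wrapper is possible (a sketch – the {0} placeholder is bound as a real database parameter, not concatenated into the string):

// An overload that forwards parameters to EF, which binds them
// as DbParameters rather than splicing them into the SQL string
public int ExecuteSqlCommand(string sql, params object[] parameters)
{
    return base.Database.ExecuteSqlCommand(sql, parameters);
}

// Usage - SourceId here is a hypothetical column:
// Db.ExecuteSqlCommand("DELETE FROM BatchImportData WHERE SourceId = {0}", sourceId);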

This SQL command is merely a string – it is not strongly typed. If we rename our BatchImportData entity and forget to update this SQL command to reflect this change, we will experience a runtime error.

This opens you up to some potential serious data loss mistakes. The classic being a missing where clause.