That CSS / JavaScript library isn’t as big as you think

I take some pride in going against the current tide of thought in most things. This is especially true in a trend-driven industry like software development. We suffer from hype cycles in a big way.

Currently, the web development community seems to be hyper-conscious about the potential overuse of JavaScript and CSS libraries, fearing that this overuse is bloating the web. I naturally agree with the core sentiment – having a page that requires several different JavaScript libraries makes me wince a little, and other things (which clients will be using this web app, etc.) would need to be considered before making a decision about those libraries.

However, lots of developers in the community are taking this kilobyte-conscious attitude and using it to put other devs off using popular and well-established libraries.

jQuery

A few months ago it was jQuery, when someone developed a tool that attempted to evaluate whether your use of jQuery was necessary – youmightnotneedjquery.com. Unfortunately, this view caught on, and presumably a few poor souls got sucked in and attempted to reinvent a well-proven and well-tested wheel, only to hit the quirky cross-browser potholes that jQuery normally lets you ignore.

Right now, it’s Twitter Bootstrap that is getting the attention of the devs who claim to be kilobyte-conscious.

Here’s the problem: the size argument doesn’t stand up for jQuery, or for Twitter Bootstrap. Nearly all of the arguments you will see complaining about the size of these two libraries are bogus.

gzip and essential web optimisation

One of the best trends ever to sweep the world of web development was the front-end optimisation push a few years ago, driven largely by Steve Souders’ fantastic book, High Performance Websites.

Did we forget what we learnt in that book? Remember gzip? The bogus size arguments always look at the size of the raw minified files. If you are sending these files over the wire uncompressed in a production environment, you’re doing it wrong. All production web servers should be sending as much content as possible to the client gzipped. This will significantly reduce the amount of data that needs to go between the server and the client (or “over the wire”):

 

jquery-gzip

 

The above screengrab is from the network tab in Chrome developer tools, when accessing jQuery 2.1.1. The darker number in the “Size” column is what actually came down over the wire – 34KB of gzipped data. The grey number (82.3KB) is the size of the uncompressed file. Gzipping has saved the server and the client nearly 50 kilobytes in data transfer.
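
Enabling gzip is typically a one-off server configuration change rather than an ongoing cost. If you happen to be hosting on IIS, for example, static and dynamic compression can be switched on in web.config with something like the following (a minimal sketch – your host may already have this enabled, and dynamic compression may need the relevant module installed):

<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>

Other web servers have equivalent one-line settings.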

If, for whatever reason, you can’t use gzipping in your production environment, then use a CDN. This will make your site even quicker, as visitors will likely have their cache already primed, saving you even more of those precious kilobytes. And it will be gzipped too.

Twitter Bootstrap

What’s worse is when the size argument is incorrectly used to pit one library against another. I’ve even seen blog posts blindly claiming that Web Developers should use Foundation over Twitter Bootstrap, because “Foundation is more lightweight”. In fact, Foundation’s core CSS file is 120KB gzipped, whilst Twitter Bootstrap’s is 98KB.

Given that it is so easy to debunk any size arguments, I actually think that our old friend, the Hype Cycle, is playing a part. Could it be that the popularity of jQuery and Twitter Bootstrap has made them uncool? I think it’s very possible.

“I don’t use Bootstrap. I use Foundation”.

250px-FuddOnTap

Well, in Shelbyville they drink Fudd.

Straplines

My point here is that we seem to have forgotten the point of these libraries – to help us reduce our biggest cost: development.

Just take the jQuery strapline into consideration:

“Write less, do more”.

And one of the Twitter Bootstrap straplines:

“Bootstrap makes front-end web development faster and easier. It’s made for folks of all skill levels, devices of all shapes, and projects of all sizes”.

Everyone is right to consider the impact of library sizes, but please take a measured approach in a world where bandwidth is increasing for most clients and development costs remain high. Real-world projects (which are my bread and butter) have deadlines and budgets.

As Hanselman said – “The Internet is not a closed box. Look inside.” Look inside and make your own decision; don’t just follow the sentiment.

In defence of the RDBMS: Development is not slow. You just need better tooling.

I pride myself on being a real world software developer.

Anything that I publish on this blog is the result of some real-world coding. I don’t just play around with small, unrealistic demoware applications. I write real-world applications that get used by lots of users.

With experience comes wisdom, and in software part of that wisdom is knowing when to employ a certain technology. There is almost never a right or wrong answer on what technologies you should be using on a project, as there are plenty of factors to consider. They usually include:

  • Is technology x the right tool for this job?
  • Do we have the knowledge to work with and become productive in technology x within the scope of the budget?
  • How mature is technology x?

The above questions will even have varying importance depending on where you are currently working. If you are working at an agency, where budgets must be stuck to for the company to make a profit, and long-term support must be considered, then the last two points above carry greater weight.

If you are working in-house for a company that has its own product, then the last two points become less important.

Just about the only wrong reason would be something like:

  • Is technology x the latest and greatest thing, getting plenty of buzz on podcasts, blogs and twitter?

This doesn’t mean you should be using it on all projects right now. It just means that technology x is something you ought to look into and assess for yourself (I’ll let you guess where it might be on the technology hype cycle). But is this project the right place to make that assessment? Perhaps, but it will certainly increase your project’s risk if you decide that it is.

One of the technologies we are hearing more and more about is NoSQL databases. I won’t go into detail here, but you should be able to get a good background from this Wikipedia article.

Whilst I have no issue with NoSQL databases, I do take issue with one of the arguments against RDBMSs – that development against them is slow. I have now seen several blog posts arguing that developing with a NoSQL database makes your development faster, which implies that developing against a traditional RDBMS is slow. This isn’t true. You just need to improve your tooling.

Here’s a snippet from the MongoDB “NoSQL Databases Explained” page:

NoSQL databases are built to allow the insertion of data without a predefined schema. That makes it easy to make significant application changes in real-time, without worrying about service interruptions – which means development is faster, code integration is more reliable, and less database administrator time is needed.

Well, I don’t know about you, but I haven’t had to call upon the services of a DBA for a long time – and nearly all of my applications are backed by a SQL Server database. Snippets like the above miss out a key thing: the tooling.

Looking specifically at .NET development, the tooling for database development has advanced massively in the last six years or so, largely thanks to the plethora of ORMs that are available. We no longer need to spend time writing stored procedures and data access code. We no longer need to manually put together SQL scripts to handle schema changes (migrations). Those are huge real-world development time savers – and they come at a much smaller cost, because at their core is a well-understood technology (unless your development team is packed with NoSQL experts).

Let’s look specifically at real-world use. Most new applications that I create are built using Entity Framework code first. This gives me object mapping and schema migration with no difficulty.
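
As a rough illustration (the class names here are made up), a code first model and its context need little more than the following. EF derives the schema from the model, and enabling migrations then layers schema change handling on top:

using System.Data.Entity;

// A plain entity - no SQL, no stored procedures, no mapping files.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// The context exposes the entity as a set; EF works out the table from the model.
public class ApplicationDbContext : DbContext
{
    public IDbSet<Customer> Customers { get; set; }
}

Add a property to Customer, scaffold a migration, and that schema change is handled for you.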

It also gives me total control over the migration of my application’s database:

  1. I can ask Entity Framework to generate a script of the migrations that need to be run on the database. This can then be run manually against the database, and the application will even warn you at runtime if the schema is missing a migration
  2. I can have my deployment process migrate the database. The Entity Framework team bundle a console app that can be packaged up and called from another process – migrate.exe
  3. I can have my application migrate itself. That’s right – Entity Framework even allows me to run migrations programmatically. It’s not exactly hard either.

My point is this: whilst migrating a schema may not be something you need to do in a NoSQL database (although you would still need to handle those changes further up your application’s stack), making schema changes to an RDBMS just isn’t as costly or painful as it is being made out to be.

Think again before you dismiss good ol’ RDBMS development as slow – because it hasn’t been slow for me for a long time.

Using a Nexus 4 in x64 Windows Land

I’ve had a horrible morning.

I’ve been dealing with shitty, unpolished crappy software all damned morning.

All because people that are too intelligent aren’t stepping back from what they are doing and running a “real world” acceptance test.

I’m now the owner of a Nexus 4. The phone is blazingly fast, has fantastic battery life and is largely free of bloat. Today I decided to try to put a few MP3s on my phone, so I connected it to my desktop running Windows 7 x64.

Problem 1 – No drivers

I can accept that every now and then a device will not interface with my desktop PC. Usually the device vendor has tested this out and provided a CD or a link to download the drivers. Unfortunately, between LG and Google, nobody bothered to test this out. Some intense googling should send you to the Google USB driver download page. Unfortunately, these are x86 drivers only.

Yeah, you heard me: x86 only. Let’s just have a think about that. Three years ago, in 2010, Microsoft announced that 50% of all Windows 7 installs were 64-bit. Which way will that number have gone in the three years since? Yet Google only bundles 32-bit versions of its USB drivers.

Luckily for all of us 64-bit users out there, someone else (not Google or any of the huge corporations behind this phone) has compiled the drivers for 64-bit Windows. You can download them here.

Problem 2 – Device not showing in My Computer

It took me a while to get to the bottom of this, but it is because of the protocol that the phone has been set up to use – Media Transfer Protocol (MTP).

In order to use this on Windows, you need to have Windows Media Player installed. You might laugh and think that everyone has Media Player installed, but actually many European Windows installs don’t.

To download Windows Media Player you need to go to this MS download page in x86 Internet Explorer. I’m not joking. It must be x86 Internet Explorer so that Microsoft can verify that your version of Windows is genuine before letting you have the download. Going to the page in anything but x86 Internet Explorer will give you a whole host of pain when trying to validate your install.

Edit:

Incredibly, you may still get issues connecting the device to your computer after all of the above. If so, you will need to force Windows to use Microsoft’s generic driver, and not Google’s driver.

To do so:

  1. Locate the device in Device Manager. Right click and select “Update driver”.
  2. Click “Browse my computer for driver software”
  3. Click “Let me pick from a list of device drivers on my computer”
  4. In the dialogue that shows up, select “MTP USB Device”
  5. This will install the generic MTP driver that will let the device be used in Windows for file transfers etc. It’s worth noting that with this driver, the phone will not be visible over adb.

Conclusion

Whilst it’s great that this phone is vanilla Android and is largely free of bloat, I couldn’t gift the phone to anyone non-technical. To expect a normal, everyday user to go through any of the above is utterly ridiculous.

Using NDepend to clean up code and remove smells

At some point in your development career, you will have had an existing project dumped on you that you have problems understanding and navigating. Those difficulties can be the result of undocumented domain knowledge, but can also be down to code smells – and the code smells will make the domain even harder to understand. This, I’m sure, will have been experienced by nearly every developer.

When this happens, the project that you are working on contains a large amount of technical debt. Every new developer on the project loses time trying to navigate their way around the confusing and smelly code. The project becomes infamous within your team and nobody wants to work on it. The code becomes unloved with no real owner. You need to repay some of the technical debt.

In the agile world, we should be refactoring and reviewing code bravely and regularly to improve its quality and to reduce the number of code smells. This, however, can be difficult for a number of reasons:

  • Confidence – “can I really change this code without breaking xy and z section of this application?”
  • Reasoning – “Is renaming this method from “xyz” to “Xyz” really the correct thing to do?”

It’s safe to assume that nobody is going to be completely sure about the above two questions in all circumstances. This is why it’s becoming more and more common to use a code analysis tool to help you find potential code smells and advise you on how to fix them. A code analysis tool can be the advisor that tells you what you can do to reduce your technical debt. It can also stop you from racking up technical debt in the first place.

In this post, I’ll be exploring NDepend, a powerful static code analysis tool that can highlight smells in your code and give you some good metrics about it. I’ll be running it against the latest version of NerdDinner, which can be downloaded from here.

You can run through this walk-through with NDepend as you read this post. You can find installation instructions here.

NDepend examines compiled code to help you find any potential issues in the generated IL code – so if you are running NDepend outside of Visual Studio, make sure you build your project first.

NDepend 101 – Red / Yellow / Green code

So let’s start with the basics. One of the coolest things about NDepend is the metrics that you can get at so quickly, without really doing much. After downloading and installing NDepend, you will see a new icon in your Visual Studio notification area indicating that NDepend is installed:

ndependnew

Now, we can fire up NDepend simply by clicking on this new icon and selecting the assemblies that we want to analyse:

ndependattach

We want to see some results straight away, so let’s check “Run Analysis Now!”. Go ahead and click OK when you are ready. This will generate an HTML page with detailed results of the code analysis. The first time you run NDepend, you will be presented with a dialogue advising you to view the NDepend Interactive UI Graph. We’ll get to that in a moment – but first let’s just see what NDepend’s default rules thought of NerdDinner:

ndependwarning

Yellow! This means that NerdDinner has actually done OK – we have some warnings but no critical code rule violations. If there were some serious code issues, this icon would turn red. These rules can be customised and new rules can be added, but we’ll cover that later. So we now have a nice, quick-to-view metric about the current state of our code.

This is a really basic measure, but it tells us whether our code, in its current state, passes or fails analysis by NDepend. You may be questioning the usefulness of this, but if your team knows that their code must pass NDepend analysis, a little red / yellow / green icon becomes a useful and quick-to-read signal: are my changes good or bad?

Dependency Graph

The dependency graph allows you to see visually which libraries rely on each other. This is useful if you want to know what will be affected if you change a library or swap it out for something else (you should be programming against interfaces anyway!):

ndependgraph1

By default, the graph also shows you which library has the most lines of code: the bigger the box, the more lines of code. The sizing can also be changed to represent other aspects of a library, such as the number of intermediate language (IL) instructions. This lets you easily visualise information about your codebase.

Queries and Rules

Out of the box, NDepend will check all of your code against a set of pre-defined rules which can be customised. Violations of these rules can be treated as a warning, or as a critical error.

So NerdDinner has thrown up a few warnings from NDepend. Let’s have a look at what these potential code smells are, and see how they can be actioned:

ndepend-queries-and-rules

 

So, within our Code Quality rule group, NerdDinner has thrown up warnings against three rules. NDepend’s rules are defined using a LINQ query to analyse your code. Let’s take a look at the query that finds methods that are too big:

ndepend-methods-too-big

 

It’s quite self-explanatory, and we can easily alter this LINQ query if we want to change our rules – e.g. alter the number of lines of code needed to trigger the warning. Looking at the results from the query:

ndepend-warning-results

 

NDepend has directed us to two methods that violate this rule. We can jump to them by double-clicking, so that we can start refactoring them. It’s worth stating here that a method that is too big potentially violates the single responsibility principle, as it must be doing too much. It needs breaking up.
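
For reference, these rules are written in CQLinq, NDepend’s LINQ-based code query language, so they read like ordinary C# LINQ. The “methods too big” rule looks roughly like this (a sketch – the exact threshold and selected fields vary between NDepend versions):

// <Name>Methods too big</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
// report the offending methods along with their size
select new { m, m.NbLinesOfCode }

Changing the rule is just a matter of editing the query – for example, raising or lowering the line count in the where clause.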

Using NDepend to enforce future change rules

A stand-out feature of NDepend that I haven’t seen anywhere else is its ability to execute rules against changes; i.e. you can tell NDepend to look at two different versions of the same dll, and check that only the changes made meet a set of rules. This is handy in situations where you cannot realistically go back and fix all of the previous warnings and critical violations. Whilst you can’t expect your team to fix the old code smells, you can at least expect them to put good, clean code into the application from now on.

Again, NDepend comes with some really good sensible rules for this kind of situation:

ndepend-code-quality-regression

 

Conclusion

Whilst there are plenty of tools out there to help you write clean, maintainable, non-smelly code, NDepend strikes me as a very powerful and highly customisable one. It also stands out because it can be run within Visual Studio itself or as a standalone executable, which opens up the potential for it to be run on a build server as part of a build process. I certainly haven’t done NDepend justice in this post as there is heaps more to it, so I would recommend downloading it and running it against that huge, scary project that you hate making changes to.

 

Running SQL commands with EF Code First

Before ORMs we used to write SQL code.

Yes – real, “bare metal” SQL. We used it for our CRUD operations, and to perform other larger data manipulation tasks. The database server should be the quickest way to find, remove and join data – provided you know what you are doing.

Then we started using ORMs and stopped writing SQL. The advantages: reduced development time, fewer developers needing a good knowledge of SQL programming, and no more lengthy, repetitive SQL statements (anyone who has worked on or built a data warehouse will fully agree).

But with this, we sacrificed control over what SQL was run against our database server, leaving it to the ORM to decide what to run.

Looking specifically at Entity Framework code first, let’s take a look at how you can run into problems with a delete.

So here’s the scenario. I have a task that pulls in data from an external source every hour; that data needs to be “mirrored” into a table in my application’s database. Let’s call the table BatchImportData.

As I do not own the external data and have absolutely no control over it, I need to do the following to accomplish the task:

  • Delete all of the data in the BatchImportData table
  • Grab the data from the external resource
  • Insert all of the grabbed data into BatchImportData

Using EF code first, I would normally expect to delete all records from the BatchImportData table with the following code:

foreach (var batchImportDataItem in Db.BatchImportData.ToList())
{
    Db.BatchImportData.Remove(batchImportDataItem);
}

Db.SaveChanges();

This will work, but it will be slow to execute: EF will run a delete statement for every single record that exists in BatchImportData.

If we were writing bare metal SQL, we would write either a single delete statement, or a single truncate statement:

DELETE FROM BatchImportData

--OR

TRUNCATE TABLE BatchImportData

We can still do this through EF Code First simply by opening up our DbContext a bit more. Currently, our DbContext will look something like this:

public class DbContext : System.Data.Entity.DbContext, IDbContext
{
    public IDbSet<BatchImportData> BatchImportData { get; set; }
}

Let’s add a public method in our DbContext that exposes System.Data.Entity.DbContext.Database.ExecuteSqlCommand:

public class DbContext : System.Data.Entity.DbContext, IDbContext
{
    public int ExecuteSqlCommand(string sql)
    {
        return base.Database.ExecuteSqlCommand(sql);
    }

    public IDbSet<BatchImportData> BatchImportData { get; set; }
}

This method will take in a SQL statement and will run it against the database.

You can then call the new ExecuteSqlCommand method that you have just added:

Db.ExecuteSqlCommand("TRUNCATE TABLE BatchImportData");

We now have a much quicker way of removing all records from a table.

Use with caution!

Do not use this if you are going to build up a SQL statement based on user input. You will make yourself susceptible to an injection attack.
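
If you do need to feed values into a statement, pass them as SQL parameters rather than concatenating strings. EF’s underlying ExecuteSqlCommand accepts parameters, so our wrapper could pass them through. A sketch – the params overload, the Source column and the sourceName variable are made up for illustration:

// Hypothetical overload on the DbContext wrapper shown earlier.
public int ExecuteSqlCommand(string sql, params object[] parameters)
{
    return base.Database.ExecuteSqlCommand(sql, parameters);
}

// Usage - the value travels as a SQL parameter, never as part of the SQL string:
Db.ExecuteSqlCommand(
    "DELETE FROM BatchImportData WHERE Source = @source",
    new System.Data.SqlClient.SqlParameter("@source", sourceName));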

This SQL command is merely a string – it is not strongly typed. If we rename our BatchImportData entity and forget to update this SQL command to reflect the change, we will get a runtime error.

It also opens you up to some potentially serious data loss mistakes – the classic being a missing WHERE clause.

Redirecting legacy pages in asp.net

Picture this situation.

An old (legacy) application has landed on your project pile. It is largely built in PHP, and you intend to rewrite it in ASP.NET MVC.

You will therefore need to somehow inform any parties that may be trying to access the old URLs ending in .php that the resources they are looking for have moved permanently. You may also wish to do this for SEO reasons.

This is something that cannot be achieved easily through routing; by default, IIS will not pass requests for resources ending in .php to your application, so your routing will never even see those requests.

The nicest solution I found to this issue is to set up a list of redirects within the system.webServer section of your web.config file. The following listing will send an HTTP response status of 301 (Moved Permanently) for any requests for index.php and prices.php:

<system.webServer>
    <httpRedirect enabled="true" httpResponseStatus="Permanent" exactDestination="true">
        <add wildcard="/prices.php" destination="/prices"/>
        <add wildcard="/index.php" destination="/"/>
    </httpRedirect>
</system.webServer>

Index.php will now redirect to /, and prices.php will now redirect to /prices.

This code is currently running in the wild on Azure.

If you are unsure if you need a Permanent redirect or not, have a read of this article from Google.

Creating a composite primary key in Entity Framework 4.1

There are two main ways of achieving this. Let’s look at an example entity – Brochure: { ProductId, Year, Month, ProductName }. We want the following properties to make up the primary key:

  • ProductId
  • Year
  • Month

Method 1 – Data annotations

In your entity class, simply decorate the properties that you want to make up your key with the [Key] attribute, along with a column order:

public class Brochure
{
    [Key, Column(Order = 0)]
    public int ProductId { get; set; }

    [Key, Column(Order = 1)]
    public int Year { get; set; }

    [Key, Column(Order = 2)]
    public int Month { get; set; }

    [Required]
    public string ProductName { get; set; }
}

Method 2 – DbMigration class

NOTE: You shouldn’t need to use this method if you are using full Entity Framework code first. However, some projects only use Entity Framework to handle migrations, so this might be of use to you:


public partial class BrochureTable : DbMigration
{
    public override void Up()
    {
        CreateTable("Brochures", c => new
        {
            ProductId = c.Int(nullable: false),
            Year = c.Int(nullable: false),
            Month = c.Int(nullable: false),
            ProductName = c.String(maxLength: 60)
        })
        .PrimaryKey(bu => new { bu.ProductId, bu.Year, bu.Month });
    }
}

Enjoy!

Why I won’t be installing another custom rom on my Android phone

A while back I rooted my Samsung Galaxy S2. I decided to root the phone and swap the rom out after getting frustrated waiting for Android updates to first be reworked by Samsung, and then bastardised by T-Mobile. This would usually involve a varying range of irritations, from whacking on applications that I would never use and couldn’t uninstall, to renaming the stock browser to “web n walk”.

Aside from the application bloat, the actual sweet goodness at the core (the latest release of the Android OS) was still something that I looked forward to. However, it takes time for Samsung and your mobile operator to get their changes in, and this can often be months. For example, Ice Cream Sandwich was officially released on the 19th of October 2011. It eventually landed on T-Mobile UK branded Samsung Galaxy S2 handsets in June the following year – a painful eight-month wait.

So, after careful consideration and lots of research, I rooted my phone and installed a custom rom using some of the awesome information available over at galaxys2root.com.

The rooting itself was quite straightforward. I then selected a custom rom that I had heard several rave reviews about – Resurrection Remix.

After installing it and playing with it for a while, I was very impressed. But as the weeks went on I started to notice a few bugs of varying degrees of irritation. Some were a little annoying; others were rage-inducing. I was also getting worse battery life, and plenty of apps just didn’t work.

But I still stuck with it. Why? It was so much faster than the official rom.

After a few months, I decided I’d better check for a newer version of Resurrection Remix in the hope that it would fix some of the issues I was experiencing.

There was one. I considered it, did my research, and found that most users were satisfied with it. (A Google search for “resurrection remix [version number] issues” was my research.)

After installation and a few weeks of use, I was much happier. I was getting much better battery life, and many of the apps that had not quite functioned properly before were now working as expected.

I’m currently still on Resurrection Remix version 2.7. Some parts of the rom are great, but others are not – there are a few annoying glitches. One of the worst is that I cannot update the Gmail app – frustratingly, even though the rom’s community is one of the biggest out there.

If you are thinking about running a custom rom on your Android phone, consider the following: I personally would no longer recommend it. Sure, it’s fun playing around with your phone and seeing how it runs under a highly customisable, non-stock rom, but if you use your phone heavily and rely on it to work, I just don’t believe it’s worth the risk of running into a frustrating glitch.

I think that from now on I’ll be sticking with stock roms, but will be rooting so that I can use apps like Titanium Backup and uninstall unwanted bloatware apps.

Running Entity Framework code first migrations programmatically

Entity Framework code first migrations can easily be run programmatically using the DbMigrator class. You can update to a specific migration, or just update to the latest one.

To roll back all migrations (this calls the “Down” method on each migration):

var configuration = new Configuration();
var migrator = new DbMigrator(configuration);
//Rollback
migrator.Update("0");

To roll back or update to a specific migration:

var configuration = new Configuration();
var migrator = new DbMigrator(configuration);
//Update / rollback to "MigrationName"
migrator.Update("MigrationName");

To update to the latest migration:

var configuration = new Configuration();
var migrator = new DbMigrator(configuration);

//Update database to latest migration
migrator.Update();
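
As an alternative to calling DbMigrator yourself, EF also ships a database initializer that migrates the database to the latest version the first time the context is used. A minimal sketch – ApplicationDbContext and Configuration here stand in for your own context and migrations configuration classes:

using System.Data.Entity;

// Run once at application start-up, e.g. in Application_Start.
// ApplicationDbContext and Configuration are placeholders for your own
// context and migrations configuration classes.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<ApplicationDbContext, Configuration>());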

Making Microsoft Security Essentials behave like antivirus software

I have been running Microsoft’s free antivirus software, Security Essentials, on all of my home machines since it was first released.

On three separate occasions, I’ve discovered Trojans running on machines that are supposed to be protected by this antivirus software.

This freaked me out the first time, concerned me the second time, and made me rage-quit Security Essentials the third time. I’m now running Bitdefender at home.

I looked into these problems, as I could not be the only one facing them. I was right – I found several forum posts from people complaining of the same issues.

Upon looking into the issue further, I discovered that there isn’t actually anything wrong with Security Essentials’ detection. In fact, it ranks quite nicely amongst the alternative free antivirus solutions (Avast, AVG, Avira).

The problem appears to be its default settings. By default, Security Essentials is set up to run a scan at 2 am on Sunday, and will only look for an update to its virus definitions just before that scan runs. If, like most home users’, your PC is off at 2 am on a Sunday, these two critical actions will never happen. No update. No scan. This leaves your PC with very little protection.

If you’re going to use Security Essentials, you need to tweak the settings to make it more protective of your PC. Below are my recommended settings. Fire up Security Essentials and navigate to “Settings”.

1. Scheduled Scan

I’d recommend having a daily “Quick scan” at a time when you know your PC will be on. If you’re worried about the slowdown, simply limit the scan’s CPU usage. And remember, the slowdown and downtime you get as the result of a virus will be a lot worse than any slowdown you might get as a side effect of an antivirus scan:


SecurityEssentialsSettingsScheduledScan

2. Default Actions

If my antivirus thinks it has found a threat with a severe or high alert level, I want it removed:

SecurityEssentialsSettingsDefaultActions

3. Real-time protection

This should be on. If it isn’t, turn it on.

4. Excluded files and locations

Be sensible here. Add any directories and folders that you work in regularly and that are unlikely to get infected. For example, as a developer, I know that my source code is unlikely to be affected by a virus. As I will be writing changes to these files regularly, I also do not want any slowdown as a side effect of the antivirus scanning every edit:

SecurityEssentialsSettingsExcludedLocations

5. Excluded file types

Again, you want to be sensible here and ideally exclude as few file types as possible. The default exclusions of .ini and .log files should be sufficient.

6. Excluded processes

If you use any heavy applications for work, it is worth adding them to this list. As a developer, I tend to spend a lot of time in Visual Studio. I know this process is a safe one, as I installed it myself and it came from a vendor that I trust:

SecurityEssentialsSettingsExcludedProcesses

7. Advanced

The only change I’d suggest here is setting Security Essentials to scan your removable drives:

SecurityEssentialsSettingsAdvanced

Security Essentials should now be of greater protective value to you. If you don’t think this will protect you enough, consider purchasing a paid antivirus solution.