Before containerization really took hold, .net and node.js apps would generally run on bare metal, on servers that were self-administered. In the .net world, we would "publish" our applications in release mode (basically a distribution build) and ship the compiled code onto our servers, where we'd generally let IIS do the job of running our application. Fatal crashes were less of a concern in this world because of type safety and the application pool management of IIS.
In the node.js world, fatal crashes were more of a concern. Without type safety, runtime crashes are more of a risk. Out of the box, if a node.js application crashes, it will exit and will not serve any further requests.
This is where process monitors came in, performing a vital job for node.js apps running in production. A process monitor is an application that wraps your application and attempts to keep it healthy. Some process monitors also include logging and some metrics.
There are two particularly popular node.js process monitors. pm2 is the more feature rich of the two, which we'll come to later, and is more or less the de facto standard process monitor in node.
At their core, both will attempt to restart your node.js app if it crashes, so that it continues to serve requests. Both can also be configured to watch for file system changes, so that the application is restarted when there is a source code change.
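To illustrate, here's a minimal sketch of a pm2 ecosystem file covering those two behaviours - the app name and script path are placeholders, not taken from any real project:

// ecosystem.config.js - a minimal sketch; 'my-app' and './server.js' are placeholders
module.exports = {
  apps: [{
    name: 'my-app',
    script: './server.js',
    autorestart: true, // restart the process if it exits unexpectedly
    watch: true        // restart when files on disk change
  }]
};

You'd then start the app with pm2 start ecosystem.config.js.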
Process monitoring in .net seemed to become more of a concern when containerization took off, which coincided with .net core becoming available and opening the door to cross platform .net development.
Out in the wild, I remember seeing several containerized .net core apps that were deployed into linux containers.
As a result of this, they were not built containing IIS, and instead internally ran .net's lightweight Kestrel webserver, with the intention being that a more hardened webserver would sit in front of it (e.g. IIS or NGINX).
These containers typically ran supervisord to keep the .net core application alive. It's worth mentioning that the .net team's tutorials seem to lean towards using systemd instead.
It's possible to deploy a containerised app and keep your keep-alive and monitoring concerns entirely outside of the container.
Much of this relies on the docker --restart flag. For example:
docker run --restart=always -p 8081:8080 -p 81:80 -d --name myapp myapp
With the restart flag set to always, docker will continually restart the container if it exits - so you just need to make sure you have bound the ENTRYPOINT or CMD command directly to the node.js or .net process.
node.js:
CMD npm start -- --no-daemon --watch
.net:
ENTRYPOINT ["dotnet", "DotNet.Docker.dll"]
It's worth mentioning here that pm2 doesn't just keep a node.js process alive; it also deals with the clustering of node.js applications, which is absolutely essential in production when running on a multi-core environment.
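For context, clustering with pm2 is just configuration - a rough sketch of what that looks like, using the same kind of ecosystem file and the same placeholder names as above:

// The same ecosystem file sketch, with pm2's cluster mode enabled
module.exports = {
  apps: [{
    name: 'my-app',
    script: './server.js',
    exec_mode: 'cluster', // run multiple processes behind pm2's built-in load balancer
    instances: 0          // 0 (or 'max') means one instance per CPU core
  }]
};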
So, if you're letting go of pm2, you need to do your clustering elsewhere. This is where we start getting into "other" considerations - especially budget, anticipated load and skillset. Clustering in the containerised world generally means using Docker Swarm or Kubernetes to manage and monitor app instances. These are technologies that will do the job, but they require their own infrastructure (and the investment associated with that infrastructure) and skillset.
So generally, no, we do not need a process monitor in our containerized apps as long as we have set Docker up to restart our containers for us.
However, keeping one is not an anti-pattern. If you haven't got the luxury of the budget, skills and time to invest into clustering your containers, it won't do you any harm to cluster inside the container.
We had a nice problem - we were getting into the lofty heights of 75% coverage, with over 6,500 unit tests across a decent sized front-end codebase. It's worth noting that we're not hugely focussed on percentage coverage as a metric (instead we like to make sure business logic is well covered), however it's an easily digestible metric that people understand.
The problem we were having was that the console output for each unit test run was becoming unmanageable. Each test run was generating around 1.4 megabytes of text. This may not sound like a lot going by the filesize, but it's certainly a lot of text for a human to scroll through.
A good example of this level of output being a hindrance is getting a test failure early in the test run. At the end of the run, you'll get a message telling you that a test failed, somewhere:
Chrome Headless 122.0.6261.94: Executed 6609 of 6752 (skipped 142) 1 FAILED (38.304 secs / 34.228 secs)
TOTAL: 6609 SUCCESS 1 FAILED
Now in order to find that failed test, we'll need to do a lot of scrolling, or re-run the tests and this time pipe the output to a text file, so that we can easily search through the output for the failure. Aside from losing all of our helpful text colouring in the console, having to open the text file and search is friction we could do without.
All we really need to see in the test run output is what failed and perhaps a count of how many passed.
However, there were a few reasons the test run output was so big:
Each time there was a call to our mocked backend for an endpoint that wasn't fully mocked, we'd get a warning. This would happen even if there was no failure.
This includes information and warning messages. Again, neither are an indicator of anything fatal.
Chrome Headless 122.0.6261.94: Executed 0 of 6752 SUCCESS (0 secs / 0 secs)
Chrome Headless 122.0.6261.94: Executed 1 of 6752 SUCCESS (0 secs / 0.015 secs)
Chrome Headless 122.0.6261.94: Executed 2 of 6752 SUCCESS (0 secs / 0.02 secs)
Chrome Headless 122.0.6261.94: Executed 3 of 6752 SUCCESS (0 secs / 0.025 secs)
Chrome Headless 122.0.6261.94: Executed 4 of 6752 SUCCESS (0 secs / 0.029 secs)
The above was being logged for each and every test - again, information that is of little value to anyone who just wants to know whether the entire test suite passed or not.
Aside from inconveniencing developers, it was also bloating our build server's logs. We run this test suite for every branch, every time there is a push. That's a lot of unnecessary output to wade through when you need to find something.
The goals were simple: show what failed, give a count of what passed, and keep the overall output small. We tackled this by taking two different approaches.
Most unit test runners will allow you to configure this centrally. In the case of Karma, this is easily done by specifying the level of logs to actually print.
Instead of outputting the progress report in a verbose manner on the build server, we opted to run the "failed" reporter there. This only prints a "." for each passing test, whilst still printing the full stack trace when there is a test failure.
This drastically reduced the build log output and made it much easier to find the valuable information in the build log.
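For illustration, both of those changes live in karma.conf.js. A minimal sketch follows - the reporter shown is Karma's built-in minimal "dots" reporter; substitute whichever reporter your project actually uses:

// karma.conf.js - a sketch of where the log level and reporter are configured
module.exports = function (config) {
  config.set({
    logLevel: config.LOG_WARN, // hide info/debug output, keep warnings and errors
    reporters: ['dots']        // prints a "." per test instead of a full progress line
  });
};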
/
├── region
│   └── city
│       └── area
├── Help
└── Account page
Assume we have a <router-outlet> directive in the following places:
- / - Home - this is the "master" router-outlet
- region - the routes that sit below region (city and area) will be rendered here
How do we pass a value that is resolved in the area route back up to the home page?
In an ideal world we'd be able to remodel our entire html structure so that it roughly followed the routing and we would not need to pass data back up to the parent. This solution is for those situations where you just can't.
If you attempt to retrieve the value in the homepage, you'll hit problems near enough no matter what you do.
Subscribing to router events in the homepage will cause you pain, as they will be fired for every child route in a path when there is a transition. For example, if you land on home/region/city/area, the NavigationEnd event will be fired three times - once for region, once for city and once for area.
Whilst this will still let you access the route data for area when the NavigationEnd event is fired for it, the NavigationEnd for the area route will be fired first, followed in quick succession by the NavigationEnd events for the city and region routes.
So if you manage to get hold of the data from the area route, you need to be careful not to lose that data when the event fires for the parent routes.
This is where Angular services can help us. Angular services are singletons, and exist in memory for the duration of an Angular app being loaded and running.
Create a session service and register it with your app module:
ng generate service app/services/SessionService
And next, let's create a property in our new SessionService to hold a reference to our area:
export class SessionService {
public area: Area | undefined;
...
}
Now, we can update our area component to write its resolved data into the session service:
this.route.data.subscribe(data => {
this.sessionService.area = data['area'];
});
We then also need to clean up after ourselves, and remove the data from the session when we destroy the area component:
ngOnDestroy () : void {
this.sessionService.area = undefined;
}
Next, we can import the SessionService into the home component and start accessing the value in our html by using the protected modifier:
constructor (
protected session: SessionService
) { }
And now in our html for the home component:
<SomeChildComponent
  [area]="session.area">
</SomeChildComponent>
And we have now passed our area all the way back up to our home component and consumed it!
Wonderfully, this works because our session is an object - so by passing it around in this way, we are simply passing around a reference in memory to the same object.
I'll write a post about what is going wrong with Twitter at a later date, but in the meantime, let's have a look at the alternatives that are out there, and how they fare. In no particular order:
I downloaded Threads as soon as it became available in the UK and I feel like this will be the real challenger to Twitter. Even if its feature set is a little spotty, it feels like it has hit the critical mass of users across different communities for it to be the world's next microblogging platform.
Take for example the football community (football twitter). A football journalist known for breaking stories about important transfers, Fabrizio Romano, is on Threads. If he is active on Threads and dropping his transfer news on there, a good chunk of football twitter will follow him there.
This does not mean that there is no space for the other platforms to exist. Take for example TrustCafe. With its community-oriented feel - Branches dedicated to a particular topic, like photography - it could fill the space currently being given up by Reddit, who have forced 3rd party Reddit apps to shut down in favour of their own app.
My situation was that I had a new "shell" database ready to roll, with the schema and migrations already executed and in place as the result of an automated deployment to a new server.
I found the easiest way to get this working was to not use the -Fc flag (custom format).
The following command will dump your data and schema information into a .sql file:
pg_dump -U mydatabaseuser -c -f filename.sql mydatabasename
Running the command may take a while depending on the size of your database.
When you're ready to restore your backup filename.sql file (once you've moved it to wherever it needs to go, like a new server), simply run:
psql -d mydatabasename -U mydatabaseuser -f filename.sql
Note that we don't need to use pg_restore here since our file is plain sql.
I first came across this phrase when I worked at a small development house. Despite the company only being small (5 employees, all devs), they delivered and maintained a large number of line of business applications for some well known clients.
There were multiple reasons behind this success. One of the major reasons was the fact that all new projects had to be deployed into their production environments in an automated fashion before any real development work started.
This approach has a few advantages:
This includes setting up DNS records and build pipelines. It exposes any limitations in the deployment plan early on. It also forces you to resolve problems like production database migrations early on.
This is one of the cornerstones of agile development. Rather than waiting until we think we're approaching a finished state with the project before tackling the deployment pipeline, having it done first makes it the first class citizen that it needs to be.
More on this point later...
In this context, the skeleton is our barebones project with very little in it, and no real business logic. The idea of it "walking" refers to it being deployed to production in an automated fashion, and with a production config.
The world of web development suffers from an enormous amount of technology churn. It feels like every day there is a new framework, runtime or module bundler.
When we want to use a new technology, we need to prove that it can run in our production environment, and with a production config. "It works on my machine" isn't something we can tell our paying customers.
When we run web applications in production, we typically remove all debugging code, and we bundle and minify our code. We need to prove that our technology stack is capable of delivering a production ready site to our end client.
A few years back, I was working on a team that was looking into starting a new Angular project. At the time the Angular documentation favoured TypeScript, but stated that JavaScript could be used. As the team felt they were strong JavaScript developers and were relatively unfamiliar with TypeScript, their preference was to continue writing JavaScript.
As performance was a key metric for that project, one of the first things we did was scaffold the project, and run the distribution build just to test the bundle sizes, and to get a feel for how it ran in the browser. We did not want to spend months building out a project only to find that the in browser performance had got worse with a supposedly superior technology.
We were glad we did - we discovered that the empty JavaScript version of the application shipped 800Kb of JavaScript to the browser, whereas the TypeScript version shipped around 140Kb. The treeshaking in the JavaScript version simply did not work. It was a non starter, and we were very happy we found this out early on. We posted our findings on StackOverflow.
Get to production early, even if your app isn't ready. You'll catch critical issues early.
Use mature, well established technologies to avoid pouring time into fixing things that aren't related to what you are building. Assess technologies as objectively as you can based on your requirements. Avoid packages that evolve too frequently.
On my journeys as a developer, I've come across teams that have consistently failed to deliver sprint targets, and products that failed to deliver many things of use to end users. Getting to the end of a two week sprint and proudly telling the product owner that the sprint was a success because the development team re-wrote the database access layer of the application with zero impact on the end user, is not going to be well received.
This was often because of poor technical decisions that resulted in the development team's focus moving away from actually delivering features, to fighting with frameworks and technologies that either did not fit in with the application or were simply not mature enough to be used yet.
Developing for the web is a fast moving area of technology, and as a result, there are many competing technologies that essentially do the same thing. Take, for example, a server side web framework. There are lots of well established web frameworks out there, but for various reasons, we keep inventing new ones. This, in isolation, is not a bad thing - it gives us an opportunity to consider new ways of doing things.
However, it becomes a problem when development teams adopt immature technologies, or technologies that are not suited to what they are doing, and attempt to deliver something that has to run in the wild. Take for example a standard problem that us web developers have to deal with - Cross-site request forgery (CSRF). Any public facing website that takes user input must have measures in place to counteract the threat of CSRF. In established frameworks like Express, this is trivial, and can be implemented in about an hour. In an immature server side framework, you'd likely have to spend days putting together your own solution to guard against the threat of CSRF. These are days that could be spent working on actual product features.
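To illustrate the point rather than prescribe an exact setup, here is a minimal sketch of CSRF protection in an Express app using the cookie-parser and csurf middleware packages - the route and form are placeholders for the example:

// A minimal sketch of CSRF protection in Express using cookie-parser and csurf.
const express = require('express');
const cookieParser = require('cookie-parser');
const csrf = require('csurf');

const app = express();
app.use(cookieParser());
app.use(express.urlencoded({ extended: false })); // csurf reads the token from the request body
app.use(csrf({ cookie: true })); // rejects state-changing requests that lack a valid token

app.get('/form', (req, res) => {
  // expose the token so the form can post it back
  res.send(`<form method="POST" action="/form">
    <input type="hidden" name="_csrf" value="${req.csrfToken()}">
    <button>Submit</button>
  </form>`);
});

app.post('/form', (req, res) => res.send('Token was valid'));

app.listen(3000);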
In software development, we regularly hear about new technologies and associated success stories. A good example would be NoSQL databases. In the last decade or so, we've seen a shift away from traditional Relational Databases, and a pivot towards document databases, like MongoDB.
MongoDB has its detractors (here and here), as well as its evangelists. When picking a technology to use, we ought to be making some suitability assessments. For example, if we were considering using a NoSQL database like MongoDB, we should be asking ourselves the following:
If your answer is no to either of the first two, or yes to the last one, then you should consider a well established relational database like PostgreSQL, which will make it easier to query structured data and even supports unstructured JSON data.
Forcing structured relational data into a schemaless database will result in technical debt in the form of clumsy to write and slow to execute database queries.
If you are worried about tooling, don't be! There are plenty of open source packages out there that can help you with migrations.
See also: In defence of the RDBMS: Development is not slow. You just need better tooling
As I mentioned earlier, web development (especially anything happening on the front end), is a fast moving area. This can lead to problems when package maintainers re-write APIs, or deprecate parts of an API in favour of some new API calls. If you are consuming a package like this, you will get pain when you need to make an adjustment, as you'll probably need to re-write all of your config files.
Take for example migrating webpack from version 3 to 4:
Or migrating babel from version 6 to 7, which is so complex that it needs a nearly 2,500 word guide.
Even if you aren't upgrading the package, you can get pain when you need to make an adjustment. This is because any research that you now do will throw up additional noise - the way of doing whatever you're trying to do in the old version of the package, and the way of doing it in the new version of the package.
Any mistakes made when opting for proprietary cloud services can be costly from a billing point of view, can tie you into a platform, and can give your end users a bad experience. Looking specifically at databases, the cloud providers offer plenty of managed solutions. Some are slightly adjusted versions of standard database technologies (e.g. Azure Sql Database), and others are more heavily adapted versions of existing technologies (e.g. Amazon Redshift, which is not to be confused with Amazon RDS).
An example of a mistake here would be plugging Amazon Redshift directly into a web application. It's really not what the service is for - but you'd need to do your research to find this out.
Even if you get something working, it's better to realise your mistake early and to throw those mistakes away before they start hurting your users, and before you become too wedded to the underlying technology. The cost of correcting a poor technical decision will only get more expensive as time goes on, so the earlier the better.
A complaint I see often is that we as devs focus on our own development experiences instead of our users' experience. Here I'd say there needs to be a balance. The developers need to have a good, friction free experience when they are developing (any of the previously mentioned points can be sources of friction), so that they can deliver features. But the users also need to not be burdened with the mistakes of any poor technical decisions that have been made (such as using the wrong database technology, resulting in application slowness).
I would expect that a strong technical lead would be able to find the balance between developer happiness, and product owner and user happiness. A good technical lead will put a development team on a platform that keeps them productive, which in turn keeps the product owner and users happy as working, tangible features that they see will get delivered.
To summarise, a good technical lead will:
Nearly every weekend there is some sort of big discussion on Twitter, and the weekend just gone, it was about the term "Fullstack", with some claiming that the Fullstack developer does not exist.
The main complaint seems to be coming from a section of the front end community. Their main grievance around the term fullstack seems to stem from an impression (again, this is an opinion held by some people) that fullstack developers have not done a good job of developing for the browser. We often see sites that do not need to be single-page apps, developed as single-page apps. In fact, only yesterday, I came across a technical blog that was built this way. It was downloading a few hundred kilobytes of JavaScript just to deliver a trivial blog post to the client.
This is a case of the wrong tool being used for the job - a blog does not need to be a single-page application. A server side framework or a plain old html file with the content already present would have done a far better job. It would have loaded quicker and would have been a lot more accessible. But a few examples of this do not mean that the fullstack developer does not exist.
Most of the teams I've worked on have been made up of developers employed as fullstack developers. As with all teams, there is a range of competencies and skills across the team. For example, one of the developers might be more knowledgeable than the others about back end data access. Another may be more knowledgeable about CSS3.
The key in these teams has always been around collaboration and knowledge sharing. When we create a pull request, we anticipate feedback from our team mates. Given that they have a diverse range of competencies, we can expect that our team mates will suggest improvements in areas that we may not have considered. In doing so, they are spreading some of their knowledge.
This of course is not the only vector for knowledge sharing - pairing will also inevitably happen in a team of developers like this.
The opposite to the fullstack developer is a chain of siloed developers. In place of one developer, you might have:
So now, when we need a single feature developed, we need to get three developers aligned on what the requirements are, or we have a 4th lead developer there to break the work down and spell all of the changes out for the developers. The lead developer ironically needs some fullstack knowledge.
Having worked on teams where the developers were much more siloed, we often encountered integration problems at the boundaries. The front end developer may pick up their part of work only to find out that the API developer hasn't included a particular field that the front end developer was expecting. Rather than the front end developer quickly making a change to the API, the work now needs to go back up the chain, and the front end developer is now blocked.
There are plenty of competent fullstack developers out there that read documentation and are eager to deliver the best experience to their end users. Team collaboration, access to training, and a good spread of skills can help to achieve this. Strong technical leads can help to put this in place, and stop any skillset skew from degrading the in app experience.
Run your cron jobs inside their own container if you can afford the luxury of a dedicated container for cron jobs and your application structure allows it. If not, just run the cron job directly on the server hosting the container, or use a cloud service that lets you run functions on a schedule.
Most of the sites that I look after now run within a Docker container. The philosophy fits well - an application sitting neatly inside a dedicated container with all of its dependencies. It should also be free from outside interference.
The only downside to this approach is if you want to run more than one process at a time. Dockerfiles describe what a container needs, before ending with a CMD instruction. This instruction essentially tells Docker which process to hang onto and monitor.
So, if your Docker container runs a node application, your CMD instruction might look like this:
CMD npm start -- --no-daemon --watch
Where --no-daemon and --watch are being passed to the start script in the package.json file.
You've got some options:
You shouldn't really run two processes in one Docker container. This was the approach I wanted to take because my node application contained a few scripts that I wanted to run on a schedule. These scripts were kept in the main application because they shared the dependencies of the main application, such as my database access logic.
You can apparently get this working, but you run the risk of one of the processes falling into the background, and not being monitored by Docker - so you'll get a silent fail. I tried this after reading numerous stackexchange threads, like this one, and this one, and this one. I never got anything working using these approaches, and lost a few hours trying.
This seems like a bit of a luxury to me, but it does work, and is much easier to get working than the setup above. I say it's a luxury because it means you need to go through the setup and deployment of an additional container (and Docker images can get pretty large). In my case, it also meant that I needed to build another container with a copy of the whole app and its dependencies.
However, this will get complicated should you need to do any file system operations. I did, and having two copies of the application in two separate containers meant that my jobs container could not access the files in my app container. I had no interest in spending even more time attempting to get a shared volume to work across the two containers, so gave up.
You can view details on how to do this in this stackoverflow thread.
From any host machine that is running a Docker container, you can get terminal access into the running container. It's comparable to getting an ssh connection going to a remote server. For example, if my Docker container is called app_container_1:
docker exec app_container_1 /usr/local/bin/node /usr/src/app/jobs/my-script-to-run.js
Whilst this approach is much simpler, it only works if you've got access to the host machine. So if you're purely in the cloud, your only option is to run a dedicated container for the jobs, or to use some other cloud service that lets you run a function on a schedule.
It's also no longer self hosted! It's now hosted on Netlify and I'm happy that this site is now one less thing for me to backup.
This website is now the best it's ever been:
In this post I'll outline the steps I took to migrate the blog from WordPress to Eleventy.
I first came across Eleventy after seeing it in action on the starter kit Hylia. Eleventy piqued my interest because it is super simple to get going. You don't need to learn much at all, and the documentation is concise and practical. Essentially at build time, Eleventy will interpret markdown files and transform them into semantic html documents. You can fit templates around these, be they nunjucks, pug, or plain html.
The first thing I did was get an empty Eleventy site generated.
mkdir mynewblog
cd mynewblog/
npm init -y
npm install @11ty/eleventy --save-dev
echo "# hello world" > index.md
All we've done here is create a folder with a single markdown file. There isn't much going on here, so let's just go ahead and ask Eleventy to generate us a site in that folder:
npx @11ty/eleventy
The output will look like this:
Writing _site/index.html from ./index.md.
Processed 1 file in 0.06 seconds (v0.9.0)
Which tells us that Eleventy found our index.md file, processed it, and generated us an index.html file in the _site folder, which is the default output folder that Eleventy will generate html to.
So let's see how this looks in the browser:
npx @11ty/eleventy --serve
This will serve out your _site folder on localhost:8080
Great, so that all works!
Let's examine the _site/index.html file that Eleventy generated
<h1>hello world</h1>
Eleventy has done what we've expected, but in order to make a production ready site, we're going to need to tell Eleventy about all of the html that needs to wrap the content that our markdown files contain. So let's create a new folder called _includes, and in there create a file called layout.pug:
doctype html
html(lang="en")
head
title My new site
meta(name='viewport' content='width=device-width, initial-scale=1')
body
article
header
h1 !{title}
| !{content}
_includes is a special folder that Eleventy will ignore when generating your site. If we were to put our layout.pug in the root of our site, Eleventy would generate a layout.html file for us, which we don't need.
Next, we need to tell our index.md to use the _includes/layout.pug file. Update index.md to contain the following:
---
title: "Hello world! This is the title"
date: "2019-10-16"
layout: "layout.pug"
tags: ["post"]
---
## This is a sub title
We've now added some front matter to our index.md file. Eleventy will use the front matter to build up some metadata about the file, which we can access later. We've also told Eleventy which layout file to use to transform the index.md file.
Let's ask Eleventy to build our site again:
npx @11ty/eleventy
Now let's have a look at our generated page _site/index.html:
<html lang="en">
<head>
<title>My new site</title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
</head>
<body>
<article>
<header>
<h1>Hello world! This is the title</h1>
</header>
<h2>This is a sub title</h2>
</article>
</body>
</html>
Great, now let's look at getting all of our posts out of WordPress.
We basically need a way of generating a markdown file for each of our posts in WordPress. After a brief search on GitHub, I came across this npm package. It will read your WordPress export.xml file, and generate you a markdown file for each of your posts. So here's what I did:
node index.js --addcontentimages=true
The addcontentimages argument will fix the image paths in your generated markdown files.
You will then have an output folder that contains a folder for each of your posts, which will contain a markdown file for the content, and a folder containing the images. Here's the tree output for my review of the Surface Book keyboard:
+-- 2019-04-11-surface-book-keyboard-review
│ +-- images
│ │ +-- craft-how-it-works.png
│ │ +-- logitech-craft.png
│ │ +-- surface-book-keyboard-angle-1024x499.jpg
│ │ +-- surface-book-keyboard-angle.jpg
│ │ +-- surface-book-keyboard-battery-compartment-1024x763.jpg
│ │ +-- surface-book-keyboard-battery-compartment.jpg
│ │ +-- surface-book-keyboard-numeric-1024x768.jpg
│ │ +-- surface-book-keyboard-numeric.jpg
│ │ +-- surface-book-keyboard-side-1024x813.jpg
│ │ +-- surface-book-keyboard-side.jpg
│ │ +-- surface-book-keyboard-top.jpg
│ │ +-- surface-book-keyboard-wide-angle-1024x768.jpg
│ │ \-- surface-book-keyboard-wide-angle.jpg
│ \-- index.md
Now we can copy the output folder into our new site (I'm going to copy it into a /posts folder), run Eleventy, and it'll generate us html for each of the posts.
We need to update each of the generated markdown files to include front matter that points to our layout file. This may be acceptable if you only have a few posts to go through, but in my case I had over a hundred, so I ended up hacking the WordPress export to markdown code so that all generated markdown files included the layout front matter.
Once the layout front matter has been set, you can ask Eleventy to re-generate your site:
npx @11ty/eleventy
You'll now get a much bigger output as Eleventy works through the blog posts in your /posts folder, and generates a set of html files for them! Go ahead and serve the new site, and have a browse around your imported content. It's worth doing this to check for any minor paragraphing fixes you can make.
Depending on how you configured your original WordPress install, and how you asked the WordPress export to markdown package to output your files, you may have broken your old links.
In my case, my WordPress install placed content at /YYYY/MM/DD/title-slug. I don't really want to maintain a folder for the year, month, and day, so I'm instead opting to have posts at posts/YYYY-MM-DD-title-slug.
This means that a post that was located at
https://edspencer.me.uk/2019/04/11/does-the-surface-keyboard-work-with-ubuntu-yes-but-only-with-ubuntu-18-onwards
Will now be located at
https://edspencer.me.uk/posts/2019-04-11-does-the-surface-keyboard-work-with-ubuntu-yes-but-only-with-ubuntu-18-onwards
Redirection is not a concern for Eleventy. Eleventy also runs at build time, not runtime, so it's simply not in a place to deal with the redirects anyway.
So, wherever you're going to host your updated site, you need to deal with the redirects. I decided to host the new Eleventy generated site on Netlify as:
Netlify supports all forms of redirection simply by including a _redirects file in the root of your site. This file contains a list of route pairs, with the old route first and the new route second:
/2019/04/11/does-the-surface-keyboard-work-with-ubuntu-yes-but-only-with-ubuntu-18-onwards /posts/2019-04-11-does-the-surface-keyboard-work-with-ubuntu-yes-but-only-with-ubuntu-18-onwards
Here I haven't specified the redirect HTTP status code, which on Netlify will default to 301 - Moved Permanently. You can read more about Netlify redirects here.
We also need to get the new _redirects file into our site folder. I used an npm script for this and did a simple file copy. In package.json:
"scripts": {
"build": "npx @11ty/eleventy && npm run copy-redirects",
"copy-redirects": "cp _redirects _site/_redirects"
},
So now, running npm run build will compile the site using Eleventy, and will then copy the _redirects file into the right place.
Now, you just need to automate your deployments. If you host your site in GitHub, connecting your deployments can be done by following these steps.
If you run a self hosted git instance elsewhere (like I do), you can follow these steps.
The quick version is that you need to:
I've decided not to bring the comments over from my old blog. There really weren't many comments of value - the only useful ones were on a post I made about using your own router with BT Internet, where plenty of BT Internet users exchanged tips on how they got it working.
The others were of low value and were just another thing to moderate. If you really want comments, I'd recommend webmentions over a 3rd party service like Disqus, which you really shouldn't add to your site as it'll start bloating it with 3rd party JavaScript and cookies.
I'm living without the WordPress text editor. I'm happy to write markdown (actually, I prefer it) as it keeps me in the text editor. If you want a rich text editor, you could use a dedicated markdown editor locally, or plug in a cms that will commit to source control, like Netlify CMS.
And that's it! WordPress isn't a bad platform - it's an excellent enabler. It lets people that are not web developers get something onto the web. Let's not be gatekeepers!
The downsides of WordPress are mainly control, maintenance and performance (which I've blogged about in the past). Having a site built with Eleventy allows you to be superlean and have a rapid, lightweight site that is a joy to use and that you are proud of.
Web server support for http/2 is also good, with NGINX having supported it in its core install for a few years now.
A big http/2 advantage is that you will not run into http 1.1's limits on the maximum number of parallel connections to a single domain. These vary between browsers, with Chrome by default supporting 6 connections (it's worth noting that each browser install can be manually configured to change this number, although I doubt many people do).
Let's have a think about what happens when we request an imaginary webpage - edspencer.me.uk/test-page.html - over http 1.1.
Looking at a browser that supports a maximum of 6 connections at a time, imagine our test-page.html contains references to 9 external assets, referenced in the header, all hosted on the same domain.
What is going to happen here? Well, assuming that our cache is empty, the first 6 referenced files will be downloaded first. Anything after the first 6 will be queued until one of the 6 download slots frees up. This will happen when a download completes.
A simplified analogy would be you're queuing at a checkout, and there are 6 tills staffed by 6 operators. A maximum of 6 people can be served at the same time.
This also happens for Ajax requests to the same domain, which must also form an orderly queue, with a maximum of 6 going over the wire at the same time.
There were a few workarounds for this in the http 1.1 world. One was to combine your assets into bundles. So in our above example, our 3 stylesheets become one single stylesheet, and our 3 JavaScript files would become a single .js file. This reduces the number of request slots needed by 4.
Another way would be to serve your assets from different domains in order to bypass the 6 connections per domain limit. For example, having your images served from images.edspencer.me.uk and your stylesheets from styles.edspencer.me.uk.
Both of these techniques worked well under http 1.1, but had their downsides.
Imagine I have a whole section in my web application that is only for users with admin access. If I'm bundling all application styles and scripts into two respective files, I'm burdening the clients that will never access the admin tools of my application with the code for my admin tool. Their experience of my website would improve if I served them a smaller set of assets that did not include the code needed to run the admin tool that they will never access.
Setting up subdomains requires webserver and DNS configuration. I also then need to work out how I'm going to get the web application's static assets onto their relevant subdomains. It's a lot of effort.
With http/2, you don't need to bundle anymore, and you can instead split and serve your web app using multiple files without worrying about blocking one of those limited download slots. This is largely because of the improvements in the protocol transport, which have resulted in the recommended minimum limit for browsers implementing http/2 being much greater, at 100 concurrent connections.
Splitting your bundles more logically, instead of bundling everything into one, will result in more, smaller bundles being sent to the client. For example, you could have a bundle that contains the JavaScript for the admin pages of a web app, and it'll only get served to the client should they land on an admin page.
If someone visits your web app and only lands on the homepage, you don't need to serve them with the code needed to run the admin pages of your web app.
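As a rough illustration of what that looks like in practice, a dynamic import is enough for most bundlers (webpack included) to emit the admin code as a separate chunk - the module path and init function here are made up for the example:

// The admin code is only requested if this function ever runs;
// bundlers split './admin/admin-tools.js' into its own chunk.
async function openAdminSection() {
  const adminModule = await import('./admin/admin-tools.js');
  adminModule.init(); // placeholder - whatever bootstraps the admin UI
}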
Setting up http/2 on NGINX is trivial, with the only change needed being the addition of "http2" to the listen directive for your site:
server {
listen 443 ssl http2;
...
}
All you then need to do is restart NGINX, and you're good to go. You can test this out by taking a peek at the network tab in the developer tools of your preferred browser. Note the "h2" in the Protocol column:
This is a WordPress blog, and whilst I've taken steps to improve its performance, I have struggled to get the raw number of assets needed by the site down. After enabling http/2, I got an immediate and significant performance score improvement from Google Lighthouse:
I'll be migrating this blog to a headless CMS soon which will give me much more control and will hopefully give me a nice score of 100! Watch this space.
Here is an example of it working without the @babel/plugin-proposal-object-rest-spread plugin included.
You do not need to include the @babel/plugin-proposal-object-rest-spread plugin.
If you want to use any of these, you should either bring in your own polyfills for what you need, or you should include core-js with Babel.
Babel 7 will transpile the following out of the box:
There are plugins for each of these but they do not appear to be necessary. It looks like Babel is trying to steer towards transpiling syntax and keyword features, and not functions or objects, which need to be polyfilled.
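As a rough illustration of that distinction (approximate output, not copied from Babel):

// Input
const double = n => n * 2;            // arrow function: new syntax
const found = [1, 2, 3].includes(2);  // Array.prototype.includes: a built-in method

// Approximate output from @babel/preset-env with no core-js configured
var double = function double(n) { return n * 2; }; // the syntax is transpiled
var found = [1, 2, 3].includes(2);                  // the method is left alone - it needs a polyfill in older browsers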
I started by looking for the basics - was const getting changed to var? Yes it was. Were ES6 classes being converted to functions? Yes they were. And were object spreads being transpiled down? Yes they were, but this was now being done in a different way to Babel 6.
In Babel 7.4, using an object spread operator will result in Babel injecting a polyfill function called _objectSpread into the top of the outputted JavaScript file. This function will then be called wherever you are using an object spread. E.g:
Input:
function someOtherTest () {
const p1 = {
name: 'p1'
};
const combinedP1 = {
height: 100,
...p1
}
}
Results in the following, after being run through Babel:
function ownKeys(object, enumerableOnly) { var keys = Object.keys(object); if (Object.getOwnPropertySymbols) { var symbols = Object.getOwnPropertySymbols(object); if (enumerableOnly) symbols = symbols.filter(function (sym) { return Object.getOwnPropertyDescriptor(object, sym).enumerable; }); keys.push.apply(keys, symbols); } return keys; }
function _objectSpread(target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i] != null ? arguments[i] : {}; if (i % 2) { ownKeys(source, true).forEach(function (key) { _defineProperty(target, key, source[key]); }); } else if (Object.getOwnPropertyDescriptors) { Object.defineProperties(target, Object.getOwnPropertyDescriptors(source)); } else { ownKeys(source).forEach(function (key) { Object.defineProperty(target, key, Object.getOwnPropertyDescriptor(source, key)); }); } } return target; }
function _defineProperty(obj, key, value) { if (key in obj) { Object.defineProperty(obj, key, { value: value, enumerable: true, configurable: true, writable: true }); } else { obj[key] = value; } return obj; }
function someOtherTest() {
var p1 = {
name: 'p1'
};
var combinedP1 = _objectSpread({
height: 100
}, p1);
}
Here's a link to see this in action on the Babel repl.
As you can see, it's a lot more code, but is a tax that some of us have to pay in order to support some older browsers.
And this is where we need to be careful. Most front end build processes typically follow this flow:
What you do not want to do is run Babel before the output has been combined into one file. If you follow that pattern, you will end up with some serious bloat! In every file that you have used the object spread operator in, Babel will inject a locally scoped _objectSpread function. So you could easily end up with multiple copies of the exact same polyfill function.
This is the web, and we don't like unnecessary bloat - so the correct workflow for your front end build is to combine your modules into a single output file first, and then run Babel over that combined output. This way, you'll only get polyfill functions injected once into your outputted JS, even if the polyfill is used multiple times.
If you read that post, it should be clear that bundling and uglifying your JavaScript with webpack is very straightforward. However, getting webpack to do other front end build operations, such as compiling SCSS before uglifying and bundling the generated CSS, is not a simple task. In fact, it's something I feel I put too much time into.
Since that post, I've updated that project so that webpack does one thing - bundle JavaScript.
All of the other build operations (compile SCSS, etc) are now being done in the simplest way possible - Yarn / NPM scripts.
These were previously jobs we'd leave to a task runner, like gulp. These worked and were reliable, but they had their downsides:
Instead of using gulp or webpack for our entire front end build, we're going to use yarn / NPM scripts. So, let's start with our SCSS. We can use the node-sass package to compile our SCSS into CSS files:
yarn add node-sass --dev
Now we just need to add a command into the "scripts" section of our package.json file that will call the node-sass package:
...
"scripts": {
"build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css"
}
It's basically a macro command. Calling "build-scss" is a shortcut for the longer command that we've entered into our package.json file. Here's how we call it:
yarn run build-scss
Now let's add another script to call webpack so that it can do what it's good at - bundling our JavaScript modules:
...
"scripts": {
"build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css",
"build-js": "webpack --mode=development"
}
Which now means that we can run:
yarn run build-js
To build our JavaScript.
We've now got two different yarn script commands. I don't want to run two commands every time I need to run a front-end build, so wouldn't it be great if I could run these two commands with a single "build" command? Yes it would!
...
"scripts": {
"build-scss": "node-sass --omit-source-map-url styles/appstyles.scss dist/css/appstyles.css",
"build-js": "webpack --mode=development",
"build": "yarn run build-scss && yarn run build-js"
}
All we need to do now is run
yarn run build
And our CSS and JavaScript will be generated. You could add more steps and more yarn scripts as needed - for example, a step to uglify your generated CSS.
Webpack 4 introduced a new build flag that is supposed to make this easier - "mode". Here's how it's supposed to be used:
webpack --mode production
This actually won't be much help if you want to get different build outputs. Whilst this flag will be picked up by Webpack's internals and will produce minified JavaScript, it won't help if you want to do something like conditionally include a plugin.
For example, say I have a plugin to build my SCSS. If I want to minify and bundle my generated CSS into a single file, the best way to do it is to use a plugin - OptimizeCssAssetsPlugin. It would be great if I could detect this mode flag at build time, and conditionally include this plugin if I'm building in production mode. The goal is that in production mode, my generated CSS gets bundled and minified, and in development mode, it doesn't.
In your webpack.config.js file, it's not possible to detect this mode flag, and to then conditionally add the plugin based on this flag. This is because the mode flag can only be used in the DefinePlugin phase, where it is mapped to the NODE_ENV variable. The configuration below will make a global variable "ENV" available to my JavaScript code, but not to any of my Webpack configuration code:
module.exports = {
...
  plugins: [
    new webpack.DefinePlugin({
      ENV: JSON.stringify(process.env.NODE_ENV),
      VERSION: JSON.stringify('5fa3b9')
    })
  ]
}
Trying to access process.env.NODE_ENV outside of the DefinePlugin phase will return undefined, so we can't use it. In my application JavaScript, I can use the "ENV" and the "VERSION" global variables, but not in my webpack config files themselves.
The best solution, even with webpack 4, is to split your config files. I now have 3:
The configs are merged together using the webpack-merge package. For example, here's my webpack.prod.js file:
const merge = require('webpack-merge');
const common = require('./webpack.common.js');
const OptimizeCssAssetsPlugin = require('optimize-css-assets-webpack-plugin');
module.exports = merge(common, {
  plugins: [
    new OptimizeCssAssetsPlugin({
      assetNameRegExp: /\.css$/g,
      cssProcessor: require('cssnano'),
      cssProcessorPluginOptions: {
        preset: ['default', { discardComments: { removeAll: true } }],
      },
      canPrint: true
    })
  ]
});
You can then specify which config file to use when you call webpack. You can do something like this in your package.json file:
{
...
"scripts": {
"build": "webpack --mode development --config webpack.dev.js",
"build-dist": "webpack --mode production --config webpack.prod.js"
}
}
Some further information on using the config splitting approach can be found on the production page in the webpack documentation.
getData() {
const params = {
orgId: this.orgId,
siteId: this.siteId
};
this.dataService.getFullData(params)
.then(result => {
// ... do stuff
})
.catch(this.handleError);
}
Essentially what I want to do is call a leaner "getLeanData" function instead of "getFullData", if no siteId is provided.
There are a few approaches to this problem. One would be to change the way getFullData worked, and move any switching logic into it. Another would be to break up the promise callback functionality, move it into a separate function, and just have an if block.
I didn't really like either of those approaches and knew that I could alias the function that I wanted to call instead. Here was my first, non-working attempt:
getData() {
const params = {
orgId: this.orgId,
siteId: this.siteId
};
let dataFunction = this.dataService.getFullData;
if (typeof params.siteId === 'undefined') {
dataFunction = this.dataService.getLeanData;
}
dataFunction(params)
.then(result => {
// ... do stuff
})
.catch(this.handleError);
}
The above looked like the right approach, and would have worked if the functions being aliased didn't rely on the scope of their owning class. The function call itself worked, but it was not executing dataFunction within the scope of the dataService class - resulting in errors.
This happens because when assigning a function to the local variable "dataFunction", only a reference to the function body, not the object that it needs to be called on, is copied. If the getFullData and getLeanData functions contained non scope specific code, such as a simple console log statement, the behaviour would have been as expected. However in this case, we need the scope of the dataService class.
The solution is to call the function with the scope (officially called the "thisArg") explicitly set. This can be done using the Function.prototype.call() method.
getData() {
const params = {
orgId: this.orgId,
siteId: this.siteId
};
let dataFunction = this.dataService.getFullData;
if (typeof params.siteId === 'undefined') {
dataFunction = this.dataService.getLeanData;
}
dataFunction.call(this.dataService, params)
.then(result => {
// ... do stuff
})
.catch(this.handleError);
}
Calling .bind on the function will also work for you, but I think the use of .call is a little more readable.
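For completeness, the .bind version would look roughly like this:

// .bind returns a new function with `this` permanently set to dataService
const boundDataFunction = dataFunction.bind(this.dataService);
boundDataFunction(params)
  .then(result => {
    // ... do stuff
  })
  .catch(this.handleError);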
I've said it before, and I'll say it again - I prefer chiclet keyboards. In my opinion, they are more suited to a long day's worth of typing and programming than a mechanical keyboard, or a more conventional keyboard.
The Surface Book keyboard is the second chiclet keyboard that I've purchased in the last month, with the previous one being the Cherry KC 6000. I bought the Surface Book keyboard to be a replacement for my home setup, where I was previously using an ageing and horribly tatty Microsoft Ergonomic USB natural keyboard.
My requirements for my home keyboard were slightly different to my workplace keyboard requirements:
I found two keyboards that matched my needs. The first was the Logitech Craft, which also has the added bonuses of being able to pair with multiple devices, and a big wheel that can be used for additional control, although this sounds like it does not typically stretch beyond volume control:
However, there are two downsides to this keyboard:
1 - it's expensive, priced at about £160. Whilst I think that as a developer I need the best tooling, this just feels like a stretch.
2 - It's hard to get hold of. When I was trying to purchase this keyboard, I struggled to find anyone that actually had it in stock, including Amazon. I eventually found it in stock on very.co.uk, but it wasn't really in stock and they sat on my order for over a week before I lost my patience and cancelled.
This left one other keyboard - the Surface Book Keyboard.
The keyboard is aesthetically very pleasing, with a small bezel and a simple and tasteful grey and white colour scheme used. The Bluetooth connection helps with the appearance of the keyboard as it means you don't have any wires to try and hide or make neat. I'd describe the footprint of the keyboard as low profile, as it has such small bezels and looks discreet yet impressive in the middle of your desk in complete isolation from any sort of cabling.
It is super comfortable to type on, with the key presses feeling light yet satisfying to depress, with a sufficient level of feedback delivered to your finger tips. Typing on it is pleasurable and fast.
The general typing comfort is helped along by a healthy level of banking on the keyboard towards the user, which is something that the Cherry KC 6000 fails at. The brilliant thing about this banking is that it's a stroke of smart design - the bank towards the user comes from the battery compartment:
The keyboard layout is sensible, and doesn't try to be too clever in this area by re-stacking keys or jigging around the layout of anything. It very much is laid out like you would find a laptop keyboard - with the Function lock key placed between the Ctrl and Super keys. This probably is a big plus for you if you tend to dock your laptop and work off of it, or if you're used to working on a laptop keyboard. Plus points for me on both - when I'm at home, I work off of my laptop on a stand.
Availability wise, this keyboard is very easy to get hold of. I ordered this online at about 4pm through PC World's website, and was able to collect it the next day at 11am from my local store. It's also well stocked elsewhere around the web, which is a marked departure from my experience when trying to purchase the Logitech Craft.
Cost wise, the Surface Book keyboard can be yours for £79.99. This appears to be a fixed price, much in the way that Apple price their hardware. It's the same price everywhere, unless you look at used items. Here are some links:
The only real downside for me (and this is a nitpick) with the Surface Book keyboard is that it can only be paired to one device at a time. I often switch between my desktop PC for gaming, and my Xubuntu laptop for everything else. This means that every time I do this, I need to pair the keyboard again. Luckily, this isn't much more than a slight inconvenience, as the pairing is a quick and painless experience on both Windows and Ubuntu based operating systems from version 18 onwards.
USB keyboards have two big advantages. They don't need batteries, and they will work as soon as they have a physical connection, even whilst your machine is still booting up. If you need to do anything in the BIOS, for example, you will not be able to do this with a Bluetooth keyboard as the drivers for it will not have been loaded. So, keep your dusty old USB keyboard for the day when you run out of power and have no batteries, or for when you need to jump into your BIOS.
On the whole, this keyboard is fantastic, and I'd give it a 9 out of 10. I'd highly recommend this keyboard for general typing, programming, and some gaming.
My main concern, however, was whether this keyboard would work with my Xubuntu laptop. Some googling revealed mixed answers - someone running Linux Mint, which is based on Ubuntu, had no luck, whereas a separate thread on Reddit seemed to indicate that it worked without any problems.
After purchasing the keyboard and using it, here is what I found. In order to successfully pair the keyboard with any PC, the keyboard will ask the PC to present the user with a passcode challenge. On Windows, when pairing, a dialogue will pop up that contains a 6 digit number, and you are prompted to enter that passcode on the keyboard. Once it is entered on the Surface Keyboard, it will be paired and will work as expected. When pairing the keyboard with my Xubuntu laptop, however, the passcode prompt never appeared.
This is because of a few bugs in the bluetooth stack in Ubuntu 16. The passcode dialogue will never appear, meaning that you will not be able to successfully pair the keyboard to a machine running Ubuntu 16 or below. This problem also happens when attempting to pair in the terminal using bluetoothctl.
Luckily, I have a spare laptop, and was able to use that to test out the pairing on Ubuntu 18 before committing myself to upgrading my main work laptop. Using the spare, I was able to successfully pair on Ubuntu 18.04 through the bluetooth gui as the passcode prompt was now appearing. I took the plunge and updated my main Xubuntu laptop and can confirm that the pairing with the Surface Book keyboard fully works on Ubuntu 18.04 and any derivatives.
Because I was having problems with bluetooth, and some furious googling had led me to conclude that a distribution upgrade would resolve my issues, I decided that now was a suitable time to update my operating system.
Xubuntu's documentation states that when there is a major LTS release available for you to upgrade to, the update manager will pop up with a dialogue box informing you, and giving you the option to update. It looks similar to this:
You will not get this prompt if you have broken PPA repository URLs configured. The best way to find out which PPA URLs are causing problems is to run:
$ sudo apt-get update
from your terminal, and observe the output:
Reading package lists... Done
E: The repository 'http://ppa.launchpad.net/pinta-maintainers/pinta-stable/ubuntu xenial Release' does not have a Release file
In my case, my PPA configuration for the excellent image editor, Pinta, appeared to be broken. Simply disabling this PPA from the software updater by unchecking the checkbox allowed the OS to fully pull update information, and then prompted me with the distribution update dialogue.
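If you'd rather sort this out from the terminal, removing the offending PPA should achieve the same thing. A rough sketch using the Pinta PPA from the error above:

sudo add-apt-repository --remove ppa:pinta-maintainers/pinta-stable
sudo apt-get update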
I prefer chiclet keyboards. I haven't done any scientific analysis, but I'm confident that my words per minute typing is higher when I'm using a keyboard that has chiclet keys. Amongst developers this is an unpopular opinion, with many preferring mechanical keyboards (don't be that guy smashing a mechanical keyboard in an open plan office!).
Chiclet keyboards have keys that do not need to depress as far in order to register. They also have an evenly sized gap between each key, making it more difficult for you to fumble the keyboard and hit the wrong key. They are typically found on laptop keyboards.
With that in mind, I needed a new keyboard to replace the one I take into client offices and leave there whilst on a contract. I was previously using an old, fairly standard Dell USB keyboard that was becoming embarrassingly tatty - most of the key letters were completely worn off. It also seemed to be forever caked in a layer of dirt that no amount of cleaning could remove.
My requirements were fairly simple:
The keyboard I found that fits the above is the Cherry KC 6000, which is priced nicely at £35 on Amazon. (The linked product incorrectly claims to be the Cherry KC 600, but this is just a typo - the Cherry KC 600 does not exist, and having taken delivery of this item, I can confirm that it is indeed the Cherry KC 6000.)
On the whole, I am very happy with this keyboard, and would give it 4 out of 5 stars:
This is by far the most important factor on any keyboard! The keys have a really nice weight and feel to them, and typing for a long amount of time on this keyboard is comfortable and does not result in any straining pains that I would sometimes get on my previous bog standard keyboard.
This keyboard has no crazy backlighting and comes in two fairly neutral colours - a silver body with white keys, or a black body with black keys. Some may view the lack of backlighting as a negative, but this isn't a problem for me. I don't type in the dark as I don't have the eyes for it, and I touch type.
This helps with having a general feel of neatness on your desk. The keyboard has only moderate bezels and has no ridges where dust and other crap can get stuck.
This is a small irritation as it just takes some getting used to. On most keyboards, the function keys are laid out in banks of 4, with a bigger space between every 4th function key. This space is gone on the Cherry KC 6000, and the saved space is given to two additional buttons - one to open your default browser, and another to lock your machine. I don't mind having these extra buttons, but annoyingly they are right above the backspace key, so it will take some getting used to not being able to naturally travel to the F11 key to go fullscreen, or the F12 key to open my Guake terminal.
Again, this is another one of those small things that will take you a day or so to get used to. You'd normally only find one backspace key on a keyboard and would not expect to have one on the numeric pad. This one is positioned where the minus key normally is, so I've found myself accidentally deleting characters rather than putting in the minus character a few times.
Other reviews online think that the keyboard is not banked enough towards the user (in the way that most keyboards have legs you can flip up or down). The keyboard did initially look a little flat on my desk when I first set it up, but I've found that it has not impacted my typing at all.
To conclude, I'm happy with the Cherry KC 6000 keyboard. It has made me more productive, and is comfortable to use for long typing stints (think 6+ hours of programming!).
A fascinating re-telling of the story of the infamous dark web black market website, the Silk Road. It covers the development of the Silk Road, how it came to exist, who was behind it, how it ran, and how it got taken down. The amount of money the Silk Road made at its peak was incredible!
This book is awesome! A great retelling of late 90s and early 2000s music piracy crews, who was behind them, how they came to prominence and how they ultimately got caught. It also covers the format wars of digital music, and how the MP3 came to dominate. This is well worth a read for all of you techies and will take you back to the Napster days!
You might have heard of Theranos and its strange CEO, Elizabeth Holmes. The now defunct startup was built on lies, and ended up collapsing like a house of cards, having burned $750 million from duped investors. Theranos claimed to have groundbreaking technology that could run hundreds of blood tests on a single pin prick of blood. In reality, they could only produce unreliable results from a vial of blood. The facade continued for years, with Elizabeth Holmes managing to persuade several investors to value Theranos at over a billion dollars. A truly 5 star read.
The author behind The Big Short took his investigative skills to look into the murky world of high frequency trading. The rise of high frequency trading ties up with the rise of the internet age, and led to stock trading companies spending millions on faster connections, and even having their servers physically located closer to certain machines in data centres in order to make a connection a fraction faster. A brilliant read, with a great mix of scandal and technology.
Hilarious and alarming account of a respected journalist who spent over a year working at a startup, HubSpot. HubSpot sounds like a toxic place to work, with regular firings (known within the cult of HubSpot as "Graduations"), non existent onboarding, and a self cultivated cult of personality around its leaders. Oh, and its product really isn't innovative. HubSpot is a real company and is still trading.
Is Bill Gates really an extraordinary individual, or did his circumstances make him an extraordinary individual? Are elite sports professionals really the best in their peer group, or is it just that the month of the year that they were born in meant that they were physically more developed than their peers? Gladwell does a great job of exploring the above and several other outliers, and this is a great read for the curious mind. I'd highly recommend picking up this book, and it'll probably be cheap as it was first published a decade ago!
I didn't really enjoy this read. Let me prefix this by stating that I like Paul Mason and that I find him insightful. This book starts off well with an analysis of previous industrial and technological eras, how they've progressed, and where we are now - genuinely interesting stuff. What I found tricky to follow was just how heavy this book got when it descended into economic theory, backed up with not so brilliantly annotated graphs. Anyway, the tl;dr is that we're all a bit fucked because globalisation.
A book in part about minimalism and in part about consumerism and when to stop. The book does make you think about where your constant drive to own more stuff comes from, and also covers how technology might be playing a big role in pushing our desire for more. A pretty good read but could have been a little more concise.
Excellent, well worth a read. A comprehensive look at our mindset towards race and class in Britain and where it comes from, and what the future might hold. This is one of a few intellectual commentaries on race in the UK that I have read this year.
Another great read. This isn't really by Nikesh Shukla, but is instead a collection of essays by several well known minorities in the UK. My favourite essay in the book was by Riz Ahmed, where he discusses how his career as an actor and going to auditions helped him deal with the special attention he gets at airport security.
Another great intellectual commentary on how we look at race and identity in modern Britain. It also captures a lot of the identity self questioning that many mixed race people experience. This is one of my personal favourites of 2018.
This first hand account of the first 5 years of a junior doctor's career in the NHS is a gripping read. I ploughed through this book in about 3 days. It's very easy to read, but is also compelling with its mixture of funny and incredibly sad accounts of the experiences of a junior doctor.
This is the only sports related biography I read in 2018. I picked it up because of genuine curiosity about the famous story of footballer Jermaine Pennant forgetting that he had left a car at a train station in Spain, and it staying there, running, in the car park, for a week before running out of fuel. Predictably, Pennant plays this whole episode down and claims that he did forget that the car was there as he was rushing back to the UK. However he does strongly claim that he did not leave the engine running. Immediately after this assertion, in the next paragraph, is a statement from Pennant's agent that contradicts this: "I know for a fact that he left that car running".
One of several books covering the rise of Grime music that was released in 2018. This is a good coverage of some of the early artists in grime music emerging out of East London. Whilst it does not cover all grime artists, it gives you a good overview of some of the original members of grime collectives Pay As You Go, and Roll Deep. A good read if you want to know more about Grime music.
I say this as a person that previously hosted all of their personal projects on the Azure cloud platform. I had multiple web apps inside a single Azure App Service, which worked pretty well. The main advantage of running anything in the cloud is that you don't have to worry about server maintenance, and, if used correctly, it can be cheaper for projects over a certain size. I had a few reasons for getting off the cloud:
Generally, a cloud service will restrict you to running some specific technologies. For example, a specific version of Node.js. Even if your cloud service supports the technology you want to use, finding out if it supports the version you want to use is not easy. Right now, if you want to know which versions of Node.js Azure supports, your best place to get that answer is StackOverflow, even though this information should be prominently displayed in the documentation.
The offer from cloud service providers changes a lot. I actually had to look back over my previous blog post detailing how I hosted my projects in the cloud to remind myself how it all worked. The problem is more acute on AWS, where the naming is so whacky that someone maintains a page where the names of AWS products are translated into useful plain English. Elastic Beanstalk, anyone?
There is also a danger of just choosing the wrong cloud product - like the guy that used Amazon Glacier (real name) storage and ended up racking up a $150 bill for a 60GB download.
The argument put forward by the cloud guys is often that you get to focus on building your app and not deploying it or worrying about the infrastructure. This is absolutely true for more trivial apps.
However we all know that it is rare for a useful application to be this flat. If you want to connect your application into another service to store data or even just files, you'll need to learn what that service might be on that platform, and then you'll need to learn that platform's API in order to program against it. Generally, once you've got that service wired in, you'll get some benefits, but the upfront effort is a cost that should be taken into account.
Newsflash - physical servers can be very cheap, especially used ones. Helpfully someone on reddit put together a megalist of cheap server providers. Below is a list of the main providers that are worth checking out in Europe:
As of writing, you can get a gigabit connected, 4 core, 4GB RAM server with OneProvider for €8 a month. Whilst comparing the cost of this hardware on a cheap provider to a cloud provider would be silly (you're only supposed to deploy the hardware you need in the cloud), as soon as you run two web apps on an €8 box, you're saving money.
I've got one server with Kimsufi, and one with Online.net. I've had one very small outage on Kimsufi in the 2 years that I've had a box there. The box was unavailable for around 10 minutes before Kimsufi's auto monitoring rebooted the box.
I've had a single lengthier outage in the 18 months I've used Online.net which lead to a server being unavailable for about 5 hours. This was a violation of their SLA and meant that I got some money back. I detailed this outage in a previous post - "Lessons Learned from a server outage".
When you aren't running inside of a neatly packaged cloud product, you have full ownership of the server and can run whatever you want. The downside of this is that there is more responsibility, but in my opinion the pros outweigh the cons, such as not having to wait for some service to become available. (For example, GCS doesn't support http to https redirection - I'm guessing you need to buy another service for that - whereas it's a few simple lines of config in nginx.)
Being able to run whatever you want also opens the door for you to be able to play around with more technology without having to worry about an increase in cost. Running my own dedicated boxes has been fun, and I've learned a lot by doing so.
This article is now out of date, and it's unlikely that you'll be able to use any of the techniques in it to add React to an existing app. This is because there are too many moving parts, with too many breaking changes between versions.
In order to brush up on my React knowledge, in this post I'm going to outline how I added it into the admin tools of links.edspencer.me.uk.
This walk through will have an emphasis on not just being a hello world application, but will also focus on integrating everything into your existing build pipeline - so you'll actually be able to run your real application in a real environment that isn't your localhost.
I'm writing this up because I felt the setup of React was a bit confusing. You don't just add a reference to a minified file and off you go - the React setup is a lot more sophisticated and is so broken down and modular that you can swap out just about any tool for another tool. You need a guide just to get started.
We'll be going along the path of least resistance by using the toolchain that most React developers seem to be using - Webpack for building, and Babel for transpiling.
The current application works on purely server side technology - there is no front end framework currently being used. The server runs using:
The build pipeline consists of the following:
You can get your React dependencies through npm, but I figured that moving to Yarn for my dependency management was a good move. Aside from it being suggested in most tutorials (which isn't a good enough reason alone to move), Yarn gives you deterministic installs via its lockfile, and noticeably faster install times.
While we're here we may as well update the version of node that the app runs under to 8.9.4, which is the current LTS version.
In dockerized applications, this is as easy as changing a single line of code in your docker image:
FROM node:6.9.4
Becomes:
FROM node:8.9.4
Removing Bower was easy enough. I just went through the bower.json file, and ran yarn add for each item in there. This added the dependencies into the package.json file.
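If your bower.json has a lot of entries, this step can be scripted. A rough sketch, assuming jq is installed and that each package is published under the same name on npm as it was on bower (worth checking per dependency):

jq -r '.dependencies | keys[]' bower.json | xargs yarn add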
The next step was then to update my gulp build tasks to pull the front end dependencies out of the node_modules folder instead of the bower_components folder.
I then updated my build pipeline to no longer run bower install, and deleted the bower.json file. In my case, this was just some minor tweaks to my dockerfile.
The next thing to do was to remove any calls to npm install from the build, and to instead call yarn install.
Yarn follows npm's package file name and folder install conventions, so this was a very smooth change.
You have to use a bundler to enable you to write modular React code. Most people tend to use Webpack for this, so that's the way we're going to go. You can actually use Webpack to do all of your front end building (bundling, minifying, etc), but I'm not going to do this.
I have a working set of gulp jobs to do my front end building, so we're going to integrate Webpack and give it one job only - bundling the modules.
Firstly, let's add the webpack dependency:
yarn add webpack --dev
Now we need to add an empty configuration file for Webpack:
touch webpack.config.js
We'll fill out this file later.
Lastly, let's add a yarn task to actually run webpack, which we'll need to run whenever we make any React changes. Add this to the scripts section of your package.json file:
{
"scripts": { "build-react": "webpack" }
}
You may be thinking that we could just run the webpack command directly, but that will push you down the path of global installations of packages. Yarn steers you away from doing this, so by having the script in your package.json, you know that your script is running within the context of your packages available in your node_modules folder.
Babel is a Javascript transpiler that will let us write some ES6 goodness without worrying too much about the browser support. Babel will dumb our javascript code down into a more browser ready ES5 flavour.
Here's the thing - every tutorial I've seen involves installing these 4 babel packages as a minimum. This must be because Babel has been broken down into many smaller packages, and I do wonder if this was a bit excessive or not:
yarn add babel-core babel-loader babel-preset-es2015 babel-preset-react
Once the above babel packages have been installed, we need to wire it up with webpack, so that webpack knows that it needs to run our react specific javascript and jsx files through Babel.
Update your webpack.config.js to include the following.
module.exports = {
  module: {
    loaders: [
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
      { test: /\.jsx$/, loader: 'babel-loader', exclude: /node_modules/ }
    ]
  }
}
Note that the webpack.config.js file is not yet complete - we'll be adding more to this shortly.
We're nearly ready to write some React. We just need to add a few more packages first, and that'll be it, I promise.
yarn add react react-dom
This will add the core of React, and another React module that will allow us to do some DOM manipulation for our first bit of react coding. This does feel a bit daft to me. Webpack is sophisticated enough to run through our modules and output exactly what is needed into a single JS file. Why, therefore, do we need to break the install of a server side package down so much, if Webpack is going to grab what we need?
We're now at the point where we can write some react. So in my application, I'm going to have a small SPA (small is how SPAs should be - if you need to go big, build a hybrid) that will be the admin tool of my web application.
So, in the root of our web app, let's add a folder named client, and in this folder, let's add the following two files:
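// client/admin.js - the entry point that webpack will bundle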
import React from 'react';
import ReactDOM from 'react-dom';
import App from './admin-app.jsx';
ReactDOM.render(<App />, document.getElementById('react-app'));
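// client/admin-app.jsx - the component imported above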
import React from 'react';
export default class App extends React.Component {
  render() {
    return (
      <div style={{ textAlign: 'center' }}>
        <h1>Hello World - this is rendered by react</h1>
      </div>
    );
  }
}
The above two files aren't doing a lot. The JSX file is declaring our "App" class, and the JS is telling ReactDOM to render our App component into an html element with the id "react-app".
Now we should complete our webpack config to reflect the location of our React files. Update webpack.config.js so that the entire file looks like this:
const path = require('path');
module.exports = {
entry:'./client/admin.js',
output: {
path:path.resolve('dist'),
filename:'admin_bundle.js'
},
module: {
loaders: [
{ test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ },
{ test: /\.jsx$/, loader: 'babel-loader', exclude: /node_modules/ }
]
}
}
We're telling webpack where the entry point of our React application is, and where to output the bundled code (dist/admin_bundle.js).
Note that we're also leaning on a new package (path) to help us with some directory walking - go ahead and add that using yarn add path.
Now, let's go ahead and ask webpack to bundle our React app:
yarn run build-react
Now, if everything has worked as expected, webpack will have generated a bundle for us in dist/admin_bundle.js. Go ahead and take a peek at this file - it contains our code as well as all of the various library code from React that is needed to actually run our application.
As this SPA is only going to run in the admin section, we need to add two things to the main page in the admin tool that we want this to run on.
In my case, I'm using the awesome pug view engine to render my server side html. We need to add a reference to our bundled Javascript, and a div with the id "react-app", which is what we coded our react app to look for.
Here's a snippet of the updated admin pug view:
div.container
  div.row
    div.col-xs-12
      h1 Welcome to the admin site. This is rendered by node
  #react-app
script(type="text/javascript", src="/dist/admin_bundle.js")
Alternatively, here's how it looks in plain old html:
<div class="container">
<div class="row">
<div class="col-xs-12">
<h1>Welcome to the admin site. This is rendered by node</h1>
</div>
</div>
<div id="react-app"></div>
</div>
<script type="text/javascript" src="/dist/admin_bundle.js"></script>
Now all we need to do is to run our application and see if it works:
Et voila!
As previously stated, I've got an existing set of gulp tasks for building my application, that I know work and that I'd like to not have to re-write.
We've kept our webpack build separate, so let's now tie it up with our existing gulp task.
Previously, I would run a gulp task that I had written that would bundle and minify all Js and SCSS:
gulp build
But now we also need to run webpack. So, let's update our package.json file to include some more custom scripts:
"scripts": {
"build-react": "webpack", "build": "gulp build && webpack"
}
Now, we can simply run:
yarn run build
Now we just need to update our build process to run the above yarn command instead of gulp build. In my case, this was a simple update to a dockerfile.
The above setup has got us to a place where we can actually start learning and writing some React code.
This setup is a lot more involved than with other frameworks I've worked with. This is because React is a lot less opinionated about how you do things and the tooling that you use. The disadvantage of this is that it makes the setup a bit trickier, but the advantage is that you have more freedom elsewhere in the stack.
I do however think that there may be a bit too much freedom, and a little more convention and opinion wouldn't hurt - some sensible install defaults that could be changed later would cover this off. Whilst there may be some Yeoman generators out there, they won't help with integration into existing web applications.
What is interesting is noting the differences between how you build your React applications and how you build your AngularJs applications. With AngularJs, you reference one lib and off you go. You may only use a small part of AngularJs, but you've referenced the whole thing, whereas with React (and Angular), you have a build process which outputs a file containing exactly what you need.
I think this overall is a good thing, but we should still put this into context and be careful to not quibble over kilobytes.
We also saw how we can integrate the React toolchain into our existing toolchain without throwing everything out.
Next up, I'll be writing a blog post on what you should do in order to run React in production.
I was fast asleep, and my phone was on do not disturb, so that situation stayed as it was until I woke up.
After waking up and seeing the message, I assumed that the docker container running the UI bit of this website had stopped running. Easy, I thought, I'll just start up the container again, or I'll just trigger another deployment and let my awesome GitLab CI pipeline do its thing.
Except this wasn't the case. After realising, in a mild panic, that I could not even SSH onto the server that hosts this site, I got into contact with the hosting company (OneProvider) for some support.
I sent off a support message and sat back in my chair and reflected for a minute. Had I been a fool for rejecting the cloud? If this website was being hosted in a cloud service somewhere, would this have happened? Maybe I was stupid to dare to run my own server.
But as I gathered my thoughts, I calmed down and told myself I was being unreasonable. Over the last few years, I've seen cloud based solutions in various formats also fail. One of the worst I experienced was with Amazon Redshift, when Amazon changed an obscure setup requirement meaning that you essentially needed some other Amazon service in order to be able to use Redshift. I've also been using a paid BitBucket service for a while with a client, which has suffered from around 5 outages of varying severity in the last 12 months. In a weird coincidence, one of the worst outages was today. In comparison, my self hosted GitLab instance has never gone down outside of running updates in the year and a half that I've had it.
Cloud based or on my own server, if an application went down I would still go through the same support process:
An SLA or a Service Level Agreement basically outlines a service provider's responsibilities for you. OneProvider's SLA states that they will aim to resolve network issues within an hour, and hardware issues within two hours for their Paris data centre. Incidentally, other data centres don't have any agreed time because of their location - like the Cairo datacenter. If they miss these SLAs, they have self imposed compensation penalties.
Two hours had long gone, so whatever the problem was, I was going to get some money back.
I have two main services running off of this box: this website, and my link archive.
I could live with the link archive going offline for a few hours, but I really didn't want my website going down. It has content that I've been putting on here for years, and believe it or not, it makes me a little beer money each month though some carefully selected and relevant affiliate links.
Here's where docker helped. I got the site back online pretty quickly simply by starting the containers up on one of my other servers. Luckily the nginx instance on that webserver was already configured to handle any requests for edspencer.me.uk, so once the containers were started, I just needed to update the A record to point at the other server (short TTL FTW).
Within about 20 minutes, I got another email from Uptime Robot telling me that my website was back online, and that it had been down for 9 hours (yikes).
I use a Wordpress plugin (updraft) to automate backups of this website. It works great, but the only problem is that I had set this up to take a backup weekly. Frustratingly, the last backup cycle had happened before I had penned two of my most recent posts.
I started to panic again a little. What if the hard drive in my server had died and I had lost that data forever? I was a fool for not reviewing my backup policy more. What if I lost all of the data in my link archive? I was angry at myself.
At about 2pm, I got an email response from OneProvider. This was about 4 hours after I created the support ticket.
The email stated that a router in the Paris data centre that this server lives in, had died, and that they were replacing it and it would be done in 30 minutes.
This raises some questions about OneProvider's ability to provide hosting.
I'll be keeping my eyes on their service for the next two months.
Sure enough, the server was back and available, so I switched the DNS records back over to point to this server.
Now it was time to sort out all of those backups. I logged into all of my servers and did a quick audit of my backups. It turns out there were numerous problems.
Firstly, my GitLab instance was being backed up, but the backup files were not being shipped off of the server. Casting my memory back, I can blame this on my Raspberry Pi, which corrupted itself a few months ago and was previously responsible for pulling backups into my home network. I've now set up Rsync to pull the backups onto my RAIDed NAS.
Secondly, as previously mentioned, my Wordpress backups were happening too infrequently. This has now been changed to a daily backup, and Updraft is shunting the backup files into Dropbox.
Thirdly - my link archive backups. These weren't even running! The database is now backed up daily using Postgres' awesome pg_dump feature in a cron job. These backups are then also pulled using Rsync down to my NAS.
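For reference, the moving parts look roughly like this - the paths, database name and schedule below are illustrative rather than my exact setup:

# On the server - crontab entry dumping the link archive database nightly
0 2 * * * pg_dump linkarchive | gzip > /var/backups/linkarchive-$(date +\%F).sql.gz

# On the NAS - pull the backup files down
0 4 * * * rsync -az server:/var/backups/ /volume1/backups/linkarchive/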
It didn't take long to audit and fix the backups - it's something I should have done a while ago.
After debugging the functions that were calling into moment.js, it became apparent that the same expensive calculations were being run over and over again with the same inputs.
So with that in mind, we really shouldn't be asking moment.js (or any function) to do the same calculations over and over again - instead we should hold onto the results of the function calls, and store them in a cache. We can then hit our cache first before doing the calculation, which will be much cheaper than running the calculation again.
So, here is the function that we're going to optimise by introducing some caching logic into. All code in this post is written in ES5 style JavaScript and leans on underscore for some utility functions.
function calendarService() {
  function getCalendarSettings(month, year) {
    var calendar = getCurrentCalendar();
    // Just a call to underscore to do some filtering
    var year = _.findWhere(calendar.years, { year: year });
    var month = _.findWhere(year.months, { month: month });
    return month;
  }
}
The above function calls out to another function to get some calendar settings (which was itself fairly expensive) before doing some filtering on the returned object to return something useful.
Firstly, we need to have a place to store our cache. In our case, storing the results of the functions in memory was sufficient - so lets initialise an empty, service wide object to store our cached data:
function calendarService() {
  var cache = {};

  function getCalendarSettings() {
    ...
  }
}
When we add an item into the cache, we need a way of uniquely identifying it. This is called a cache key, and in our situation there are two things that will uniquely identify an item in our cache: the name of the function that was called, and the parameters that were passed to it.
With the above in mind, let's build a function that will generate some cache keys for us:
function getCacheKey(functionName, params) {
  var cacheKey = functionName;
  _.each(params, function(param) {
    cacheKey = cacheKey + '.' + param.toString();
  });
  return cacheKey;
}
The above function loops through each parameter passed in as part of the params array, and appends it to the key as a string, separated by a full stop. This will currently only work with parameters that are primitive types, but you could put your own logic in to handle object parameters - see the sketch below.
So, if we were to call getCacheKey like this:
getCacheKey('getCalendarSettings', [0, 2017]);
It would return:
'getCalendarSettings.0.2017'
Which is a string, and will be used as a cache key as it uniquely identifies the function called and the parameters passed to it.
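As mentioned above, object parameters would need their own handling. One possible approach (my own sketch, not part of the original service) is to serialise objects when building each key part:

function paramToKeyPart(param) {
  // Objects are serialised so that two calls with equivalent objects share a cache entry
  return _.isObject(param) ? JSON.stringify(param) : param.toString();
}

getCacheKey would then call paramToKeyPart(param) in place of param.toString().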
We now have our in memory cache object, and a function that will create us cache keys - so we next need to glue them together so that we can populate the cache and check the cache before running any functions. Let's create a single function to have this job:
function getResult(functionName, params, functionToRun) {
  var cacheKey = getCacheKey(functionName, params);
  var result = cache[cacheKey];
  if (!_.isUndefined(cache[cacheKey])) {
    // Successful cache hit! Return what we've got
    return result;
  }
  result = functionToRun.apply(this, params);
  cache[cacheKey] = result;
  return result;
}
Our getResult function does the job of checking the cache, and only actually executing our function if nothing is found in the cache. If it has to execute our function, it stores the result in the cache.
Its parameters are: the name of the function being called (functionName), an array of the parameters being passed to it (params), and the function itself (functionToRun).
Our getResult function is now in place. So let's wire up getCalendarSettings with it:
function getCalendarSettings(month, year) {
  return getResult('getCalendarSettings', [month, year], runGetCalendarSettings);

  function runGetCalendarSettings(month, year) {
    var calendar = getCurrentCalendar();
    // Just a call to underscore to do some filtering
    var year = _.findWhere(calendar.years, { year: year });
    var month = _.findWhere(year.months, { month: month });
    return month;
  }
}
We've now updated getCalendarSettings to call getResult and instead return the result of that function. We're also exploiting JavaScript's variable hoisting to use the runGetCalendarSettings function before it has been declared. Our function is now fully wired up with our in memory cache, and we'll save unnecessary computation that has already been previously completed.
This code could be improved upon (for one thing, the cache lives in memory and never expires), but it demonstrates the pattern. For reference, here is the complete service:
function calendarService() {
  var cache = {};

  function getCalendarSettings(month, year) {
    return getResult('getCalendarSettings', [month, year], runGetCalendarSettings);

    function runGetCalendarSettings(month, year) {
      var calendar = getCurrentCalendar();
      // Just a call to underscore to do some filtering
      var year = _.findWhere(calendar.years, { year: year });
      var month = _.findWhere(year.months, { month: month });
      return month;
    }
  }

  function getResult(functionName, params, functionToRun) {
    var cacheKey = getCacheKey(functionName, params);
    var result = cache[cacheKey];
    if (!_.isUndefined(cache[cacheKey])) {
      // Successful cache hit! Return what we've got
      return result;
    }
    result = functionToRun.apply(this, params);
    cache[cacheKey] = result;
    return result;
  }

  function getCacheKey(functionName, params) {
    var cacheKey = functionName;
    _.each(params, function(param) {
      cacheKey = cacheKey + '.' + param.toString();
    });
    return cacheKey;
  }
}
The proper way to get around this is by programming in a way that is considerate of how the Node.js / JavaScript runtime works.
However, you should also be running your Node.js application in production with this in mind to get the absolute best performance that you can. To get around this limitation, you need to do some form of clustering.
You can run your Node.js app in production by using the "node" command, but I wouldn't recommend it. There are several Node.js wrappers out there that will manage the running of your Node.js app in a much nicer, production grade manner.
One such wrapper is PM2.
In its most basic form, PM2 (short for Process Manager 2) will keep an eye on your Node.js app for any crashes, and will attempt to restart the process should it crash. Crucially, it can also be configured to run your Node.js app in a clustered mode, which enables us to take advantage of all of the cores that are available to us.
PM2 can be installed globally via npm:
npm install pm2 -g
Helpfully, we don't need to change any of our application code in order to have PM2 cluster it.
PM2 has an optional argument, -i, which is the "number of workers" argument. You can use this argument to instruct PM2 to run your Node.js app in an explicit number of workers:
pm2 start app.js -i 4
Where 4 is the number of workers that we want to launch our app in.
However, I prefer to set the "number of workers" argument to 0, which tells PM2 to run your app in as many workers as there are CPU cores:
pm2 start app.js -i 0
Et voilà!
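If you'd rather not remember command line flags, PM2 can also be driven from an ecosystem file. Here's a minimal sketch - the file name is PM2's convention, but the app name and script are illustrative:

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'my-app',       // illustrative name
    script: 'app.js',
    instances: 0,         // 0 = one worker per CPU core
    exec_mode: 'cluster'  // run the workers through Node's cluster module
  }]
};

You can then start everything with pm2 start ecosystem.config.js.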
Moving the code and the data needed to run this website was made easy by docker and a Wordpress plugin called UpdraftPlus. For additional testing, I just tweaked my local hosts file to simulate the DNS change.
I also needed to move my SSL certs. I could have dropped the site back into plain http and requested the certs again using certbot, but I decided against this as it would be more of a hassle.
You need to move the contents of two directories and one file in order to keep your site running SSL and so that certbot on the new server is aware of how to renew your website's SSL certs.
1: Copy the folder:
/etc/letsencrypt/live/yoursitedomain.com
2: Copy the folder:
/etc/letsencrypt/archive/yoursitedomain.com
3: Copy the file:
/etc/letsencrypt/renewal/yoursitedomain.com.conf
4: Copy the folder:
/etc/letsencrypt/accounts/acme-v01.api.letsencrypt.org/directory/someguiddirectoryname
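A rough sketch of the copy itself, assuming SSH access to the new server and the placeholder domain above. The live/ directory contains symlinks into archive/, so use something that preserves them (rsync -a or cp -a):

rsync -a /etc/letsencrypt/live/yoursitedomain.com newserver:/etc/letsencrypt/live/
rsync -a /etc/letsencrypt/archive/yoursitedomain.com newserver:/etc/letsencrypt/archive/
rsync -a /etc/letsencrypt/renewal/yoursitedomain.com.conf newserver:/etc/letsencrypt/renewal/
rsync -a /etc/letsencrypt/accounts/ newserver:/etc/letsencrypt/accounts/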
LetsEncrypt community discussion thread:
Most of this bad press is centred around security concerns. Many of these concerns are valid, but need not be a concern of yours if you are intending to run WordPress in production. You just need to be a responsible webmaster. In this post I'll list out some tips that will make your WordPress install robust and fast.
This is the most important security concern that you need to have. WordPress even makes updating your install super easy. You don't need to log into any servers, you just need to log in to the admin tool and head over to the dedicated updates page. From there you can simply press a button to get all of your plugins and WordPress itself updated. Once your install is fully updated, you'll see a nice clean page like this, telling you that there is nothing to update:
WordPress update screen telling me I'm fully updated
In the same way that you keep your laptop or PC up to date, you should be keeping your WordPress install up to date.
WordFence is a popular security plugin that will offer you some protection against trending attacks. It's a plugin that you should install, but it does not absolve you of all of your security responsibilities. You should still be regularly updating your server's OS and any libraries installed on it.
Akismet is a very popular WordPress plugin that is essential if you allow commenting on your website through WordPress. Akismet will block shedloads of spam posts to your site, and you won't even need to look at them. I don't trust 3rd party services like Disqus, so this was an essential plugin for me.
WordPress has a few moving parts. Some of those parts are held in files, others in a MySQL database. You could periodically back these two up manually, or you could take advantage of one of the great WordPress plugins that you can use to automate your backups and make it super easy. An excellent one is Updraft Plus. This plugin can be set to regularly backup your entire WordPress site and can even store the backups in a cloud file service like Dropbox.
A cache plugin will improve the load speed of your site. It will save database calls, and will instead pull data directly from memory. A popular plugin is WP Super Cache. And remember, a quick load time can mean that search engines rank you higher, and your visitors will love you.
Again, this will give you a speed advantage and will save on your bandwidth use. A popular plugin is WP Smush. This plugin can be set to batch compress all images in your site, and can be used to compress images as and when they are added to your site.
How you've built your WordPress site will affect how you do this. If you have customised your WordPress templates or made your own theme, you should introduce a step in your build process to bundle and minify your JS and CSS.
If you are just using a 3rd party theme that you haven't customised a lot, you should grab a plugin to bundle and minify your JavaScript and CSS. A plugin that I've had some success with is Better WordPress Minify. You may have to tweak its settings slightly to make sure it doesn't break any of your other plugins that are rendered out on the UI (e.g. a source code highlighter plugin).
The standard install of WordPress doesn't use the latest version of jQuery. Depending on the users that you'd like to support, you may want to update to the latest version of jQuery. You can do this in your build process, or you can do this with a plugin, like jQuery Updater.
Over the last decade or so, we saw a worrying trend in the Broadband market - we lost a lot of competition. This happened as a few bigger corporations entered the broadband market and consolidated their market share by buying up and closing smaller and often very good broadband operators.
Remember Bulldog internet? Well, they got eaten by TalkTalk. Remember BE internet? They got eaten by O2. Who then got eaten by Sky.
Bulldog and BE internet were once, very well regarded and popular internet providers. I'll let you do your own research on what TalkTalk and Sky's customers currently think of them.
Over the last year or so, this trend has reversed a bit, and we've had a few of the newer entrants trying to push themselves in, for example, EE and Vodafone.
EE and Vodafone are mobile network operators and that is where they do the majority of their business. Both offer some fairly competitive broadband packages, but for some odd reason, choose not to bundle anything else in their broadband packages. So two massive companies that offer mobile phone and internet services, don't offer any packages that link the two. Huh?
I cannot understand why they would not do this. Consumers would benefit from getting better deals, and EE and Vodafone would benefit by getting customers that were more embedded into their services. The broadband and mobile services offered by these businesses are essentially treated as two separate entities. When I couldn't find any combined broadband and mobile deals online, I reached out to their online sales staff. Both EE's and Vodafone's sales responded with "You're talking to broadband sales, I can't help you with mobile sales".
I eventually reached out to both companies on twitter - EE actually will throw 5 gigs of data onto your phone package, but that isn't great for someone like me - and they don't actually shout about that offer anywhere.
EE - you are missing a trick. Vodafone - you are missing a trick. Get some packages that link the two and train your staff on all consumer products. Don't treat your broadband and mobile offer as two totally different things. As a potential customer, don't bounce me between departments if I want to talk about buying broadband and mobile.
In Europe, many people use the same provider for their TV, broadband, and family mobile packages. There is no reason why this sort of offer wouldn't be as popular in the UK.
So, we've now got a pretty good 4G network up and down the country - however unlimited mobile data packages have become rare and expensive.
I'm currently on an old Three unlimited data package. It costs me £23 a month. If I wanted to take out that package now, it would cost me £30. When I took my package out, it was one of the most expensive. It's now one of the cheapest.
Worryingly, Three is now traffic shaping and chipping away at net neutrality by offering up packages that have data limits, but let you access some services in an unlimited fashion. They call it "Go Binge", and claim that it offers you access to Netflix and some other smaller TV streaming services. These are treated as an option on mobile packages:
I'd rather they were just into the business of offering up data, not offering up *some* data. Also, this is starting to look like some of the mobile phone contracts offered up in countries where there are no net neutrality laws.
Facebook, twitter and whatsapp only unlimited in certain packages
Currently no one offers up unlimited data except for Three - and that'll cost you £33 a month.
Data has gotten more expensive on mobiles. We've got more big companies offering broadband, but aren't using their significant market presence in other areas to offer up better deals.
Anyway, I run my own GitLab instance on a box that only has 4 gigs of RAM. GitLab also has to share these limited resources with a few other webapps.
I noticed that GitLab was one of the biggest consumers of the RAM on my box, and did some research into reducing its memory footprint.
Open the gitlab config file, which should be located at /etc/gitlab/gitlab.rb.
##! **recommend value is 1/4 of total RAM, up to 14GB.**
postgresql['shared_buffers'] = "256MB"
I set this at 15 instead of 25 as I don't have that many commits going on.
sidekiq['concurrency'] = 15 #25 is the default
prometheus_monitoring['enable'] = false
Run:
gitlab-ctl reconfigure
You should then run through a few commits and check gitlab is running smoothly.
I was reasonably happy with the service I got.
However, there are some downsides when you don't host yourself:
One of the awesome things about Wordpress is the number of themes and plugins that are out there. When using the hosted platform at wordpress.com, you do not have full administrative control over wordpress, so you can't just install plugins as you wish. And those that use wordpress a lot know that there are some essential plugins, like WP Smush.
If you want to install a non standard theme on a hosted wordpress.com site, you can't. You can however, pay for the option to install one of their premium themes. So you can't really style your site in the way you want, without getting your wallet out.
Also - ads. wordpress.com hosted sites "occasionally" show ads to users. Here's the thing - I really, really distrust ad networks. Aside from opening your site up to becoming a vector for Malvertising attacks and the creepy level of ubiquitous tracking, I also really dislike just how invasive ads on the web have become. I understand the need to monetise content on the web, but there are better ways of doing it than just indiscriminately littering ads around content.
In fact, this site is itself monetised where appropriate. Some articles contain useful and relevant affiliate links - but this may actually have contravened wordpress.com's terms and conditions. So I was also risking my site randomly getting yanked offline.
I'm a web developer. It's what I do, day in, day out. I want everything that I do to follow web best practices - and a site hosted on wordpress.com will not. Opening up the developer tools network tab in Chrome, and hitting a wordpress.com hosted site, will reveal a few things. Aside from A LOT of requests for tracking assets, there are several requests for unminified javascript files. Like this.
There are a few of these about, but I've really gone off cloud based solutions and didn't want to spend hours researching other providers.
I looked at a few, but saw that the migration path would be painful, especially if self hosted.
medium.com isn't self hosted. Ghost can be self hosted but isn't anywhere as easy as self hosting wordpress. It's also funny that the ghost vs wordpress page says "Ghost is simple!", and the ghost vs medium page says "Ghost is powerful!".
I do not trust a paid blog site to keep its pricing structure as is. I really don't want to be in the position where I need to suddenly pay more money to host, or to frantically have to migrate because some company decided to change their pricing structure.
So here we are, still running on wordpress, but this time we're self hosted. The migration was easy, and took me about 2 hours.
I hear you, along with everyone else that has been sucked up by the technology hype lifecycle. Wordpress does indeed get bashed a bit because there is an unfair perception of security problems around it. There are some things you should be doing if you are running a wordpress site in production to make it more secure. I'll address these things in a later blog post, but many of them will just be standard web security best practices.
I currently have an NGINX webserver running in front of this blog. The job of NGINX here is to handle the SSL traffic, decrypt it, and forward it onto the docker container that runs this blog in plain old http.
If you're going to do this, you need to make sure your NGINX config is setup to send the right headers through to wordpress, so that wordpress knows about the scheme the traffic came in on. So, in your NGINX config file, you'll need the following:
location / {
proxy_pass http://127.0.0.1:5030;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
}
That should be all you need. Wordpress has been around, and older blog posts seem to indicate that you may need some additional plugins. I didn't find that this was the case. Hope this helps.
It did however take me a while to work out how to adjust the screen brightness from the keyboard, after noticing that none of the function keys double up as a screen brightness adjustment.
Increase brightness: Fn + Del
Decrease brightness: Fn + Backspace
Enjoy!
To do this, you will need access to your domain name's DNS settings.
A naked domain is the domain without the "www." that you often see on websites. There are various reasons for using a naked url, that I won't go into in this post.
Jump into your domain name's DNS settings. Create a CNAME entry for awverify.yourwebsite.com, and point it to your azure domain (e.g. awverify.yourwebsite.azurewebsites.net). This tells azure that you are the owner of the domain.
Now, go into your Azure control panel and locate your web app.
Select "Buy Domains" and then "Bring External Domains":
You will then be shown a dialogue on the right with a text box where you can enter your naked domain name (e.g. yoursite.com - no www):
After you enter the naked domain, Azure will load for a minute whilst it checks for your awverify CNAME DNS entry.
Once verified, you can then point your actual domain to your Azure website.
Note: You can use a CNAME or an A record DNS entry to resolve the naked domain of your site. Both methods are listed below:
Once verified, Azure will reveal an IP address. This should show up just below the text box where you entered the domain name. If it doesn't show, wait a few minutes and refresh the entire page. The IP address should then be displayed.
Head over to your DNS settings and enter an A record for "*" resolving to the IP address listed in Azure. You should now have a working naked domain name.
Head over to your DNS settings and enter a CNAME record for "*" resolving to yourwebsite.azurewebsites.net. You should now have a working naked domain name.
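To summarise, the records end up looking something like this (the IP address is a placeholder for whatever Azure reveals, and exact record naming varies between DNS providers):

; Proves ownership of the domain to Azure
awverify.yourwebsite.com.   CNAME   awverify.yourwebsite.azurewebsites.net.

; Option 1 - point the naked domain at the IP address Azure reveals
yourwebsite.com.            A       203.0.113.10

; Option 2 - CNAME the naked domain, if your DNS provider allows it at the root
yourwebsite.com.            CNAME   yourwebsite.azurewebsites.net.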
The merits of using an A record vs a CNAME entry are not something that I will go into in this post. You can read more about the two DNS entry types here.
As well as having a naked domain work, you will probably also want your www to work as well. This can be done using the same methods above, but crucially you will need to tell Azure that you also have ownership of the subdomain as well:
e.g. In order to verify www.yourwebsite.com, you need to create a CNAME DNS entry for awverify.www.yourwebsite.com that resolves to awverify.yourwebsite.azurewebsites.net
In order to verify blog.yourwebsite.com, you need to create a CNAME DNS entry for awverify.blog.yourwebsite.com that resolves to awverify.yourwebsite.azurewebsites.net
Again, once verified, you are free to set up an A record or CNAME record DNS entry to point to your Azure Web App.
A few years back, Scott Hanselman wrote a blog post on how you could utilize the "Standard" tier of Azure Websites to save money by hosting multiple sites. Well, that was back in 2013, and the Azure offer has significantly changed.
Firstly, Azure Websites has now been merged under Azure App Service, along with a few other services. Here is a 5 minute video from channel 9 explaining what exactly is in Azure App Services.
So, how do I get that "Shared" tier multiple websites setup that Scott Hanselman originally blogged about? Well, the Azure App Service pricing details page looks like you can get there with the "Basic" tier, which is cheaper than "Standard":
Azure App Service offer. Screenshot grabbed in November 2015
And how do I actually set this up in my Azure portal?
Confusingly, what is priced as an Azure App Service basically means everything under the "Web + Mobile" under the "new" option in the azure portal:
Why? Because:
An App Service plan is the container for your app. The App Service plan settings will determine the location, features, cost and compute resources associated with your app.
More info here.
You get told this information when setting up your app service plan and location (not sure why it defaults to Brazil...)
So, an App Service Plan is basically the billable container for your apps.
So if you want to create a new web app under the same App Service, simply select it when setting up a new Web App.
This should help prevent your Angular App from running into memory problems in the browser. There are basically two approaches to this.
In directives that wrap some sort of plugin (e.g. Slick Slider), you need to listen out for the "$destroy" event and call that particular plugin's cleanup methods. In the case of Slick Slider, it's the unslick() method, but it could simply be a call to jQuery's remove() method, or you could just set the value of the html element to an empty string:
$scope.$on('$destroy', function() {
// Call the plugin's own api
$('.slick-slider').unslick();
// or call jQuery's remove function
$(element).remove();
// or, if you aren't using jQuery
element.html("");
});
When you create a watch on a scoped variable, or on an event in angular, the $watch function returns a function that when called, will remove the watch. Call this returned function when your scope is destroyed as it is no longer needed:
var unbindCompanyIdWatch = $scope.$watch('companyId', function() {
// Do Something...
});
$scope.$on('$destroy', function() {
unbindCompanyIdWatch();
});
The less binding going on, the fewer watchers there are.
If you render values in your dom using angular that you know are only going to be set once and will never change during the lifecycle of the page, it is a candidate for using one-time binding. The One-Time binding syntax is basically two colons - "::", and can be used a few ways in your html templates:
<!-- Basic One-Time Binding -->
<p>
  {{ ::test }}
</p>
<!-- Within ng-repeat -->
<ul>
  <li ng-repeat="item in ::items">
    {{ ::item.name }}
  </li>
</ul>
<!-- Within ng-show -->
<p ng-show="::showContent"> Some Content </p>
<!-- Within ng-if -->
<p ng-if="::showContent"> Some Content </p>
By specifying a property for angular to track an item within a collection by, you will prevent angular from rebuilding entire chunks of the dom unnecessarily. This will give you a performance boost which will be noticeable when dealing with large collections:
<ul>
  <li ng-repeat="item in items track by item.itemId">
    {{ item.name }}
  </li>
</ul>
Ben Nadel has an excellent post on track by that you should checkout.
Of course, you shouldn't need to tie this up with one-time binding, as track by would be pointless on a collection that never changes.
If some action happens that means that data that is loading is no longer needed (e.g. a navigation change), you should cancel the no longer required http requests. Most browsers limit the number of concurrent requests to a single domain. So, if your requests are no longer required, cancel them and free up those request slots.
You can do this by simply resolving the promise that you hand to $http as its timeout. Your requirements for when this cancellation needs to happen will be different for every app, so I would recommend that you write a httpRequestManagerService and marshal any http requests through it that you deem necessary. You can then resolve your promises based on some event - e.g. a navigation change event. Ben Nadel has a good post on cancelling angular requests.
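As a rough sketch of the underlying mechanism (the function shape and names here are mine, not from a real app): $http accepts a timeout property that can be a promise, and resolving that promise aborts the in-flight request.

function createCancellableRequest($http, $q, url) {
  var canceller = $q.defer();
  var request = $http.get(url, { timeout: canceller.promise });

  return {
    promise: request,
    cancel: function() {
      // Resolving the timeout promise aborts the underlying XHR
      canceller.resolve();
    }
  };
}

A httpRequestManagerService could hold onto these cancel functions and call them all on something like a $routeChangeStart event.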
On the surface, ng-show and ng-if produce the same result. However, under the hood, they behave slightly differently.
ng-show still renders your html no matter what. If it does not need to be shown, the html element with the ng-show directive will simply be marked with a display:none css style.
ng-if will completely remove the html element that contains the directive, along with all of its children.
There is obviously a cost to completely removing and re-adding entire chunks of html. However, if you are dealing with a lot of html, or the html within your ng-if contains a lot of angular expressions, I have found it to be more performant than ng-show.
My advice is to evaluate both in each case before making a decision.
Please feel free to throw in any more performance tips in the comments
]]>Intel Core i5 i5-4690K This is one of the best "bang for buck" CPUs that you can get at the moment. I was previously running an i7, but this new Haswell architecture i5 beats my old i7 comfortably across the board, and it runs cooler too.
Corsair CMY16GX3M2A1866C9R Vengeance Pro Series 16GB (2x8GB) DDR3 1866Mhz CL9 XMP Performance Desktop Memory I've been burnt in the past by cheaper RAM becoming unstable, so now I will never scrimp on RAM. This RAM supports Intel's XMP for overclocking, and has been enabled since day one without any issues.
MSI Z97 Gaming 5 Intel LGA1150 Z97 ATX Motherboard One of the cheapest parts of this build. I was very skeptical about getting a mainboard that does not have an integrated Intel NIC (this board instead opts for a Killer Networks NIC). My last mainboard had a Bigfoot Networks E2100 NIC, which out of the box was incredibly buggy and unstable. It was actually so unusable that I ended up disabling the TCP/IP capabilities of the card and letting the tested and reliable TCP/IP stack in Windows do its thing. The Killer Networks E2100 card is now basically abandonware: it does not work with newer games online, and until recently wasn't compatible with the iTunes store. However, the E2200 is current and is still getting plenty of attention from Killer Networks, and I haven't had any issues with it online so far. My advice would still be to go for a tried and tested Intel NIC if you can, although I'm yet to experience any problems with the E2200 Killer Networks card on this mainboard.
One of the best things about this mainboard is the BIOS, which has a fantastic user interface and gives you plenty of control over overclocking features, both simple and advanced. This piece of kit was fantastic value for money.
MSI NVIDIA GTX 970 Gaming Twin Frozr HDMI DVI-I DP Graphics Card The more graphics memory, the better. This card lets me comfortably play the newest games (including GTA5) with the graphics settings all maxed out. It also runs quietly.
Corsair Hydro Series H55 All-In-One Liquid Cooler for CPU This was a surprise win for me. I previously used a Be Quiet CPU fan, which was nice and silent and kept my CPU nice and cool. However, this ready-to-rock water cooler from Corsair really impressed me, not just on the noise levels, but also on the cooling capabilities. For the first time in years, my CPU will happily idle at 25°C.
Crucial CT512MX100SSD1 2.5 inch 512GB SATA III Solid State Drive The OS hard drive caused me great pain originally. I started this build off running the OCZ Arc 100, a 480 Gigabyte SSD priced very cheaply at £120. It was simply too good to be true: within a week of the new build, the SSD suffered some serious file corruption and required a reinstall of Windows, which would only go on after a hard SSD wipe (a Windows installer format was not enough). I decided not to persist with the OCZ Arc 100, as a quick bit of research revealed that it is an unreliable drive with a number of known problems. Have a read about the problems other people have had with this drive over at newegg.com. You get what you pay for, so I sent the defective OCZ Arc 100 back for a refund and am instead running a more highly rated, but more costly, Crucial SSD.
Having a hard drive fail on you is bad enough, but it's that much more hassle when it's the hard drive that contains the operating system for your battlestation on it. The Arc 100 was the only let down of this build, and it did come as a surprise as I had previously run a smaller OCZ SSD without any problems.
I'm running Windows 8.1, which on the 31st of July will become Windows 10 :-)
I also run a Xubuntu VM within VMware Player for my golang playtime. If you're an Ubuntu user, I recommend that you give Xubuntu a try. You might just prefer XFCE, like I do.
]]>SELECT
    DC.Name
FROM
    sys.schemas S
    INNER JOIN sys.objects O
        ON S.schema_id = O.schema_id
    INNER JOIN sys.default_constraints DC
        ON O.object_id = DC.parent_object_id
WHERE S.name = 'SCHEMA_NAME'
    AND O.name = 'TABLE_NAME'
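Once you have the constraint's name, dropping it is a one-liner (the names below are placeholders for your schema, table and the constraint name returned by the query above):

-- Placeholder names - swap in your own schema, table and constraint name
ALTER TABLE SCHEMA_NAME.TABLE_NAME DROP CONSTRAINT DF_TABLE_NAME_SomeColumn;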
]]>So I've spoken about this several times in the past, and I'll speak about it again, but I firmly believe that software development suffers from trend hype lifecycles in a massive way:
Technology Trend Hype Lifecycles
I do however think that there is one key thing missing from the above diagram - the "Anti Trend". Sometimes a technology will come along that is genuinely popular and useful for a real reason - it actually does a job, is well received and supported, and gets pretty popular. In software development, it's something that can make our difficult job easier. The "Anti Trend" refers to the detractors who, I suspect, want to be the first to "bail out" on a technology.
I'm all for critique of a technology or a way of solving a problem, but your arguments need to stand up.
I had a look into this in my post "That CSS / Javascript library isn’t as big as you think", where I pointed out that it was odd that those criticising jQuery and Twitter Bootstrap complained about the size of these libraries, but seemed to be ignorant of the basics - free cdns and gzipping.
I also had a look into the Anti Trend in my post "In defence of the RDBMS: Development is not slow. You just need better tooling", where I pointed out that one of the key criticisms of relational databases was that development against them was slow. This is only the case if you don't bother using any tooling.
Different sections of the development community run through the technology trend hype lifecycle at different speeds, and the Javascript community runs through the trend lifecycle at breakneck speed.
So, right now, the Anti Trenders are having a pop at Angular (a few months ago it was jQuery). The arguments are usually the same and genuinely do not stand up in a real world situation. I've seen blog posts from people that I had previously respected quite a lot, critiquing Angular in a seriously misplaced manner:
If you don't like how Angular can be used, you probably wont like how Ember, React, and Knockout can be used either. These front end frameworks exist and are used for a reason - to solve the problem of getting a client and their data around an application nicely and seamlessly.
What shocked me about the Anti Trend blog posts was that they revealed a level of ignorance on the side of the authors. For me, someone with over a decade's worth of experience of publishing material on the web and developing real web applications (you know, ones that have requirements, costs and deadlines, and need to actually work and do something), front end frameworks like Angular and Knockout solved a very real problem. Both technologies have helped me to provide richer client experiences that were more testable, and they helped me get there quickly and keep the customers and the users happy.
It's an age old technique that can be applied to just about any argument. "I'm over here with the cool kids, you're over there with the weirdos". You might be wondering what I'm on about, but it's actually an argument that I've seen in an anti Angular blog post:
I’d say Angular is mostly being used by people from a Java background because its coding style is aimed at them
The above is 100% verbatim from an anti Angular blog post. "People that like Angular must be uncool enterprise developers". Sorry dude, some of us work in the enterprise and build real world line of business applications. We have bills to pay and don't live rent free working on speculative startup ideas.
The LifeInvader social network HQ from GTA5. Your speculative start up also could get this big
If you hadn't heard, Angular 2 will bring in some changes. If you didn't like Angular in the first place, why are you crying for the loss of version 1? And why is this such a big issue in the Javascript community? Did you know that ASP.Net will undergo significant changes with the next release (called vNext)? The .net community is generally excited for this release, and isn't mourning the loss of the old versions.
This reddit user summed up this argument nicely:
One of the best things about being a developer is getting challenges thrown at you every day, and thinking of solutions to those problems. Can you imagine having someone on your team that was 100% negative, and constantly stopped their work to call you over to tell you that they had discovered some problem, and that there was therefore no way around it without throwing the whole thing out and starting again? It would be pretty annoying, right?
Well, if you're going to criticise front end frameworks and offer no alternatives or other solutions, I'm going to assume that you are advocating the use of A LOT of jQuery instead (which I'm guessing you also think is too bloated, and that you'd tell me to write bare metal javascript).
It's silly isn't it? I'm not saying it couldn't be done, I'm saying it would be hard, your code would suck, would be difficult to test, and would take an eternity to deliver.
Have your own discussions. Talk to devs that you know in your network that may have used the technology in a real world situation. If you don't know any, find your local Web Developer meetup and get talking to people. Build a small prototype and form your own opinion. Don't just follow the trend, or the anti trend. What is your project priority? Delivery? Or something else?
It's not unreasonable to consider blog posts on the subject, but please consider whether the author has a valid opinion. Do they actually build and deliver real world apps, or do they now make their money from blogging, podcasting and running a training company on the side? Some good places to go for some real world insight (as in real actual code problems) into AngularJS are:
In the above links you will quickly discover real world Angular challenges and how they were overcome. It will also give you an indication of how well trodden the road is before you decide to set off down it.
If you're going to bash Angular, think:
Form your own opinion from real world experiences.
]]>BT used to supply Infinity customers with two pieces of equipment.
However, as the technology has improved, BT now supply Infinity customers with a single combined BT Home Hub (5 onwards), which contains both a router and a modem.
The nice thing about the old approach is that it meant it was easier to use your own hardware, which means more control.
Well, the good news is that you can still ditch the BT Home Hub, and use whatever router and modem combo that you like.
You essentially have two options:
This is the option I've gone for, and here's what I did.
Search on ebay for one of these:
A BT Openreach VDSL modem. This link should get you what you are after.
Then, get yourself the best router that you can afford that supports PPPoE. I've gone for the Netgear R6300-100UKS AC1750 Dual Band Wireless Cable Router. It's awesomely powerful and has killer WiFi. In fact, the WiFi is so good that I've ditched my homeplug powerline WiFi boosters (5G ftw).
To get your internet working, you then just need to do the following.
It's essentially the setup that is described in this diagram, just replace the BT Home Hub with a PPPoE router of your choice:
You can now get combined VDSL routers and modems. In fact, they are easier to buy than standalone VDSL modems. Here's a good search listing. The setup will be the same as the above, just without steps 1 & 2, and you will obviously connect the router directly to the phone line.
I would personally go with option 1.
Option 1 gives you much more flexibility in terms of the physical location of your router. So, rather than everything being stuck near your master socket, you can have the modem next to the master socket and instead have your router somewhere more useful, like right in the middle of the house. All you then need to do is connect the router to the modem using an ethernet cable.
If you have any other tips, please post them in the comments.
]]>As someone that was used to traditional bulletin boards, the idea that the most valuable content appeared at the top of a comment thread actually blew my mind.
Rather than the comments all being treated as equal, and just being displayed in chronological order, they were instead shown in the order that the community decided.
I loved the fact that someone might create a thread that linked to some content, and then the top voted comment was usually someone who was an expert in that field and could shed more light on the content. Users could then collaborate with them and garner more information.
The community decides a comment's value through a system of upvoting and downvoting.
Here's the problem - what one user thinks is upvote worthy, another user may not think is upvote worthy. Now this isn't a big problem for smaller communities - e.g. a subreddit with a few hundred subscribers about something specific, like a particular model of a car. This is because its users all have a shared interest and will generally agree on what should be upvoted and what shouldn't.
When a community has a broader appeal (e.g. news, or funny images), this model is flipped on its head. There is no longer a united community present. The community has fewer common interests. So what content will appeal to most users? Something simple, usually. Something that is quick to read and easy to understand. This is usually an attempt at being funny or witty, and might make you chuckle for a moment, but will not add any more insight into the subject in question. These will get the highest number of upvotes, leaving anything of genuine insight or value to sink to the bottom and drown in a sea of pointless remarks.
This can be illustrated very easily. Any thread on a popular subreddit will generally take the following format:
1. Thread posted linking to an image or article
2. Top voted comment is a statement of under 20 words attempting to be funny.
3. A deep thread of unfunny attempts at being witty is anchored to the top voted comment, as other redditors eagerly jump in, in an attempt at getting a few upvotes of their own.
This breaks the upvote and downvote model. The top voted comment no longer adds any real value to the discussion, but instead distracts you. Comment voting systems are now bringing the noise back, and in a much worse way than chronological comments ever did. In wide appealing communities, they are a race to the bottom. The user that can write the most unintelligent, slapstick comment in the fewest words wins the fake internet points.
You now need to scroll down through the comments and hunt for the points of genuine insight that actually add value to the discussion.
As the community collectively gets more and more unintelligent, this problem gets worse. Users will even downvote comments or linked articles that are factually accurate simply because they dislike them. This community is now a horrible place to be.
It isn't just a Reddit problem. Upvoting systems have been bolted onto many discussion mediums as the internet woke up to their auto noise filtering benefits.
Have a look at this comment thread on a Guardian article about the ongoing shitstorm at Tesco PLC. The TL;DR is that Tesco have managed to overstate their profits by £250 Million, causing £2 Billion to be wiped off the company's value, and resulting in many senior managers getting suspended.
The comment thread is a festival of eye rolling idiocy, as the idiots take over and have an idiotic circlejerk free-for-all of bullshit, probably whilst on their lunch breaks at work. We can easily put the comments into 4 categories:
Thanks - your comment is funny but adds no value. 40 Upvotes at the time of writing.
This comment is very misinformed, claiming that the whole episode is an attempt by Tesco to get their corporation tax bill down. Let's just go over that for a minute - Tesco deliberately wiped billions off their share price and suspended lots of senior managers, so that they could avoid tax? Oh, and by the way, if you make more profit you pay more corporation tax.
101 upvotes at the time of writing. So that's at least 101 people that have read the comment and had some sort of belief in it. The upvote count is telling you that this comment has some sort of value. It doesn't. It's worthless.
76 upvotes for the person telling us about their food shopping bill. Thanks. This comment reminds me of the "I brought my own mic!" line from a classic Simpsons episode.
Sitting there, smugly telling us irrelevant information that we don't need to know. It's almost spam.
4. Comments that actually contribute to the current discussion
And there we have it - this comment spurred a valuable discussion thread related to the content of the topic. 18 measly upvotes.
One of the fixes is to remove comments. On a news article that states factual information about something that has happened, how much value can the community really add? Well, I think the community can always add value and insight, especially for those that will want to dig deeper into a particular subject or story, so I wouldn't like to see this happen.
I've got a few ideas how you could filter the noise out of discussion threads.
Rather than simply upvoting or downvoting, why don't we apply a tag instead? Upvoting just feels too broad - if you upvote something it may be because you find it funny, because you agree with it, or because you found it insightful. So what if you could drag an icon onto a comment that represented how you felt about it instead?
So looking at our first comment from above:
This would have 40 "funny" tags instead of 40 upvotes.
And our final comment:
Would have 18 "insightful" tags.
You could then even put comment threads into "funny" mode, where the comments are sorted by the highest number of "funny" tags. Likewise with "insight" mode, where the comments would be sorted by the highest number of "insight" tags. This is similar to how canvas used to work, before Christopher Poole pulled it.
I think this could work, so I'm going to see if I can create something that will use this commenting system. Watch this space.
]]>Having just finished off a sprint with a fairly large team, I decided to install the NDepend 5 Visual Studio 2013 extension and do some digging around the solution. Whilst the information that NDepend 5 has shown me has been a little depressing, it's also compulsive reading and does bring out the "engineer" within you, as you find yourself questioning the point of classes or methods, and thinking about how you could refactor away NDepend code violations. It's a great exercise to pair up with another dev on your team and run through the code violations together - I found I could bounce code refactoring ideas off a colleague in my team. NDepend could even be used here to help run code reviews, as many of its out of the box rules follow the SOLID principles, rather than the opinion of one member of your team.
Once you've installed NDepend, get ready to let it loose on your codebase. Brace yourself for the results - remember, it isn't personal...
Dashboards add a huge value to any application (I know because I have built a few), and what can be particularly valuable is a set of visual indicators that quickly summarise some metric. Aside from NDepend's Red / Green light that appears in the bottom right of Visual Studio (which is a very high level indicator of the state of your codebase and is a nagging reminder that you've got some critical code violations to sort out), the dashboard gives you more detailed information. You can get to the dashboard by clicking on NDepend's indicator icon, or by selecting "Dashboard" from the window that is shown after NDepend has had a good old inspection of your codebase.
The top half of the new Dashboard shows you some cool high level stats about the DLLs that you just analysed. If you are feeling brave, you could even share some of these stats with the project sponsors, but naturally only after you've got your code coverage up. The dashboard also shows you the current state of your codebase against NDepend's code rules, and any custom rules that you may have added, with the most critical violations shown at the top.
Before I continue looking into the new features of NDepend 5, it's worth noting one of the key things that you need to do in order to unlock the full power of NDepend. Each time you analyse your codebase, NDepend captures information about the current state of it. This capture is a time capsule of how the code looked, and can be used as a "baseline" for comparative analysis. The more you analyse, the more baselines you will have.
By default, the data gathered from each analysis is stored in an "NDependOut" folder that lives at the same level as your solution file. I would recommend committing this folder to source control so that other members of your team can make use of these files, and so that they are treated with the same importance as your actual codebase.
A new addition to NDepend 5 is the ability to restrict code analysis to recent code only. This will help you to hide the noise and focus on the code that your team is writing right now. On a codebase that has got a few years behind it, and is of a significant size, it would be a little unrealistic to refactor all of the older code. Even if you are using other coding standards add-ins such as Style Cop, it's easy to get lost in a sea of broken rules when you are looking at a codebase that has existed for several years.
In order to define what your recent code actually is, you will need to provide NDepend with a "baseline", if it cannot find the data on its own. This will tell NDepend about the state of the code at a previous time. The state of the codebase will be captured by NDepend every time you press the big "Analyse" button, so remember, the more you analyse, the more data you will have. You will be prompted for this information, if needed, when you check the "Recent Violations Only" checkbox.
In my case, I found that it was best to analyse the codebase at the start of a sprint, and then again as the sprint approached its final days, in order for there to be enough time for the team to do any refactoring that we felt was necessary on any code written during the sprint.
You could even analyse after each feature request or bug was closed off in order to get more finely grained data.
As the baselines offer an insight into the state of the code, NDepend 5 comes bundled with a set of graphs that detail how your codebase is performing between analyses. The default graphs will give you an insight into how your codebase is changing in terms of the number of lines of code (I personally enjoy simplifying and deleting code, so this metric is quite an important one for me), the number of rules violated, code complexity, and reliance on third party code. You ideally want to see your codebase quality getting better - these graphs give you an instant insight into this.
Or at least stop it from growing.
The best way to get a feel for NDepend 5 is to have a play with the 14 day trial version. On a team of any size, NDepend really does show its value, and will push you and your team into writing better code.
]]>Currently, in the world of Web Development, the community seems to be uber conscious about the potential over use of Javascript and CSS libraries, with the fear that this overuse is bloating the web. I naturally agree with the core sentiment - having a page that requires several different Javascript libraries makes me wince a little, and other things would need to be considered (what clients will be using this web app, etc) before a decision could be made about these libraries.
However, lots of developers in the community are taking this Kilobyte conscious attitude and using it to put other devs off using popular and well established libraries.
A few months ago it was jQuery, as someone developed a tool that attempted to evaluate if your use of jQuery was necessary - youmightnotneedjquery.com. Unfortunately, this view caught on, and presumably a few poor souls got sucked in and attempted to re-invent a heavily proven and well tested wheel, only to hit those quirky cross browser pot holes that jQuery normally lets you ignore.
Right now, it's Twitter Bootstrap that is getting the attention of the devs that claim to be Kilobyte conscious.
Here's the problem. The size argument doesn't stand up in the case of jQuery, or in the case of Twitter Bootstrap. Nearly all arguments that you will see complaining about the size of these two libraries are bogus.
Well, one of the best trends to have ever swept the world of Web Development, was the front end optimisation push a few years ago, driven largely by Steve Souders' fantastic book, High Performance Websites.
Did we forget what we learnt in this book? Remember gzip? The bogus size arguments will always look at the size of the raw minified files. If you are sending these files over the wire in a production environment, you're doing it wrong. All production web servers should be sending as much content as possible to the client gzipped. This will significantly reduce the amount of data that needs to go between the server and the client (or "over the wire"):
The above screengrab is from the network tab in Chrome developer tools, when accessing jQuery 2.1.1. The darker number in the "Size" column is what actually came down over the wire - 34KB of gzipped data. The grey number (82.3KB) is the size of the uncompressed file. Gzipping has saved the server and the client nearly 50 Kilobytes in data transfer.
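If you're hosting on IIS, turning compression on is a tiny web.config change - a minimal sketch, assuming the static and dynamic compression features are installed on the server:

<system.webServer>
  <!-- Compress static files (js, css) and dynamic responses before they go over the wire -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>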
If, for whatever reason, you can't use gzipping in your production environment, then use the CDN. This will make your site even quicker as visitors to your site will likely have their cache already primed, saving you even more of those precious Kilobytes. And, it will also be gzipped.
What's worse is when the size argument is incorrectly used to pit one library against another. I've even seen blog posts blindly claiming that Web Developers should use Foundation over Twitter Bootstrap, because "Foundation is more lightweight". In fact, Foundation's core CSS file is 120KB gzipped, whilst Twitter Bootstrap's is 98KB.
Given that it is so easy to debunk any size arguments, I actually think that our old friend, the Hype Cycle, is playing a part. Could it be that the popularity of jQuery and Twitter Bootstrap has made them uncool? I think it's very possible.
Well, in Shelbyville they drink Fudd.
My point here is that we seem to have forgotten the point of these libraries - to help us reduce our biggest cost: development.
Just take the jQuery strapline into consideration:
"Write less, do more".
And one of the Twitter Bootstrap straplines:
"Bootstrap makes front-end web development faster and easier. It's made for folks of all skill levels, devices of all shapes, and projects of all sizes".
Everyone is correct to consider the impact of library sizes, but please take a measured approach in a world where bandwidth is increasing for most clients, and development costs remain high. Real world projects (that are my bread and butter) have deadlines and costs.
Like Hanselman said - "The Internet is not a closed box. Look inside." Look inside and make your own decision, don't just follow the sentiment.
]]>Anything that I publish in this blog is a result of some real world coding. I don't just play around with small, unrealistic demoware applications. I write real world applications that get used by lots of users.
With experience comes wisdom, and in software part of that wisdom is knowing when to employ a certain technology. There is almost never a right or wrong answer on what technologies you should be using on a project, as there are plenty of factors to consider. They usually include:
The above questions will even have varying importance depending on where you are currently working. If you are working at an agency, where budgets must be stuck to in order for the company to make a profit, and long term support must be considered, then the last two points from the above have a greater importance.
If you are working in house for a company that has its own product, then the last two points become less important.
Just about the only wrong reason would be something like:
This doesn't mean you should be using it on all projects right now. This just means that technology x is something that you ought to look into and make your own assessment (I'll let you guess where it might be on the technology hype life cycle). But is this project the right place to make that assessment? Perhaps, but it will certainly increase your project's risk if you think that it is.
One of the technologies that we are hearing more and more about are NoSQL databases. I won't go into explanation details here, but you should be able to get a good background from this Wikipedia article.
Whilst I have no issues with NoSQL databases, I do take issue with one of the arguments against RDBMSs - that development against them is slow. I have now seen several blog posts arguing that developing with a NoSQL database makes your development faster, which would imply that developing against a traditional RDBMS is slow. This isn't true. You just need to improve your tooling.
Here's a snippet from the mongo db "NoSQL Databases Explained" page:
NoSQL databases are built to allow the insertion of data without a predefined schema. That makes it easy to make significant application changes in real-time, without worrying about service interruptions – which means development is faster, code integration is more reliable, and less database administrator time is needed.
Well I don't know about you guys, but I haven't had to call upon the services of a DBA for a long time. And nearly all of my applications are backed by a SQL Server database. Snippets like the above totally miss a key thing out - the tooling.
Looking specifically at .net development, the tooling for database development has advanced massively in the last 6 years or so, and that is largely down to the plethora of ORMs that are available. We now no longer need to spend time writing stored procedures and data access code. We now no longer need to manually put together SQL scripts to handle schema changes (migrations). Those are huge real world development time savers - and they come at a much smaller cost, as at their core is a well understood technology (unless your development team is packed with NoSQL experts).
Let's look specifically at the real world use. Most new applications that I create will be built using Entity Framework code first. This gives me object mapping and schema migration with no difficulty.
It also gives me total control over the migration of my application's database:
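As an illustration of that control (the migration names here are made up), the day-to-day workflow boils down to a couple of Package Manager Console commands:

# Scaffold a migration from the latest model changes, then apply it
Add-Migration AddCustomerEmailColumn
Update-Database

# Or wind the database back to any earlier migration
Update-Database -TargetMigration InitialCreate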
My point is this: whilst migrating a schema may not be something that you would need to do in a NoSQL database (although you would need to handle these changes further up your application's stack), making changes to an RDBMS's schema just isn't as costly or painful as is being made out.
Think again before you dismiss good ol' RDBMS development as slow - because it hasn't been slow for me for a long time.
]]>I've been dealing with shitty, unpolished crappy software all damned morning.
All because people that are too intelligent aren't stepping back from what they are doing and running a "real world" acceptance test.
I'm now the owner of a Nexus 4. The phone is blazingly fast, has fantastic battery life and is largely free of bloat. Today I decided to try and put a few mp3s on my phone. So, I connected my phone to my desktop running Windows 7x64.
I can accept that every now and then, a device will not interface with my desktop pc. Usually the device vendors have tested this out and provided a CD or a link to download the drivers. Unfortunately, between LG and Google, no one bothered to test this out. Some intense googling should send you to the Google USB driver download page. Unfortunately these are x86 drivers only.
Yeah you heard me. x86 only. Let's just have a think about that. 3 years ago, in 2010, Microsoft announced that 50% of all Windows 7 installs were 64 bit. Which way would that number have gone in the 3 years since 2010? Yet Google only ships 32 bit versions of their USB drivers.
Luckily for all of us 64 bit users out there, someone else (not Google or any of the huge corporations behind this phone) has compiled the drivers for 64 bit Windows. You can download them here.
It took me a while to get to the bottom of this, but it is because of the protocol that the phone has been set up to use - Media Transfer Protocol.
In order to use this on Windows, you need to have Windows Media Player installed. You might laugh and think that everyone has Media Player installed, but actually many European Windows installs don't.
To download Windows Media Player you need to go to this MS download page in x86 Internet Explorer. I'm not joking. It must be x86 Internet Explorer so that Microsoft can verify that your version of Windows is genuine before letting you have the download. Going to the page in anything but x86 Internet Explorer will give you a whole host of pain when trying to validate your install.
Edit:
Incredibly, you may still get issues connecting the device to your computer after all of the above. If so, you will need to force Windows to use Microsoft's generic driver, and not Google's driver.
To do so:
Whilst it's great that this phone is vanilla Android and is largely free of bloat, I couldn't gift the phone to anyone non technical. To expect a normal everyday user to go through any of the above is utterly ridiculous.
]]>When this happens, the project that you are working on contains a large amount of technical debt. Every new developer on the project loses time trying to navigate their way around the confusing and smelly code. The project becomes infamous within your team and nobody wants to work on it. The code becomes unloved with no real owner. You need to repay some of the technical debt.
In the agile world, we should be refactoring and reviewing code bravely and regularly to improve its quality and to reduce the number of code smells. This however can be difficult for a number of reasons:
It's safe to assume that nobody is going to be completely sure of the answers to the above questions in all circumstances. This is why it's becoming more and more common to use a code analysis tool to help you find any potential code smells, and advise you on how to fix them. A code analysis tool can be that advisor, telling you what you can do to reduce your technical debt. It can also stop you from racking up a technical debt in the first place.
In this post, I'll be exploring NDepend, a powerful static code analysis tool that can highlight any smells in your code and give me some good metrics about my code. I'll be running it against the latest version of NerdDinner, which can be downloaded from here.
You can run through this walk-through as you are reading this post with NDepend. You can find installation instructions here.
NDepend examines compiled code to help you find any potential issues in the generated IL code - so if you are running NDepend outside of Visual Studio, make sure you build your project first.
So let's start with the basics. One of the coolest things about NDepend is the metrics that you can get at so quickly, without really doing much. After downloading and installing NDepend, you will see a new icon in your Visual Studio notification area indicating that NDepend is installed:
Now, we can fire up NDepend simply by clicking on this new icon and selecting the assemblies that we want to analyse:
We want to see some results straight away, so let's check "Run Analysis Now!". Go ahead and click ok when you are ready. This will then generate a html page with detailed results of the code analysis. The first time you run NDepend you will be presented with a dialogue advising you to view the NDepend Interactive UI Graph. We'll get to that in a moment - but first let's just see what NDepend's default rules thought of NerdDinner:
Yellow! This means that NerdDinner has actually done ok - we have some warnings but no critical code rule violations. If we had some serious code issues, this icon would turn red. These rules can be customised and new rules can be added, but we'll cover this later. So we now have a nice quick to view metric about the current state of our code.
This is a really basic measure, but it lets us know that something in our code, in its current state, either passes or fails analysis by NDepend. You may be questioning the usefulness of this, but if your team knows that their code must pass analysis by NDepend, a little red / yellow / green icon becomes a useful and quick to see signal. Are my changes good or bad?
The Dependency graph allows you to visually see which libraries are reliant on each other. Useful if you want to know what will be affected if you change a library or swap it out for something else (you should be programming against interfaces anyway!):
By default, the graph is also showing you which library has the most lines of code. The bigger the box, the more lines of code. This sizing can also be changed to represent other factors of a library, such as the number of intermediate instructions. This lets you easily visualise information about your codebase.
Out of the box, NDepend will check all of your code against a set of pre-defined rules which can be customised. Violations of these rules can be treated as a warning, or as a critical error.
So NerdDinner has thrown up a few warnings from nDepend. Let's have a look at what these potential code smells are, and see how they can be actioned:
So, within our Code Quality rule group, NerdDinner has thrown up warnings against 3 rules. NDepend's rules are defined by using a linq query to analyse your code. Let's take a look at the query to find any methods that are too big:
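It looks roughly like this (paraphrased from memory of NDepend's built-in CQLinq rules - the exact threshold and shape may differ in your version):

// A sketch of a "methods too big" CQLinq rule - not the verbatim rule shipped with NDepend
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode }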
It's quite self explanatory. We can easily alter this linq query if we want to change our rules - e.g. alter the number of lines of code necessary to trigger the warning. Looking at the results from the query:
NDepend has directed us to 2 methods that violate this rule. We can get to them by double clicking on them, so that we can start re-factoring them. It's worth stating here that a method that is too big potentially violates the single responsibility principle as it must be doing too much. It needs breaking up.
A stand out feature of NDepend that I haven't seen anywhere else before is its ability to execute rules against changes. I.e., you can tell NDepend to look at two different versions of the same dll, and check that only the changes made meet a set of rules. This can be handy in situations where you cannot realistically go back and fix all of the previous warnings and critical violations. Whilst you can't expect your team to fix the old code smells, you at least expect them to be putting good clean code into the application from now onwards.
Again, NDepend comes with some really good sensible rules for this kind of situation:
Whilst there are plenty of tools out there to help you write clean, maintainable, non smelly code, NDepend does strike me as a very powerful and highly customisable tool. It also stands out as it can be run within Visual Studio itself or as a standalone executable. This opens up the potential for it to be run on a build server as part of a build process. I certainly have not done NDepend justice in this post as there is heaps more to it, so I would recommend downloading it and running it against that huge scary project that you hate making changes to.
]]>Yes - real, "bare metal" SQL. We used it for our CRUD operations, and to perform other larger data manipulation tasks. The database server should be the quickest way to find, remove and join data - provided you know what you are doing.
Then we started using ORMs and stopped writing SQL. The advantages of this are that we should have reduced our development time, needed fewer developers with a good knowledge of SQL programming, and didn't have to write lengthy and repetitive SQL statements (anyone who has worked on or built a data warehouse will fully agree).
But with this, we sacrificed control over what SQL was run against our database server, leaving it to the ORM to decide what to run.
Looking specifically at Entity Framework's code first, let's take a look at how you can run into problems with a delete.
So here's the scenario. I have a task that pulls in data from an external source every hour and needs to be "mirrored" into a table in my application's database. Let's call the table BatchImportData.
As I do not own the external data and have absolutely no control over it and need to mirror the data into my application's database, I need to do the following to get the task accomplished:
Using EF code first, I would normally expect to delete all records from the BatchImportData table with the following code:
foreach (var batchImportDataItem in Db.BatchImportData)
{
Db.BatchImportData.Remove(batchImportDataItem);
}
This will work, but it will be slow to execute. At the very least, EF will run a delete statement for every single record that exists in BatchImportData.
If we were writing bare metal SQL, we would write either a single delete statement, or a single truncate statement:
DELETE FROM BatchImportData
--OR
TRUNCATE TABLE BatchImportData
We can still do this through EF Code First simply by opening up our DbContext a bit more. Currently, our DbContext will look something like this:
public class DbContext : System.Data.Entity.DbContext, IDbContext
{
public IDbSet<BatchImportData> BatchImportData { get; set; }
}
Let's add a public method in our DbContext that exposes System.Data.Entity.DbContext.Database.ExecuteSqlCommand:
public class DbContext : System.Data.Entity.DbContext, IDbContext
{
public int ExecuteSqlCommand(string sql)
{
return base.Database.ExecuteSqlCommand(sql);
}
public IDbSet<BatchImportData> BatchImportData { get; set; }
}
This method will take in a SQL statement and will run it against the database.
You can then call the new ExecuteSqlCommand method that you have just added:
Db.ExecuteSqlCommand("TRUNCATE TABLE BatchImportData");
We now have a much quicker way of removing all records from a table.
Do not use this if you are going to build up a SQL statement based on user input. You will make yourself susceptible to an injection attack.
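If a value really does have to come from elsewhere, pass it as a parameter rather than concatenating it into the string. The underlying ExecuteSqlCommand has an overload that accepts parameters - a sketch, assuming we extend the wrapper method to forward them (the SourceSystem column is made up for illustration):

// Sketch only - forwards to System.Data.Entity.Database.ExecuteSqlCommand(string, params object[])
public int ExecuteSqlCommand(string sql, params object[] parameters)
{
    return base.Database.ExecuteSqlCommand(sql, parameters);
}

// Usage - the value is sent as a parameter, not built into the SQL string
Db.ExecuteSqlCommand("DELETE FROM BatchImportData WHERE SourceSystem = @p0", sourceSystem);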
This SQL command is merely a string - it is not strongly typed. If we rename our BatchImportData entity and forget to update this SQL command to reflect this change, we will experience a runtime error.
This opens you up to some potentially serious data loss mistakes. The classic is a missing where clause.
]]>An old (legacy) application has landed on your project pile. It is largely built in php, and you intend on re-writing it in asp.net MVC.
You will therefore need to somehow inform any parties that may be trying to access the old urls ending in .php, that the resource they are looking for has moved permanently. You may also wish to do this for SEO reasons.
This is something that cannot be achieved easily through routing; by default, IIS will not pass requests for resources ending in .php to your application. Your routing will therefore never be put to use for resources ending with .php.
The nicest solution I found to this issue is to set up a list of redirects within the system.webServer section of your web.config file. The listing below will send an HTTP response status of 301 (Moved Permanently) for any requests for index.php and for prices.php:
<system.webServer>
<httpRedirect enabled="true" httpResponseStatus="Permanent" exactDestination="true">
<add wildcard="/prices.php" destination="/prices"/>
<add wildcard="/index.php" destination="/" />
</httpRedirect>
</system.webServer>
Index.php will now redirect to /, and prices.php will now redirect to /prices.
This code is currently running in the wild on Azure.
If you are unsure if you need a Permanent redirect or not, have a read of this article from Google.
]]>In Entity Framework code first, you can use multiple properties to make up the primary key.
In your entity class, simply decorate any properties that you want to make up your key with the attribute "Key":
public class Brochure
{
[Key, Column(Order = 0)]
public int ProductId { get; set; }
[Key, Column(Order = 1)]
public int Year { get; set; }
[Key, Column(Order = 3)]
public int Month { get; set; }
[Required]
public string ProductName { get; set; }
}
NOTE: You shouldn't need to use this method if you are using full entity framework code first. However, some projects only use entity framework to handle migrations - so this might be of use to you:
public partial class BrochureTable : DbMigration
{
public override void Up()
{
CreateTable("Brochures", c => new
{
ProductId = c.Int(nullable: false),
Year = c.Int(nullable: false),
Month = c.Int(nullable: false),
ProductName = c.String(maxLength: 60)
}
).PrimaryKey(bu => new {bu.ProductId, bu.Year, bu.Month});
}
}
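As an aside, if you would rather keep the attributes out of your entity class, the same composite key can be declared with the Fluent API instead - a minimal sketch, assuming the usual OnModelCreating override in your DbContext:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Equivalent to the [Key, Column(Order = n)] attributes above
    modelBuilder.Entity<Brochure>()
        .HasKey(b => new { b.ProductId, b.Year, b.Month });
}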
Enjoy!
]]>Aside from the application bloat irritations, the actual sweet goodness at the core (the latest release of the Android OS) was still something that I looked forward to. However, it takes time for Samsung and your mobile operator to get their changes in, and this can often be months. For example, Ice Cream Sandwich was officially released on the 19th of October 2011. It eventually landed on t-mobile uk branded Samsung Galaxy S2 handsets in June the following year. A painful 8 month wait.
So, after lots of careful consideration and lots of research, I rooted my phone and installed a custom rom using some of the awesome information available over at galaxys2root.com.
The rooting itself was quite straight forward. I then selected a custom rom that I had heard several rave reviews about - Resurrection Remix.
After installation and initially playing with it, I was very impressed. But as the weeks went on I started to notice a few bugs of varying degrees of irritation. Some were a little annoying. Others were rage inducing. I was also getting worse battery life and plenty of apps just didn't work.
But I still stuck with it. Why? It was so much faster than the official rom.
After a few months, I decided I'd better check for a newer version of Resurrection Remix with the hope that it would fix some of the issues that I was experiencing.
There was. I considered it, did my research, and found that most users were satisfied with it. A Google search for "resurrection remix [version number] issues" was my research.
After installation and a few weeks of use, I was much happier. I was now getting much better battery life and many of my apps that did not quite function properly previously were now working as expected.
I'm currently still on Resurrection Remix version 2.7. Some parts of the rom are great, but others are not - there are a few annoying glitches. One of the worst is that I cannot update the gmail app. Frustratingly, this is even though the rom's community is one of the biggest out there.
If you are thinking about running a custom rom on your android phone, consider the following:
I personally now would not recommend going the way of running a custom rom. Sure, it's fun playing around with your phone and seeing how it runs under a highly customisable, non stock rom, but if you use your phone heavily and rely on it to work, I just don't believe it's worth the risk of running into a frustrating glitch.
I think that from now on, I'll be sticking with the stock roms but will be rooting so that I can use apps like Titanium Backup, and so that I can uninstall unwanted bloatware apps.
]]>To rollback all migrations (calls the "Down" method on each migration):
var configuration = new Configuration();
var migrator = new DbMigrator(configuration);
//Rollback all migrations by targeting "0" (the empty database)
migrator.Update("0");
To rollback or update to a specific migration:
var configuration = new Configuration();
var migrator = new DbMigrator(configuration);
//Update / rollback to "MigrationName"
migrator.Update("MigrationName");
To update to the latest migration:
var configuration = new Configuration();
var migrator = new DbMigrator(configuration);
//Update database to latest migration
migrator.Update();
]]>On three separate occasions, I've discovered Trojans running on machines that are supposed to be protected by Microsoft Security Essentials.
This freaked me out the first time, concerned me the second time and made me rage quit Security Essentials the third time. I'm now running Bit Defender at home.
I looked into these problems as I could not be the only one facing these issues. I was right - I found several forum posts by people complaining of the same problems.
Upon looking into the issue further, I discovered that there isn't actually anything wrong with the detection on Security Essentials. In fact, it ranks quite nicely amongst alternative free antivirus solutions (Avast, AVG, Avira).
The problem appears to be its default settings. By default, Security Essentials will be set up to run at 2 am on Sunday, and will only look for an update to its virus definitions just before it runs. If, like most home users, your PC is likely to be off at 2 am on a Sunday, these two critical actions will not happen. No update. No scan. This will leave your PC with very little protection.
If you're going to use Security Essentials, you need to tweak the settings to make it more protective of your PC. Below are my recommended settings. Fire up Security Essentials and navigate to "Settings".
I'd recommend having a daily "Quick Scan" at a time that you know your PC will be on. If you're worried about the slowdown, simply limit the scan's CPU usage. And remember, the slowdown and downtime that you get as the result of a virus will be a lot worse than any slowdown you could get as a side effect of an antivirus scan:
If my antivirus thinks it's found a severe or high alert, I want it removed:
This should be on. If it isn't, turn it on.
Be sensible here. Add any directories and folders that you will be working on regularly that are unlikely to get infected. For example, as a developer, I know that my source code is unlikely to be affected by a virus. As I will be writing changes to these files to the drive regularly, I also do not want any slowdown as a side effect of the antivirus scanning my edited files:
Again, you want to be sensible here and ideally have as few files as possible being scanned here. The default settings of .ini and .log files should be sufficient here.
If you use any heavy applications for work, it is worth adding them into this list. As a developer, I tend to spend a lot of time in Visual Studio. I know this process is a safe one as I installed it and it came from a vendor that I trust:
The only change I'd suggest here is setting Security Essentials to scan your removable drives:
Security Essentials should now be of a greater protective value to you. If you don't think this will protect you enough, consider purchasing an Anti Virus solution.
]]>Here's the problem: undocumented stored procedures suck. Their usage has been determined by reverse engineering. They are not officially supported. Their implementation and usage could change with updates and new versions, or could disappear totally.
Here's something that illustrates my point: "sp_MSforeachtable" does not exist in SQL Azure. So, if your development environment is SQL Server 2008 but your production environment is SQL Azure and you are using "sp_MSforeachtable", you will get problems when you go live, which sucks.
Below is a simple, Azure and Entity Framework friendly bit of SQL that will drop all tables in your database, with the exception of your entity framework migration history table - "__MigrationHistory":
while(exists(select 1 from INFORMATION_SCHEMA.TABLES where TABLE_NAME != '__MigrationHistory'))
begin
declare @sql nvarchar(2000)
SELECT TOP 1 @sql=('DROP TABLE ' + TABLE_SCHEMA + '.[' + TABLE_NAME + ']')
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME != '__MigrationHistory'
exec (@sql) PRINT @sql
end
If you need to drop your table constraints first, the following code will allow you to do so:
while(exists(select 1 from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where CONSTRAINT_TYPE='FOREIGN KEY'))
begin
declare @sql nvarchar(2000)
SELECT TOP 1 @sql=('ALTER TABLE ' + TABLE_SCHEMA + '.[' + TABLE_NAME + '] DROP CONSTRAINT [' + CONSTRAINT_NAME + ']')
FROM information_schema.table_constraints
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
exec (@sql) PRINT @sql
end
Credit to SQL Server Central for the above snippet.
]]>If this is the first you have heard about front end optimisation, I must recommend that you read "High Performance Websites" by Steve Souders.
One of the best ways to make your site load quicker is to make it smaller. Reduce the amount of data that needs to go from your server to the client machine. And one of the best ways to make massive gains in doing this is to Bundle and Minify the CSS and Javascript that your application uses.
MVC4 has made this incredibly easy with built in Bundling and Minification. However, you do not need to upgrade your entire MVC project to take advantage of this fantastic new set of features.
Firstly, open up the project that you want to optimise. I'll be demonstrating this with a new empty MVC3 web application. Currently, the head of my _Layout.cshtml file looks like this:
<head>
<title>@ViewBag.Title</title>
<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.7.1.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/modernizr-2.5.3.js")" type="text/javascript"></script>
</head>
So let's run the page and just see what's going on under the hood:
So at the moment, my page is a massive 355.89 Kilobytes - and all it does is display some text - "Index". That's way too big - and we can see that the biggest and slowest loading parts of the site are the Javascript libraries that are being pulled onto every page of the site. (It's worth noting at this point that you could use the already minified versions of the files. For the sake of this demo, I'll be using the full fat versions).
So let's get bundling and minifying.
Open the package manager console, and enter the following command to drag down the Microsoft ASP.NET Web Optimization Framework:
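Assuming the standard NuGet package id for the ASP.NET Web Optimization Framework, that command is:

PM> Install-Package Microsoft.AspNet.Web.Optimization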
Once added to your project, you can then begin configuring your bundling and minification. Open your Global.asax.cs file and add a reference to System.Web.Optimization
using System.Web.Optimization;
Then, go ahead and create a new static method, RegisterBundles:
public static void RegisterBundles(BundleCollection bundles) { }
Now, let's create some Bundles. I'm going to put all of my Javascript into one minified bundle, and my CSS into another minified bundle:
public static void RegisterBundles(BundleCollection bundles)
{
//CSS
var styles = new StyleBundle("~/Content/bundledcss").Include("~/Content/site.css");
//JS
var js = new ScriptBundle("~/Scripts/bundledjs").Include("~/Scripts/jquery-1.7.1.js", "~/Scripts/jquery.validate.js", "~/Scripts/jquery.validate.unobtrusive.js", "~/Scripts/modernizr-2.5.3.js");
bundles.Add(styles);
bundles.Add(js);
BundleTable.EnableOptimizations = true;
}
There are three types of bundle that you can create:
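Roughly speaking (a quick summary of mine - check the System.Web.Optimization docs for the detail), they look like this:

// Bundle       - the base type; you supply the IBundleTransform(s) yourself
var custom = new Bundle("~/bundles/custom", new JsMinify());
// ScriptBundle - a Bundle pre-configured with Javascript minification (JsMinify)
var scripts = new ScriptBundle("~/bundles/scripts");
// StyleBundle  - a Bundle pre-configured with CSS minification (CssMinify)
var styles = new StyleBundle("~/bundles/styles");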
Now, let's wire up a call to our new RegisterBundles method. In Global.asax.cs, locate your Application_Start method. Add the following line:
RegisterBundles(BundleTable.Bundles);
Now, when your application starts up, the bundles will be created. All we need to do then is tell the views that they need to load a bundled file and not the raw, unbundled and unminified css and js files.
In your _layout.cshtml file, or whichever file has your stylesheets and javascript files referenced, swap out your raw file references. Firstly, add a reference at the top of your view to System.Web.Optimization:
@using System.Web.Optimization
Now, let's swap out the old references to our full fat files:
<link rel="stylesheet" type="text/css" href="@BundleTable.Bundles.ResolveBundleUrl("~/Content/bundledcss")" />
<script src="@BundleTable.Bundles.ResolveBundleUrl("~/Scripts/bundledjs")"></script>
And that's us. Let's test it out. Remember to build first:
And we've made a massive improvement. We have:
This won't take long to bring into an existing site and is definitely worth the benefits.
Edit: Of course, you should also move your Javascript to the foot of the html document for even more speed improvements.
]]>So, now for my New Year's promises. I realise that I've been seriously neglecting this blog, but this year I will be a lot more active on it. I also have a new version of this site, which I began working on last year, that I will finish and get live quite soon.
]]>Firstly, you will need a helper somewhere in your test project that will return you a mock HttpContext:
using System.IO;
using System.Web;
using System.Web.SessionState;

public static class MockHelpers
{
    public static HttpContext FakeHttpContext()
    {
        var httpRequest = new HttpRequest("", "http://localhost/", "");
        var stringWriter = new StringWriter();
        var httpResponse = new HttpResponse(stringWriter);
        var httpContext = new HttpContext(httpRequest, httpResponse);

        // Attach a session container so that Session["..."] works against the fake context
        var sessionContainer = new HttpSessionStateContainer("id", new SessionStateItemCollection(),
            new HttpStaticObjectsCollection(), 10, true, HttpCookieMode.AutoDetect,
            SessionStateMode.InProc, false);
        SessionStateUtility.AddHttpSessionStateToContext(httpContext, sessionContainer);

        return httpContext;
    }
}
You will then need to Add and Reference System.Web in your test project. Once done, you will be able to set your HttpContext and set HttpContext specific values, such as session variables. In the example below, I am setting up the HttpContext in the SetUp method of a unit test:
[SetUp]
public void SetUp()
{
HttpContext.Current = MockHelpers.FakeHttpContext();
HttpContext.Current.Session["SomeSessionVariable"] = 123;
}
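A hypothetical test in the same fixture (assuming NUnit, as suggested by the [SetUp] attribute above) can then read the session value exactly as application code would:
[Test]
public void Can_Read_Session_Variable_From_The_Fake_Context()
{
    // The SetUp method above has already swapped in the fake HttpContext
    var value = (int)HttpContext.Current.Session["SomeSessionVariable"];

    Assert.AreEqual(123, value);
}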
A heavier alternative to the above would be to use a factory to abstract away access to the session. I opted for the solution above as it meant not changing my application code to fit in with my unit tests.
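For completeness, that heavier approach might look something like this sketch - a hypothetical ISessionProvider that application code depends on instead of touching HttpContext.Current directly, so that tests can substitute a fake:
using System.Web;

public interface ISessionProvider
{
    object Get(string key);
    void Set(string key, object value);
}

public class HttpSessionProvider : ISessionProvider
{
    // Thin wrapper over the real session; a test double could use a Dictionary instead
    public object Get(string key)
    {
        return HttpContext.Current.Session[key];
    }

    public void Set(string key, object value)
    {
        HttpContext.Current.Session[key] = value;
    }
}
The trade-off is exactly the one mentioned above: the application code has to change to take the abstraction.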
]]>You can also execute arbitrary SQL statements. To do so, in your Seed method (which can be overridden in your Configuration class, found in your Migrations folder), simply read the contents of any SQL files you want to execute and tell the context to run them:
using System;
using System.Data.Entity.Migrations;
using System.IO;

internal sealed class Configuration : DbMigrationsConfiguration<MyDbContext>
{
    protected override void Seed(MyDbContext context)
    {
        // The SQL files live in the Migrations folder, next to this class
        var baseDir = AppDomain.CurrentDomain.BaseDirectory.Replace("\\bin", string.Empty) + "\\Migrations";

        context.Database.ExecuteSqlCommand(File.ReadAllText(baseDir + "\\DataClear.sql"));
        context.Database.ExecuteSqlCommand(File.ReadAllText(baseDir + "\\Table1Inserts.sql"));
        context.Database.ExecuteSqlCommand(File.ReadAllText(baseDir + "\\Table2Inserts.sql"));
    }
}
The above code will execute DataClear.sql, Table1Inserts.sql and Table2Inserts.sql, which are all in the root of my migrations folder.
Don't forget that you can generate your insert statements using management studio, or by using the Static Data Generator tool.
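As a reminder (assuming Code First Migrations is already enabled for the project), the Seed method above runs whenever you update the database from the Package Manager Console:
PM> Update-Database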
]]>Nilsen accurately points out that in the real world, controllers, even with the best of engineering intentions, can get bloated. Nilsen focuses on the scenario of your controllers containing Actions for Views and Ajax requests that will return Json data, which is a common cause of controller bloat.
The post demonstrates that you can use SignalR to isolate any Ajax calls away from your controllers. I like his approach, but it's also worth noting that you can use the Asp.Net Web API to achieve similar functionality. Using the Asp.Net Web API will allow you to easily separate your controllers that return data from your controllers that return views, but it does leave the client-side requesting and manipulation of the data up to you.
Great post all the same, have a read.
Traditionally, adding dependency injection with any standard injection package to an MVC solution would normally mean that you would have to write a Controller Factory and wire it up in your Application_Start() method.
If you want to add dependency injection into an MVC solution, just add one of the pre-baked NuGet packages that does the MVC wire-up for you:
It's just much easier - these packages contain code that has done the legwork of wiring up a custom controller factory for you.
Looking specifically at the Ninject.MVC3 package, simply add the NuGet reference. This will add a "NinjectWebCommon.cs" file into your "App_Start" folder:
You can then edit the RegisterServices(IKernel kernel) method to configure your bindings:
private static void RegisterServices(IKernel kernel)
{
kernel.Bind<IWorkRepository>().To<WorkRepository>();
}
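With that binding in place, a controller can simply take the dependency in its constructor and Ninject will supply it. A rough sketch, reusing the IWorkRepository name from the binding above (GetAll is just a hypothetical repository method):
public class WorkController : Controller
{
    private readonly IWorkRepository _workRepository;

    // Ninject's controller factory resolves IWorkRepository to WorkRepository at runtime
    public WorkController(IWorkRepository workRepository)
    {
        _workRepository = workRepository;
    }

    public ActionResult Index()
    {
        return View(_workRepository.GetAll());
    }
}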
]]>I am writing this post as an urgent warning to anyone currently hosting with Eukhost, or considering hosting with Eukhost. Your data is at risk of being compromised.
Because of a mistake made by Eukhost's support staff, I am now in possession of 6 databases that I do not own. All of these databases contain sensitive information, including email addresses, residential addresses, and unencrypted passwords. Here's a screenshot:
I came into possession of these databases as I was terminating my hosting with Eukhost. As part of this process, I asked Eukhost to back up and email me my databases. They responded with a link to a zip file that was supposed to contain my databases. This is where their support staff messed up - they backed up the wrong user's databases.
Obviously this is a huge violation of data protection practices - but it also puts the owner of the other databases in a horrible position. All of their user data has been compromised through no fault of their own.
My advice if you are considering Eukhost: Avoid them.
My advice if you are currently hosting with Eukhost: Close your account now and order them to delete your data.
]]>Well not quite.
The two really can serve very different purposes.
That's a lot of power.
So the question is, what if my intention is to only ever create a RESTful webservice and I never need to support other message types (such as SOAP)? Should I be using WCF?
Well, the short answer is that you can, but you'd be using a sledgehammer to crack a nut. Another major factor is that WCF doesn't really let you go all the way with REST very easily:
I'm a big fan of the KISS principle. So if you are going to setup a Web Service and don't need SOAP, you should take a good look at the ASP.Net Web API.
But what do the experts say?
Well, Christian Weyer, who basically had a hand in WCF's inception, states in this blog post that the web is going in a new direction, focused around lightweight web based architectures. He is leaning toward using the ASP.Net Web API for these sorts of architectures.
apigee have made a fantastic ebook on Web APIs available to download for free. It's a great read, and got me thinking about web services in more detail.
]]>http://servicestack.net/benchmarks/
We found their research hugely useful when deciding on which ORM to use on a recent project.
Enjoy.
]]>In the past, I had used xsd.exe, which lived in the SDK directory of VS2008.
As I do not have VS2008 installed on my current machine, I had to go hunting for it. On a Windows 7 machine running VS2010, xsd.exe should be found in:
C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin
or
C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\x64
You can then run this application from the command prompt in order to generate an xsd file, as per the usage instructions on msdn here.
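For example, to generate a schema from an XML file, you can run something like the following (MyData.xml and the output directory are placeholder names here):
"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\bin\xsd.exe" MyData.xml /outputdir:C:\Temp
This produces an .xsd file in the output directory.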
Et Voila!
]]>http://vibrantcode.com/blog/2012/4/10/whats-new-in-razor-v2.html
Some of it looks absolutely awesome despite being so simple. I especially like the dropping of the requirement to type out "@Url.Content"
]]>E.g. you have a table called "Product" and not "Products", or you want your table to be called "Product" and not "Products"
This is the problem that I had. My MVC application consisted of one web page that just dumped out the contents of the "Product" table onto the page. When I browsed to the page, I got an "Invalid object name 'dbo.Products'." yellow screen of death runtime error.
The first option was to rename the table to "Products". I didn't want to do this as I'm from the school of singular table names, and I was also curious about situations where the table couldn't be renamed.
The second option was to make use of Entity Framework's fantastic Conventions, which allow you to specify how you have, or want, your database to be set up.
To tell Entity Framework not to pluralise database table names, simply add the following code into your DbContext class:
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration.Conventions;

public class EfDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Stop EF looking for a pluralised "Products" table
        modelBuilder.Conventions.Remove<PluralizingTableNameConvention>();
    }
}
This code will remove the Pluralising convention that is by default attached to all model builders. You will then be able to access database tables with Singular names.
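If you only want to override the name for a single table, rather than dropping the convention globally, the fluent API offers a per-entity alternative. A quick sketch, using the same Product entity:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map just this entity to the singular table name,
    // leaving the pluralising convention in place for everything else
    modelBuilder.Entity<Product>().ToTable("Product");
}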
Table Naming Dilemma: Singular vs. Plural Names (StackOverflow)
]]>After downloading it and installing it, I tried to integrate it into an existing project in Visual Studio through the Server Explorer, and got the following issue:
This is apparently a known issue, and happens when you right click on "Data Connections" in Server Explorer and choose "Create New SQL Server Database".
The work around is to create your database using Management Studio (Just the DB, not the actual schema).
Once it has been created, you can then "connect" to the database in Visual Studio's Server Explorer:
This will allow you to connect to the database created in Management Studio without any issues.
If you didn't change it during installation, your instance name will be ".\SQLEXPRESS".
I personally will always choose to create a browser-facing, web-based product as a Web Application in Visual Studio 2010. The main reason is that Web Applications actually create a project file containing all of the configuration details for your application. The other is that you don't need to set anything up in IIS on your local machine.
I recently inherited a Web Site that needed some changes. It was an asp.net site, and the first thing I did was to convert it to a Web Application, just to at least get that explicit definition of what was in the project from the Project file.
It also makes debugging easier if you need to debug your site from the root of your localhost - IIS Express and Cassini will just run a Web Application on a specific port from the root. Web Sites however will run from a Virtual Directory (e.g. http://localhost/MyWebSite/). Web Sites can be configured to run under a port, but it isn't a simple process.
And of course, the other advantage is that all of your web server configuration for debugging is stored within the Project file itself - a file that can be kept under source control and shared with others, without the need for lengthy debugging instructions.
]]>You can download the cheat sheet from Google Docs here
Very useful. Print it out and put it under your monitors.
]]>However, if your client is written in some other, non-.net language, then you may need to know exactly what the date value is.
WCF dates are specified as the number of milliseconds since the 1st of January 1970. This is also sometimes referred to as an Epoch Time Value. The guys over at Esqsoft have a handy web based Epoch date time converter. Great for finding JSON date values to put into a request into Fiddler for testing.
So let's take today's date - 11th October 2011 - and convert it to an Epoch Time Value:
1318287600
is our output value, in seconds - multiply it by 1000 to get the milliseconds that WCF expects.
Let's see this in the context of a JSON request:
{
"LastUpdatedDate":"\/Date(1318287600000+0100)\/"
}
As you can see, we need to wrap the Date in an escape sequence, and then a Date object. We also append the date with our time zone. In my case, I am appending a +0100 as I am one hour ahead of Greenwich Mean Time, currently on British Summer Time (although you wouldn't know it looking at the weather!)
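If you'd rather not rely on a web-based converter, the value is easy enough to calculate in C#. A quick sketch (the date and offset are the ones used above):
using System;

class EpochDemo
{
    static void Main()
    {
        // Midnight on 11th October 2011, British Summer Time (UTC+1)
        var date = new DateTimeOffset(2011, 10, 11, 0, 0, 0, TimeSpan.FromHours(1));
        var epoch = new DateTimeOffset(1970, 1, 1, 0, 0, 0, TimeSpan.Zero);

        // Milliseconds since 1st January 1970 - the number that goes inside \/Date(...)\/
        long epochMilliseconds = (long)(date - epoch).TotalMilliseconds;

        Console.WriteLine(epochMilliseconds); // 1318287600000
    }
}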
Bertrand Le Roy's blog post on WCF JSON dates http://weblogs.asp.net/bleroy/archive/2008/01/18/dates-and-json.aspx
Fiddler http://www.fiddler2.com/fiddler2/
ESQ Soft Epoch / Date Converter http://www.esqsoft.com/javascript_examples/date-to-epoch.htm
JavaScriptSerializer Class http://msdn.microsoft.com/en-us/library/system.web.script.serialization.javascriptserializer.aspx
]]>Let's start from an Empty MVC3 project. Fire up Visual Studio. Create a new Asp.Net MVC3 Web application. I'm going to call mine SitemapDemo:
For the sake of this demonstration, I have selected an empty template and have chosen Razor as my view engine.
Now, before we go any further, let's go ahead and install the NuGet package. Select View > Other Windows and then select "Package Manager Console":
This will then dock the Package Manager Console somewhere into your view. In order to add the Asp.net MVC sitemap provider to the current project, we need to enter the following command into the Package Manager Console, and hit enter:
PM> Install-Package MvcSiteMapProvider
This command will then download the necessary files (dlls, cshtml files) and add them into your MVC project. This could take a few minutes depending on your connection. If this has been successful, your Package Manager Console should give you an output similar to the following:
PM> Install-Package MvcSiteMapProvider
Successfully installed 'MvcSiteMapProvider 3.1.0.0'.
Successfully added 'MvcSiteMapProvider 3.1.0.0' to SitemapDemo.
PM>
Now let's take a look at what exactly the NuGet package manager has added to our project:
As we're using Razor as our view engine, we can go ahead and delete the .ascx files that have been added to the SitemapDemo > Views > Shared > DisplayTemplates folder. Here's how your solution should now look:
Now that's the install over. Let's add a half decent set of controller actions and views to the site before we go on to playing with the SiteMapProvider. The point of this is to capture the kind of structure that you would find in a typical website.
Important
The MVC Sitemap provider will fail silently in some fashion if we try to force it to work with controller actions that either don't exist or that point to non-existent views. This is why we are doing this stage first.
All sites have a homepage, so let's add this first. Right click on your Controllers folder, and add a controller called "HomeController". Let's leave the scaffolding options blank:
Once your controller is created and open, right click inside the Index action and select "Add View..."
In the Add View dialogue that pops up, just go ahead and hit Add. Nothing needs changing. Now let's change the text inside the created page's h2 tag - something like "Index - this is the home page" will do.
And now let's add another controller in the same way that we added the HomeController. Let's call it NewsController. Update the newly created news controller to contain an additional action result called "Sports":
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace SitemapDemo.Controllers
{
public class NewsController : Controller
{
// GET: /News/
public ActionResult Index()
{
return View();
}
// GET: /News/Sports/
public ActionResult Sports()
{
return View();
}
}
}
Now, let's add a view for each of our newly created NewsController actions. Let's do this in the same way that we added the view for the home page - by right clicking within the body of each controller action. Again, we can simply leave all of the defaults in the "Add View" dialogue and just hit "Add" for both views.
Now edit the h2 tag in the News Index cshtml file to read "News Index". Let's also edit the h2 tag in the News Sports cshtml file to read "Sports News".
Let's now add one more Controller for illustration - AboutController. Once created, you can leave the controller unchanged, and can add a view for the Index controller action. This time, let's change the h2 tag to read "About Page".
As we have now added 4 pages to our website, it's now worth just testing them out before we start integrating the Site Map Provider. Hit the debug button - below are screen shots and their corresponding URLs:
localhost:xxxx
localhost:xxxx/News
localhost:xxxx/News/Sports
localhost:xxxx/About
Ok, so we've now got a small site with a little bit of structure. Let's represent that structure in an easy diagram:
Visualising our layout like this will really help us describe our site's structure in our Mvc.sitemap file correctly. Our Index page is our wrapper for the entire site as it is the page that sits in the root, and is the first port of call into the site.
Now let's get into configuring our Sitemap. Let's start by editing our Mvc.sitemap file, which is in the root of our project. This file contains all of the XML nodes needed to represent your controller and action combinations.
Edit your Mvc.Sitemap file so that it is the same as the listing below:
<?xml version="1.0" encoding="utf-8" ?>
<mvcSiteMap
xmlns="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-3.0"
enableLocalization="true">
<mvcSiteMapNode
title="Home"
controller="Home"
action="Index">
<mvcSiteMapNode
title="News"
controller="News"
action="Index">
<mvcSiteMapNode
title="Sports News"
controller="News"
action="Sports"/>
</mvcSiteMapNode>
<mvcSiteMapNode
title="About"
controller="About"
action="Index"/>
</mvcSiteMapNode>
</mvcSiteMap>
We have now represented our website structure / workflow in the MVC.Sitemap file. A classic "gotcha" here is forgetting that your entire site is wrapped in a node that represents your homepage. Your sitemap file must contain this node - after all, your website's homepage is the page that the client sees as the root of everything. So even though the Index action is actually at yourwebsite/Index, the client will typically see it just as yourwebsite/. The rest of the structure should make sense when compared to the website navigation diagram, earlier in this post.
Now that we've got some controllers and actions set up, and our site structure described properly in our Mvc.Sitemap file, let's add some navigation to all pages.
Open up your _Layout.cshtml partial, located in the Views/Shared folder. Update the listing so that the code between the body tags looks like this:
<body>
@Html.MvcSiteMap().Menu(false, true, true) @RenderBody()
</body>
We are now calling the MvcSiteMap library and telling it to output the website's navigation on every page. The parameters specified mean that the menu does not start from the current node (the page the client is currently viewing), that it starts from the first child node, and that the overall starting node (Home) is shown.
And now if we run our application, we should see the navigation laid out on every page, complete with links:
So now we've managed to output a simple navigation onto all pages of our website. If you want to change any styling, or how the navigation is displayed, simply alter the code in Views/Shared/DisplayTemplates/MenuHelperModel.cshtml. Let's make a simple change and add an inline style to change our bullet points from circles to squares:
<ul>
@foreach (var node in Model.Nodes)
{
<li style="list-style-type:square;">
@Html.DisplayFor(m => node) @if (node.Children.Any())
{
@Html.DisplayFor(m => node.Children)
}
</li>
}
</ul>
You can now hit refresh in your browser without needing to re-compile. Your News index page should now look like this:
We can add breadcrumbs in a similarly easy fashion. Let's open up our _Layout.cshtml partial and edit the listing:
<body>
@Html.MvcSiteMap().Menu(false, true, true)
<p>
Start of Breadcrumbs:
</p>
@Html.MvcSiteMap().SiteMapPath() @RenderBody()
</body>
Now all pages on our site will have a handy set of breadcrumb links:
Similarly, if we want to customise the presentation of our breadcrumbs, we need to change the Views/Shared/DisplayTemplates/SiteMapPathHelperModel.cshtml file.
Every real site will need to employ a dynamic / parameterised URL at some point. Incorporating a dynamic URL into the MVC Sitemap is straightforward when you know how. Let's start by adding a new action to the NewsController:
// GET: News/Article/x
public ActionResult Article(int id)
{
ViewBag.id = id;
return View();
}
Now let's add a view - right click anywhere within the new action and select "Add View...". Again, just hit Add - we don't need to change any of the options. Now update the newly created Article.cshtml file with the following:
@{ ViewBag.Title = "Article"; }
<h2>
Viewing Article @ViewBag.id
</h2>
Now let's browse to localhost:xxxx/News/Article/1234:
Note that this new page does not appear anywhere in our sitemap, and that the breadcrumbs are totally empty.
In order to add the page into our navigation, we must first decide where it needs to go. I'd like this page to sit underneath the News section. So let's edit our Mvc.Sitemap file and add a Key attribute to the "News" node. This is simply to give it a unique identifier:
<mvcSiteMapNode title="News" controller="News" action="Index" key="News">
Now we need to decorate our controller action with some attributes that tell it where to insert the action in the site's structure. Update your Article action in your News controller:
//GET: News/Article/x
[MvcSiteMapNode(Title = "Article", ParentKey = "News")]
public ActionResult Article(int id)
{
ViewBag.id = id;
return View();
}
Now let's compile and browse to localhost:xxxx/News/Article/1234:
And we now have Breadcrumbs and navigation for our new page, despite it having a dynamic URL!
You can download the complete solution here
However, on putting together a real-world MVC 3 application that made full use of re-usable models and partials, I hit a pretty obvious issue. In this application, we were using the same model for creating new users and updating existing users. Why wouldn't we? The model represents the same entity, and needs the same validation. This was a perfect fit, and was ideal until we decided to put some remote validation in against the email address field.
Our email address property in our model, after initially adding remote validation, looked something like this:
[DisplayName("Email Address")]
[Required]
[RegularExpression("\[\\w\\.-]+@[\\w-]+\\.(\\w{3}|\\w{2}\\.\\w{2}")]
[Remote("IsUserNameAvailable", "UserManagement")]
public string ContactEmailPrimary { get; set; }
This worked fantastically on the create user page. When the user entered an email address, the client made an Ajax request to the following controller action, which checked whether the email address was already in the database:
[HttpGet]
public virtual JsonResult IsUserNameAvailable(string contactEmailPrimary)
{
    var searchResults = _userServiceGateway.SearchUserLogin(contactEmailPrimary);

    if (searchResults.Count > 0)
    {
        return Json("This Email Address has already been registered", JsonRequestBehavior.AllowGet);
    }

    return Json(true, JsonRequestBehavior.AllowGet);
}
If the address was already registered, the action returned the error message rather than true, and the field failed client-side validation.
"Great" we thought. Feature complete - and off it went to the testers. Only for another bug to come up: we now couldn't update the user information of an existing user. This was because the remote validation was firing and checking that the email address of the existing user hadn't already been registered - which, obviously, it had. This resulted in our page failing client-side validation, and the client being unable to save any changes to the entity.
So we needed a solution. We first explored the possibility of disabling the Remote Validation somehow for this one page. This proved fruitless. We then looked at possibly splitting the models used, and having one model for the User Create, and one model for the User Update. We resisted this idea for obvious reasons.
We then looked at the possibility of passing additional parameters to the Remote Validation action. Sure enough, we discovered the AdditionalFields property of the Remote validation attribute.
The AdditionalFields property allows you to specify the name of another form element that will appear within the same form. ASP.NET MVC3 remote validation will then pass the value of this additional field to the controller action that is performing the validation. So, to fix our issue, we just altered the code to pass the AdditionalField of "UserAccountId", which we knew would be greater than zero if the user entity was being edited rather than created.
Here's how our finished working code looked, making use of the AdditionalFields attribute:
Model:
[DisplayName("Email Address")]
[Required]
[RegularExpression("\[\\w\\.-]+@[\\w-]+\\.(\\w{3}|\\w{2}\\.\\w{2}")]
[Remote("IsUserNameAvailable", "UserManagement", AdditionalFields = "UserAccountId")]
public string ContactEmailPrimary { get; set; }
public int UserAccountId { get; set; }
View:
<div class="columns">
@Html.LabelFor(model => model.ContactEmailPrimary)
<span class="relative">
@Html.TextBoxFor(model => model.ContactEmailPrimary)
@Html.HiddenFor(model => model.UserAccountId)
</span>
</div>
And finally our controller, which now gets an AdditionalField of "UserAccountId":
[HttpGet]
public virtual JsonResult IsUserNameAvailable(string contactEmailPrimary, int userAccountId)
{
    if (userAccountId == 0)
    {
        var searchResults = _userServiceGateway.SearchUserLogin(contactEmailPrimary);
        bool result = true;

        // We need to check for an exact match here as the search only does a "like"
        foreach (var userAccountDto in searchResults)
        {
            if (userAccountDto.UserLogin == contactEmailPrimary)
            {
                result = false;
                break;
            }
        }

        if (!result) return Json("This Email Address has already been registered", JsonRequestBehavior.AllowGet);

        return Json(result, JsonRequestBehavior.AllowGet);
    }

    return Json(true, JsonRequestBehavior.AllowGet);
}
We now had both create and update functionality working correctly.
When delving into using this solution, I made the huge mistake of first not reading up about ASP.net sitemaps in general. If you do not have any previous experience in using sitemaps on an ASP.net website, I would strongly recommend reading this MSDN article on ASP.net sitemaps. Many of the principles used in the ASP.net MVC framework are the same and reading this article will give you some fundamental basic principles.
Once you get the ASP.net MVC sitemap provider downloaded and registered, you should see that a sitemap file (Mvc.sitemap) has been created in your MVC solution's root:
Next, you need to go about editing your MVC.sitemap file so that it actually reflects the pages on your web site that are available to the user. This is the important bit - if you don't get this right, the sitemap provider will not work as expected. All nodes in your file must point to a controller and action combination that actually exists, and that action must return a view that exists.
Here is an example of an MVC.sitemap file that is in a good format and is readable to the ASP.net MVC sitemap provider (apologies if the spacing has not worked correctly):
<?xml version="1.0" encoding="utf-8" ?>
<mvcSiteMap xmlns="http://mvcsitemap.codeplex.com/schemas/MvcSiteMap-File-3.0" enableLocalization="false">
<mvcSiteMapNode title="Home" controller="Home" action="Index" changeFrequency="Always" updatePriority="Normal">
<mvcSiteMapNode title="Home" controller="Home" action="Test" description="Home">
<mvcSiteMapNode title="Dashboard" controller="Home" action="Dashboard"/>
<mvcSiteMapNode title="My Profile" controller="Profile" action="MyProfile"/>
<mvcSiteMapNode title="My Jobs" controller="Profile" action="MyJobs"/>
</mvcSiteMapNode>
<mvcSiteMapNode title="Workplace" controller="Workplace" action="Index" description="users">
<mvcSiteMapNode title="Calendar" controller="Workplace" action="Calendar"/>
<mvcSiteMapNode title="Customers" controller="Workplace" action="Customers"/>
<mvcSiteMapNode title="Equipment" controller="Workplace" action="Equipment"/>
</mvcSiteMapNode>
</mvcSiteMap>
Now, all you need to do is get the call to display your menu correct. If you get this wrong, you will end up displaying nodes that you don't want to display. Here's the call that worked for me, against an Mvc.Sitemap file that is essentially the same as the code listing above. This is in Razor syntax:
@Html.MvcSiteMap().Menu(false, true, false)
This call is telling the MvcSiteMap to display a menu not starting from the current node (the page the client is currently viewing), starting from the first child node, and hiding the overall starting node. This call has worked for me, but you may wish to tweak it depending on what exactly you want.
I will shortly put up a post about customising the way that the ASP.net MVC Sitemap provider displays a menu or breadcrumb trail.
A more in-depth tutorial is now available from here. It covers starting from scratch, displaying navigation, breadcrumbs, and customising their appearance.
http://mvcsitemap.codeplex.com/
http://msdn.microsoft.com/en-us/library/yy2ykkab.aspx
DECLARE @EmailAddress NVARCHAR(250)
SET @EmailAddress = 'test@test.com'
SELECT
C.Name,
S.[Description] AS [Description],
S.LastRunTime
FROM
Subscriptions S
INNER JOIN [Catalog] C ON S.Report_OID = C.ItemID
WHERE
S.ExtensionSettings LIKE '%' + @EmailAddress + '%'
You will then need to manually filter through the results, looking at the description in order to see which report is going to whom, and when it was last run. This is because you may have a reply-to email address set to the address that you are looking for, which would also be returned by the above query.
I wrote this because it was much easier and quicker than trawling through hundreds of subscriptions. Enjoy.
If anyone has any better ways of manipulating the XML contained within the "ExtensionSettings" field on the Subscription table, then please share.
]]>I have also always paid a yearly amount for some personal web hosting. As I currently deal largely with asp.net, I need to work with Windows Servers. So about two years ago, I came across Eukhost.
I went for Eukhost because of their prices, and the available technologies ticked the right boxes - I even paid a little more to have Sql Server 2008, which was a lot cheaper than I could find elsewhere.
However, it must be said that the service provided by Eukhost is poor. Below is a list of my grievances with Eukhost, accompanied by evidence.
Technical staff don't take care and lack professionalism
One afternoon I discovered that one of my websites hosted at Eukhost had just stopped working. It was spitting out an asp.net runtime error, and this had come out of nowhere without me updating any of the files on the server. After initiating a support ticket with Eukhost's Windows support, I was amazed to learn that a "technician" had accidentally changed the version of asp.net to 1.1 while doing some routine maintenance.
You need to notice that your hosting isn't working
Eukhost don't proactively check that your hosting is available. It's up to you to notice that it's down, to contact them, and to get them to fix it.
When there is a major problem and they have actually noticed it, don't expect them to tell you
Recently, the server that my hosting lives on came under a massive DDOS attack (according to Eukhost). I only found this out because I discovered that several of my websites were down. I did the usual thing - contacted Eukhost with a snotagram and got them to resolve the issue. I was then told about the situation, and told to read the forum. Silly me, not watching the forum for announcements about the state of the service that I pay them for!
Support staff will get you to clean up their mess where possible
When Eukhost eventually worked out a solution to the DDOS attack, it involved updating the DNS records for all websites hosted on that machine. Rather than do it themselves by means of automation, or just by putting all hands to the pump, they informed their customers, by means of a forum post, that they needed to go and change their DNS settings.
Shared Hosting should be called cattle hosting
During the mess that was the DDOS attack, one of Eukhost's staff got cornered in a forum and asked why nobody at Eukhost felt the need to update the DNS settings themselves. In a rude and inconsiderate response, one of Eukhost's staff claimed that there were "over 3000 websites hosted on the box and that it would be impossible to update them all". Eukhost have forgotten that, as somebody that pays an annual fee for a service, I don't give a monkey's about anyone else on my hosting - it just needs to work.
You can read through the said forum post here, although its contents have been butchered by Eukhost staff in what looks like a PR exercise. I can understand the difficulties in dealing with a DDOS attack, but I would have been a lot more sympathetic and understanding if it had been communicated to the customers properly.
Anyway, I'm currently looking for a new asp.net hosting company. I'm willing to pay a little more if I can get a better service.
]]>It happens because the engine that transforms the report tries to do so on a presentation basis.
I have been developing reports in SSRS for a few years now, and here are the best ways around the issue that I have found:
1. Don't use standalone textboxes for titles, or any non-data elements. Rather than fiddle with these for hours trying to get them to line up, just insert another row or two as headers above your data driven report element (e.g. table). You can then play with the presentation of the cells to make it look like it isn't part of the same table. This can be done by colouring certain borders white to give the impression that there is nothing there.
2. Use points and not centimetres when specifying sizes. The renderer converts all measurements into points anyway, so converting from centimetres can often lead to rounding errors. This is why you still sometimes get cell merging when you have two tables opposite each other with exactly the same sizes. I appreciate that this can be a bit of a hassle, especially if you already have a report that specifies everything in centimetres. I'd recommend a hefty bit of search and replace in the source rdl file.
I use both techniques in almost all reports that I develop. It keeps the clients happy.
]]>It does however need a bit of a "kick" to get going. This is how you can do it.
1. Fire up your application in debug mode. Don't bother setting a breakpoint in any of your Javascript source, as it won't work.
2. Once it's running, open Solution Explorer. To do this while debugging, select View and then Solution Explorer. It should look something like this:
Note that this is now showing you what is actually being delivered to the browser. This is important because you might have Javascript being added dynamically in your code-behind. As you browse around your site, you will see the items underneath Windows Internet Explorer (or whichever browser you are using) change.
4. You can then open any of the files underneath your browser node in Solution Explorer by double clicking on them. The files named "anonymous code" will contain the Javascript that is embedded into the page itself. You can then debug this Javascript code by putting a breakpoint in it and waiting for it to be hit.
Very useful and powerful stuff.
]]>