Running cron jobs inside a Docker container

30th Oct 2019


Run your cron jobs inside their own container if you can afford the luxury of a dedicated container for them and your application structure allows it. If not, run the cron job directly on the server hosting the container, or use a cloud service that lets you run functions on a schedule.

Most of the sites that I look after now run within a Docker container. The philosophy fits well - an application sitting neatly inside a dedicated container with all of its dependencies. It should also be free from outside interference.

The only downside to this approach appears when you want to run more than one process at a time. Dockerfiles describe what a container needs, before ending with a CMD instruction. This instruction essentially tells Docker which process to start, hang onto, and monitor.

So, if your Docker container runs a node application, your CMD instruction might look like this:

CMD npm start -- --no-daemon --watch

Where --no-daemon and --watch are passed through to the start script in the package.json file.
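As a reminder of how that pass-through works: npm appends anything after the bare -- to the command defined in the start script. With a hypothetical package.json like this (the server.js entry point is an assumption, not taken from my actual project):

```json
{
  "scripts": {
    "start": "node server.js"
  }
}
```

npm effectively runs node server.js --no-daemon --watch, leaving the script's process (or whatever process manager it invokes) to interpret the flags.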

What if I want to run a cron job?

You've got some options:

Running cron inside the same container as the application

You shouldn't really run two processes in one Docker container. Still, this was the approach I wanted to take, because my node application contained a few scripts that I wanted to run on a schedule. These scripts were kept in the main application because they shared its dependencies, such as my database access logic.

You can apparently get this working, but you run the risk of one of the processes falling into the background and no longer being monitored by Docker - so you'll get a silent failure. I tried this after reading numerous Stack Exchange threads, like this one, and this one, and this one. I never got anything working using these approaches, and lost a few hours trying.
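For reference, the kind of entrypoint those threads suggest looks roughly like this. Treat it as a sketch only, since this is exactly the setup I couldn't get working reliably - the paths and the cron binary name are assumptions:

```shell
# Sketch of a single-container entrypoint that starts cron in the
# background and then hands control to the node app. Written to a file
# here so the sketch is self-contained; in a real image this would be
# copied in by the Dockerfile and set as the ENTRYPOINT.
cat > /tmp/entrypoint.sh <<'EOF'
#!/bin/sh
# Start the cron daemon in the background.
cron
# Replace this shell with the node app, so Docker monitors it as the
# main process. If cron later dies, Docker has no idea - hence the
# silent failure mode described above.
exec npm start -- --no-daemon --watch
EOF
chmod +x /tmp/entrypoint.sh
```

The exec is the important detail: without it, the shell stays as the main process and Docker monitors neither cron nor the app.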

Running a dedicated container just for cron jobs

This seems like a bit of a luxury to me, but it does work, and is much easier to get working than the setup above. I say it's a luxury because it means you need to go through the setup and deployment of an additional container (and Docker images can get pretty large). In my case, it also meant that I needed to build another container with a copy of the whole app and its dependencies.

However, this will get complicated should you need to do any file system operations. I did, and because I now had two copies of the application in two separate containers, my jobs container could not access the files in my app container. I had no interest in spending even more time attempting to get a shared volume working across the two containers, so I gave up.

You can view details on how to do this in this Stack Overflow thread.
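If you do go down this route, the jobs image needs its own crontab. Something like this sketch - assuming the app lives at /usr/src/app inside the image, as in the other examples in this post:

```crontab
# Hypothetical crontab for the dedicated jobs container: run the script
# nightly at 2am and append its output to a log. All paths here are
# assumptions, not taken from my actual setup.
0 2 * * * /usr/local/bin/node /usr/src/app/jobs/my-script-to-run.js >> /var/log/cron.log 2>&1
```

The jobs image then installs this crontab and runs the cron daemon in the foreground as its CMD, so Docker has a single process to monitor.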

Run the cron jobs directly on the host machine

From any host machine that is running a Docker container, you can get terminal access into the running container. It's comparable to opening an ssh connection to a remote server. For example, if my Docker container is called app_container_1:

docker exec app_container_1 /usr/local/bin/node /usr/src/app/jobs/my-script-to-run.js

Whilst this approach is much simpler, it only works if you've got access to the host machine. So if you're purely in the cloud, your only options are to run a dedicated container for the jobs, or to use some other cloud service that lets you run a function on a schedule.
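Wiring that docker exec command into the host machine's crontab is then a one-liner. A sketch, reusing the container name and paths from the example above (the schedule itself is an assumption):

```crontab
# Hypothetical entry in the host's crontab: run the job inside the
# already-running container every hour, on the hour.
0 * * * * docker exec app_container_1 /usr/local/bin/node /usr/src/app/jobs/my-script-to-run.js
```

Note there's no -it flag on the docker exec here - cron has no terminal to attach, so the command runs non-interactively.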