Automated deployments to Digital Ocean using Wercker

A few articles on this topic already exist, but having worked through most of them I found they all leave out some explanation and detail; there was quite a bit I had to work out for myself before I finally got this working properly.

Let’s start off with what Wercker is. In simple terms, it is a platform you can use to perform automated deployments of your code, so the familiar steps of ‘git commit’ and ‘git push’ can now also publish your application to the web, be it staging, production, or any other environment you need.

Within this guide I am making a few assumptions. One of them is that you are using node, but that is not a requirement and is configurable in the .yml file we are going to build up. The other is that you already have an existing repo with files in it on either GitHub or Bitbucket; if not, set that up first and then pick up at the next paragraph.

When you push to a specific git branch, your app will be containerized by Wercker and pushed to the Docker Hub registry, which is to Docker images roughly what GitHub is to code. Once that is done, our Digital Ocean server will pull that Docker image down, remove the current one and run the new one. All nice and automagic.

Step 1: Account setup

If you do not already have accounts for both Docker Hub and Wercker, go ahead and set those up now; there is no point proceeding without them.

Step 2: Wercker setup

Start off by creating a new Wercker application and linking it to your project’s GitHub/Bitbucket repo.

To enable communication between Docker and Digital Ocean, we need to set up some environment variables and an SSH key.

Head over to application settings > ‘Environment variables’ and add two new variables, ‘DOCKER_USERNAME’ and ‘DOCKER_PASSWORD’, making sure to check ‘protected’ for the password so the actual text is hidden from the logs.

Let’s also create ‘DOCKER_REPO’, set to your repo path ‘<username>/<app-name>’, and ‘APPLICATION_NAME’, set to ‘<app-name>’. You do not have to use the app name here, though, as this is simply a reference name to be used when launching the Docker app.

Lastly, let’s store our droplet’s address: set ‘SERVER_IP’ to the IP of your droplet on Digital Ocean. While you are here, also create ‘NODE_PORT’ and set it to the port your node application listens on (5000 in this article); we will use it when running the container later.

I prefer creating a few extra variables like these; that way all the important settings can be updated on Wercker without creating additional commits.
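To recap, by the end of this step your environment variables on Wercker should look something like this (all the values below are placeholders for illustration):

DOCKER_USERNAME=jane-doe
DOCKER_PASSWORD=******** (protected)
DOCKER_REPO=jane-doe/my-app
APPLICATION_NAME=my-app
SERVER_IP=203.0.113.10
NODE_PORT=5000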

Now click on ‘SSH keys’ on the left and generate a new key called ‘digitalocean’. Copy the public key over to a text document for now, as we will need it in a minute when setting things up on Digital Ocean.

Still on Wercker, click on ‘Targets’ on the left and add a new deploy target, naming it however you need. For the purposes of this article we will call it ‘Production’ and set auto deploy to the ‘master’ branch. If you were using gitflow you could match master to production and develop to dev, qa, or similar.

Now we just need to reference the SSH key we created, so back in ‘Environment variables’ add an SSH key variable named ‘DIGITAL_OCEAN’ paired with the ‘digitalocean’ key we generated earlier.

Step 3: Digital Ocean setup

This is the easy part. Digital Ocean has pre-configured ‘One-click’ apps for us to use, so go ahead and create a droplet, select the ‘One-click Apps’ tab, choose the currently available ‘Docker’ droplet, then just pick your size and location.

Then select ‘New SSH key’ and paste in the key you put in that text document earlier, change the name if you like, and click create. While it does its thing, we will carry on with the rest of the setup.
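Once the droplet is up, you can sanity-check that Docker really did come preinstalled by SSHing in with the key you just added, substituting your droplet’s IP for the placeholder below:

ssh root@203.0.113.10
docker version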

Step 4: The wercker.yml

This is the file that contains all the instructions we pass over to Wercker so it knows how we want the app built and deployed within the container.

Each group of instructions is referred to as a pipeline, and each specific instruction is a step. Our main focus will be the deploy pipeline, as this is the part that handles moving the code from Docker to Digital Ocean.
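Before we build it up piece by piece, here is the overall shape we are working towards; we will fill in the steps as we go:

box: node

build:
  steps:
    - ...

deploy:
  steps:
    - ...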

Start off by creating a wercker.yml file in your project’s root folder and open it with your preferred editor.

The first line we need to put in is the base image; this is what Wercker will use to create the container for our application. As we are using node, it will be:

box: node

If you were creating a python app you might use ‘box: python’ instead.
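As the box is just an image pulled from Docker Hub, you should also be able to pin a specific version tag rather than always taking the latest, for example:

box: node:4.2

(Take that exact tag with a pinch of salt and check Docker Hub for the versions actually available.)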

We are now ready to define our two pipelines, build and deploy. Our build one is pretty simple:

build:
  steps:
    - npm-install

For now this pipeline is pretty simple; all it is doing is installing our node packages. It could also be used to run automated/unit testing, as the deploy will only proceed if the build is successful.
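For example, assuming your package.json defines a test script, the build pipeline could grow into something like:

build:
  steps:
    - npm-install
    - script:
        name: run tests
        code: npm test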

Now let us move on to our first few deploy steps:

deploy:
  steps:
    - npm-install
    - script:
        name: install supervisor
        code: npm install -g supervisor
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: $DOCKER_REPO
        registry: https://registry.hub.docker.com
        cmd: /bin/bash -c "cd /pipeline/source && supervisor --watch ./src src/server.js"

First we install supervisor; this will restart our node process should anything go wrong on the server, minimizing downtime.

After that we begin our Docker push using an internal step from Wercker, specifying the username, password, repo and registry (I found the missing registry to be the reason mine was failing, as it is not mentioned in any guide).

Next we have our cmd (command), which simply moves into the right directory and runs the node instance. Everything after ‘--watch ./src’ is specific to your application: whether it is ‘npm start’, ‘node web.js’, or however else you launch your application, that is what needs to go there.
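For instance, if you normally launch with ‘npm start’, the cmd could look something like this instead:

cmd: /bin/bash -c "cd /pipeline/source && npm start"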

This first half of the pipeline is responsible for packaging our code into a Docker image and pushing it to Docker Hub for us to pull in the next step.
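If you want to confirm the push worked, you can pull the image down onto any machine with Docker installed, using your own repo path:

docker pull <username>/<app-name>:latest

Back in our deploy pipeline, the next steps are: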

    - add-ssh-key:
        keyname: DIGITAL_OCEAN
    - add-to-known_hosts:
        hostname: $SERVER_IP

Following that, we add our SSH key so that we can access our Digital Ocean droplet, and then add the droplet’s IP to the list of known hosts.

    - script:
        name: pull latest image
        code: ssh root@$SERVER_IP docker pull $DOCKER_REPO:latest
    - script:
        name: stop running container
        code: ssh root@$SERVER_IP docker stop $APPLICATION_NAME || echo 'failed to stop running container'
    - script:
        name: remove stopped container
        code: ssh root@$SERVER_IP docker rm $APPLICATION_NAME || echo 'failed to remove stopped container'
    - script:
        name: remove image behind stopped container
        code: ssh root@$SERVER_IP docker rmi $DOCKER_REPO:current || echo 'failed to remove image behind stopped container'
    - script:
        name: tag newly pulled image
        code: ssh root@$SERVER_IP docker tag $DOCKER_REPO:latest $DOCKER_REPO:current || echo 'failed to change tag'
    - script:
        name: run new container
        code: ssh root@$SERVER_IP docker run -d -p 80:$NODE_PORT --name $APPLICATION_NAME $DOCKER_REPO:current

Lastly, we have all the magic: this is the part of the script that moves our image over to the Digital Ocean droplet, removes the existing instance, and deploys and runs our update.

All the commands begin with ‘ssh root@$SERVER_IP’ as they are all being run on the droplet itself.

  • Our first step is to pull the latest image, the one we pushed up earlier.

  • Now we stop the running container.

  • We then remove the container we just stopped.

  • We also remove the old image behind it; this is a cleanup step so that outdated images do not waste any disk space.

  • Next we tag the image we just pulled as the current image so that we can easily identify it for cleanup next time round.

  • Finally we run our new container. Here we are mapping port 80 (-p), the HTTP port, over to $NODE_PORT, the port our node application is listening on (5000 in this article). The -d flag sets the container to be detached, aka run in the background. --name gives our application an easy name to use when starting/stopping/restarting it when needed. If you want to check on the result by hand, see the snippet after this list.
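As promised, here is how you can check on the deployment by hand: SSH into the droplet (substituting your own IP) and have a look around.

ssh root@203.0.113.10
docker ps
curl http://localhost:80

‘docker ps’ should list the $APPLICATION_NAME container as up, and the curl should return your application’s response.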

You are done! Now all you need to do is commit and push the wercker.yml file to your master branch, and the build, then the deploy, should run automatically.
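In other words, something like:

git add wercker.yml
git commit -m "Add wercker.yml for automated deployments"
git push origin master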

You can see the complete wercker.yml file in this gist.
