When it comes to deploying, there are multiple approaches. The old skool way is simply to FTP the files to the server on every change. Once you get bored of that, you might set up rsync to keep the files in sync. Or you could use a version control based scheme in which the server pulls the changes on deploy.
These simple ways might be more or less alright in limited usage. At some point you might run into a wall, though, and managing the server may become cumbersome. What if you need to revert a change, for instance? There are better ways.
Continuous Integration and co.
Continuous Integration and associated concepts have made a big impact on the industry during the past few years. Their value lies in automation and in the visibility they give to the development process. Your Continuous Integration server may execute your tests, some of them heavy, after each push. This means you can catch possible regressions earlier and fix them more cheaply.
Continuous Delivery builds upon this idea: should the tests pass, you may deploy the build manually. The process then copies your build to the server and rigs it up.
Continuous Deployment actually goes one step further and automates this step. That takes some serious trust in your code!
Setting Up Continuous Delivery Using Node.js and Wercker
The Wercker guys do a great job of explaining how to set up continuous delivery using their system. It is currently in free beta and well worth checking out. They support a variety of platforms and make it very easy to integrate with services like GitHub. I used Digital Ocean for hosting when testing.
Digital Ocean is one of the more affordable options when it comes to hosting. Remy Van Elst goes through the pros and cons of DO on his blog. Note that even though Digital Ocean shows prices as both monthly and hourly charges, the hourly billing might not work the way you would expect based on services like Amazon's EC2.
Even if your server is in the "off" state, it still gets billed! This is because DO keeps the IP and hardware reserved for you. As long as you are aware of this, you should be fine. The price is hard to beat.
As the Wercker tutorial shows, they use a YAML based configuration scheme for defining how to build and deploy your project. There are various base boxes to choose from, each with a set of dependencies preinstalled. For instance, Node.js developers may want to use the `wercker/nodejs` box.
To give you some idea of how to set up a basic build with Bower, Grunt, SASS, npm and all that jazz, consider the Wercker configuration below:
```yaml
box: wercker/nodejs
build:
  steps:
    - script:
        name: install compass
        code: sudo gem install compass --no-ri --no-rdoc
    - script:
        name: install grunt
        code: sudo npm install -g grunt-cli
    - script:
        name: install bower
        code: sudo npm install -g bower
    - script:
        cwd: frontend/
        name: install npm dependencies
        code: |
          mkdir -p $WERCKER_CACHE_DIR/wercker/npm
          npm config set cache $WERCKER_CACHE_DIR/wercker/npm
          sudo npm install --save-dev
    - script:
        cwd: frontend/
        name: install bower dependencies
        code: bower install --config.storage.cache=$WERCKER_CACHE_DIR/wercker/bower
    - script:
        cwd: frontend/
        name: build project using grunt
        code: grunt build
    - script:
        name: echo nodejs information
        code: |
          echo "node version $(node -v) running"
          echo "npm version $(npm -v) running"
deploy:
  steps:
    - add-to-known_hosts:
        hostname: $SERVER_IP
        fingerprint: ff:ff:ff:ff:ff:ff:ff:ff:ff
    - mktemp:
        envvar: PRIVATEKEY_PATH
    - create-file:
        name: write key
        filename: $PRIVATEKEY_PATH
        content: $WERCKER_PRIVATE
        overwrite: true
    - script:
        cwd: frontend/
        name: transfer application
        code: |
          pwd
          ls -la
          tar czf - * | ssh -i $PRIVATEKEY_PATH -l root $SERVER_IP "cd /var/local/www; tar xvzf -"
    - script:
        name: start application
        code: |
          ssh -i $PRIVATEKEY_PATH -l root $SERVER_IP "if [[ \"\$(status node-app)\" = *start/running* ]]; then stop node-app -n ; fi"
          ssh -i $PRIVATEKEY_PATH -l root $SERVER_IP start node-app
```
Yes, I admit it's a bit chunky. Let's go through some basic concepts to dig into the meat, so to speak. First of all, I chose to use the Node.js base box provided by the Wercker guys. In my build step I install some dependencies and then build the project. My project isn't exactly conventional, as my server source lives in the `frontend/` directory. That is why I use `cwd` every once in a while.
Some of the data is cached, so it is faster to build the project later on. Normally you would want to execute your tests in the build step too; in this case I was just interested in getting the basic flow to work.
In my deploy step I first do some configuration to make the communication between Wercker and my server work. In this case I just point to the server by IP, although you might want to use a real domain instead. In case you are wondering about that fingerprint bit, you can generate it like this:
```shell
ssh-keyscan -p 22 -t rsa hostname > key.pub
ssh-keygen -l -f key.pub
```
The fingerprint is a security feature that helps to avoid man-in-the-middle (MITM) attacks: the deploy will refuse to talk to a server whose host key doesn't match. It is easy to set up, so you should do it.
After I have the connection related issues sorted, I actually transfer the data to the server using a tar pipe. This is a trick I picked up from Stack Overflow. It speeds up the transfer immensely, especially if you have a lot of small files. I recommend giving it a go just to see how powerful the technique is.
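The tar pipe pattern can be demonstrated without a server at all. In the deploy step above the right-hand side of the pipe runs behind `ssh root@$SERVER_IP`; in this sketch both ends are local directories (invented for illustration) so the pipe is easy to try out.

```shell
# Create a small fake project to transfer.
mkdir -p src dest
echo "console.log('hi');" > src/app.js

# Pack everything on the sending side into a compressed stream and
# unpack it on the receiving side. Over SSH this becomes a single
# connection instead of one round trip per file.
(cd src && tar czf - .) | (cd dest && tar xzf -)

ls dest
```

The win comes from streaming one compressed archive instead of negotiating each small file separately.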
Once the build has been copied to the server, I simply start the server process. For this purpose I have set up a simple upstart script. The nice thing about upstart is that you don't even need a supervisor like forever or monit, as it can keep your server up should it crash for one reason or another. It is preferable to run your server as a user with limited rights; that way you mitigate the damage an attacker could do to your server.
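For reference, a minimal upstart job for a setup like this could look roughly as follows. This is a sketch: the job name, paths and the `www-data` user are assumptions, so adapt them to your own layout.

```
# /etc/init/node-app.conf -- minimal sketch of an upstart job.
description "node-app"

# Start once the filesystem and network are up, stop on shutdown.
start on (local-filesystems and net-device-up IFACE!=lo)
stop on shutdown

# Restart the process if it dies, but give up if it crashes
# more than 10 times within 5 seconds.
respawn
respawn limit 10 5

# Run as a limited-rights user instead of root.
setuid www-data
setgid www-data

exec /usr/bin/node /var/local/www/server.js
```

With this in place, `start node-app`, `stop node-app` and `status node-app` work as used in the deploy step.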
For some reason the upstart scripts Node.js people like to use usually seem awfully complex. For this reason I asked a couple of friends to provide examples of good ones: here's one by nnarhinen, and another from opinsys. Combine and extend based on your needs.
Conclusion
Continuous delivery systems like Wercker make the developer's life a step simpler. They take some of the pain out of deployment and allow us to focus on actually getting things done faster. You can never be quite fast enough. There are still good things, like containers, on the horizon. They promise to simplify deployment even further and make it faster to execute tests. For instance, you could easily run your tests in parallel against multiple databases. But that's a topic for another post, if I ever get into that world.
I hope this post inspired some of you to give such a system a go! Please share your experiences in the comment section below. It would be very nice to hear what sort of setups you use in practice and how you benefit from them.