Saturday, 24 January 2015

Dockerize a node.js WebSocket server in 5 minutes

Docker is an incredibly useful tool to build prototypes of Linux hosts and applications.

You can easily build a network of servers inside a single virtual machine, with each server represented by a docker container. Clients can access the services on the same IP address but on different ports.

In this post I'd like to talk about a common prototype case in WebRTC platforms: a WebSocket server. This will be a node.js server and will run inside a Docker container (hosted by an Ubuntu Trusty VM).


The server logic can be as complex as you like, but since that's not the point of this post I'll keep it as simple as the server example in the node.js websocket module.

The WebSocket server will listen on port 8080, accept incoming connections, send back "something" upon client connection, and log the content of the messages from the clients.

We can assume all the files in this article live in the same folder, and that we've cd'd into it. The server logic is in a file named 'server.js'.
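A minimal 'server.js' along these lines, adapted from the example in the node.js ws module's documentation (the exact code from the original post isn't shown here), could look like:

```javascript
// server.js -- minimal WebSocket server using the 'ws' module
var WebSocketServer = require('ws').Server;

// Listen for WebSocket connections on port 8080
var wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function (ws) {
  // Log the content of every message received from a client
  ws.on('message', function (message) {
    console.log('received: %s', message);
  });

  // Send back "something" upon client connection
  ws.send('something');
});
```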

As explained in this interesting post from Ogi, you can find docker images with node.js all set up and ready to use, but the purpose of this post is to go a level deeper and build our own image.

Let's create a Dockerfile like this:
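The Dockerfile itself was embedded as a gist in the original post. A plausible reconstruction (the base image, package names and layout are assumptions on my part, arranged so that the line numbers discussed below match) is:

```dockerfile
FROM ubuntu:14.04

# Make globally installed npm modules resolvable by require()
ENV NODE_PATH /usr/lib/node_modules

# Install node.js and npm from the Ubuntu repositories
RUN apt-get update
RUN apt-get install -y nodejs npm

# On Ubuntu the executable is named 'nodejs';
# symlink it to the path expected by most
# node.js scripts
RUN ln -s /usr/bin/nodejs /usr/bin/node

# Install the node.js WebSocket module
RUN npm install -g ws

EXPOSE 8080

ENTRYPOINT ["/usr/bin/node", "/root/server.js"]
```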
Even if you're not familiar with Dockerfiles, I'm sure you'll find this self-explanatory. The tricky bits are on line 13, where we symlink the nodejs executable to the expected '/usr/bin/node' path (see here why), and line 16, where we install the node.js ws module via the npm package manager.

Line 18 tells docker which port this container is expected to receive connections on.

Line 20, the ENTRYPOINT definition, tells docker which command to execute when the container runs.

(Remember that a docker container will run as long as there's a running command in foreground, and will exit otherwise.)

From inside the same folder, we can build our container image with:

docker build -t gvacca/nodejs_ws .

'gvacca' is my username, and 'nodejs_ws' is an arbitrary name for this container. Note the '.', which tells docker where to find the Dockerfile. You've probably noticed I've run 'docker build' without 'sudo': for convenience I've added my user to the 'docker' group.

The command above, when run for the first time, generates about 1K lines of output; you can find an example in this gist.

I can see the image is available:

gvacca@my_vm:/home/gvacca/docker/nodejs_ws$ docker images|grep nodejs_ws
gvacca/nodejs_ws            latest              332dae6a34f1        4 minutes ago       493.1 MB

Time to run the container:

docker run -d -p 8080:8080 -v $PWD:/root gvacca/nodejs_ws

This is telling docker a few things:

1. Run the container in daemonized mode (-d)
2. Map port 8080 on the host to port 8080 in the container (and yes, they can be different)
3. Create a VOLUME, i.e. a mapping between a folder on the host and a folder in the container (this is handy because it allows you to change files without rebuilding the image)
4. Use the 'gvacca/nodejs_ws' image.
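
As an aside on point 2, mapping different ports is just a matter of changing the left-hand side of '-p'. For example, to expose the container's port 8080 as port 9090 on the host (port numbers here are purely illustrative):

```shell
docker run -d -p 9090:8080 -v $PWD:/root gvacca/nodejs_ws
# clients would then connect to ws://IP_ADDRESS:9090
```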

The reason I don't need to specify a command to execute is that one is already defined in the Dockerfile via the ENTRYPOINT instruction.

The container is up and running:

gvacca@my_vm:/home/gvacca/docker/nodejs_ws$ docker ps|grep nodejs_ws
6ce3498a67e2        gvacca/nodejs_ws:latest         /usr/bin/node /root/   17 seconds ago      Up 16 seconds       0.0.0.0:8080->8080/tcp   ecstatic_feynman

gvacca@my_vm:/home/gvacca/docker/nodejs_ws$ sudo netstat -nap |grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      18807/docker

Now, if you want a quick test you can use this Chrome extension, which provides a GUI to open a WebSocket connection and send and receive data through it. The URL will be: 'ws://IP_ADDRESS:8080'.
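Alternatively, a few lines in the browser's JavaScript console will do the same job (replace IP_ADDRESS with your VM's address):

```javascript
// Open a connection to the dockerized server
var ws = new WebSocket('ws://IP_ADDRESS:8080');

// Print the "something" greeting sent by the server
ws.onmessage = function (event) {
  console.log('from server: ' + event.data);
};

// Send a test message once the connection is established;
// it will show up in the server's logs
ws.onopen = function () {
  ws.send('hello from the browser');
};
```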

You can also access the server's logs with:

docker logs 6ce3498a67e2

(where 6ce3498a67e2 is the first part of the container's unique identifier, as shown in the 'docker ps' output).

Once you have this in place, which takes much longer to describe than to do, you can start building your WebSocket server logic.




Sunday, 11 January 2015

Easy VPN setup across multiple sites

I recently had a scenario where I needed to connect servers belonging to:

- DigitalOcean, on data center X
- DigitalOcean, on data center Y
- A private data center

and each architecture needed to be replicated on a number of "logical" environments (e.g. 'development', 'testing', 'production').

They needed to "see" each other, in a secure way.

Note that virtual machines on DigitalOcean (they call them 'droplets') can belong to different data centers. When the droplets use the optional private interface there are two things to consider:
1. Traffic inside a data center is potentially visible to any equipment in that data center.
In other words, the fact that two droplets belong to the same customer account doesn't mean that their private traffic is isolated from traffic belonging to droplets on other accounts.
You are responsible for securing that traffic.
2. Droplets in different data centers cannot communicate directly via their private interfaces. This is intuitive but nevertheless important to highlight.

This of course screamed for a VPN, and I directed my attention to OpenVPN. As with the rest of the system and application setup, I wanted to use Puppet to manage the OpenVPN client and server configuration.

A quick search on Puppet Forge brought me to luxflux's openvpn module, which at the time was the most popular. I experimented a little with it and found it worked properly on the Debian virtual machines I was using.

All you need is a little up-front design of the private network addressing, and you'll be fine.

The key to using this module is understanding that each VPN client you declare on the VPN server causes client certificates to be generated on the server machine. You then need to copy them to the VPN client hosts.

On the server side I just needed three elements:

1. Server configuration (some elements omitted/changed on purpose):

openvpn::server { $::fqdn :
  country         => '...',
  province        => '...',
  city            => '...',
  organization    => '...',
  email           => '...',
  local           => '0.0.0.0',
  proto           => 'udp',
  server          => '10.19.0.0 255.255.0.0',
  ipp             => true,
  c2c             => true,
  keepalive       => '10 120',
  management      => true,
  management_ip   => 'localhost',
  management_port => 7543,
}

2. Define the list of clients:

openvpn::client { [
  'vm1',
  'vm2',
  'vm3',
]:
  server => $::fqdn,
}

3. For each client, define the IP address, e.g.:

openvpn::client_specific_config { 'vm1':
  server   => $::fqdn,
  ifconfig => '10.19.28.101 10.19.28.102',
}

On the client side I created a new Puppet module. It installs the certificates and keys generated by the openvpn module on the server host, then instructs the OpenVPN client to connect to the remote VPN server. (This is a fairly generic module, so it could probably be open sourced.)
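As a sketch of that client module (the class name, parameters and file paths are hypothetical, and the certificate locations depend on where you copied the files generated on the server), it boils down to something like:

```puppet
# Hypothetical client-side class: install the files generated by the
# openvpn module on the server host, then configure the OpenVPN client.
class vpn::client (
  $vpn_server,   # public FQDN of the VPN server
  $client_name,  # name declared in openvpn::client on the server
) {
  package { 'openvpn':
    ensure => installed,
  }

  # Certificates and keys previously copied over from the server host
  file { '/etc/openvpn/keys':
    ensure  => directory,
    source  => "puppet:///modules/vpn/${client_name}",
    recurse => true,
    mode    => '0600',
  }

  # Client configuration pointing at the remote VPN server
  file { '/etc/openvpn/client.conf':
    ensure  => file,
    content => template('vpn/client.conf.erb'),
    notify  => Service['openvpn'],
  }

  service { 'openvpn':
    ensure  => running,
    enable  => true,
    require => Package['openvpn'],
  }
}
```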

To make the architecture easier to understand, I assigned a different subnet to each "real" data center. So even though the clients all belong to the same VPN, you can tell at a glance which data center hosts each one.
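For example (client names and addresses here are purely illustrative), the third octet of the VPN address could encode the data center:

```puppet
# 10.19.1.0/24 -> DigitalOcean data center X
# 10.19.2.0/24 -> DigitalOcean data center Y
# 10.19.3.0/24 -> private data center
openvpn::client_specific_config { 'app1-dcx':
  server   => $::fqdn,
  ifconfig => '10.19.1.101 10.19.1.102',
}

openvpn::client_specific_config { 'app1-dcy':
  server   => $::fqdn,
  ifconfig => '10.19.2.101 10.19.2.102',
}
```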

The VPN server sits in one of the data centers, and clients connect securely over the public interface. As an additional layer of security, access to the OpenVPN listening port is firewalled (with iptables), so that only authorized source IP addresses can be used to build the VPN.

During this work I submitted a few patches, which the author kindly merged:

1. Add an optional parameter to enable the client-to-client option
2. Add optional parameters to enable the openvpn management interface
3. Add an option to remove a client-specific configuration file (for example when a client is removed from the VPN).

You can see the commits here.

I hope this is useful; please let me know if you're interested in a specific or more detailed example. 

Please also see my other open source Puppet modules (puppet-asterisk and puppet-kamailio) and feel free to comment, raise issues and contribute!

A previous post about applications deployment with Puppet, related to Kamailio, can be found here.