Friday 18 December 2015

TADHack mini Paris

"I’ve been following TADHack and its related events for some time, and finally this month I got the opportunity to attend TADHack-mini Paris. Participants can join from remote too, but the personal full immersion is something different (even, ironically, when the topic is Real Time Communications, and more in particular WebRTC and Telecom APIs.
We met in central Paris [...] "
This is the beginning of the behind-the-scenes story about my TADHack participation. You can read my full article here.

Wednesday 16 December 2015

Speed up testing Kamailio routing

I was very happy to see the news of the release of a new Kamailio module, authored by Victor Seva.
CFGT can be used to test call scenarios and see what routing logic was triggered in Kamailio.
Test calls need to be marked with a specific, configurable Call-ID pattern ('callid_prefix').
A JSON report is generated, with the possibility of choosing which variables to dump into it.
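
For example, enabling it in kamailio.cfg is roughly this (a sketch: only the 'callid_prefix' parameter name comes from the announcement; the prefix value here is arbitrary):

loadmodule "cfgt.so"
modparam("cfgt", "callid_prefix", "TEST-")
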
This is going to greatly simplify testing, while potentially keeping logging to a minimum. Highly recommended.

Thursday 26 November 2015

Docker and Puppet for Continuous Integration

This is a topic I really care about. Please take a look at the slides (they are quite verbose) used in a seminar at a local developers group:




Thursday 15 October 2015

Building git 2.6 and enabling TLS 1.2 on CentOS 7

There are scenarios where TLS 1.2 is not just enabled, but is the only version accepted.
In these cases many clients fail to connect over HTTPS.
I needed to be able to run 'git clone https://...' on CentOS 7; since it was failing and I spent some time on a workaround, I'm sharing it here.

The system is a CentOS 7 host on DigitalOcean, with kernel


Linux 3.10.0-123.8.1.el7.x86_64

git is 1.8.3, the stock version
nss is 3.19.1-5.el7_1

If I do something like

curl  --tlsv1.2 https://freeswitch.org

the connection is successful, but a command like

GIT_CURL_VERBOSE=1 git clone https://freeswitch.org/stash/scm/fs/freeswitch.git


was giving a connection error with this code:

NSS error -12190 (SSL_ERROR_PROTOCOL_VERSION_ALERT)
(freeswitch.org only accepts TLSv1.2).


Long story short, I read somewhere that git 2.6 had support for configuring TLSv1.2, and I downloaded the source code of git 2.6.0 from https://www.kernel.org/pub/software/scm/git/
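
The build itself is the usual routine, roughly this (a sketch, run as root; the dependency list and the install prefix are assumptions):

yum install -y gcc make curl-devel expat-devel gettext-devel openssl-devel zlib-devel perl-ExtUtils-MakeMaker
tar xzf git-2.6.0.tar.gz
cd git-2.6.0
make prefix=/usr/local all
make prefix=/usr/local install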

Once built and installed, I added this to my .gitconfig:

[http] 
sslVersion = tlsv1.2

but no cigar.

So I dug into the code and commented out a libcurl version check in http.c (I'm commenting out the #if - #endif):

  //GV#if LIBCURL_VERSION_NUM >= 0x072200
          { "tlsv1.0", CURL_SSLVERSION_TLSv1_0 },
          { "tlsv1.1", CURL_SSLVERSION_TLSv1_1 },
          { "tlsv1.2", CURL_SSLVERSION_TLSv1_2 },
  //GV#endif



Rebuilt and reinstalled, and this time it worked fine.



Friday 19 June 2015

Unit testing - because Puppet is worth it

The other day I was browsing the slides of "Continuous Deployment with Jenkins", from PuppetLabs. One sentence in particular I found relevant for what I was doing, and important in general:

Puppet manifests are code too.

To be honest, I don't think I need to sell this very hard, so I'll proceed to a practical consequence: unit testing for Puppet modules. Unsurprisingly, there's an app for that: rspec-puppet.
At least this is what I've been using for some time, and I find it very useful and easy to use. I've even created some Jenkins jobs just to unit test Puppet modules.

You can find a tutorial for rspec-puppet here. Feel free to leave this article, read the tutorial, experiment a little and come back later.

What I wanted to share is some tricks/settings that I had to use, which I haven't found in one single place so far.

As you can see in the tutorial, rspec-puppet generates a directory skeleton for you (with the command 'rspec-puppet init'), to be populated with the tests for your module; then you just need to run 'rake spec' to have the unit tests run.
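
A minimal spec then looks roughly like this (a sketch: 'mymodule' and the 'ntp' package are made up):

# spec/classes/mymodule_spec.rb
require 'spec_helper'

describe 'mymodule' do
  it { should compile }
  it { should contain_package('ntp').with_ensure('present') }
end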

What I noticed though was that 'rake spec' didn't quite work, or didn't work as expected, and eventually I ended up installing these dependencies (alas, with reference to a CentOS 7 host):

    package { [
      'bundle',
      'puppetlabs_spec_helper',
      'puppet-lint',
      'rake',
      'rspec-puppet',
    ]:
      ensure   => present,
      provider => 'gem',
    }

Then I found a better Rakefile (although I can't remember the origin; it must come from an official Puppet Forge module. If you recognize it, give me a shout and I'll give full credit).
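
A typical Rakefile of that kind looks roughly like this (a sketch: the exact lint configuration is an assumption):

require 'puppetlabs_spec_helper/rake_tasks'
require 'puppet-lint/tasks/puppet-lint'

# don't fail the build on lines longer than 80 characters
PuppetLint.configuration.send('disable_80chars')

desc 'Run lint and spec tests'
task :test => [:lint, :spec]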



The last important bit was the .fixtures.yml file, which lets you refer to third-party modules required by the module under test.
Here's an example:
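
(In this sketch the module name and the paths are placeholders.)

fixtures:
  symlinks:
    mymodule: "#{source_dir}"
    stdlib: "/etc/puppet/modules/stdlib"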



which basically says: "You can find mymodule in this directory, and please use stdlib from this other directory". In fact, for stdlib you should not use the local path (it implies that stdlib is already installed, which somewhat defeats the point of unit testing) but the git URL. Since that installs stdlib from git at every run, I preferred using a host with stdlib installed and referring to the local path instead. Not perfect, but handy.

Only then could I use 'rake spec' with satisfaction.

I hope you find this useful, and if you have any type of feedback please don't hesitate to add a comment.

Tuesday 16 June 2015

git tricks - get latest tag and its distance from HEAD

Problem: "I want to get the git tag of the current project, but only if it's associated to the latest commit. If not, I want to know what's the current commit hash."

In order to achieve this I was using something like:
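
(A sketch of the idea, not the exact snippet:)

git describe --tags --exact-match 2>/dev/null || git rev-parse --short HEAD
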
This works, but a drawback is that I see either the tag or the latest commit, never both.
So a coworker pointed me to this:

git describe --tags --always

which I didn't know about, and it's just great. It returns a string with this format:

TAG-N-gSHA

where:

TAG is the most recent tag.
N is the number of commits from the TAG.
'g' is just a prefix (it stands for "git").
SHA is the abbreviated hash of the latest commit (HEAD).

If the latest commit is also pointed to by the tag, then it just returns TAG.
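
For example (the tag and hash here are made up):

$ git describe --tags --always
v1.4.2-12-g3b79a1c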

'git describe' is explained here.

Friday 6 February 2015

WebSockets over Node.js: from Plain to Secure

In a previous post I shared my experiments with node.js as a WebSocket server. This is quite useful for people working on WebRTC prototypes and familiar with node.js.

Some readers may have noticed that I was using plain WebSockets ('ws://' URLs). It's recommended to use Secure WebSockets instead ('wss://' URLs), so I thought of playing with the 'ws' node.js module and "adding TLS".

On github there's an example in this direction (see below), but I must admit I didn't understand some implications at first.

I thought the instantiation of an HTTPS server was just coincidental and meant to provide the web pages and scripts in the example, and that the configuration of 'ws' with 'ssl: true' and certificates was independent.

It turns out it's not. The best description of my understanding is that you need an HTTPS server to "decorate" the WebSocket module. The HTTPS server will take care of connection instantiation and encryption, while the WebSocket module, "listening" on the same port, will take over when the Upgrade request [1] from the client is received.

Here's a snippet of the solution I've adopted, based on the example above:
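
(In this sketch the certificate paths, the port and the HTTPS request handler are placeholders.)

var fs    = require('fs');
var https = require('https');
var WebSocketServer = require('ws').Server;

// plain WebSocket version:
// var wss = new WebSocketServer({ port: 8080 });

// secure version: the HTTPS server handles TLS,
// and the WebSocket server attaches to it
var httpsServer = https.createServer({
  key:  fs.readFileSync('/path/to/key.pem'),
  cert: fs.readFileSync('/path/to/cert.pem')
}, function (req, res) {
  res.writeHead(200);
  res.end();
});

var wss = new WebSocketServer({ server: httpsServer });

wss.on('connection', function (ws) {
  ws.on('message', function (message) {
    console.log('received: %s', message);
  });
  ws.send('something');
});

httpsServer.listen(8080);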


You can see that the plain WebSocket version (commented out) passed a configuration object to the WebSocket server constructor (in fact, you just need to pass '{ port: 8080 }'), while the secure version passes the entire HTTPS server object to the constructor.

Something similar (using express) has been described in this post.

Note that if you're using self-signed certificates, you should first access the site and accept the security exception, or the client won't be happy.

A useful tool to debug WebSockets comes as a Chrome extension: Simple WebSocket Client.

[1] The Upgrade request looks like this (from RFC 6455):

GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Origin: http://example.com
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13

Wednesday 4 February 2015

FOSDEM 2015 - part II

In "FOSDEM 2015 - part I" I made an overview of this Conference and a few comments on the talk "Python, WebRTC and you", where the a peer-to-peer WebRTC service was also described.

Fast forward to the Lightning Talks on Day 1, in the afternoon: Emil Ivov on Jitsi VideoBridge.

In this case we’re considering video-conference scenarios, possibly with many participants, and the ability to add some presentation features, like highlighting the current speaker.

The architecture behind Jitsi VideoBridge aims to avoid a centralized mixer (MCU), but at the same time prevent the complexities of a Full Mesh approach.

Enter the SFU (Selective Forwarding Unit) concept: the server component is “simply” a router of media streams among conference participants. An IETF draft describes the behaviour expected of an SFU. Each participant receives one stream for each other user: it’s then up to the receiving client to take care of stream presentation.

The SFU doesn't need to decode, mix, and then re-encode all the streams, and for this reason it can use just a small amount of resources, and scales well.

The critical point, as Emil is aware too, is dealing with mobile devices: low available bandwidth and relatively limited computational resources make it prohibitive for applications running on such devices to handle many different streams, some of which potentially carry high-quality video.

What can help in this case is the concept of “simulcast”: compatible applications (like Chrome) are able to generate streams at different quality levels (resolution and framerate) at the same time. There are still some difficulties in using simulcast, as witnessed in this post. Different topologies involved in WebRTC networks are also well described in this other article.

Anyway, the receiver of such streams can then instruct the server about which quality it is capable of receiving. Apps on mobile devices will get the low-quality streams; browsers on desktops will request the high-quality ones.

This is a clever solution, but I’d still like to experiment and see examples of mobile apps managing more than 2-3 incoming streams in an acceptable way, regardless of the optimized bandwidth usage. And if you have any references, please feel free to post a comment!

Tuesday 3 February 2015

FOSDEM 2015 - part I

It's that time of the year when experts of Open Source software meet in frosty Brussels for two intense days of talks, conversations, and a good quantity of beer: FOSDEM.

Being just 2 hours away from London by Eurostar, Brussels offers a very high relevance/effort ratio. Additionally, the event is free and held over the weekend, so it has a low impact on normal work activity (although it does have some on your family time, but you can't have everything, unless your kids are big enough to join you, which I'd recommend).

Right after settling down in a lovely flat (AirBnB is a great choice) with three friends, I started planning the sessions to follow. There are about twenty parallel sessions, so you must cherry-pick. For Day 1 I was oriented towards the Configuration Management and Lightning Talks tracks. Day 2 had Virtualisation and Testing And Automation on my radar.

As it turns out, FOSDEM is such a success that the rooms fill up incredibly fast and many even have long queues outside. That was definitely the case for the Configuration Management devroom, which I had to skip in favour of Infrastructure As A Service.

So I got a basic understanding of Apache Mesos, for resource management of distributed systems, and GlusterFS, a distributed file system and general-purpose storage platform.
Apache Mesos will come back to my attention on Day 2, when I'll learn more about CoreOS Rocket for container orchestration (more on this in a following post).

This year there wasn't any track dedicated to RTC (Real Time Communication), but nevertheless the topic was around a lot (not to mention the Internet Of Things track, which I gather has been very successful).


The Python devroom, though, offered an interesting cross between Python and WebRTC, with Saúl's "Python, WebRTC and you" (slides here). As the reader may already know, WebRTC doesn't mandate a signalling protocol, so this talk was a good opportunity to show how conceptually easy it is to interact with WebRTC's APIs (using rtcninja as a wrapper), while at the same time managing signalling between browsers with a combination of Python 3 and asyncio.

This talk alone, from a teaching perspective, is very valuable. You can see:
- The power of WebRTC: as long as the browser is compatible, you're just a few dozen lines of code away from a new application.
- You can use whatever web server you prefer: serving web content and signalling are just tools around the RTC.
- Unless you need some complex feature, you can leverage pure P2P and remove the server infrastructure from the equation.
- WebSocket support is increasing [1], making the life of WebRTC architects easier.

Of course things get more complicated for multiparty conferences, but there's something about this in a following post.

[1] The popular node.js platform has an ad-hoc module for WebSocket support too. I wrote about it and its usage inside a docker container in a previous post.


Saturday 24 January 2015

Dockerize a node.js WebSocket server in 5 minutes

Docker is an incredibly useful tool to build prototypes of Linux hosts and applications.

You can easily build a network of servers inside a single virtual machine, with each server represented by a docker container. Clients can access the services on the same IP address, but different ports.

In this post I'd like to talk about a common prototype case in WebRTC platforms: a WebSocket server. This will be a node.js server and will run inside a Docker container (hosted by an Ubuntu Trusty VM).


The server logic can be as complex as you can imagine, but since it's not the point of this post I'll keep it as simple as the server example in the node.js websocket module:
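
(This is essentially the example from the 'ws' module's documentation, saved as server.js:)

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8080 });

wss.on('connection', function (ws) {
  ws.on('message', function (message) {
    console.log('received: %s', message);
  });
  ws.send('something');
});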

The WebSocket server will listen on port 8080, accept incoming connections, send back "something" upon client connection, and log the content of the messages from the clients.

We can assume all the files in this article are in the same folder, and that we've cd'd into it. The server logic is in a 'server.js' file.

As explained in this interesting post from Ogi, you can find docker images with node.js all set and ready to be used, but the purpose of this post is to go a level deeper and build our own image.

Let's create a Dockerfile like this:
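
(A sketch: the base image, the package names and the maintainer line are assumptions; the layout keeps the steps referenced below on the same line numbers.)

FROM ubuntu:14.04

MAINTAINER gvacca

# refresh the package index
RUN apt-get update

# install node.js and npm from the Ubuntu repositories
RUN apt-get install -y nodejs npm

# on Ubuntu the executable is installed as 'nodejs':
# symlink it to the path node scripts usually expect
RUN ln -s /usr/bin/nodejs /usr/bin/node

# install the node.js ws module
RUN npm install ws

EXPOSE 8080

ENTRYPOINT ["/usr/bin/node", "/root/server.js"]
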
Even if you're not familiar with Dockerfiles, I'm sure you'll find it self-explanatory. The tricky bits are on line 13, where we symlink the nodejs executable to the expected '/usr/bin/node' (see here why), and line 16, where we install the node.js ws module via the npm package manager.

Line 18 tells docker which port this container is expected to receive connections on.

Line 20, the ENTRYPOINT definition, tells docker which command to execute when the container runs.

(Remember that a docker container keeps running only as long as there's a command running in the foreground, and exits otherwise.)

From inside the same folder, we can build our container image with:

docker build -t gvacca/nodejs_ws .

'gvacca' is my username, and 'nodejs_ws' is an arbitrary name for this image. Note the '.', which tells docker where to find the Dockerfile. You've probably noticed I've run 'docker build' without 'sudo': for practical purposes I've added my user to the 'docker' group.

The command above, when run for the first time, generates about 1K lines of output; you can find an example in this gist.

I can see the image is available:

gvacca@my_vm:/home/gvacca/docker/nodejs_ws$ docker images|grep nodejs_ws
gvacca/nodejs_ws            latest              332dae6a34f1        4 minutes ago       493.1 MB

Time to run the container:

docker run -d -p 8080:8080 -v $PWD:/root gvacca/nodejs_ws

This is telling docker a few things:

1. Run the container in daemonized mode (-d)
2. Map port 8080 on the host to port 8080 on the container (and yes, they can be different)
3. Create a VOLUME, which is a mapping between a folder on the host and a folder on the container (this is handy because it allows you to change files without rebuilding the image)
4. Use the 'gvacca/nodejs_ws' image.

The reason I don't need to specify a command to execute is that it's already defined in the Dockerfile by the ENTRYPOINT instruction.

The container is up and running:

gvacca@my_vm:/home/gvacca/docker/nodejs_ws$ docker ps|grep nodejs_ws
6ce3498a67e2        gvacca/nodejs_ws:latest         /usr/bin/node /root/   17 seconds ago      Up 16 seconds       0.0.0.0:8080->8080/tcp   ecstatic_feynman

gvacca@my_vm:/home/gvacca/docker/nodejs_ws$ sudo netstat -nap |grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      18807/docker

Now, if you want to test quickly, you can use this Chrome extension, which provides a GUI to open a WebSocket connection and send and receive data through it. The URL will be: 'ws://IP_ADDRESS:8080'.
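
If you prefer the command line, the 'wscat' client (installable with 'npm install -g wscat') does the same job:

wscat -c ws://IP_ADDRESS:8080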

You can also access the server's logs with:

docker logs 6ce3498a67e2

(where 6ce3498a67e2 is the first part of the container's unique identifier, as shown in the 'docker ps' output).

Once you have this in place, which takes much longer to describe than to do, you can start building your WebSocket server logic.




Sunday 11 January 2015

Easy VPN setup across multiple sites

I recently had a scenario where I needed to connect servers belonging to:

- DigitalOcean, in data center X
- DigitalOcean, in data center Y
- A private data center

and each architecture needed to be replicated on a number of "logical" environments (e.g. 'development', 'testing', 'production').

They needed to "see" each other, in a secure way.

Note that virtual machines on DigitalOcean (they call them 'droplets') can be located in different data centers. When the droplets use the optional private interface, there are two things to consider:
1. Traffic inside a data center is potentially visible to any equipment in that data center.
In other words, the fact that two droplets belong to the same customer account doesn't mean that their private traffic is isolated from the traffic of droplets on other accounts.
You are responsible for securing that traffic.
2. Droplets in different data centers cannot communicate directly via their private interfaces. This is intuitive but nevertheless important to highlight.

This of course screamed for a VPN, and I directed my attention to openvpn. As with the rest of the system and application setup, I wanted to manage the openvpn client and server configuration with Puppet.

A quick search on Puppet Forge brought me to luxflux's openvpn module, which at that moment was the most popular. I experimented a little with it and found it to work properly on the Debian virtual machines I was using.

You just need to do a bit of design for the private network addressing, and you'll be fine.

The key point in using this module is understanding that each VPN client you declare on the VPN server causes client certificates to be generated on the server machine. You then need to copy them onto the VPN client hosts.

On the server side I just needed three elements:

1. Server configuration (some elements omitted/changed on purpose):

openvpn::server { $::fqdn:
  country         => '...',
  province        => '...',
  city            => '...',
  organization    => '...',
  email           => '...',
  local           => '0.0.0.0',
  proto           => 'udp',
  server          => '10.19.0.0 255.255.0.0',
  ipp             => true,
  c2c             => true,
  keepalive       => '10 120',
  management      => true,
  management_ip   => 'localhost',
  management_port => 7543,
}

2. Define the list of clients:

openvpn::client { [
  'vm1',
  'vm2',
  'vm3',
]:
  server => $::fqdn,
}

3. For each client, define the IP address, e.g.:

openvpn::client_specific_config { 'vm1':
  server   => $::fqdn,
  ifconfig => '10.19.28.101 10.19.28.102',
}

On the client side I created a new Puppet module. It just installs the certificates and keys generated by the openvpn module on the server host, then configures the openvpn client to connect to the remote VPN server. (This is a fairly generic module, so it could probably be open sourced.)

To make the architecture easier to understand, I assigned a different subnet to each "real" data center. So, even if the clients all belong to the same VPN, you can see at a glance which data center hosts each of them.

The VPN server sits in one of the data centers, and clients connect securely over the public interface. An additional level of security was to firewall access (with iptables) to the openvpn listening port, so that only authorized source IP addresses can be used to build the VPN.
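
Something along these lines (the port and the allowed source address here are placeholders):

# accept openvpn traffic only from known peers, drop everything else on that port
iptables -A INPUT -p udp --dport 1194 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p udp --dport 1194 -j DROP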

During this work I submitted a few patches, which the author has kindly accepted to merge:

1. Add an optional parameter to enable the client-to-client option
2. Add optional parameters to enable the openvpn management interface
3. Add an option to remove a client-specific configuration file (for example when a client is removed from the VPN).

You can see the commits here.

I hope this is useful; please let me know if you're interested in a specific or more detailed example. 

Please also see my other open source Puppet modules (puppet-asterisk and puppet-kamailio) and feel free to comment, raise issues and contribute!

A previous post about applications deployment with Puppet, related to Kamailio, can be found here.

About ICE negotiation

Disclaimer: I wrote this article in March 2022 while working with Subspace, and the original link is here: https://subspace.com/resources/i...