Sunday 30 November 2014

Bridging WebRTC and SIP with verto


Verto is a newly designed signalling protocol for WebRTC clients interacting with FreeSWITCH. It has an intuitive, JSON-based RPC that allows clients to exchange SDP offers and answers with FreeSWITCH over a WebSocket (Secure WebSockets are supported too). It’s available right now in the stable 1.4 branch (1.4.14 at the time of writing).
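To give a feel for the protocol, here’s a sketch of the kind of JSON-RPC message a client could send to start a call. The field values are placeholders and the parameter names are from memory, so treat this as illustrative and check mod_verto for the authoritative format:

  {
    "jsonrpc": "2.0",
    "method": "verto.invite",
    "id": 4,
    "params": {
      "sessid": "a-client-session-id",
      "sdp": "v=0 (the client's SDP offer goes here)",
      "dialogParams": {
        "destination_number": "3500"
      }
    }
  }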

The feature I like the most is “verto.attach”: when a client has an active bridge on FreeSWITCH and disconnects for any reason (e.g. a tab refresh), upon reconnection FreeSWITCH automatically re-offers the session SDP and allows the client to immediately reattach to the existing session. I have not seen this implemented elsewhere and find it extremely useful. I’ve noticed recently that this does not fully work yet when the media is bypassed (e.g. on a verto-verto call), but Anthony Minessale said on the FreeSWITCH dev mailing list that this feature is still a work in progress, so I’m keeping an eye on it.

Initially I was expecting an integrated solution for endpoint location, i.e. the equivalent of what a SIP registrar does to route a call to the right application server. On second thought I don’t think this is a problem: there are ways to determine which FreeSWITCH instance an endpoint is connected to, and then route a call to it.

Once a verto endpoint hits the dialplan, it can call other verto endpoints or even SIP endpoints/gateways. I’ve also verified that verto clients can join conference rooms inside FreeSWITCH, and this works transparently even for conferences involving SIP endpoints.
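To give an idea, here’s a minimal dialplan sketch; the extension numbers and the gateway name are hypothetical, but ‘bridge’, ‘conference’ and the sofia gateway dial string are the standard FreeSWITCH building blocks:

  <extension name="to-sip-gateway">
    <condition field="destination_number" expression="^9(\d+)$">
      <!-- send the verto-originated call out through a SIP gateway -->
      <action application="bridge" data="sofia/gateway/my_sip_gw/$1"/>
    </condition>
  </extension>

  <extension name="shared-conference">
    <condition field="destination_number" expression="^3000$">
      <!-- verto and SIP callers meet in the same conference room -->
      <action application="conference" data="webdemo@default"/>
    </condition>
  </extension>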

This brings me to what I think is the strongest proposition of verto: interoperability with SIP.

In my opinion WebRTC is an enormous opportunity, and a technology that will revolutionize communications over the Internet. WebRTC has been designed with peer-to-peer in mind, and this is the right way to go; however, if you want to interoperate with VoIP (either directly or as a gateway to the PSTN and GSM networks) you can’t ignore SIP.

I’m not worried about Web-to-Web calls: there are already many solutions out there, and each day there’s something new. Many new signalling protocols are being designed, since WebRTC standardization has, on purpose, not mandated any specific protocol for signalling. Verto is a viable solution when on the other side you have SIP.

I've been experimenting with this for some time now. In August I presented a solution for WebRTC/SIP interoperation, based on Kamailio and FreeSWITCH, at ClueCon. In that case signalling was accomplished with SIP on both sides (using the JsSIP library on the clients). Unsurprisingly, after using verto, SIP on the web browser side looks even more redundant and over-complex, but above all it has a steeper learning curve for web developers; every day this becomes a stronger selling point for new signalling protocols aimed at WebRTC applications.

Web browsers running on laptops can easily manage the multiple media streams incoming from a multi-party call. This is not true for applications running on mobile devices or for gateways: they prefer a single media stream for each “conference call”, for resource optimization and typical lack of support respectively (1). Verto-SIP can be a way to bridge the web/multistream world with the VoIP/monostream one, for example by putting the participants inside a conference room.

When video is involved, though, things as usual get more complicated. WebRTC applications can benefit from managing one video stream per call participant, and a web page can present the various video streams in many ways.

But this can easily become too cumbersome for applications on mobile devices. We need to be able to send one single audio stream and one single video stream. And whilst audio streams are “easy” to mix, how do you do that for video? Do you stream only the video from the active speaker (as FreeSWITCH does by default in conferences), or do you build a single video stream with one video box per participant? The Jitsi Videobridge is a clever solution leveraging a multi-stream approach, but again, how about applications running on mobile devices?

As far as signalling interoperation/federation is concerned, there is an interesting analysis on the Matrix project blog. The experience gathered last Friday while hacking Matrix/SIP interoperability through verto/FreeSWITCH has also highlighted some key points about ICE negotiation: I recommend reading it.

My view is that there are two key points that will allow a solution to be successful in the field of Web-based communications involving “traditional” Internet telephony but also mobile applications:

  1. Interoperability with SIP.
  2. The ability to provide one single media stream per application/gateway, should they require it.


What do you think?


(1) Yes, I know that nothing prevents a SIP client from managing multiple streams, but practically speaking it’s not common.

Saturday 22 November 2014

Don't trust the kernel version on DigitalOcean

This was tricky. I was setting up a VPN connection with a newly built DigitalOcean droplet (standard debian wheezy 64bit in the London1 data center).

The connection is based on openvpn, and the same setup works on many other nodes, but openvpn wasn't starting properly (no sign of the tun interface).

Googling the problem brought me to this reported debian bug, where apparently the problem was associated with an older version of the linux kernel. But I had the latest installed:


ii  linux-image-3.2.0-4-amd64          3.2.63-2+deb7u1               amd64        Linux 3.2 for 64-bit PCs

The reason the problem was still there is that the kernel version actually loaded was different!
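A quick way to spot this kind of mismatch (commands only, the output is system-specific) is to compare the running kernel with the packages installed on disk:

$ uname -r                 # release of the kernel actually running
$ cat /proc/version        # full version string of the running kernel
$ dpkg -l 'linux-image*'   # kernel packages installed on disk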

Apparently:


[...] it seems that grub settings are ignored
on digital ocean, and that you instead have to specify which kernel is
booted from the Digital Ocean control panel, loaded from outside the
droplet [...]

So to see which kernel is actually loaded you have to go to the DigitalOcean control panel, select Settings, then Kernel, choose the latest (or desired) kernel version, and click on "Change".

Mine was set to:
Debian 7.0 x64 vmlinuz-3.2.0-4-amd64 (3.2.54-2)

Changing the kernel and rebooting fixed the issue (see also this about the procedure).
I hope this may save the reader some time.

Thursday 20 November 2014

Deploying Kamailio with Puppet

Almost three years ago I started automating the deployment of all server-side applications with Puppet.

One of the key applications in Truphone's RTC platform (and arguably in most of VoIP/WebRTC platforms today) is Kamailio, so with some proper encapsulation it’s been possible to build a Puppet module dedicated to it.

This is now publicly available on Puppet Forge, a public repository of Puppet modules. We use many of these modules, and they can be incredibly useful to get you up to speed and solve common problems. You can also fork it on github, if you want.

You can use this module in different ways:

  • To quickly deploy a kamailio instance, with default configuration files.
  • As the basis for a custom configuration, where you use the official debian packages but with your own configuration details and business logic.


The first approach is very simple. Imagine you start from an empty VM or base docker container: all you have to do is install puppet, download the trulabs-kamailio module from Puppet Forge and apply the test manifest included in the module.

$ apt-get update
$ apt-get install -y puppet
$ puppet module install trulabs-kamailio

  
Dry-run the deployment with ‘noop’ first, and see what Puppet would do:

$ puppet apply -v /etc/puppet/modules/kamailio/tests/init.pp --noop

And then run it for real, by removing ‘noop’:

$ puppet apply -v /etc/puppet/modules/kamailio/tests/init.pp


Now you can verify that Kamailio is running, listening on the default interfaces, and at the latest stable version (this was tested on debian wheezy, and it’s expected to work on Ubuntu Precise and Trusty too).
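For example, a quick (illustrative) check:

$ service kamailio status            # is the daemon running?
$ kamailio -V                        # which version is installed?
$ netstat -tulnp | grep kamailio     # which sockets is it listening on?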

The second approach (using trulabs-kamailio as a basis for your custom configuration) requires a little bit more work.

Here’s an example, where we want to deploy kamailio as a WebSocket proxy. What changes with respect to the test installation is:

  1. We also need to install ‘kamailio-websocket-modules’ and ‘kamailio-tls-modules’ (as I’m assuming the WebSockets will be Secure).
  2. We need to take care of the kamailio cfg files directly, and not use the vanilla ones (trulabs-kamailio contains the “sample cfg files” already shipped with the official debian packages).


A sensible approach is to create a new Puppet module, e.g. ‘kamailio_ws’, and inside its init.pp instantiate the ‘kamailio’ class with these options:

class kamailio_ws () {
  class { '::kamailio':
    service_manage  => true,
    service_enable  => true,
    service_ensure  => 'running',
    manage_repo     => true,
    with_tls        => true,
    with_websockets => true,
    manage_config   => false,
  }
}

(This is similar to the concept of Composition in OOP.)
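Assuming the kamailio_ws module is in your modulepath, you can then apply it with a one-off manifest:

$ puppet apply -v -e 'include kamailio_ws'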

You can see that ‘manage_config’ is set to false: this tells the kamailio module not to install the vanilla cfg files, because they will be managed elsewhere (inside the kamailio_ws module in this case).

To install your cfg files it’s sufficient to add something like this to the init.pp manifest:

  file { '/etc/kamailio/kamailio.cfg':
    ensure => present,
    source => 'puppet:///modules/kamailio_ws/kamailio.cfg',
    notify => Exec['warning-restart-needed'],
  }

In this way you’re declaring that the kamailio.cfg file (to be installed in /etc/kamailio/kamailio.cfg) has to be taken from the files/ folder inside the kamailio_ws module.

As an additional action, every time that file changes Puppet will notify the user that a restart is necessary, leaving the sysadmin to choose the right moment to do so (the Exec resource is just expected to print something on stdout).
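That Exec resource is not shown here; a minimal sketch of it, with an assumed message, could be:

  exec { 'warning-restart-needed':
    command     => '/bin/echo "kamailio.cfg changed: a restart is needed"',
    logoutput   => true,
    refreshonly => true,   # run only when notified by another resource
  }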

If you want to restart kamailio immediately after kamailio.cfg changes, then you can use:
  file { '/etc/kamailio/kamailio.cfg':
    ensure => present,
    source => 'puppet:///modules/kamailio_ws/kamailio.cfg',
    notify => Service['kamailio'],
  }

Note that if the file doesn't change, for example because you’re just running Puppet to verify the configuration, or because you’re updating another file, then Puppet won’t trigger the action associated with the ‘notify’ keyword.

You can then add all the other files you want to manage (e.g. you may have a defines.cfg file that kamailio.cfg includes) by instantiating other File resources.
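For instance, a hypothetical defines.cfg would follow the same pattern:

  file { '/etc/kamailio/defines.cfg':
    ensure => present,
    source => 'puppet:///modules/kamailio_ws/defines.cfg',
    notify => Exec['warning-restart-needed'],
  }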

A common practice for modules is to have the main class (in this case kamailio_ws) inside the init.pp manifest, and the configuration part (e.g. kamailio.cfg) inside a config.pp manifest included by init.pp.
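A minimal sketch of that layout (class names assumed, following the usual module conventions):

  # manifests/config.pp
  class kamailio_ws::config {
    file { '/etc/kamailio/kamailio.cfg':
      ensure => present,
      source => 'puppet:///modules/kamailio_ws/kamailio.cfg',
      notify => Exec['warning-restart-needed'],
    }
  }

  # manifests/init.pp
  class kamailio_ws {
    # the class { '::kamailio': ... } declaration shown earlier goes here
    include kamailio_ws::config
  }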

What do you think? Would you like to see any particular aspect of automating Kamailio configuration treated here? Will you try this module?


Your opinion is welcome.

Wednesday 19 November 2014

Puppet module support on 2.7.13

Yesterday I noticed that on Ubuntu Precise, with a stock Puppet installation (2.7.13), 'puppet module' (a tool to build and install modules) was not available. I soon found an explanation on superuser: 'puppet module' was released with 2.7.14.

There is a straightforward way to get out of this: upgrade Puppet as recommended by PuppetLabs and go straight to version 3 (unless of course you have constraints that keep you on 2).

These steps will allow you to get Puppet directly from PuppetLabs repos (just choose the target distribution).
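From memory, on Precise the steps boil down to something like this (the release package name is the one PuppetLabs publishes per distribution):

$ wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
$ sudo dpkg -i puppetlabs-release-precise.deb
$ sudo apt-get update
$ sudo apt-get install puppet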

At the moment of writing this procedure would upgrade Puppet on Ubuntu Precise from '2.7.11-1ubuntu2.7' to '3.7.3-1puppetlabs1'.




About ICE negotiation

Disclaimer: I wrote this article in March 2022 while working at Subspace, and the original link is here: https://subspace.com/resources/i...