
Posts

Showing posts from 2014

Testing a PR for Asterisk Puppet module on Docker

Just wanted to share the approach I'm following to test Pull Requests for the trulabs Asterisk Puppet module.

Run a docker container, choosing the base target distribution, e.g. one of:

docker run -i -t debian:wheezy /bin/bash
docker run -i -t ubuntu:precise /bin/bash
docker run -i -t ubuntu:trusty /bin/bash

Inside the Docker container, set up some preconditions:

apt-get update
apt-get install -y git
apt-get install -y puppet
apt-get install -y vim

Clone the git project:

mkdir -p git/trulabs
cd git/trulabs
git clone https://github.com/trulabs/puppet-asterisk.git
cd puppet-asterisk

Create a new branch:

git checkout -b PULL_REQUEST_BRANCH master

Pull in the branch the Pull Request was created from:

git pull https://github.com/CONTRIBUTOR/puppet-asterisk.git PULL_REQUEST_NAME

Build and install the Asterisk Puppet module with the Pull Request's changes:

puppet module build .
puppet module install pkg/trulabs-asterisk-VERSION.tar.gz

...
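The git part of this flow can be dry-run locally, with no network and no Docker, to see how the branch-plus-pull pattern behaves. Here 'upstream' is a stand-in for the trulabs repository; in real use the pull would target the contributor's fork and branch:

```shell
# Local dry run of the PR-testing git flow above. 'upstream' stands in
# for trulabs/puppet-asterisk; the pull normally targets the fork URL.
set -e
rm -rf /tmp/prdemo && mkdir -p /tmp/prdemo && cd /tmp/prdemo
git init -q upstream
git -C upstream -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'initial commit'
git clone -q upstream clone
cd clone
git checkout -q -b PULL_REQUEST_BRANCH   # local branch to test the PR on
git -c user.name=demo -c user.email=demo@example.com \
    pull -q ../upstream HEAD             # stands in for the fork URL + branch
git rev-parse --abbrev-ref HEAD          # prints: PULL_REQUEST_BRANCH
```

The point is that the PR's commits land on a throwaway branch, so the module can be built and installed from it without touching master.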

Bridging WebRTC and SIP with verto

Verto is a newly designed signalling protocol for WebRTC clients interacting with FreeSWITCH. It has an intuitive, JSON-based RPC which allows clients to exchange SDP offers and answers with FreeSWITCH over a WebSocket (Secure WebSockets are also supported). It's available right now in the 1.4 stable version (1.4.14 at the time of writing). The feature I like the most is "verto.attach": when a client has an active bridge on FreeSWITCH and disconnects for any reason (e.g. a tab refresh), upon reconnection FreeSWITCH automatically re-offers the session SDP and allows the client to immediately reattach to the existing session. I have not seen this implemented elsewhere and find it extremely useful. I've noticed recently that this does not fully work yet when the media is bypassed (e.g. on a verto-verto call), but Anthony Minessale said on the FreeSWITCH dev mailing list that this feature is still a work in progress, so I'm keeping an eye on it. Initially I was ex...

Don't trust the kernel version on DigitalOcean

This was tricky. I was setting up a VPN connection with a newly built DigitalOcean droplet (standard debian wheezy 64bit in the London1 data center). The connection is based on openvpn, and it's the same on many other nodes, but openvpn wasn't starting properly (no sign of the tun interface). Googling the problem brought me to this reported debian bug, where apparently the problem was associated with an older version of the linux kernel. But I had the latest installed:

ii  linux-image-3.2.0-4-amd64          3.2.63-2+deb7u1               amd64        Linux 3.2 for 64-bit PCs

The problem persisted because the kernel version actually loaded was a different one! Apparently [...] it seems that grub settings are ignored on digital ocean, and that you instead have to specify which kernel is booted from the Digital Ocean control panel, loaded from outside the droplet [...] So to see w...
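A quick way to spot this kind of mismatch on a Debian-based droplet is to compare the running kernel with what's installed on disk:

```shell
# The kernel actually loaded at boot (on DigitalOcean, selected in the
# control panel, outside the droplet):
uname -r
# The kernel package(s) installed inside the droplet; if these disagree
# with uname -r, the hypervisor booted a different kernel than apt installed.
dpkg -l 'linux-image*' 2>/dev/null | grep '^ii' || true
```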

Deploying Kamailio with Puppet

Almost three years ago I started automating the deployment of all server-side applications with Puppet. One of the key applications in Truphone's RTC platform (and arguably in most VoIP/WebRTC platforms today) is Kamailio, so with some proper encapsulation it's been possible to build a Puppet module dedicated to it. This is now publicly available on PuppetForge. Puppet Forge is a public repository of Puppet modules. We use many of them, and they can be incredibly useful to get you up to speed and solve common problems. You can also fork it on github, if you want. You can use this module in different ways:

- To quickly deploy a kamailio instance, with default configuration files.
- As the basis for a custom configuration, where you use the official debian packages but with your own configuration details and business logic.

The first approach is very simple. Imagine you start from an empty VM or base docker container: all you have to do is install pup...
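As a sketch of the first approach, a node manifest can be as small as this (the class name is assumed from the module's name on Puppet Forge; treat it as a minimal, unverified example rather than the module's documented interface):

```puppet
# Hypothetical minimal manifest: install and run Kamailio with the
# module's default configuration files.
include ::kamailio
```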

Puppet module support on 2.7.13

Yesterday I noticed that on Ubuntu Precise, with a stock Puppet installation (2.7.13), 'puppet module' (a tool to build and install modules) was not available. I soon found an explanation on superuser: 'puppet module' was released with 2.7.14. There is a straightforward way out of this: upgrade Puppet as recommended by PuppetLabs and go straight to version 3 (unless of course you have constraints to stay on 2). These steps will let you get Puppet directly from the PuppetLabs repos (just choose the target distribution). At the time of writing, this procedure would upgrade Puppet on Ubuntu Precise from '2.7.11-1ubuntu2.7' to '3.7.3-1puppetlabs1'.

Hacking our way through Astricon

This year I spoke at Astricon for Truphone Labs (you can see my slides here if you're interested). The week before Astricon I was invited to try out respoke, a solution that allows you to build a WebRTC-based service. They provide you with a client JavaScript library, and you need a server account (with an app ID and secret key) to connect your application to the respoke server and allow clients to communicate with each other. This is an intuitive approach. As a service developer, you pay for the server usage, and you do so depending on how many concurrent clients you want. I got a testing account and started trying out the JS library. The process of building a new application was very straightforward, and the docs guided me towards building a simple app to make audio calls, and then video calls as well. Soon I started thinking: respoke is from Digium, right, and Digium develops Asterisk. How is it possible that respoke and Asterisk cannot interconnect? W...

Removing all unused docker containers and images

docker containers are very easy to build, and the ability to re-use them is extremely handy. They do require a lot of disk space though, and if, like me, you need to reclaim some of that space, you can delete the containers that are not running with this command:

docker rm $(docker ps -a -q)

'docker ps -a -q' lists all the containers, but 'docker rm' won't destroy running containers, so the command above removes only the stopped ones. You still have the images though (visible with 'docker images'), and each one can be relatively big (> 400MB). I tend not to tag images, so in general I'm interested in the 'latest' ones, or the official ones (like ubuntu:14.04, where ubuntu is the repo and 14.04 is the tag). Even though selecting the "untagged" ones does the job for me, I can clean up the undesired images with:

docker rmi $(docker images -q --filter "dangling=true")

I've seen other commands just relying on th...

Accessing the full P-Asserted-Identity header from FreeSWITCH

I hope this can save the reader some time. If you need to read the entire content of the P-Asserted-Identity header of an incoming INVITE, be aware that you should change the sofia profile by adding a param like:

<param name="p-asserted-id-parse" value="verbatim"/>

FreeSWITCH will populate the variable accordingly and make it available with commands like (e.g. with lua):

PAID = session:getVariable("sip_P-Asserted-Identity")

If you don't add this parameter, you'll get the default behaviour, which is to fill the variable with just the username part of the P-Asserted-Identity URI. The possible values are "default", "user-only", "user-domain" and "verbatim", which I think are self-explanatory. A recent reference here.

Comparing Configuration Management Tools The Hard Way

I've been using Puppet for more than 2 years now, and have written almost 50 custom modules, for about 4K lines of code. The more I work with it, the easier it is to find a 3rd party module to solve a problem for me, so I can focus on very specific needs. I rarely wrote any Ruby snippets, and basically relied on Puppet's DSL. A couple of times I submitted a pull request for a 3rd party module and got my changes merged in. I started using Puppet for a very specific reason: there was already expertise in the company, and most of the infrastructure pre-requirements were deployed using it (albeit in Master-Slave mode, while I favour the Standalone approach). All this said, I can't ignore what's going on in the Devops world; many other tools are being developed to automate configuration management, with the aim of being 1. Extremely easy to use 2. Reliable 3. Scalable. In my hunt for comparisons I've stumbled upon this article from Ryan Lane...

Build a test bed with FreeSWITCH and PJSUA (with SILK)

Build and run FreeSWITCH. This is based on debian wheezy, and uses master FreeSWITCH. You can switch to stable when cloning. In my experience this is the quickest way to get to a working, vanilla setup that you can use to automate tests with PJSUA (the PJSIP command-line client).

mkdir git
cd git
# To get master
git clone https://stash.freeswitch.org/scm/fs/freeswitch.git
cd freeswitch
./bootstrap.sh -j
# If changes are needed to the list of modules to be compiled
vi modules.conf
./configure --enable-core-pgsql-support
make
sudo make install
# Launch FS (in foreground)
sudo /usr/bin/freeswitch

You can find here the process for building it from source recommended by the FreeSWITCH team. In /etc/freeswitch/vars.xml you may want to change the default password in:

<X-PRE-PROCESS cmd="set" data="default_password=XXXXXXXX"/>

If you're enabling SILK, add it to the list in modules.conf.xml:

<load module="mod_silk"/>

and add it to global_codec_prefs ...

Favourite alias of the week: 'dico'

diff provides one interesting option, '-y':

       -y, --side-by-side
              output in two columns

If you combine it with '--suppress-common-lines':

       --suppress-common-lines
              do not output common lines

you get a very readable diff in two columns, where only the differing lines are displayed. My alias is:

alias dico='diff -y --suppress-common-lines'
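A quick demonstration (the file names are throwaway examples):

```shell
# Two small files differing in a single line:
printf 'alpha\nbeta\ngamma\n' > /tmp/dico_a.txt
printf 'alpha\nBETA\ngamma\n' > /tmp/dico_b.txt
# 'dico' output: only the changed line, side by side.
# (diff exits non-zero when the files differ, hence the '|| true')
diff -y --suppress-common-lines /tmp/dico_a.txt /tmp/dico_b.txt || true
```

The two unchanged lines are suppressed entirely; only 'beta | BETA' shows up, in two columns.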

Setting DEBEMAIL and DEBFULLNAME when using dch

When using dch with -i or -v, the tool will try to infer the maintainer's name and email address. The gist of this short post is: check the values of the DEBEMAIL and DEBFULLNAME environment variables before running the build. And of course you can set them with a simple export command. From the dch man page: If either --increment or --newversion is used, the name and email for the new version will be determined as follows. If the environment variable DEBFULLNAME is set, this will be used for the maintainer full name; if not, then NAME will be checked. If the environment variable DEBEMAIL is set, this will be used for the email address. If this variable has the form "name <email>", then the maintainer name will also be taken from here if neither DEBFULLNAME nor NAME is set. If this variable is not set, the same test is performed on the environment variable EMAIL. Next, if the full nam...
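For example (the name and address here are obviously placeholders):

```shell
# Set the maintainer identity that dch will pick up:
export DEBFULLNAME="Jane Doe"
export DEBEMAIL="jane.doe@example.com"
# Now 'dch -i' (or 'dch -v <version>') will stamp new changelog entries
# with this identity, e.g.:
#   dch -i "Fix init script race"
echo "$DEBFULLNAME <$DEBEMAIL>"   # prints: Jane Doe <jane.doe@example.com>
```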

My wish list for Beginning Puppet

A few months ago a question on the very useful 'ask puppet' site attracted my attention. The authors of Beginning Puppet were asking for feedback about the desired content. Since my answer has had some success, I'd like to report it here. These are the things I would have liked to see as soon as I started experimenting with Puppet (in no specific order - this is mainly a brain dump):

- What a catalogue is and how Puppet builds it
- Recommended file structure for Puppet manifests
- How to divide your infrastructure into environments
- A clear indication that Puppet doesn't guarantee an "order of execution", and how to define dependencies between resources and classes (i.e. the "arrow" syntax to ensure that a class is always applied before/after another one)
- Before ever writing a module, how to test it (and as a consequence, Continuous Integration for Puppet modules)
- How to publish a module on github or puppetforge
- A "build-up example...
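On the "arrow" syntax point, a tiny illustration (the class names here are made up):

```puppet
# Ensure the 'repo_setup' class is always applied before 'kamailio':
class { 'repo_setup': }
-> class { 'kamailio': }

# Equivalent, with resource references, if both classes are declared elsewhere:
# Class['repo_setup'] -> Class['kamailio']
```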

Basic Docker commands

Just a list of docker commands that I've found quite useful so far.

List the containers available locally:
docker ps -a

Just the container IDs:
docker ps -a -q

Filter the running containers:
docker ps -a | grep Up
or simply:
docker ps -q

Build an image:
docker build -t <user>/<image name> .

Run a container from an image, and execute a bash shell:
docker run -i -t <user>/<image name> /bin/bash

Run a container from an image, in background, linking ports:
docker run -d -p <host port>:<container port> <user>/<image name>
(You can use -p multiple times, e.g. when the container exposes more than one port)

Show the logs of a running container:
docker logs <container id>

Stop/start/restart a container:
docker stop/start/restart <container id>

UPDATE - Stop all running containers with a single command:
docker ps -a | grep Up | awk '{ print $1 }' | xargs docker stop

UPDATE - The reason why I'm ...
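The stop-all pipeline works by grabbing the first column (the container ID) of every line whose STATUS reads 'Up'. Simulated here on canned 'docker ps'-style output, so the text processing can be seen in isolation:

```shell
# Fake 'docker ps' output: one running container, one exited.
# grep keeps the 'Up' line, awk extracts its first column (the ID),
# which is what xargs would hand to 'docker stop'.
printf 'CONTAINER ID  IMAGE   STATUS\nabc123  ubuntu  Up 2 hours\ndef456  debian  Exited (0)\n' \
  | grep Up | awk '{ print $1 }'
# prints: abc123
```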

Docker on Ubuntu: when apt-get update fails

I've been spending some time with Docker, and since digitalocean provides Ubuntu images with Docker already configured, setting up a new droplet and a few containers with my apps has become a matter of minutes. One of the first commands you want to run in your Dockerfile is certainly 'apt-get update'. I'm writing this hoping to save the reader some precious time: if 'apt-get update' fails, you may want to ensure that docker's DNS configuration is complete (and the servers are accessible). In my case this meant simply adding the Google DNS servers in /etc/default/docker:

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

and restarting docker with:

sudo service docker restart

This StackOverflow question is related.

Praise of I Don't Know

Answering "I don't know" (hereby referred to as IDK) is not an indication of ignorance. On the contrary, it may indicate that you're not willing to make wrong assumptions or provide inaccurate information. I think we've been educated to believe that whatever question we're asked, we must answer. This is probably true in a traditional school environment, where students are expected to show successful memorization of notions. In those cases "I don't know" means "I wasn't listening to the lesson" or "I didn't study". There are other contexts though, where IDK is not only the most accurate answer, but probably the most useful one. Imagine you're undergoing surgery, but you can hear what the surgeons say. One is a doctor fresh from university, at their first operation, and they will perform it. The expert surgeon asks a question before the rookie can proceed. There's probably 'the ri...