Tuesday, 2 December 2014

Testing a PR for Asterisk Puppet module on Docker

Just wanted to share the approach I follow to test Pull Requests for the trulabs Asterisk Puppet module.
Run a docker container, choosing the base target distribution, e.g. one of:

docker run -i -t debian:wheezy /bin/bash
docker run -i -t ubuntu:precise /bin/bash
docker run -i -t ubuntu:trusty /bin/bash

Inside the Docker container, set up some preconditions:

apt-get update
apt-get install -y git
apt-get install -y puppet
apt-get install -y vim

Clone the git project:

mkdir -p git/trulabs
cd git/trulabs
git clone https://github.com/trulabs/puppet-asterisk.git
cd puppet-asterisk

Create a new branch:

git checkout -b PULL_REQUEST_BRANCH master

Pull the branch from the contributor's fork (the project the Pull Request was created from):

git pull https://github.com/CONTRIBUTOR/puppet-asterisk.git PULL_REQUEST_NAME

Build and install the Asterisk Puppet module with the Pull Request's changes applied:

puppet module build .
puppet module install pkg/trulabs-asterisk-VERSION.tar.gz --ignore-dependencies

(You typically need the concat module as a dependency:)

puppet module install puppetlabs-concat

Run puppet apply with the desired test manifest:

puppet apply -v tests/TESTCASE.pp --modulepath modules/:/etc/puppet/modules --show_diff --noop
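The steps above can be chained into a small helper. This is just a sketch that prints the commands to run (it executes nothing itself); CONTRIBUTOR and PULL_REQUEST_NAME are the same placeholders used above:

```shell
# Sketch: print the commands needed to test a PR against trulabs/puppet-asterisk.
# The arguments are placeholders; nothing is executed, the commands are only printed.
pr_test_commands() {
  contributor="$1"
  branch="$2"
  cat <<EOF
git checkout -b "$branch" master
git pull https://github.com/$contributor/puppet-asterisk.git "$branch"
puppet module build .
puppet module install pkg/trulabs-asterisk-VERSION.tar.gz --ignore-dependencies
EOF
}

pr_test_commands CONTRIBUTOR PULL_REQUEST_NAME
```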

Et voilà. (Note that this will use the stock Puppet version - more to follow on safely configuring Puppet 3.7 on any host.)

Sunday, 30 November 2014

Bridging WebRTC and SIP with verto


Verto is a newly designed signalling protocol for WebRTC clients interacting with FreeSWITCH. It has an intuitive, JSON-based RPC which allows clients to exchange SDP offers and answers with FreeSWITCH over a WebSocket (and Secure WebSockets are supported). It’s available right now with the 1.4 stable version (1.4.14 at the moment of writing).

The feature I like the most is “verto.attach”: when a client has an active bridge on FreeSWITCH and disconnects for any reason (e.g. a tab refresh), upon reconnection FreeSWITCH automatically re-offers the session SDP and allows the client to immediately reattach to the existing session. I have not seen this implemented elsewhere and find it extremely useful. I’ve noticed recently that it does not fully work yet when the media is bypassed (e.g. on a verto-verto call), but Anthony Minessale said on the FreeSWITCH dev mailing list that this feature is still a work in progress, so I’m keeping an eye on it.

Initially I was expecting an integrated solution for endpoint localization, i.e. what a SIP registrar does to allow routing a call to the right application server. On second thoughts I don’t think this is a problem: there are ways to determine which FreeSWITCH instance an endpoint is connected to, and then route a call to it.

Once a verto endpoint hits the dialplan, it can call other verto endpoints or even SIP endpoints/gateways. I’ve also verified that verto clients can join conference rooms inside FreeSWITCH, and this is not only possible but can be done for conferences involving SIP endpoints as well, transparently.

This brings me to what I think is the strongest proposition of verto: interoperability with SIP.

In my opinion WebRTC is an enormous opportunity, and a technology that will revolutionize communications over the Internet. WebRTC has been designed with peer-to-peer in mind, and this is the right way to go; however, if you want to interoperate with VoIP (either directly or as a gateway to PSTN and GSM) you can’t ignore SIP.

I’m not worried about Web-to-Web calls: there are already many solutions out there, and each day there’s something new. Many new signalling protocols are being designed, since WebRTC standardization, on purpose, hasn't mandated any specific protocol for signalling. Verto is a viable solution when on the other side you have SIP.

I've been experimenting with this for some time now. In August I presented a solution for WebRTC/SIP interoperation, based on Kamailio and FreeSWITCH, at ClueCon. In that case signalling was accomplished with SIP on both sides (using the JsSIP library on the clients). Unsurprisingly, after using verto, SIP on the web browser side looks even more redundant and over-complex, but most of all it has a steeper learning curve for web developers, and this is becoming a stronger selling point every day for new signalling protocols for WebRTC applications.

Web browsers running on laptops can easily manage multiple media streams incoming from a multi-party call. This is not true for applications running on mobile devices or gateways: they prefer a single media stream per “conference call”, for resource optimization and typical lack of support respectively (1). Verto-SIP can represent a solution to bridge the web/multistream world with the VoIP/monostream one, for example by putting the participants inside a conference room.

When video is involved though, things get as usual more complicated. WebRTC applications can benefit from managing one video stream per call participant, and a web page can present the many video streams in many ways.

But this can easily become too cumbersome for applications on mobile devices. We need to be able to send one single audio stream and video stream. And whilst the audio streams are “easy” to multiplex, how do you do that for video? Do you stream only the video from the active speaker (as FreeSWITCH does by default on conferences), or do you build a video stream with one video box per participant? The Jitsi VideoBridge is a clever solution leveraging a multi-stream approach, but again, how about applications running on mobile devices?

As far as signalling interoperation/federation is concerned, there is an interesting analysis on the Matrix project blog. The experience gathered last Friday when hacking Matrix/SIP interoperability through verto/FreeSWITCH has also shown some key points about ICE negotiation: I recommend reading it.

My view is that there are two key points that will allow a solution to be successful in the field of Web-based communications involving “traditional” Internet telephony but also mobile applications:

  1. Interoperability with SIP.
  2. The ability to provide one single media stream per application/gateway, should they require it.


What do you think?


(1) Yes, I know that nothing prevents a SIP client from managing multiple streams, but practically speaking it’s not common.

Saturday, 22 November 2014

Don't trust the kernel version on DigitalOcean

This was tricky. I was setting up a VPN connection with a newly built DigitalOcean droplet (standard debian wheezy 64bit in the London1 data center).

The connection is based on openvpn, and it's the same on many other nodes, but openvpn wasn't starting properly (no sign of the tun interface).

Googling the problem brought me to this reported debian bug, where apparently the problem was associated with an older version of the linux kernel. But I had the latest installed:


ii  linux-image-3.2.0-4-amd64          3.2.63-2+deb7u1               amd64        Linux 3.2 for 64-bit PCs

The reason the problem was present is that the kernel version actually loaded was different!

Apparently


[...] it seems that grub settings are ignored
on digital ocean, and that you instead have to specify which kernel is
booted from the Digital Ocean control panel, loaded from outside the
droplet [...]

So to see which kernel version is actually loaded, go to the console, select Settings, then Kernel, choose the latest (or desired) kernel version, and click "Change".

Mine was set to:
Debian 7.0 x64 vmlinuz-3.2.0-4-amd64 (3.2.54-2)

Changing and rebooting fixed the issue (see also this about the procedure).
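To confirm which kernel is actually running, as opposed to which package is installed, you can compare the two. A small sketch (the dpkg query is Debian-specific):

```shell
# The kernel actually loaded; on DigitalOcean this may differ from
# the latest linux-image package installed inside the droplet.
running=$(uname -r)
echo "Running kernel: $running"

# On Debian/Ubuntu, compare against the installed package (if dpkg is present):
dpkg -l "linux-image-$running" 2>/dev/null || true
```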
I hope this may save the reader some time.

Thursday, 20 November 2014

Deploying Kamailio with Puppet

Almost three years ago I started automating the deployment of all server-side applications with Puppet.

One of the key applications in Truphone's RTC platform (and arguably in most of VoIP/WebRTC platforms today) is Kamailio, so with some proper encapsulation it’s been possible to build a Puppet module dedicated to it.

This is now publicly available on PuppetForge. Puppet Forge is a public repository of Puppet modules. We use many of them, and they can be incredibly useful to get you up to speed and solve common problems. You can also fork it on github, if you want.

You can use this module in different ways:

  • To quickly deploy a kamailio instance, with default configuration files.
  • As the basis for a custom configuration, where you use the official debian packages but with your own configuration details and business logic.


The first approach is very simple. Imagine you start from an empty VM or base docker container: all you have to do is install puppet, download the trulabs-kamailio module from Puppet Forge and apply the test manifest included in the module.

$ apt-get update
$ apt-get install -y puppet
$ puppet module install trulabs-kamailio

  
Dry-run the deployment with ‘noop’ first, and see what Puppet would do:

$ puppet apply -v /etc/puppet/modules/kamailio/tests/init.pp --noop

And then run it for real, by removing ‘noop’:

$ puppet apply -v /etc/puppet/modules/kamailio/tests/init.pp


Now you can verify that Kamailio is running, listening on the default interfaces, and at the latest stable version (this was tested on debian wheezy, and it’s expected to work on Ubuntu Precise and Trusty too).

The second approach (using trulabs-kamailio as a basis for your custom configuration) requires a little bit more work.

Here’s an example, where we want to deploy kamailio as a WebSocket proxy. What changes with respect to the test installation is:

  1. We also need to install ‘kamailio-websocket-modules’ and ‘kamailio-tls-modules’ (as I’m assuming the WebSockets will be Secure).
  2. We need to manage the kamailio cfg files directly, rather than use the vanilla ones (trulabs-kamailio contains the “sample cfg files” already available with the official debian packages).


A sensible approach is to create a new Puppet module, e.g. ‘kamailio_ws’, and inside its init.pp instantiate the ‘kamailio’ class with these options:

class kamailio_ws () {
  class { '::kamailio':
    service_manage  => true,
    service_enable  => true,
    service_ensure  => 'running',
    manage_repo     => true,
    with_tls        => true,
    with_websockets => true,
    manage_config   => false,
  }
}

(This is similar to the concept of Composition in OOP.)

You can see that ‘manage_config’ is set to false, which is the way of telling the kamailio module not to install the vanilla cfg files, and that those will be managed elsewhere (inside the kamailio_ws module in this case).

To install your cfg files it’s sufficient to add something like this to the init.pp manifest:

  file { '/etc/kamailio/kamailio.cfg':
    ensure => present,
    source => 'puppet:///modules/kamailio_ws/kamailio.cfg',
    notify => Exec['warning-restart-needed'],
  }

In this way you’re declaring that the kamailio.cfg file (to be installed in /etc/kamailio/kamailio.cfg) has to be taken from the files/ folder inside the kamailio_ws module.

As an additional action, every time that file changes, Puppet should notify the user that a restart is necessary (and then leave it to the sysadmin to choose the right moment to do so – the Exec resource is just expected to print something on stdout).

If you want to restart kamailio immediately after kamailio.cfg changes, then you can use:

  file { '/etc/kamailio/kamailio.cfg':
    ensure => present,
    source => 'puppet:///modules/kamailio_ws/kamailio.cfg',
    notify => Service['kamailio'],
  }

Note that if the file doesn't change, for example because you’re just running Puppet to verify the configuration, or updating another file, then Puppet won’t trigger any action associated with the ‘notify’ keyword.

You can then add all the other files you want to manage (e.g. you may have a defines.cfg file that kamailio.cfg includes) by instantiating other File resources.

A common practice for modules is to have the main class (in this case kamailio_ws) inside the init.pp manifest, and the configuration part (e.g. kamailio.cfg) inside a config.pp manifest included by init.pp.

What do you think? Would you like to see any particular aspect of automating Kamailio configuration treated here? Will you try this module?


Your opinion is welcome.

Wednesday, 19 November 2014

Puppet module support on 2.7.13

Yesterday I noticed that on Ubuntu Precise, with a stock Puppet installation (2.7.13), 'puppet module' (a tool to build and install modules) was not available. I soon found an explanation on superuser: 'puppet module' was released with 2.7.14.

There is a straightforward way to get out of this: upgrade Puppet as recommended by PuppetLabs and go straight to version 3 (unless of course you have constraints to stay on 2).

These steps will allow you to get Puppet directly from PuppetLabs repos (just choose the target distribution).

At the moment of writing this procedure would upgrade Puppet on Ubuntu Precise from '2.7.11-1ubuntu2.7' to '3.7.3-1puppetlabs1'.




Tuesday, 28 October 2014

Hacking our way through Astricon

This year I was speaking at Astricon for Truphone Labs (you can see my slides here if you're interested).

The week before Astricon I was invited to try out respoke, a solution that allows you to build a WebRTC-based service.
They provide you with a client JavaScript library, and you need a server account (with an app ID and secret key) to connect your application to the respoke server and allow clients to communicate with each other.
This is an intuitive approach. As a service developer, you pay for the server usage, and you do so depending on how many concurrent clients you want.
I got a testing account, and started trying out the JS library. The process of building a new application was very straightforward, and the docs guided me towards building a simple app to make audio calls, and then video calls as well.

Soon I started thinking: respoke is from Digium, right, and Digium develops Asterisk. How is it possible that respoke and Asterisk cannot interconnect? What I’d like to do is place a call from web, and in some circumstances route it to a SIP client, or a PSTN line, or mobile phone.

It turned out that my expectation was quite justified: 36 hours before the beginning of the Astricon Hackathon, Digium announced chan_respoke, a new module for Asterisk (13) that allows Asterisk to connect as a respoke client and communicate with JS clients.

So that was the good news. The bad news was that we didn't have any time to prepare before the Hackathon, so we had about 8 hours to get up to speed and build something… sexy!
The other service the Astricon Hackathon encouraged us to use was Clarify, which provides APIs to upload audio recordings and is able to detect specific “tag words” in them.

Among the people discussing the formation of a team, the most complete and compelling idea, and one that probably required all 5 people working together, was GrannyCall. You can see some details here, with the list of team members.

GrannyCall was conceived as a system for kids to call their granny (or daddy, mummy, etc.) from a simple web page, and get a score depending on how appropriate and rich their vocabulary was.
For example, we wanted to give a positive score for words like “love” or “cookie”, and perhaps a negative score for… well, you can guess some words that would score badly for a kid talking to his/her granny.
The project was quite ambitious: the originating call would start from a web page built with voxbone’s webrtc library, reach Asterisk over SIP, and then ring the granny on a web page built with the respoke client.

We used an Ubuntu VM from DigitalOcean to host Asterisk and the web servers (nginx) for the two web pages, and an external web server to interconnect to the Clarify APIs for uploading the recordings (with the desired tags).

Asterisk needed to be version 13, and chan_respoke had to be built and configured. The DigitalOcean box was on a public IP, so there was no need for any specific networking.

The “kid to voxbone to asterisk” leg allowed for some preparation, and went smoothly once the web server was built and the client page uploaded.
While Asterisk was being built and configured, we built the granny web page with the respoke library. Again in this case it was quite easy and quick to have a call between two respoke clients, peer to peer, just to test the client application on the browsers.
The tricky part was originating the call from Asterisk to the granny web page, using chan_respoke.
For the sake of testing the connection and media establishment, we made some calls from the respoke client to Asterisk, hitting an announcement and an echo test. That worked almost immediately, and it was great!

Now came the key moment: could we do the full flow (kid – Voxbone – Asterisk – respoke – granny)? In terms of establishing the call, i.e. signalling, that worked too after just a few tweaks to chan_respoke’s configuration. But what about audio?

It turned out that there were some problems in the ICE negotiation between the respoke client and Asterisk: we had audio in one direction only. We were using Chrome at that moment, and moving to Firefox didn’t help, so we thought there could possibly be a bug in the libraries.
Considering the maturity of the libraries, this looked completely understandable, and the respoke guys spent a lot of time helping us investigate the problem and trying to find a proper solution before the submission deadline (this resulted in a patch being applied on the server side over the next hours).

Honestly, I was happy that the Hackathon was scheduled for only a relatively short time, eight hours: had it been any longer, we probably wouldn’t have had dinner or a proper sleep (and the 8 (or 9) hours of jet lag were not particularly helpful!).

At submission time, we didn't have two-way audio. The debugging also ate precious time needed to prepare the presentation to the judges, and this could be the reason why we weren't awarded any prize. Honestly, given the intensity of the effort and the complexity of the project, I was hoping for at least an honorable mention, but I hope we can gather the same team again on another occasion and bring different results!

Jokes aside, it’s been an extremely useful experience. No documentation or remote communication can replace live interaction and working on a proof of concept – in particular if you have a crazy deadline and a body full of caffeine and sugar (and a few hours' sleep in the last 36 hours).

Of course we took some shortcuts, like removing any firewall from the host, using a common linux user authenticated via password rather than SSH keys, editing files in place, etc. We did those things knowing they weren't best practices, but aiming to complete a proof of concept as quickly as possible.

The takeaway from all this is very simple: if you’re developing a new technology, or a new solution oriented to developers, do whatever you can to involve the developers in a productive, challenging way. Hackathons represent a great solution, even if confined within a company, department or team. The excitement and the feedback (and debugging) you'll help to generate will have a tremendous value.

Removing all unused docker containers and images

docker containers are very easy to build, and the ability to re-use them is extremely handy.
They do require a lot of disk space though, and if, like me, you need to retrieve some of that space, you can delete the containers that are not running with this command:

docker rm $(docker ps -a -q)

In fact 'docker ps -a -q' lists the IDs of all containers, but 'docker rm' won't destroy running containers, so the command above does the job.

You still have the images though (visible with 'docker images'), and each one can be relatively big (> 400MB). I tend not to tag images, so in general I'm interested in the 'latest' ones, or the official ones (like ubuntu:14.04, where ubuntu is the repo and 14.04 is the tag).

Anyway, even if selecting the "untagged" ones does the job for me, I can clean up the undesired images with:

docker rmi $(docker images -q --filter "dangling=true")

I've seen other commands relying just on the tag name (and assuming 'TAG' represents an undesired image), but the filter property above should be more reliable.
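To see what the "dangling" filter effectively selects, here is a small sketch that picks the untagged image IDs out of a sample 'docker images' listing (the image IDs below are made up):

```shell
# Simulated `docker images` output; in practice this comes from docker itself.
images='REPOSITORY TAG IMAGE_ID
ubuntu 14.04 5ba9dab47459
<none> <none> 3f45ca85b737
trulabs/kamailio latest 9cd978db300e'

# Untagged (dangling) images show up with "<none>" as repository and tag;
# --filter "dangling=true" selects the same set without any text parsing.
dangling=$(printf '%s\n' "$images" | awk '$1 == "<none>" { print $3 }')
echo "$dangling"
```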



Monday, 27 October 2014

My presentation at Astricon 2014

Accessing the full P-Asserted-Identity header from FreeSWITCH

I hope this can save the reader some time.
If you need to read the entire content of the P-Asserted-Identity header of an incoming INVITE, be aware that you should change the sofia profile by adding a param like:

<param name="p-asserted-id-parse" value="verbatim"/>

FreeSWITCH will populate the variable accordingly and make it available with commands like (e.g. with lua):

PAID = session:getVariable("sip_P-Asserted-Identity")

If you don't add this parameter, you'll get the default behaviour, which just fills the variable with the username part of the P-Asserted-Identity URI.

Possible values are: "default", "user-only", "user-domain", "verbatim", which I think are self-explanatory.
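To illustrate the difference, here's a quick shell sketch (not FreeSWITCH code) extracting the username part from a sample header value, which is roughly what the default behaviour yields, while "verbatim" keeps the whole value:

```shell
# A sample P-Asserted-Identity header value (made up for illustration).
paid='"Alice" <sip:alice@example.com>'

# "verbatim" would hand you the whole value above; the default behaviour
# reduces it to just the URI username part, extracted here with sed:
user_only=$(printf '%s' "$paid" | sed -n 's/.*sip:\([^@;>]*\).*/\1/p')
echo "$user_only"
```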

A recent reference here.

Monday, 20 October 2014

Comparing Configuration Management Tools The Hard Way

I've been using Puppet for more than 2 years now, and have written almost 50 custom modules, for about 4K lines of code. The more I work with it, the easier it is to find a 3rd party module to solve a problem for me, so I can focus on very specific needs.
I rarely wrote any Ruby snippets, and basically relied on Puppet's DSL. A couple of times I submitted a pull request for a 3rd party module, and got my changes merged in.

I started using Puppet for a very specific reason: there was already expertise in the company, and most of the infrastructure prerequisites were deployed using it (albeit in Master-Slave mode, while I favour the Standalone approach).

All this said, I can't ignore what's going on in the Devops world; many other tools are being developed to automate configuration management, with the aim of being 1. extremely easy to use, 2. reliable, 3. scalable.

In my hunt for comparisons I've stumbled upon this article from Ryan Lane.

In my opinion, this is the way comparisons should be made: the hard way, with a 360-degree approach. Ryan explains how his team moved away from Puppet and chose between Ansible and Salt.

I strongly recommend you take the time to go through it, as it has precious insights that are typically shared only within a team or organization.

And if you have other examples of such thorough analysis, I'd be grateful if you could comment here or drop me a note.

Disclaimer: the opinions here are my own, and I'm not affiliated with either Puppet, Ansible or Salt.

Tuesday, 7 October 2014

Build a test bed with FreeSWITCH and PJSUA (with SILK)

Build and run FreeSWITCH

This is based on debian wheezy, and uses FreeSWITCH master. You can switch to stable when cloning.
In my experience this is the quickest way to get a working, vanilla setup that you can use to automate tests with PJSUA (the PJSIP command-line client).

mkdir git
cd git
# To get master
git clone https://stash.freeswitch.org/scm/fs/freeswitch.git
cd freeswitch
./bootstrap.sh -j
# If changes are needed to the list of modules to be compiled
vi modules.conf
./configure --enable-core-pgsql-support
make
sudo make install
# Launch FS (in foreground)
sudo /usr/bin/freeswitch

You can find here the process for building it from source recommended by the FreeSWITCH team.

In /etc/freeswitch/vars.xml you may want to change the default password in:
<X-PRE-PROCESS cmd="set" data="default_password=XXXXXXXX"/>

If you're enabling SILK, add it to the list in modules.conf.xml:
<load module="mod_silk"/>

and add it to global_codec_prefs and outbound_codec_prefs inside vars.xml, e.g.:
<X-PRE-PROCESS cmd="set" data="global_codec_prefs=SILK@24000h@20i,SILK@16000h@20i,SILK@8000h@20i,speex@16000h@20i,speex@8000h@20i,PCMU,PCMA"/>
<X-PRE-PROCESS cmd="set" data="outbound_codec_prefs=SILK@24000h@20i,SILK@16000h@20i,SILK@8000h@20i,speex@16000h@20i,speex@8000h@20i,PCMU,PCMA"/>


Build and run PJSUA

Note that you need to retrieve a copy of the SILK SDK, once available from Skype's developer site (and put it in ~/silk-1.0.9.zip):

mkdir pjsip && cd pjsip
svn co http://svn.pjsip.org/repos/pjproject/trunk/@4806 pjproject
cp ~/silk-1.0.9.zip .
unzip silk-1.0.9.zip && cd SILK_SDK_SRC_FLP_v1.0.9 && make clean all
cd ../pjproject
./configure --with-silk=$(pwd)/../SILK_SDK_SRC_FLP_v1.0.9
make dep && make

Now create a configuration file for PJSUA in pjproject dir, e.g. pjsua_test.cfg:

--id sip:1001@192.168.0.10
--registrar sip:192.168.0.10
--realm 192.168.0.10
--username 1001
--password THEPASSWORD
--local-port 5066
# SRTP - 0:disabled, 1:optional, 2:mandatory (def:0)
--use-srtp 0
# SRTP require secure SIP? 0:no, 1:tls, 2:sips (def:1)
--srtp-secure=0
# Disable the sound device. Calls will behave normally, except that no audio will be transmitted or played locally.
--null-audio
# Automatically stream the WAV file to incoming calls.
--auto-play
--play-file tests/pjsua/wavs/input.8.wav

where 192.168.0.10 is FreeSWITCH's listening IP address.
PJSUA can be local (i.e. run on the same host as FreeSWITCH), or remote (with obvious network reachability constraints).

Run PJSUA from pjproject with:
./pjsip-apps/bin/pjsua-x86_64-unknown-linux-gnu --config-file=pjsua_test.cfg

and let the tests begin!

You may want to increase FreeSWITCH verbosity by enabling SIP traces and debug messages in fs_cli with:

sofia global siptrace on
console loglevel debug

It's also easy to use the same configuration and have PJSUA as one of the parties (caller or callee) and a softphone on the other side, like Blink or X-Lite.
The advantage of PJSUA is that you can easily automate its execution and result parsing, for example with its python module.

Friday, 12 September 2014

Favourite alias of the week: 'dico'

diff provides one interesting option, '-y':

       -y, --side-by-side
              output in two columns

If you combine it with '--suppress-common-lines':

       --suppress-common-lines
              do not output common lines

you get a very nice diff in two columns, where only the differing lines are displayed.

My alias is:
alias dico='diff -y --suppress-common-lines'
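A quick demonstration with two small files (the file names are arbitrary):

```shell
# Create two files differing on a single line.
printf 'alpha\nbeta\ngamma\n' > /tmp/left.txt
printf 'alpha\nBETA\ngamma\n' > /tmp/right.txt

# Only the differing pair is shown, side by side.
# (diff exits with status 1 when files differ, hence the || true.)
diff -y --suppress-common-lines /tmp/left.txt /tmp/right.txt || true
```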

Monday, 8 September 2014

Setting DEBEMAIL and DEBFULLNAME when using dch

When using dch -i or -v, the tool will try to infer the maintainer's name and email address.

The gist of this short post is: check the values of the DEBEMAIL and DEBFULLNAME environment variables before running the build.

And of course you can set them with a simple export command.
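For example (the name and address below are placeholders):

```shell
# Set the maintainer identity that dch will pick up; values are placeholders.
export DEBFULLNAME="Jane Doe"
export DEBEMAIL="jane.doe@example.com"

# dch would now record changelog entries as: Jane Doe <jane.doe@example.com>
echo "$DEBFULLNAME <$DEBEMAIL>"
```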

From dch man page:

       If  either  --increment or --newversion is used, the name and email for
       the new version will be determined  as  follows.   If  the  environment
       variable  DEBFULLNAME is set, this will be used for the maintainer full
       name; if not, then NAME will be checked.  If the  environment  variable
       DEBEMAIL  is  set,  this  will  be used for the email address.  If this
       variable has the form "name <email>", then the maintainer name will
       also be taken from here if neither DEBFULLNAME nor NAME is set.  If
       this variable is not set, the same test is performed on the environment
       variable  EMAIL.  Next, if the full name has still not been determined,
       then use getpwuid(3) to determine the name from the password file.   If
       this  fails,  use the previous changelog entry.  For the email address,
       if it has not been set from DEBEMAIL or EMAIL, then look in /etc/mailname,
       then attempt to build it from the username and FQDN, otherwise
       use the email address in the previous changelog entry.  In other words,
       it's  a  good  idea  to  set  DEBEMAIL  and DEBFULLNAME when using this
       script.

Tuesday, 19 August 2014

My wish list for Beginning Puppet

A few months ago a question in the very useful 'ask puppet' site attracted my attention. The authors of Beginning Puppet were asking for feedback about the desired content.

Since my answer has had some success, I'd like to report it here:

These are the things I would have liked to see as soon as I started experimenting with Puppet (in no specific order - this is mainly a brain dump):
  • What's a catalogue and how Puppet builds it
  • Recommended file structure for Puppet manifests
  • How to divide your infrastructure in environments
  • A clear indication that Puppet doesn't guarantee an "order of execution", and how to define dependencies between resources and classes (i.e. the "arrow" syntax to ensure that a class is always applied before/after another one).
  • Before ever writing a module, how to test it (and as a consequence, Continuous Integration for Puppet modules)
  • How to publish a module on github or puppetforge
  • A "build-up example" throughout the book. In other words, start building a module at the beginning, and keep adding elements as topics are discussed. For example, an apache module: start with making sure the apache package and its pre-requirements are installed, then add a file, then a template, then prepare the module for different environments, different OSs, etc.
  • Best practices for separating the module logic from the data, i.e. how to ensure that modules can be reused on different platforms/environments/OSs just with proper variable settings and without changes to the module logic.
  • Whether the hiera approach is recommended or not, and how to design a module with that in mind.
  • Best practices for dividing modules in classes
  • Variables scope, and best practices for variable names
  • Best practices for indentation, single/double quoting, module documentation
  • How to run Puppet in standalone/agent mode (so that the manifests can be verified easily locally)
  • Where Puppet looks for modules - how to install and use a 3rd party module
  • Common security problems
  • Load testing (e.g. with Gatling)
  • Continuous Integration (managing your modules with Jenkins or similar)
  • The 'facter' tool, and which 'facts' are available and commonly used

Basic Docker commands

Just a list of docker commands that I've found quite useful so far.
List the containers available locally:
docker ps -a
Just the container IDs:
docker ps -a -q
Filter the running containers:
docker ps -a | grep Up
or simply:
docker ps -q
Build an image:
docker build -t <user>/<image name> .
Run a container from an image, and execute bash shell:
docker run -i -t <user>/<image name> /bin/bash
Run a container from an image, in background, linking ports:
docker run -d -p <host port>:<container port> <user>/<image name>
(You can use -p multiple times, e.g. when the container exposes more than one port)
Show logs of running container:
docker logs <container id>
Stop/start/restart container:
docker stop/start/restart <container id>

UPDATE - Stop all running containers with a single command:
docker ps -a|grep Up|awk '{ print $1 }'|xargs docker stop 
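The pipeline just keeps the lines whose status is "Up" and takes the first column (the container ID). A sketch of the parsing on a made-up 'docker ps -a' listing:

```shell
# Simulated `docker ps -a` output; in practice this comes from docker itself.
ps_out='CONTAINER_ID IMAGE COMMAND STATUS
1a2b3c4d5e6f ubuntu:14.04 "/bin/bash" Up 2 days
9f8e7d6c5b4a debian:wheezy "/bin/bash" Exited (0) 3 days ago'

# Keep the running containers ("Up ..." status) and print their IDs,
# which is exactly what gets fed to `xargs docker stop` above.
running=$(printf '%s\n' "$ps_out" | grep Up | awk '{ print $1 }')
echo "$running"
```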
UPDATE - The reason why I'm not using sudo to run these commands is that I added my user to the docker group, e.g.:
sudo usermod -aG docker <user>

I hope you find this useful.

(Docker 1.1.2 on Ubuntu 14.04)

My presentation at ClueCon 2014

Back from ClueCon 2014, I've described the content of my presentation in this Truphone Labs Blog post.

The slides are here.

Any feedback is welcome, as usual.

Docker on Ubuntu: when apt-get update fails

I've been spending some time with Docker, and since digitalocean provides Ubuntu images with Docker already configured, setting up a new droplet and a few containers with my apps has become a matter of minutes.

One of the first commands you want to run in your Dockerfile is certainly 'apt-get update'. I'm writing this hoping to save the reader some precious time: if 'apt-get update' fails, you may want to ensure that docker's DNS configuration is complete (and the servers are accessible).

In my case this simply meant adding the Google DNS servers to /etc/default/docker:

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

and restarting Docker with

sudo service docker restart
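As a runnable sketch of the change (writing to a local stand-in file here, so the snippet works without root; on a real host the target is /etc/default/docker, followed by the restart above):

```shell
# Local stand-in for /etc/default/docker, so this runs without root:
conf=./docker.default.example

# Append the daemon options pointing at Google's public DNS servers:
echo 'DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"' >> "$conf"

# Show the resulting configuration line:
cat "$conf"
# → DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"
```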

This StackOverflow question is related.

Friday, 2 May 2014

Praise of I Don't Know

Answering "I don't know" (hereafter referred to as IDK) is not an indication of ignorance. On the contrary, it may indicate that you're not willing to make wrong assumptions or provide inaccurate information.

I think we've been educated in believing that whatever question we've been asked, we must answer. This is probably true in a traditional school environment, where students are expected to show successful memorization of notions. In those cases "I don't know" means "I wasn't listening to the lesson" or "I didn't study".

There are other contexts, though, where IDK is not only the most accurate answer, but probably also the most useful one.

Imagine you're undergoing surgery, but you can hear what the surgeons say. One is a doctor fresh from university, about to perform his/her first operation. Before the rookie can proceed, the expert surgeon asks a question. There's probably 'the right answer', a plethora of sub-optimal ones, and perhaps a few answers that incorrectly give the impression that the doctor knows what he/she is doing. If the rookie is unsure, what would you like to hear?
Different scenario: the captain and first officer of a Boeing 737. There is an emergency: they're not getting readings for air speed and altitude. The captain asks... Well, you get the point.

Other, less time-critical cases can benefit from IDK too. Engineering, and software engineering in particular, is the context I want to talk about.

IDK means you're not satisfied with your current knowledge and you don't want to make assumptions that may turn out to be inaccurate, but, most importantly, you're communicating that clearly.

IDK is not the end of a technical conversation: it's the beginning.

If you work in a friendly and trusting environment, IDK means "I need to consider additional data/details", "We should ask an expert", "We should not make any assumptions here until we've verified this and that".
This brings the technical analysis to a different level.

Many believe bugs in the code are dangerous. Surely they are, but not as much as bad assumptions.

A side effect of embracing, accepting or even encouraging the IDK approach is that people won't feel pressured to answer as quickly as possible with the best approximation at hand. People will talk more, read more, and ultimately provide better answers.

Counting from my university years, I've had more than two decades of technical conversations, and I've learned to distinguish between people who pretend to know and people who end up teaching you something. The latter, who contribute to your knowledge while improving their own, have been the ones with the IDK attitude. "I don't know. Let's understand/discover/explain this."

In the era of Twitter and Skype, we should remember that there's a moment for quick, immediate interaction, and a moment where communication, in particular technical, requires time in order to deliver value.
Try this: the next time you're tempted to provide a quick but perhaps inaccurate answer, embrace the IDK spirit. Say IDK, ask questions, talk to people, read. Then look at the outcome: isn't it better, more precise, richer and more useful than the very first thing you had in mind? What else have you learned in the process? Did it really take that much time?
Of course this is not always true, and I'm sure the reader can identify cases where a good enough answer provided quickly is the best thing.

What I wanted to say, because I genuinely believe it, is that we need to value good, accurate answers and conversations over approximate, superficial ones.