Tuesday 27 November 2018

You need to slow down

This blog has been historically focused on technical topics in areas related to Real Time Communications, but I'm taking the liberty to digress a little.

I've been reading recently about the dynamics of performance in running.

I'm just an amateur runner, and I always train (and race) alone, so I felt the need to make it more interesting than just reading training tables. I've been studying what it is that limits performance. "Train more", unfortunately, not only doesn't always work, but also has to take injuries, and overtraining in general, into account.

One of the first concepts that struck me is that it's been proven that fatigue, and the consequent slowing down, does not mean that the body is unable to continue with that effort. What's behind slowing down is a sort of protective mechanism in our nervous system, which wants to prevent the body from reaching exhaustion.

Our nervous system is constantly getting feedback signals from the body, including the perception of adverse weather conditions, and computing how much longer the current effort can be sustained. Even knowing how much of the run is left is a form of external feedback.

When this computation detects that the effort is too high, the nervous system takes control and doesn't fire as many muscle fibres as the athlete wants it to. The athlete thinks the muscles can't continue to work at that level, but in reality they are entering a protective state. Without that, people could literally run until bodily exhaustion or even death.

I find this fascinating. Evolution has given us a sophisticated algorithm aimed at preventing body exhaustion by generating fatigue symptoms and consequently reducing the actual effort.

What's also fascinating is that it seems this system can be "tricked". One way of doing so is through training. Training is a way of educating your nervous system that a certain effort is OK. "There's no need to shut me down, brain, I know what I'm doing. I've done this thousands of times in my training sessions."
So while the body adapts to the stress of training, the nervous system too becomes more familiar with that stress, and eases off a little.

Another way of tricking that system is by providing false external feedback. It seems that if you get false information about elements like the air temperature, or even the remaining time in your training session or race, the nervous system acts accordingly. If it believes it's not as hot as it really is, it will not intervene to shut down muscle fibre activation.

Similarly, if the athlete thinks the race is close to the end, the nervous system will allow a prolonged effort. This is why elite marathoners, who clearly haven't underperformed for the first 40 km, can run the last 2 km even faster.

Of course, all these tricks have limits, and the improvements they can produce are in the order of a few percentage points. But this still shows the importance of external feedback.

This is properly explained by Steve Magness in his book "The Science of Running". I then read another book by the same author, "Peak Performance". To be perfectly honest, I was expecting something strictly related to endurance sport, but in this second case the concept of performance is broader.

There seems, though, to be an analogy between the nervous system shutting down muscle fibres during running, and something that happens behind a desk and is more widely known: mental burnout. From this point of view, mental burnout can be seen as a way of saying: "You can't keep going at this (perceived) level of effort. You need to sleep, hydrate and rest, but you keep working. I'm going to take control and shut you down."


As I'm making my own little experiments, in the future I'd like to write more about this, and in particular about the relationship between effort and rest. 

Monday 19 November 2018

Docker from scratch

Some time ago I prepared an introductory seminar on Docker, which I called "Docker from scratch".

The audience was a local group of heterogeneous developers. As typically happens, preparing that material was a great opportunity to understand some of the aspects better.

I then published the slides on Slideshare. I notice now that they have had a decent number of views and downloads, so I'm linking them here as well for reference.

One big change with respect to 2016 is the choice of abandoning Docker Toolbox in favour of running Docker directly on macOS. I have to say I liked the sandboxing that came with Docker Toolbox, where the Docker engine ran inside VirtualBox; that setup is now only available for older versions.


Tuesday 29 May 2018

On Kamailio World 2018, part II

In the first part of my brain dump about this year's edition of Kamailio World I focused mainly on testing. Core developers and application designers want to be able to test the behaviour of Kamailio-based architectures with minimal effort and fast feedback.

A different dimension to testing, which I hadn't mentioned in my previous post, is fuzz testing. There were two presentations focused on this: one by Sandro Gauci (the easiest way to understand who Sandro is: listen on port 5060 on the public Internet and wait a couple of minutes; you'll see a SIP request from a tool called sipvicious, aka friendly-scanner, a penetration testing tool Sandro wrote and others misuse), and one by Henning Westerholt, a historical member of the Kamailio community.

Sandro's presentation focused on fuzzing approaches for RTC in general (slides), while Henning's was more specifically focused on Kamailio.

Fuzzing is a sophisticated technique to verify the robustness of a software application, by sending input that can vary greatly from the typical or expected usage. The objective is to find weaknesses that can lead to crashes or other malfunctions, so that they can be fixed. Of course testing a server like Kamailio is even trickier than testing an application that can read from a file. It is a fascinating topic.
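Just to illustrate the principle (not the tools used in those presentations), here's a minimal sketch in Lua of the "dumb" variant of fuzzing: take a valid SIP request, corrupt a few random bytes, and fire it at the target. The target address and the template message are hypothetical, and the sketch assumes the luasocket library; real fuzzers are far more sophisticated than this.

-- Minimal "dumb" SIP fuzzer sketch (assumes luasocket is installed).
local socket = require("socket")
local unpack = table.unpack or unpack

local target_host, target_port = "192.0.2.10", 5060  -- hypothetical target

-- A valid OPTIONS request used as a template (CRLF line endings).
local template = table.concat({
  "OPTIONS sip:test@192.0.2.10 SIP/2.0",
  "Via: SIP/2.0/UDP 192.0.2.1:5060;branch=z9hG4bK-1",
  "From: <sip:fuzz@192.0.2.1>;tag=1",
  "To: <sip:test@192.0.2.10>",
  "Call-ID: fuzz-1@192.0.2.1",
  "CSeq: 1 OPTIONS",
  "Max-Forwards: 70",
  "Content-Length: 0",
  "", ""
}, "\r\n")

-- Corrupt a handful of random bytes in the message.
local function mutate(msg)
  local bytes = { msg:byte(1, #msg) }
  for _ = 1, math.random(1, 5) do
    bytes[math.random(#bytes)] = math.random(0, 255)
  end
  return string.char(unpack(bytes))
end

local udp = socket.udp()
udp:setpeername(target_host, target_port)
for _ = 1, 1000 do
  udp:send(mutate(template))
end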

Kamailio proved to be very robust: Henning reported that, on average, about 1 message in 44 million was required to make Kamailio misbehave. The video of Henning's presentation is here (by the way, Pascom have done great work this year too, providing flawless video streaming and recording. It feels like we are a little spoiled, because we take it for granted and barely notice all the work behind it).

In terms of learning opportunities for architects and administrators of Kamailio-based infrastructure, I found Daniel's presentations around high-level scripting (with KEMI) to build the routing logic very valuable (Video and slides).

Remember that Lua may not be the most popular - apparently - but it's the one estimated to give you performance closest to the native routing language.

Another valuable presentation was around the Least Cost Routing techniques that the Kamailio environment makes available (Video and slides). Some solutions use out-of-the-box modules (like lcr, carrierroute, drouting), some are more indirect (pdt, mtree, dialplan, prefix_route), and others are a combination of them. A must-see if you're working in that area.

Another learning goldmine was Lorenzo Miniero's lecture about Privacy, Security and Authentication for WebRTC (Video and slides). Lorenzo is the author of Janus, a WebRTC conferencing framework (this definition is mine). He does talk fast, but no word is spoken in vain; worst case, you can watch the video at 0.5x speed (smile). The case of double encryption for media is particularly interesting.



I guess there's enough for a part III in the near future! To be continued. 

Thursday 24 May 2018

On Kamailio World 2018, part I

This was my fifth time in a row attending Kamailio World in Berlin. The weather was warmer and sunnier than usual.

Apart from the obvious focus on Kamailio, as usual the RTC ecosystem was well represented (with Janus, Asterisk, FreeSWITCH, Homer, RTPEngine, and many others).

Attendance from the other side of the Atlantic Ocean gave stronger emphasis to the "World" term in the title.

My personal mission this year was to talk about a framework for testing Kamailio as a tool for developers and maintainers of the project: kamailio-tests. The main concept is that early tests that are not focused on a specific business logic (which we all have in our projects) and can be automated will be beneficial to Kamailio's reliability. We want to defer end-to-end tests to later stages, because they are expensive.

To provide a uniform infrastructure in which to run the tests, without requiring permanent test environments, we use Docker. This is, of course, not the only possible approach: you could dynamically spawn VMs, AWS EC2 instances, etc. But Docker can run on your laptop as well as in a full-fledged CI environment, and this makes it easier for developers to use.
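Just to give an idea of the kind of workflow (the image tag below is a placeholder, not the actual kamailio-tests commands):

docker build -t kamailio-tests .
docker run --rm kamailio-tests

The same two commands work identically on a laptop and in a CI job, which is exactly the point.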

Please take a look at the slides for more details. The feedback has been great so far, and this proved various points:

1. Conferences for developers are not paid holidays for IT guys, but opportunities for knowledge sharing and collaboration (I would say, in particular if Open Source is in the equation).

2. "Functional" or "component" testing is needed by many, but we haven't a mature solution yet.

3. Docker in RTC is less a fancy technology borrowed from other IT areas and more an everyday tool.

Some have already volunteered to help me improve kamailio-tests, and their point of view will be very useful. More on this project in the future.

Around the topic of testing, in this case not Kamailio itself but more the business logic built around it, there have been interesting insights from Sebastian Damm (sipgate) and Alex Sosic (evosip). 

Sebastian presented an approach that benefits from moving the Kamailio routing logic from the native language to KEMI with Lua (https://github.com/sipgate/lua-kamailio). Alex presented a way to verify that the routing logic goes through the expected paths, again with Docker and SIPp.

KEMI is an extension of Kamailio that allows developers to write the routing logic in high-level languages, like Lua, Python, JS and others. Anecdotal experience made me think Lua was the most popular, while apparently Python is. For what concerns Lua in the RTC world, I wrote a few notes in February: http://www.giacomovacca.com/2018/02/the-interesting-case-of-lua-in-rtc-world.html


The advantages of working with a high-level language are obvious: it's easier to read and maintain, it's easier to test the functions in isolation, and it's also easier to involve developers without specific knowledge of Kamailio's routing logic script. They will still need to understand how Kamailio works, though, and the underlying protocols; so unless you're doing something extremely basic, it's not a complete abstraction from how Kamailio manages its role as a "programmable SIP proxy".
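As an illustration of the "test in isolation" point, here's a hedged sketch: if a routing decision lives in a plain Lua function that only talks to Kamailio through the KSR object it receives, it can be exercised with a stubbed KSR, without a running Kamailio. The helper and the blocking rule below are hypothetical.

-- Hypothetical routing helper, as it could appear in a KEMI Lua script.
local function reject_scanners(ksr, user_agent)
  if user_agent and user_agent:find("friendly%-scanner") then
    ksr.sl.send_reply(200, "OK")  -- pretend everything is fine to the scanner
    return true
  end
  return false
end

-- A tiny standalone test with a stubbed KSR: no running Kamailio needed.
local calls = {}
local fake_ksr = { sl = { send_reply = function(code, reason)
  table.insert(calls, { code = code, reason = reason })
end } }

assert(reject_scanners(fake_ksr, "friendly-scanner 1.0") == true)
assert(calls[1].code == 200)
assert(reject_scanners(fake_ksr, "MyPhone/2.1") == false)
print("routing helper tests passed")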

I have tons of notes from Kamailio World, but if I wait to go through all of them before writing something here, there will be the 2019 edition to talk about. So here's at least a part I.




Monday 5 February 2018

The interesting case of Lua in RTC world

An interesting pattern that caught my attention is the role that Lua is gaining in the RTC (Real-Time Communications) world.

Lua is a small-footprint programming language, powerful while keeping a simple syntax.

I’ve been using Lua to script dialplan actions for FreeSWITCH since about 2014. It has provided me with a way to define relatively complex logic and speed up the definition of FS’ behaviour.

Delegating this type of logic to a scripting language had several advantages, such as:
  • It’s easier to read and understand than native dialplans or native routing logic.
  • It makes unit testing of the dialplan possible, or at least easier.
  • It allows changing some pieces of logic easily, in many cases avoiding expensive module reloads or application restarts.
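For context, a dialplan action scripted in Lua can be as small as the sketch below. The extension number, prompt file and bridge target are hypothetical; the 'session' object and its methods are the ones FreeSWITCH exposes to Lua dialplan scripts.

-- Minimal FreeSWITCH dialplan sketch in Lua.
-- 'session' is provided by FreeSWITCH when the script handles a call.
local destination = session:getVariable("destination_number")

if destination == "1000" then  -- hypothetical extension
  session:answer()
  session:execute("playback", "welcome.wav")  -- hypothetical prompt file
  session:execute("bridge", "user/1000@default")
else
  session:hangup("UNALLOCATED_NUMBER")
end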

I’ve been using Lua for Kamailio as well. Kamailio is an open source programmable SIP Proxy. In a specific case, some bits of the routing logic required regex processing and was expecting to change often: an ideal case for an external script to do that work.
When the logic changes, it’s sufficient to instruct Kamailio to reload the script, and from that moment on the new requests being processed will use the new logic.

Recent versions of Kamailio, though, add a framework called KEMI. This opens up new possibilities, and also provides support for many other scripting languages: Python (apparently the most popular), JS, Squirrel. Still, Lua appears to provide the fastest implementation (with no observable performance degradation), while the others have limitations. Python, as you can imagine, provides a rich set of functions and libraries, but it's not as performant and its reload mechanism currently has some issues.
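To give a rough idea of the shape of a KEMI script, here's a minimal Lua sketch. It is not a complete or production-ready routing logic, it only uses a handful of the exported KSR functions, and exact function names may differ slightly across Kamailio versions.

-- kamailio.lua - minimal KEMI routing sketch loaded via app_lua.
-- ksr_request_route() is the entry point Kamailio calls for each SIP request.
function ksr_request_route()
    -- Reply to OPTIONS pings directly.
    if KSR.pv.get("$rm") == "OPTIONS" then
        KSR.sl.send_reply(200, "OK")
        return
    end

    -- Record-route INVITEs so in-dialog requests come back through us.
    if KSR.pv.get("$rm") == "INVITE" then
        KSR.rr.record_route()
    end

    -- Relay everything else statefully.
    KSR.tm.t_relay()
end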

Wireshark, a tool to capture and analyse network traffic, exposes a useful Lua API. You can use the API to define your own Wireshark dissector (which you'll need to install as a plugin). This has performance limitations compared to dissectors written in C - and so it's recommended for prototyping only - but it can still solve your problem perfectly. Out of need, I wrote a Wireshark dissector for HEP, a binary protocol used in the Homer environment. Homer is an open source framework for the monitoring and analysis of Real-Time Communications.
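The skeleton of a Lua dissector is quite compact. The toy example below is not the HEP dissector: the protocol name, field and UDP port are hypothetical, and it only shows the shape of the API.

-- toy_dissector.lua - skeleton of a Wireshark dissector written in Lua.
-- Install it by placing the file in Wireshark's plugins directory.
local toy_proto = Proto("toyproto", "Toy Protocol")

local f_version = ProtoField.uint8("toyproto.version", "Version")
toy_proto.fields = { f_version }

function toy_proto.dissector(buffer, pinfo, tree)
    pinfo.cols.protocol = "TOY"
    local subtree = tree:add(toy_proto, buffer(), "Toy Protocol Data")
    subtree:add(f_version, buffer(0, 1))
end

-- Register the dissector on a (hypothetical) UDP port.
DissectorTable.get("udp.port"):add(9099, toy_proto)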

Last weekend a new interesting case was presented by Lorenzo Miniero at FOSDEM. The target application was Janus, an open source framework to build WebRTC gateways.

Janus allows you to build applications by defining the transport and business logic as plugins, on top of a core that implements the WebRTC stack.
It's written in C, and so far users needed to write plugins in that language. The Janus developers have now introduced the possibility to write plugins in Lua.

In his presentation Lorenzo also explains in detail which approach is best for a real-time application interacting with a single-threaded language like Lua in an asynchronous context.

Just a funny note: Lua uses a double dash to comment out a line: '--'. Be careful when you look at diffs in a terminal, because a removed comment will start with '- --' and may not be the easiest thing to interpret (experiences may vary depending on the terminal, of course!).



Sunday 21 January 2018

Cache busting when building Docker images

One of the handiest features of the docker build system is the caching system.

'docker build' tries to reuse the layers already built until something changes inside the Dockerfile. In this way, we can save several minutes when rebuilding an image if the changes happen further down the list in the Dockerfile.

Sometimes, though, we do want to invalidate the cache and ensure the next build won't use it.

To do this, an option is to pass the '--no-cache' argument to 'docker build'.
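For example (the image tag is just a placeholder):

docker build --no-cache -t myimage .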

When dealing with 'apt-get install' instructions, though, there are other tricks. I found this document on Dockerfile best practices very useful.

First of all, an observation. If you have 'RUN apt-get update' as a single line of a Dockerfile, followed by the installation of a package, e.g.:

RUN apt-get update
RUN apt-get install -y nginx

then changing the list of packages and running the build command again won't trigger an 'apt-get update': that line hasn't changed, so docker build reuses the cache. This might not be what you want.

To force cache invalidation for this specific case the recommendation is to use those commands in the form:

RUN apt-get update && apt-get install -y nginx

This will always install the latest version of the packages. It even has a name: "cache busting".

Another recommendation I like is to put each package on its own line, and keep them in alphabetical order: this eases visual inspection and prevents duplicates or other undesired conditions.

Of course, you can also specify exact versions for the packages, as you would normally do with 'apt-get install'. That's "version pinning", and it invalidates the cache too.
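For example, pinning the nginx package could look like this (the version string is hypothetical and depends on your distribution's repositories):

RUN apt-get update && apt-get install -y nginx=1.14.*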

You can find all this on the linked page on Dockerfile best practices; this is just my digested interpretation.

Just one more thing: a way to limit the size of a built image is to clean up the content of '/var/lib/apt/lists' in the same RUN command, e.g.:

RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    build-essential \
&& rm -rf /var/lib/apt/lists/* 

The command above will build an image layer that doesn't contain the apt cache.

If you had instead used this:

RUN apt-get update && apt-get install -y \
    aufs-tools \
    automake \
    build-essential
RUN rm -rf /var/lib/apt/lists/*

you would have had not only a larger layer, containing the apt cache, but also an additional layer generated by the second RUN command.


Saturday 13 January 2018

SIP - ACK loose routing

If you've ever worked with SIP, you must have stumbled upon a trace where the 200 OK to an INVITE is retransmitted for about 30 seconds, and then the call set-up just fails.

The ACK was never received.

Then comes the interesting part: discovering why.

Here are some notes about what should happen, in particular when there are multiple proxies along the path, with the little additional complexity of one of the proxies having two network interfaces. All this assumes loose routing everywhere. The main reference here, of course, is RFC 3261.

Isn't an image worth a thousand words? Then here's a sequence diagram:


All Record-Route headers are assumed to carry loose routing URIs (they have the ;lr attribute).

B, C and D, working as proxies that want to stay in the path, record-route themselves. For this reason E, the UAS and "callee", receives an INVITE with a list of Record-Route headers for B, C and D.
In particular there will be two Record-Route headers for B, since B is using two separate interfaces, one facing A and the other facing C.

In typical cases the two interfaces represent the interaction with the public Internet on one side and a private infrastructure on the other. But that's not important for this discussion.

Omitting provisional responses for simplicity, let's assume E responds immediately with a 200 OK. This response will carry the same list of Record-Route headers, in the same order, as received by E.
E will also add its own URI in the Contact header of the 200 OK.

In this loose routing context the IP address in E's Contact's URI will be relevant only for D in the future.

D, C and B don't modify the list of Record-Route headers, and A receives it as sent by E.

Apart from the operations related to setting up the media session, A will send the ACK to the 200 OK.
This ACK will have as Request-URI E's Contact URI (stripped of anything that can't be inside a Request-URI), and a Route header list which is basically the received Record-Route header list inverted (see images).

A is saying: "Route this ACK to E, routing it via this list of hops".
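To make this concrete, here's a sketch of the relevant headers, with hypothetical hostnames (b1/b2 are B's two interfaces) and all unrelated headers omitted. First the 200 OK as received by A, then the ACK that A builds from it, with the Route set being the Record-Route set in reverse order and E's Contact as Request-URI:

SIP/2.0 200 OK
Record-Route: <sip:d.example.com;lr>
Record-Route: <sip:c.example.com;lr>
Record-Route: <sip:b2.example.com;lr>
Record-Route: <sip:b1.example.com;lr>
Contact: <sip:e@e.example.com>

ACK sip:e@e.example.com SIP/2.0
Route: <sip:b1.example.com;lr>
Route: <sip:b2.example.com;lr>
Route: <sip:c.example.com;lr>
Route: <sip:d.example.com;lr>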

When B receives that ACK, it must recognise that the topmost Route headers are B itself, remove them from the Route list, and pick b2 as the interface to deliver the ACK to C.

C and D have an easier task: remove the single Route header representing them, and deliver the ACK to the next route.

For D, the next route will in fact be E, and the ACK will be routed using only the Request-URI, as the Route headers have all been removed. This is the only step where the IP address that E set in the Contact of its 200 OK response needs to be visible to another entity, namely D.

ACK routing as explained in RFC 3665


To further reiterate this concept, let's look at a somewhat simpler example in RFC 3665.


The ACK part is:

You can see there's no requirement for Proxy 1 to be able to reach the UAS contact (client.biloxi.example.com, Bob's contact, might be completely unreachable from Proxy 1).
It's Proxy 2's responsibility to route the ACK over the last hop towards Bob.
Proxy 1 must leave the R-URI as is (see below for more details on proxy behaviour), strip itself from the list of Routes and route the ACK to the new topmost Route (Proxy 2).
Proxy 2 will strip itself from the Route list, being the topmost Route, and forward the ACK to Bob. At that point there are no Route headers left.
Only at the last hop is Bob's contact reachability relevant, and only for Proxy 2.

More about the behaviour of the proxies to corroborate the ACK routing


From RFC 3261, 16.4:

“If the first value in the Route header field indicates this proxy, the proxy MUST remove that value from the request.”


From RFC 3261, 16.5:

“A proxy can only change the Request-URI of a request during forwarding if it is responsible for that URI.”


APPENDIX - Why is the ACK to a 200 OK to an INVITE a separate transaction?


From RFC 3261, ch. 17:

     The reason for this separation is rooted in the importance of
      delivering all 200 (OK) responses to an INVITE to the UAC.  To
      deliver them all to the UAC, the UAS alone takes responsibility
      for retransmitting them (see Section 13.3.1.4), and the UAC alone
      takes responsibility for acknowledging them with ACK (see Section
      13.2.2.4).  Since this ACK is retransmitted only by the UAC, it is
      effectively considered its own transaction.






Tuesday 9 January 2018

Copying a file from a Docker container to the host

Often things are more convoluted than you expect. Sometimes they are easier than you fear.

Here's an example: you're generating files inside a Docker container (with no volumes configured) and you realise you need some of those files available on the host.

A simple solution is to use 'docker cp', with a format like:

docker cp CONTAINER_ID:FILE_PATH HOST_FILE_PATH
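For example, to copy a log file out of a container (container name and paths are hypothetical):

docker cp my_container:/var/log/app/app.log ./app.log

The opposite direction works too: 'docker cp' accepts the host path as source and 'CONTAINER_ID:FILE_PATH' as destination.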

Official documentation.
A Stackoverflow question with more discussions.

About ICE negotiation

Disclaimer: I wrote this article in March 2022 while working with Subspace, and the original link is here: https://subspace.com/resources/i...