Tuesday, 24 November 2020

Dissecting traces with DTMF tones

I'm sure I belong to the large group of people who love to analyse network traces with tools like Wireshark. Being able to see the details of a packet or datagram down to the level of the bits is not only extremely useful, but also fascinating.


Some time ago I wrote a dissector for Wireshark, using the Lua interface, and that was fun (I see it's still available here). The official recommendation is to use Lua only for prototyping and testing, but when performance is not key and there's no intent to add the dissector to the official distribution, it's fast and effective.

In order to parse network traces containing audio, first extracting the payload and then decoding it into a WAV file, C is a viable solution. I wrote about a program that does that here, and since it attracted some attention and feedback I wrote an updated version later.

More recently I wanted to identify programmatically the presence (and value) of DTMF tones - carried as RTP Events, RFC 2833 - in network traces. This time, rather than using C, I wanted to integrate it with python, and scapy seemed a good choice.

scapy is quite complete, but interestingly it doesn't have a parser for the RTP Event extension. So I thought of mapping the raw content of the RTP payload to a structure, with the help of ctypes.

This is the core of the program:

from scapy.utils import RawPcapReader
from scapy.layers.l2 import Ether
from scapy.layers.inet import UDP
from scapy.contrib.rtp import RTP

def process_pcap(file_name, sut_ip):

  for (pkt_data, pkt_metadata,) in RawPcapReader(file_name):

    ether_pkt = Ether(pkt_data)

    # A little housekeeping to filter IPv4 UDP packets goes here
    ...

    if ether_pkt.haslayer(UDP):
      udp_pkt = ether_pkt[UDP]

      # Get the raw UDP payload into an RTP structure
      rtp_pkt = RTP(udp_pkt["Raw"].load)

      ptype = rtp_pkt.payload_type

      # Assume payload type 96 or 101 are used for RTP events
      if (ptype == 96 or ptype == 101):
        rtpevent_content = rtp_pkt.payload

        # map the payload into an RTPEvent object
        rtpevent_struct = RTPEvent.from_buffer_copy(rtpevent_content.load)

The RTPEvent class looks like this:

import ctypes

class RTPEvent(ctypes.BigEndianStructure):
  _fields_ = [
    ('event_id', ctypes.c_uint8),
    ('end_of_event', ctypes.c_uint8, 1),
    ('reserved', ctypes.c_uint8, 1),
    ('volume', ctypes.c_uint8, 6),
    ('duration', ctypes.c_uint16)
  ]

Once mapped, the rtpevent_struct object exposes the DTMF-specific details: the digit is in rtpevent_struct.event_id, and the end_of_event bit indicates whether the packet marks the end of the event.

All the other information (source/destination IP address and port, timestamp, SSRC) is obviously available in the UDP and RTP portions, so it's easy to adapt this to your needs and filter out the DTMF tones for the streams you're interested in.
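For reference, RFC 2833 names telephone events 0-9 for the digits, 10 for '*', 11 for '#' and 12-15 for A-D, so translating the mapped structure into a readable digit takes very little code. This is just a minimal sketch (decode_dtmf is an illustrative helper name, not part of the program above):

DTMF_EVENTS = "0123456789*#ABCD"

def decode_dtmf(rtpevent_struct):
    # Translate the event id into the character it represents
    digit = DTMF_EVENTS[rtpevent_struct.event_id] if rtpevent_struct.event_id < 16 else "?"
    return digit, bool(rtpevent_struct.end_of_event), rtpevent_struct.duration

# Report a digit when the end-of-event marker is set (the final packet of an
# event is usually retransmitted, so deduplicate on the RTP timestamp if needed)
digit, is_end, duration = decode_dtmf(rtpevent_struct)
if is_end:
    print("DTMF digit %s, duration %d timestamp units" % (digit, duration))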


Monday, 23 November 2020

Kubernetes role-based authorisation for controller applications

There are many scenarios where an application running inside a Kubernetes environment may need to interact with its API.

For example, an application running inside a Pod may need to retrieve real time information about the availability of other applications' endpoints.

This may be a form of service discovery that integrates or extends the native Kubernetes internal service discovery. In most cases, DNS records are associated with a Service and provide the list of active Endpoints for that Service, with a proper TTL. There are situations, though, where those DNS records are not available, an application is not able to use them directly, or what's needed is more than the private IP addresses associated with the Endpoints.

If interacting with the Kubernetes API from inside an application is needed, then there are two main areas to consider: Authentication and Authorisation.

Every Pod has a Service Account associated with it, and applications running inside that Pod can use it. Without a specific configuration, a Pod uses the 'default' Service Account of its namespace, which only carries generic authorisation.

It's possible instead to define a more specific Service Account, with fine-grained permissions to access the API. This Service Account can then be referenced by a Pod (via spec.serviceAccountName) and granted its permissions with a role-based approach.


You can get a list of available Service Accounts with an intuitive:

# kubectl get serviceaccounts

which is likely to show you a single 'default' service account.

Service Accounts are namespace-scoped. You can check what Service Account is associated with a pod with a command like:

# kubectl -n NAMESPACE get pods/PODNAME -o yaml | grep serviceAccountName

Service Account authentication can use the token mounted inside the Pod, in the /var/run/secrets/kubernetes.io/serviceaccount directory: the bearer token is in the 'token' file and the Pod's namespace is in the 'namespace' file, e.g. /var/run/secrets/kubernetes.io/serviceaccount/token.

Those tokens are also visible as Mounts in the related containers.

For internal requests, Kubernetes provides a local default HTTPS endpoint at https://kubernetes.default.svc - so a way to discover the details of a Service Account for a given namespace could be:

#!/bin/bash

# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc

# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount

# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)

# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)

# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt

# Explore the API with TOKEN
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods


Accessing the internal API programmatically


With common python libraries such as https://github.com/kubernetes-client/python (available on Debian with the 'python3-kubernetes' package) it's extremely easy to automate the invocation of the internal APIs.

Most of the examples assume you're running your program as a user, and refer to the local kube config file, but when running inside a container it's possible to inherit the Service Account token associated with the hosting Pod.

For this, instead of using

config.load_kube_config()

use

config.load_incluster_config()

Then you can instantiate your API client object:

v1 = client.CoreV1Api()

and either do a single request, like getting a list of all pods inside any namespace:

ret = v1.list_pod_for_all_namespaces(watch=False)

or a list of pods belonging to a namespace and matching a specific application label, like:

ret = v1.list_namespaced_pod(namespace, label_selector=app_name, watch=False)

or you can "watch" some resources, which basically means subscribing to updates for those resources and getting a notification at each change:

w = watch.Watch()
for event in w.stream(v1.list_namespace, _request_timeout=60):
    ...

Each event has a type (ADDED, MODIFIED or DELETED) and carries a rich set of information about the current state of the resource.
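Putting the pieces together, a minimal in-cluster sketch could look like the following (the namespace 'mynamespace' and the label selector 'app=myapp' are just placeholder values):

from kubernetes import client, config, watch

# Use the Service Account token and CA certificate mounted inside the Pod
config.load_incluster_config()
v1 = client.CoreV1Api()

# One-off query: pods of a given namespace matching a label
for pod in v1.list_namespaced_pod("mynamespace", label_selector="app=myapp").items:
    print(pod.metadata.name, pod.status.pod_ip)

# Subscription: watch the Endpoints of the same namespace for changes
w = watch.Watch()
for event in w.stream(v1.list_namespaced_endpoints, "mynamespace", _request_timeout=60):
    print(event["type"], event["object"].metadata.name)

Note that this only works if the Service Account associated with the Pod has permissions to get, list and watch pods and endpoints, which is exactly what the next section covers.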

Defining your specific ServiceAccount


Before getting to that point, though, you need to define your non-default Service Account and assign specific permissions to it. To achieve this, the role-based approach can be used.

First of all define a ServiceAccount resource:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: mycomponent
  name: mycomponent-serviceaccount

Then define a Role with the permissions that will be granted:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myrole
  namespace: mynamespace
  labels:
    [...]
  annotations:
    [...]
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - endpoints
    verbs: ["get", "list", "watch"]

This example adds the permission to get, list, or watch the list of pods and endpoints in the given namespace.

Create a role binding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myrolebinding
  namespace: mynamespace
  labels:
    [...]
  annotations:
    [...]
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
  - kind: ServiceAccount
    name: mycomponent-serviceaccount
    namespace: mynamespace
Whenever a Role cannot be restricted to a namespace, for example if it needs to access cluster-wide resources like Nodes, then the ClusterRole resource is available.

References and other sources

"Access clusters using the Kubernetes API", https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/

An interesting post about asynchronous watches with Python: https://medium.com/@sebgoa/kubernets-async-watches-b8fa8a7ebfd4

"Kubernetes Patterns", an ebook by Redhat: https://www.redhat.com/cms/managed-files/cm-oreilly-kubernetes-patterns-ebook-f19824-201910-en.pdf





Friday, 20 November 2020

Testing SIP platforms and pjsip

There are various levels of testing, from unit to component, from integration to end-to-end, not to mention performance testing and fuzzing.

When developing or maintaining Real Time Communications (RTC or VoIP) systems, all these levels (with the possible exclusion of unit testing) are made easier by applications explicitly designed for this, like sipp.

sipp has a deep focus on performance testing or, using a simpler term, load testing. Some of its features allow you to fine-tune properties like call rate and call duration, simulate packet loss, ramp up traffic, etc. In practical terms though, once you have the flexibility to generate SIP signalling to negotiate sessions and RTP streams, you can use sipp for functional testing too.
sipp can act as an entity generating a call, or receiving a call, which makes it suitable to surround the system under test and simulate its interactions with the real world.

What sipp does can be generalised: we want to be able to simulate the real world that surrounds (or will surround) our system in Production. From this point of view sipp is not the only answer, and projects often use other tools, or a combination of other tools.

One simple and effective approach is re-using RTC applications and building the testing tool around them. When a system is built around an application, it's likely that the people working on it are familiar enough with the application to re-use it to mock the external world. This is often achieved with Asterisk or FreeSWITCH. They both expose an API for originating calls, and surely can play the role of the called party (or "absorbers", or "parrots", depending on their main scope and terminology).
Kamailio also can be used to generate calls, even though its core focus on signalling makes it slightly more complex to use in generic cases.

Unless behavioural changes are put in place, such solutions imply compromising on the SIP stacks in use. Asterisk or FreeSWITCH won't make it too easy to generate an INVITE with a wrongly formatted SIP header, for example, while sipp is much more flexible, and the SIP messages can be mocked down to the single character. What typically happens is that sipp is used to generate or receive calls when specific syntax requirements for the signalling are needed, while Asterisk and FreeSWITCH can be used in more permissive cases, where what's important is a generic session establishment.

When dealing with media (typically RTP streams) is necessary, sipp provides at least two methods: re-playing an RTP stream from a trace (pcap file), or encoding a WAV file into a stream. Recently sipp added the ability to play RTP Events separately (DTMF tones as in RFC 2833 - I think the first patch with this functionality was this). sipp is not able to transcode or generate non-PCM streams, but it can still play a non-PCM stream with just some limitations, which covers most typical cases.

Less generic scenarios where RTC applications like Asterisk and FreeSWITCH can be useful are the ones requiring SRTP (encrypted RTP). Even though sipp can be used to negotiate SRTP,  by adapting the SDP portion of the offer/answer, it doesn't provide a solution to generate SRTP streams.

In this case a very useful item to add to your toolbox is pjsip, a SIP stack library (also used by Asterisk, where chan_pjsip is the currently recommended SIP channel, as opposed to the older chan_sip) that exposes an API and a command-line application (pjsua). pjsua can be used directly, with either command-line arguments or a configuration file, and the pjsip library can be used to write programs in languages like python: this makes it very flexible and helps its integration into existing and new testing systems.

With pjsip, it's possible to generate calls that play audio and DTMF tones, in a similar way to sipp, but also to encrypt RTP and establish SRTP streams.

pjsua


The easiest approach is to build the pjsip project and use the pjsua binary (you can see a procedure in the Appendix).
pjsua accepts command-line arguments, but it can also read them from a configuration file, which makes them easier to read. For example you could just

#  pjsua --config-file pjsua.cfg

where pjsua.cfg contains just the caller and callee:

sip:bob@example.com
--id=sip:alice@example.com

A more sophisticated configuration file contains instructions on codecs and encryption, e.g.

sip:bob@example.com
--id=sip:alice@example.com
--use-srtp=0
--srtp-secure=0
--realm=*
--log-level=6
--no-vad
--dis-codec GSM
--dis-codec H263
--dis-codec iLBC
--dis-codec G722
--dis-codec speex
--dis-codec pcmu
--dis-codec pcma
--dis-codec opus
--add-codec pcma
--null-audio
--auto-play
--play-file /some_audio.wav

Since I mentioned SRTP as a possible key element for using pjsip, let's look into the related options:

--use-srtp=0
--srtp-secure=0

'use-srtp' can be 0, 1 or 2, and means "disabled", "optional" and "mandatory", respectively.

With "optional" pjsua offers both plain and encrypted RTP at the same time, and the callee entity can decide. With "mandatory" it will only offer SRTP, and the callee will have to either accept or reject.

'srtp-secure' refers to the use of TLS, and can also be 0, 1 or 2, meaning "not required", use "tls", or use "sips" respectively. Needless to say, in normal scenarios you want to protect the SRTP crypto information carried in the SDP, so you want to encrypt signalling too. SIP over TLS is the typical solution. For testing purposes you may prefer making it easier to check the content of signalling, and use 'srtp-secure=0'.

'no-vad' formally should be used to disable silence detection; in practice you want this option when generating a call from a machine that doesn't have a sound card.

Similarly, 'null-audio' disables the use of a real sound device for playback, which is necessary when the calls are generated from a host with no sound interfaces.

'dis-codec' is used to disable a codec from the negotiation, and 'add-codec' instead selects a codec to be added to the offer. This adds flexibility, and it's also worth noting that video codecs are available too.


Using pjsip library with python


It's possible to use the pjsip library's API with high-level programming languages like python. This makes test automation quite versatile, and I remember seeing this approach as early as 2012, when the project I was working on had the client applications built on top of pjsip: it was extremely valuable to simulate the clients programmatically from linux machines.

Being designed for interactive applications, pjsip comes with a nice event-based model, so in principle you need to trigger the desired actions and register callback functions that will be called at the proper moment.

A complete reference to the python library can be found here.

In general, after you import the library:

import pjsua as pj

then the library is instantiated into an object, the configuration objects are populated, and a call is triggered, e.g.:

    lib = pj.Lib()
    media_cfg = pj.MediaConfig()
    media_cfg.no_vad = 0
    lib.init(log_cfg = pj.LogConfig(level=3, callback=log_cb), media_cfg=media_cfg)
    lib.set_null_snd_dev()

    lib.set_codec_priority("GSM", 0)
    lib.set_codec_priority("iLBC", 0)
    lib.set_codec_priority("G722", 0)
    lib.set_codec_priority("speex", 0)
    lib.set_codec_priority("pcmu", 0)
    lib.set_codec_priority("pcma", 1)


    transport = lib.create_transport(pj.TransportType.UDP)
    lib.start()
    acc = lib.create_account_for_transport(transport)

    call = acc.make_call(sys.argv[1], MyCallCallback(), hdr_list=custom_headers)

You can see that setting a codec priority to 0 with set_codec_priority is equivalent to the --dis-codec command-line option.

MyCallCallback is the callback class whose methods (like on_state and on_media_state) are invoked at each change of call or media state. You'll have something like:

class MyCallCallback(pj.CallCallback):
    def __init__(self, call=None):
        pj.CallCallback.__init__(self, call)

    def on_state(self):
        ...
        if self.call.info().state == pj.CallState.CONFIRMED:
            # The call has been answered
            # Here you can create a player to generate audio into an RTP stream, send DTMF, log information, etc
            # You can even invoke other APIs to interact with more complex systems
...
    def on_media_state(self):
        global lib
        if self.call.info().media_state == pj.MediaState.ACTIVE:
            ...
            # Media is now flowing, so you can connect it to the internal conference object
            # Connect the call to sound device
            call_slot = self.call.info().conf_slot
            lib.conf_connect(call_slot, 0)
            lib.conf_connect(0, call_slot)
            print "on_media_state - MediaState ACTIVE"

As can be expected, exceptions can be caught and errors displayed:

except pj.Error, e:
    print "Exception: " + str(e)
    lib.destroy()
    lib = None
    sys.exit(1)

If you happen to need DTMF tones, pjsip offers the dial_dtmf() function, as part of the Call object, e.g.:

self.call.dial_dtmf("0")

Just remember that these calls are asynchronous and non-blocking: you need to explicitly add a delay to separate the beginning of a tone from other actions.
pjsip will generate proper RTP Event packets of the given duration, inside the existing RTP stream (so they will have the same SSRC and a proper timestamp reference).
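As a minimal sketch (the helper name and the 0.5 second gap are arbitrary choices, to be adjusted to the tone duration you need), a sequence of digits can be sent from inside a callback once the call is confirmed:

import time

def send_dtmf_sequence(call, digits, gap=0.5):
    # dial_dtmf() returns immediately, so pause between digits to keep
    # the RTP Events of consecutive tones apart
    for d in digits:
        call.dial_dtmf(d)
        time.sleep(gap)

# e.g. from on_state(), once the call state is CONFIRMED:
# send_dtmf_sequence(self.call, "1234#")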

I'll write about analysing pcap traces to extract information on RTP events in a separate article.

Wrap up


This article is somewhat what I would have wanted to read on the topic some time ago, but instead I had to piece it together from various sources and experiments. I hope it will be useful to some of the readers.

Appendix - pjsua build and install


To build pjsua on debian you can do something like:

apt install python-dev gcc make gcc binutils build-essential libasound2-dev wget
wget http://www.pjsip.org/release/2.6/pjproject-2.6.tar.bz2
tar -xjf pjproject-2.6.tar.bz2
cd pjproject-2.6
export CFLAGS="$CFLAGS -fPIC"
./configure && make dep && make

The executable will be available at pjsip/bin/pjsua2-test-x86_64-unknown-linux-gnu, which of course you can link to something easier to use, or copy to a directory in the PATH.

Friday, 22 November 2019

kamailio-tests, a testing framework for Kamailio developers

Kamailio is an open source VoIP server, widely used in the VoIP industry for its performance and feature set.

kamailio-tests is a project that aims to provide a level of automated testing for developers.

The main idea is that simple things like loading a module or calling a core or module function can be tested without building an entire infrastructure around Kamailio.

Also, we want to be able to perform the tests against various versions of Kamailio, and on various OS distributions. In this way a backward incompatible function or a regression can be discovered by a developer even though their system uses one single version and OS combination.

This also includes third-party libraries: for example, you may want to test the HTTP clients with different curl libraries, while Kamailio and the OS stay at the same version.

kamailio-tests doesn't perform tests at the function level (it requires the application to run), nor "integration testing" (in order to remain generic and compact it doesn't cover complex architectures): it aims to test Kamailio as an isolated application.

In order to achieve this, kamailio-tests relies on Docker to allow developers to experiment and perform their tests in perfect isolation. For example, once you've built the base Docker images you can add and modify tests while travelling on a plane.

Most VoIP platforms have extensive and complex test scenarios that involve setting up accounts, other components, specific logging systems, etc. We can't provide a compact, generic solution that would replace that, but instead we can focus on self-contained (pun intended), simple tests that give us key feedback on issues (and also the opportunity to debug them in that same infrastructure).

The internal structure of the project and how to use it is described in its README, but here I'd like to reiterate how tests are run and how new tests can be added.

Of course you'll need Docker, and it can be a Desktop or server version.

Currently CentOS 7, Debian 9 and Debian 10 are supported via dedicated Dockerfiles.

The version of Kamailio to test can be chosen by simply checking out the desired git branch or tag from the Kamailio GitHub repo.

The installation process is described in the README, but let's comment on it here too:

First create a directory to store the resources and move into it, e.g.:

mkdir kamailio-testing
cd kamailio-testing

Clone the kamailio-tests git repository inside that work directory:

git clone https://github.com/kamailio/kamailio-tests

Clone the kamailio git repository inside that work directory:

git clone https://github.com/kamailio/kamailio

Here you can git checkout the branch or tag you want to test, e.g.

cd kamailio
git checkout 5.3.1
cd ../

Copy the desired Dockerfile from kamailio-tests in the current folder, based on the OS distribution you want, e.g. for Debian Stretch:

cp kamailio-tests/docker/Dockerfile.debian9 Dockerfile

Build the Docker image. Give it the name you want: you'll just have to refer to it when launching the tests. e.g.:

docker build -t kamailio-tests-deb9x .

Run the tests. If you're happy with the default behaviour you can just:

docker run kamailio-tests-deb9x

and all not explicitly excluded tests will be executed.

Some tests may be excluded from execution by listing them in the etc/excludeunits.txt.DISTRIBUTION file, e.g. etc/excludeunits.txt.centos7.

How to add a test?

Create a new folder in units/, with a name starting with 't', followed by a string indicating the module, like 'geoip', and four digits, e.g. 'tgeoip0001'.

Inside this folder the minimum amount of data is a shell script that the test framework will execute and a kamailio.cfg file. See for example this test for geoip module.

The script must start with:

#!/bin/bash
. ../../etc/config
. ../../libs/utils

to prepare configuration and utility commands.

Then you can launch Kamailio with the configuration required, e.g.:

${KAMBIN} -P ${KAMPID} -w ${KAMRUN} -Y ${KAMRUN} -f ./kamailio-tgeoip0001.cfg -a no -ddd -E 2>&1 | tee /tmp/kamailio-tgeoip0001.log &

You can see that logs are being sent to a local file; this is typically how the test result can be verified. Of course you can use a different approach.

Inside the container you have sipsak available, so you can launch a test call to trigger call processing in Kamailio with something like:

sipsak -s sip:alice@127.0.0.1

To help trigger the tests, tools like sipp and pjsua could also be considered, but they are not currently installed in the Docker images.

After waiting for the necessary time, you can just kill Kamailio with

kill_pidfile ${KAMPID}

and check the test result by parsing the log file, e.g.:

grep "ip address is registered in US" /tmp/kamailio-tgeoip0001.log
ret=$?
if [ ! "$ret" -eq 0 ] ; then
    exit 1
fi

'exit 1' will make the test fail.

End the test script with

exit 0

to make it pass.

Adding a README.md inside that same folder is highly recommended.

You can add one or more tests and run them before committing the changes.


Once happy, please push your changes to a fork of the kamailio-tests repo and raise a pull request.

Thanks for reading; feedback is welcome.

Sunday, 17 November 2019

My notes on Kamailio Developer Meeting - November 2019

The Kamailio Developers Meeting is a two-day event held in Dusseldorf, now at its second edition.

As described in https://www.kamailio.org/w/developers-meeting/, 
"The purpose of the event is to support the interaction between developers and to offer a great environment to work together on relevant topics related to the Kamailio project. It is intended for participants that want to write code for Kamailio and its tools or improve the documentation. There will be no formal presentations, only open discussions, coding or documentation writing sessions."

The sipgate offices offered a very welcoming environment. Notably, there is a kitchen with chefs for breakfast and lunch, and a private pub, where the social event was held. I noticed the art works on display and learned that those offices are also part of an art itinerary in Dusseldorf.

We started listing the topics that we wanted to tackle during the event, then discussed a plan to go through them.

The first important activity was the release of kamailio version 5.3.1 (minor bug fixes). The release process includes running the tests in the kamailio-tests repo (more on this later in a dedicated post), and an issue was found and resolved during the release.

Then it was time to discuss RPM package building: Sergey Safarov has done an incredible amount of work to ensure that RPM packages for kamailio are available, and they are now published at https://rpm.kamailio.org/. Work is in progress to move all the build phases to infrastructure belonging to the kamailio project, avoiding as much as possible the use of personal accounts to access cloud services.

A documentation review was carried out: it was reported that the return values of module functions are often not documented, and work is encouraged in that direction, including a review of the documentation of function parameters. Modules may expose several pseudovariables, and they are not always documented. There's also a dedicated wiki page on pseudovariables that's useful as a reference.

Daniel gave a briefing on the Kemi framework, explaining how to structure the wrapper functions in two steps: first the parameter manipulation needed for the standard cfg file, then the actual logic encapsulated in a separate function. In this way, executing a function from Kemi doesn't require all the typical wrapper manipulations required by cfg functions, with a positive impact on performance. Developers need to ensure that when developing a new function, the body of the function is kept separate from the code that manipulates the parameters.

An open topic focused on the best process to handle "dialog failover". This may be necessary if a kamailio-based component disappears during the dialog lifetime, or if the architecture allows for in-dialog messages to be processed by different entities during a call, even in the typical case where record-routing applies. Currently it's possible to load dialog information on the fly from a DB, see dlg_db_load_callid(), but when an instance "takes over" the dialog information there's typically the need to manipulate the record-routing, which is complex and not exactly "standard" behaviour.

Modules like evapi and http_async_client allow for the management of asynchronous events by spawning dedicated workers which will execute portions of the routing logic when defined events happen. An open discussion is about the best method for managing asynchronous events and at the same time return execution to the main workers when an event is triggered.

Debian packaging: a PGP-signed key was created, taking advantage of the physical presence of various holders of PGP signatures. An open point is the ability to keep earlier versions of the kamailio packages available every time a new version is released. This is a known behaviour that may limit the options in some environments, and I've experienced it directly. It is caused by reprepro, and the right way forward seems to be moving away from reprepro and using aptly instead. We'll update on this.

Use of TLSv1.3: the only limitation appears to be in the libssl library. It's possible to allow the usage of version 1.3, but only by choosing tlsv1.2+, which also admits v1.2, and so doesn't enforce v1.3 only. When enforcing it becomes possible, kamailio administrators will just need to update the configuration and reload the tls module. Remember that only configurations in tls.cfg can be reloaded, while modparam declarations require a restart.

Finally, we know that a common pattern of usage of kamailio involves querying APIs and basing call processing on the API responses. A simple but powerful solution uses http_async_client in combination with rtjson; see for example this article from the wazo developers: https://wazo-platform.org/blog/kamailio-routing-with-rtjson-and-http-async-client. We are discussing whether extending rtjson may be generally helpful for scenarios like this; in that case I'll post an update.

For me this event was a great experience, both from a friendship point of view and as a learning experience. I'm very happy about how the kamailio project is managed, decisions are taken and information is shared.


Here's my report; for obvious reasons this can only be a limited account of what happened during the event, and the information provided is based on my recollection and notes. I'm sure I'm omitting other important material, and I'll be happy to integrate with another post. 

Tuesday, 27 November 2018

You need to slow down

This blog has been historically focused on technical topics in areas related to Real Time Communications, but I'm taking the liberty to digress a little.

I've been reading recently about the dynamics of performance in running.

I'm just an amateur runner, and I always train (and race) alone, so I felt the need to make it more interesting than just reading training tables. I've been studying what it is that limits performance. "Train more", unfortunately, not only doesn't always work, but also needs to take into account injuries, and overtraining in general.

One of the first concepts that struck me is that it's been proven that fatigue, and the consequent slowing down, does not mean that the body is unable to continue with that effort. What's behind slowing down is a sort of protective mechanism in our nervous system, which wants to prevent the body from reaching exhaustion.

Our nervous system is constantly getting feedback signals from the body, including the perception of adverse weather conditions, and computing for how much longer the current effort can be sustained. Even knowing how much is left to run is a form of external feedback.

When this computation detects that the effort is too high, the nervous system takes control and doesn't fire as many muscle fibres as the athlete wants. The athlete thinks the muscles can't continue to work at that level, but in reality they are entering a protective state. Without that, people could literally run until body exhaustion or even death.

I find this fascinating. Evolution has given us a sophisticated algorithm aimed at preventing body exhaustion by generating fatigue symptoms and consequently reducing the actual effort.

What's also fascinating is that it seems this system can be "tricked". One way of doing so is training. Training is a way of educating your nervous system that a certain effort is OK. "There's no need to shut me down, brain, I know what I'm doing. I did that thousands of times in my training sessions."
So while the body adapts to the stress of training, the nervous system too becomes more familiar with that stress, and gives up a little.

Another way of tricking that system is by providing false external feedback. It seems that if you get false information about elements like the air temperature, or even the remaining time in your training session or race, then the nervous system acts accordingly. If it believes it's not as hot as it really is, it will not intervene to shut down the muscle fibre activation.

Similarly, if the athlete thinks the race is close to the end, the nervous system will allow a prolonged effort. This is why elite marathoners, who clearly haven't underperformed for the first 40 km, can run the last 2 km even faster.

Of course, all these tricks have limits, and the improvements that can be tricked are in the order of small percentages. But still this shows the importance of external feedback.

This is properly explained by Steve Magness in his "Science of running" book. I then read another book from this author, "Peak performance". To be perfectly honest, I was expecting something strictly related to endurance sport, but in this second case the concept of performance was wider.

There seems, though, to be an analogy between the nervous system shutting down muscle fibres during running, and something that happens behind a desk and is more widely known: mental burnout. From this point of view, mental burnout can be seen as a way of saying "You can't keep going at this (perceived) level of effort. You need to sleep, hydrate, rest, but you keep working. I'm going to take control and shut you down."


As I'm making my own little experiments, in the future I'd like to write more about this, and in particular about the relationship between effort and rest. 

Monday, 19 November 2018

Docker from scratch

Some time ago I prepared an introductory seminar on Docker, which I called "Docker from scratch".

The audience was a local group of heterogeneous developers. As it typically happens, preparing that material was a great opportunity to understand better some of the aspects.

I then published the slides in Slideshare. I notice now that they got a decent number of views and downloads, so I'm linking those slides here as well for reference.

One big change with respect to 2016 is the choice of abandoning Docker Toolbox in favour of running Docker directly on macOS. I have to say I liked the sandboxing that came with Docker Toolbox, where the docker engine ran inside VirtualBox; that option is now only available for older versions.

