Sunday, 7 May 2017

Monitoring FreeSWITCH with Homer - adding non-SIP events with hepipe.js

FreeSWITCH (from now on FS) provides a very powerful tool to interact with it: the Event Socket (ESL), made available via the mod_event_socket module (https://freeswitch.org/confluence/display/FREESWITCH/mod_event_socket).

ESL is a TCP socket that applications can connect to in order to perform two types of action:
1. Send commands.
2. Subscribe to events.

Applications subscribing to events receive the corresponding notifications through the same TCP connection.
The simplicity of the protocol and transport has made it possible to write client libraries in many languages.
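An event subscribed in 'plain' format arrives as a block of Header: value lines, with the values URL-encoded. As an illustration, here is a minimal, hypothetical parser for one such block (parsePlainEvent is not part of any library):

```javascript
// Parse one ESL event delivered in "plain" format: a block of
// "Header: value" lines, with values URL-encoded (e.g. %20 for space).
function parsePlainEvent(block) {
  const headers = {};
  for (const line of block.split('\n')) {
    const idx = line.indexOf(': ');
    if (idx === -1) continue; // skip blank or malformed lines
    const name = line.slice(0, idx);
    const value = decodeURIComponent(line.slice(idx + 2));
    headers[name] = value;
  }
  return headers;
}

// An abridged, hand-made example of what FS sends for a channel event:
const sample = 'Event-Name: CHANNEL_CREATE\n' +
               'Core-UUID: 4e158e24-8f3b-11e7-b282-af0c32d\n' +
               'Event-Date-Local: 2017-05-07%2012%3A00%3A00';
const ev = parsePlainEvent(sample);
// ev['Event-Name'] is now 'CHANNEL_CREATE'
```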

Events from FS can serve multiple purposes. In this article I'm interested in monitoring and event correlation.

Homer (http://sipcapture.org/) is a widely used, open source tool for monitoring RTC infrastructures. It has a multitude of features, but its core is the ability to collect SIP signalling and other events from RTC applications and perform a form of correlation. In particular, it can correlate the SIP signalling involved in a call with other events, like RTCP reports or log lines associated with the same call.

While FS, through the sofia module, has native support for transmitting SIP signalling to Homer, other events can be acquired by collecting them from the ESL, filtering them, and sending them to Homer with the proper formatting.

This is what hepipe.js (https://github.com/sipcapture/hepipe.js) does. hepipe.js is a simple Node.js application that is able to:
- connect to FS via ESL
- subscribe to specific event categories
- format the events into HEP messages. HEP is a binary protocol used to transmit data to Homer.
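As a rough illustration of the HEP side: according to the published HEPv3 spec, a packet starts with the ASCII magic "HEP3" and a 16-bit total length, followed by type-length-value chunks. The chunk type used below is a simplified example:

```javascript
// Sketch of HEPv3 framing. Each chunk is: vendor id (2 bytes),
// chunk type (2 bytes), chunk length including this 6-byte header
// (2 bytes), then the payload. All integers are big-endian.
function hepChunk(vendor, type, payload) {
  const header = Buffer.alloc(6);
  header.writeUInt16BE(vendor, 0);
  header.writeUInt16BE(type, 2);
  header.writeUInt16BE(6 + payload.length, 4);
  return Buffer.concat([header, payload]);
}

// A packet is "HEP3", a 16-bit total length, then the chunks.
function hepPacket(chunks) {
  const body = Buffer.concat(chunks);
  const head = Buffer.alloc(6);
  head.write('HEP3', 0, 'ascii');
  head.writeUInt16BE(6 + body.length, 4);
  return Buffer.concat([head, body]);
}

// e.g. a chunk carrying a captured payload (type 0x000f in the spec)
const pkt = hepPacket([hepChunk(0, 0x000f, Buffer.from('INVITE ...'))]);
```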

hepipe.js is easy to use:
- Clone it
- Run 'sudo npm install' to install the required dependencies
- Set configuration
- Run it ('sudo node hepipe.js', or 'sudo nodejs hepipe.js')

The configuration is organized in "modules"; for this example you'll have to configure at a minimum the esl and hep modules.
Create a config.js file in the same folder as hepipe.js, with something like:

var config = {
  hep_config: {
    debug: true,
    HEP_SERVER: '10.0.0.17',
    HEP_PORT: 9060
  },
  esl_config: {
    debug: true,
    ESL_SERVER: '127.0.0.1',
    ESL_PORT: 8021,
    ESL_PASS: 'ClueCon',
    HEP_PASS: 'multipass',
    HEP_ID: 2222,
    report_call_events: true,
    report_rtcp_events: true,
    report_qos_events: true
  }
};

module.exports = config;

This configures the hep module to send data to a Homer instance listening on UDP at IP address 10.0.0.17, port 9060, and makes hepipe.js connect to FS's ESL on localhost, TCP port 8021, using the default password. See also the other configuration examples in the examples/ folder.

Please note that the ESL enforces two levels of authorization: a password and an ACL. Check conf/autoload_configs/event_socket.conf.xml in the FS configuration folder to ensure that the ACL in use, if any, is compatible with the source IP address hepipe.js connects from.
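For reference, the relevant section of event_socket.conf.xml looks something like this (the values shown here are illustrative defaults):

```xml
<configuration name="event_socket.conf" description="Socket Client">
  <settings>
    <param name="listen-ip" value="0.0.0.0"/>
    <param name="listen-port" value="8021"/>
    <param name="password" value="ClueCon"/>
    <!-- adjust or remove this if hepipe.js connects from a non-local IP -->
    <param name="apply-inbound-acl" value="loopback.auto"/>
  </settings>
</configuration>
```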

Once config.js is ready, launch hepipe.js and watch the events being sent to Homer.
Note that you can filter out event types by setting any of these to false:
    report_call_events: true,
    report_rtcp_events: true,
    report_qos_events: true

Assuming FS is configured to send its SIP signalling to the same Homer instance, you'll see the events captured by hepipe.js associated with the related SIP call flows.

See for example below: log lines created by FS, sent to Homer, and then presented together with the SIP signalling in Homer:



Enjoy!






Sunday, 15 January 2017

Analysing Opus media from network traces

VoIP/RTC platforms typically have many elements processing audio. When an issue is reported, it's important to be able to narrow the field of investigation, to save time and resources.

A typical scenario is bad or missing audio perceived on the client side. As I've done previously (here for Opus and here for SILK), I'd like to share some practical strategies to extract audio from a pcap trace (to verify that the audio received or sent was "correct") and to "re-play" the call inside a test bed (to verify that the audio was not only good but also carried correctly by the RTP stream). Of course, a lot can be inferred from indirect data, for example the summary of RTCP reports showing the number of packets exchanged, the packets lost, and the latency. But sometimes those metrics are perfect while the issue is still there.

Focusing in this case on Opus audio, and starting from a pcap file with the network traces for a call under investigation, let's see how to decode the Opus frames carried by the RTP packets into an audible WAV file.

You don't even need to have captured the signalling: it's sufficient to have the UDP packets carrying the RTP. If the signalling is not visible, Wireshark may not recognize that the UDP packets carry RTP, but you can give it a hint by right-clicking on a frame, choosing "Decode As...", and selecting "RTP".
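As a side note, the fixed 12-byte RTP header (RFC 3550) is simple enough to inspect by hand, for example to check the payload type and SSRC of the packets in a trace. A hypothetical sketch:

```javascript
// Decode the fixed 12-byte RTP header (RFC 3550) from a Buffer.
function parseRtpHeader(buf) {
  return {
    version: buf[0] >> 6,            // should be 2 for RTP
    marker: (buf[1] & 0x80) !== 0,
    payloadType: buf[1] & 0x7f,      // e.g. 120 or 96 for dynamic Opus
    sequence: buf.readUInt16BE(2),
    timestamp: buf.readUInt32BE(4),
    ssrc: buf.readUInt32BE(8)
  };
}

// A hand-built example header: version 2, marker set, payload type 120
const hdr = Buffer.from([0x80, 0xf8, 0x00, 0x01,
                         0x00, 0x00, 0x03, 0xe8,
                         0xde, 0xad, 0xbe, 0xef]);
const info = parseRtpHeader(hdr);
// info.payloadType is 120, info.ssrc is 0xdeadbeef
```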

It's typically easy to find the relevant RTP stream in Wireshark ("Telephony -> RTP -> RTP Streams"), select it, and prepare a filter. Then you can export the packets belonging to that stream into a dedicated pcap file ("File -> Export Specified Packets...").

I've then modified opusrtp in a fork of opus-tools so that it can extract the payload from a given pcap into an Opus file, e.g.:

./opusrtp --extract trace.pcap

This will output a rtpdump.opus file, which can be converted into a WAV file directly with opusdec, still part of opus-tools:

./opusdec --rate 8000 rtpdump.opus audio.wav

You can listen to the WAV file and verify whether at least the carried RTP payload was valid.

The network trace with the RTP can also be used to re-play the call, injecting the same RTP as in the call under investigation. With the help of sipp you can set up a rudimentary but very powerful test bed. Use the standard UAS scenario (e.g. in uas.xml), adding an action that plays the captured pcap right after the ACK is received.

If you launch sipp with a command like:

sipp -sf uas.xml -i MEDIA_IP_ADDRESS

you'll be able to call sipp. It will answer the call, as the scenario mandates, and will play the RTP contained in rtp_opus.pcap. The stream SSRC, timestamps, even Marker bits will be preserved. This will give you quite an accurate simulation of the stream received by the client in the original call.
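The scenario addition mentioned above can use sipp's play_pcap_audio exec action; a minimal sketch (the file name is just an example, and the path is resolved relative to where sipp runs):

```xml
<!-- inside uas.xml, right after the ACK is received:
     replay the captured RTP towards the caller -->
<nop>
  <action>
    <exec play_pcap_audio="rtp_opus.pcap"/>
  </action>
</nop>
```

Note that sipp must be built with PCAP play support for this action to be available.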

It should be straightforward to obtain all these components. For opus-tools, on a Debian-based machine, you can just:

sudo apt-get install libogg-dev libpcap-dev
git clone https://github.com/giavac/opus-tools.git
cd opus-tools
./autogen.sh
./configure
make

For sipp:
sudo apt-get install sip-tester

I hope this will save the reader some time in future investigations.

UPDATE: The fork of opus-tools was merged into the original repo, so you don't need my repo.

UPDATE 2: This only works if the Opus payload in the RTP is not encrypted. Also, it may need a patch when the extension header for volume indication is used (e.g. 'urn:ietf:params:rtp-hdrext:ssrc-audio-level', see RFC 6464). Don't forget that at the moment the payload type is hardcoded to 120: you may need to rebuild opusrtp with the payload type your trace uses, e.g. 96 (it should be easy to pass it as a command line argument, something for a quiet moment).



Friday, 13 January 2017

VoIP calls encoded with SILK: from RTP to WAV, update

Three and a half years ago (which really sounds like a lot of time!) I was working with a VoIP infrastructure using SILK. As often happens to server-side developers/integrators, you have to prove whether the audio provided by a client, or to a client, is correctly encoded :-)

Wireshark is able to decode, and play, G.711 streams, but not SILK (or Opus - more on this later). So I thought of having my own tool handy, to generate a WAV file for a PCAP with RTP carrying SILK frames.

The first part requires extracting the SILK payload and writing it into a bitstream file. Then you have to decode the audio with the SILK SDK decoder to get a raw audio file. From there to a WAV file is very easy.

As I tried to describe in this previous post, I had to reverse engineer the test files contained in the SDK, to see what a SILK file looked like.

Since the SILK payload length is not constant, all that was needed was to insert 2 bytes with the length of the following SILK frame. At the beginning of the file you have to add a header containing "#!SILK_V3", and voilà.
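That assembly can be sketched as follows (a hypothetical JavaScript illustration, assuming the 16-bit little-endian length prefix used by the SDK's test files):

```javascript
// Build a SILK_V3 bitstream from an array of SILK frame payloads
// (Buffers), as extracted from the RTP packets: the "#!SILK_V3"
// magic header, then each frame prefixed with its length as a
// 16-bit little-endian integer.
function buildSilkBitstream(frames) {
  const parts = [Buffer.from('#!SILK_V3', 'ascii')];
  for (const frame of frames) {
    const len = Buffer.alloc(2);
    len.writeUInt16LE(frame.length, 0);
    parts.push(len, frame);
  }
  return Buffer.concat(parts);
}

const bitstream = buildSilkBitstream([Buffer.from([0x01, 0x02, 0x03])]);
// bitstream is "#!SILK_V3", then 0x03 0x00, then the 3 payload bytes
```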

This is accomplished by silk_rtp_to_bitstream.c (from https://github.com/giavac/silk_rtp_to_bitstream), a small program based on libpcap that extracts the SILK payload from a pcap and writes it properly into a bitstream file.

Build the binary with:

gcc silk_rtp_to_bitstream.c -lpcap -o silk_rtp_to_bitstream

(you'll need libpcap-dev installed)

Create the bitstream with:

./silk_rtp_to_bitstream input.pcap silk.bit

Now you can decode, using the SILK SDK, from bitstream into raw audio with:

$SILK_SDK/decoder silk.bit silk.raw

Raw audio to WAV can be done with sox:

sox -V -t raw -b 16 -e signed-integer -r 24000 silk.raw silk.wav
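As an alternative to sox, the WAV container is simple enough to write by hand; here is a sketch for 16-bit mono PCM (rawToWav is a hypothetical helper, not an existing tool):

```javascript
// Wrap raw 16-bit signed little-endian PCM in a minimal 44-byte
// WAV (RIFF) header, as an alternative to the sox command above.
function rawToWav(pcm, sampleRate) {
  const channels = 1, bitsPerSample = 16;
  const blockAlign = channels * bitsPerSample / 8;
  const header = Buffer.alloc(44);
  header.write('RIFF', 0, 'ascii');
  header.writeUInt32LE(36 + pcm.length, 4);       // RIFF chunk size
  header.write('WAVE', 8, 'ascii');
  header.write('fmt ', 12, 'ascii');
  header.writeUInt32LE(16, 16);                   // fmt chunk size (PCM)
  header.writeUInt16LE(1, 20);                    // audio format 1 = PCM
  header.writeUInt16LE(channels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(sampleRate * blockAlign, 28); // byte rate
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36, 'ascii');
  header.writeUInt32LE(pcm.length, 40);           // data chunk size
  return Buffer.concat([header, pcm]);
}

const wav = rawToWav(Buffer.alloc(4800), 24000); // 0.1 s of silence
```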

This works fine with single channel SILK at 8000 Hz.


More to come: an update on how to accomplish the same but for Opus.