Saturday, 5 March 2011

It's not that hard to manage expectations (with Perl)

Developers with a background in Ruby on Rails and PHP are familiar with the concepts of mocking objects and setting expectations on them.

The good news is that these powerful unit-testing techniques are available in Perl as well. Need I add that you can find them on CPAN?

Before diving into an example, though, a brief explanation of the topic.

Unit testing “is a method by which individual units of source code are tested to determine if they are fit for use” (Wikipedia). It’s a common practice to perform unit testing in isolation; in other words you focus testing on the source code, limiting as much as possible the interaction across modules or systems.

In practice, it's almost always impossible to test a class without instantiating other classes it depends on or interacts with. The solution is mocking objects: creating “empty objects” that emulate the external behaviour of the real ones. They must be able to “fool” the class under test and allow an exhaustive set of tests to be built around it.

Since the class under test “expects” the other classes to do something, here comes the term expectation: the unit test expects the class under test to use the mock object by calling a specific method, optionally in a specific order and optionally with specific arguments and return values.

An example with PHPUnit, where the class under test Person depends on a class Company, which is mocked:

$company = $this->getMock('Company');
$person = new Person();
$person->setCompany($company);
$company->expects( $this->atLeastOnce() )->method('giveLaptop');
$this->assertTrue($person->startFirstDay());


In this silly example, we are testing Person, and we want to verify that on the first day of work with a company, that person has a laptop assigned.
Person is actually instantiated, but Company, possibly a bigger and more complicated class, with other dependencies, is just mocked.
What we check is that inside Person::startFirstDay(), there’s at least one call to Company::giveLaptop().

As mentioned at the beginning of this article, expectations are available in Perl too, with the module Test::Expectation (here paired with Test::MockObject for the mock). The equivalent of the previous example could be:

my $person  = Person->new();
my $company = Test::MockObject->new();

$person->setCompany($company);

it_is_a('Company');

it_should "give a laptop", sub {
    Company->expects('giveLaptop');
    is(1, $person->startFirstDay());
};


(Note that Test::Expectation internally uses Test::More with a plan, so if you use Test::Expectation AND Test::More together you can't set a plan with the latter: Perl will complain that a plan has already been set by the former.)

Unfortunately Test::Expectation is not available as a standard Debian package, so if you need it you may debianize it yourself (just download, untar, dh_make and debuild; I wrote this article with some basic instructions).

Disclaimer:
- The code in this article won't work as-is: it needs the relevant modules to be installed and configured, and a few more lines to include them.
- The code in this article has not been tested, and comes with no warranty
- I'm not implying that a company should provide each employee with a laptop during their first day

A gentle introduction to G.729

G.729 is an 8 kbps audio codec, standardized by the ITU, and also called CS-ACELP: Coding of Speech using Conjugate-Structure Algebraic-Code-Excited Linear Prediction, as that’s the algorithm used for audio compression.

It has two extended versions: G.729A (a reduced-complexity algorithm, with slightly lower quality) and G.729B (extended features, adding silence compression). It's popular in the VoIP world because it combines a very low bitrate with good quality (but, alas, it is not royalty-free).

G.729A takes as input voice frames of 10 ms duration, sampled at 8 kHz with 16 bits per sample:

This gives a frame size of 80 samples:
8000 samples/sec × 10 × 10⁻³ sec/frame = 80 samples/frame

The output is 8 kbps, so each encoded frame is represented by 10 bytes:
8000 bits/sec × 10 × 10⁻³ sec/frame = 80 bits/frame = 10 bytes/frame
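These two computations can be sketched with plain shell arithmetic (just a quick sanity check of the numbers above, not part of any real codec implementation):

```shell
frame_ms=10       # frame duration in milliseconds
sample_rate=8000  # input sampling rate, samples per second
bitrate=8000      # encoded output, bits per second

# samples per frame: rate * duration
samples_per_frame=$(( sample_rate * frame_ms / 1000 ))
# bits per frame, then bytes per frame
bits_per_frame=$(( bitrate * frame_ms / 1000 ))
bytes_per_frame=$(( bits_per_frame / 8 ))

echo "samples/frame: $samples_per_frame"   # 80
echo "bits/frame:    $bits_per_frame"      # 80
echo "bytes/frame:   $bytes_per_frame"     # 10
```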

Quality
Considering its bit rate, G.729 has an excellent perceived quality (MOS).
Under normal network conditions G.729A has MOS 4.04 (while G.711 u-law, 64 kbps, has 4.45)
Under stressed network conditions G.729A has MOS 3.51 (while G.711 u-law, 64 kbps, has 4.13)
Perfect quality has MOS 5.

G.729 doesn’t reliably carry DTMF tones in-band, so out-of-band signalling (RFC 2833) is typically used alongside it.

Algorithm delay and complexity
The delay between input and encoded output is 15 msec: 1 frame (10 msec) + 5 msec required by the look-ahead prediction algorithm.

Not surprisingly, given such a low bitrate combined with high quality, G.729 has a relatively high computational complexity of 15 (while G.711 has 1, and at the other extreme G.723.1 has 25).

VAD and CNG
G.729B extends the codec with VAD (Voice Activity Detection, which enables silence suppression) and generates CNG (Comfort Noise Generation) packets. This helps the receiving end in two ways:
1. Recovering synchronization under high network latency
2. Generating comfort noise (which, during silence from the transmitting end, tells the listener that the call is still up)

Next to come, a gentle description of ACELP and G.723.

Tuesday, 1 March 2011

Test::Harness, or a lesson about wheels not to be reinvented

Let’s say you have your unit tests in place, using something not particularly esoteric such as Test::More. Good.

Now you want something to add colour to the output, so that green is Good, red is Bad, and you get quicker feedback.

Some time ago I wrote a quick shell script to run all the unit tests, interpret the output (in TAP format), print some coloured output on screen, and stop the tests if something fails.

The core was:
function check_test_result() {
    red='\e[0;31m'
    green='\e[0;32m'
    end='\033[0m'

    # Run one test file and capture its TAP output
    result=$(perl "$1")

    echo "$result"

    # Exit 0 (failure detected) if any line contains 'not ok'
    perl -e 'exit((join(" ", @ARGV) =~ /not ok/m) ? 0 : 1);' "$result"

    if [ $? -eq 0 ]
    then
        echo -e "$red Not all tests passed. FAILURE$end"
        exit 1
    else
        echo -e "$green All tests passed. SUCCESS$end"
    fi
}


This works as long as the unit tests fail by printing ‘not ok…’, but it doesn’t quite work if something else goes wrong (and you see the typical “your test died just after…” at the end).

Rather than improving this script, I was looking for low-hanging fruit, and in the end the easiest solution I found was Test::Harness's prove, which by the way has colour output by default.

So instead of the above (which also needs a few more lines to collect the list of .t files), I can just use:

$ prove -v t/*.t


I mentioned Test::Harness's prove in this post too, where I used the JUnit module to convert from TAP to JUnit format and get a nice code-coverage report on Hudson.

Monday, 28 February 2011

Build-Depends-Indep is useless

The words in the title are not mine; they are just a quote from the Debian Wiki.

However, if you're reading a debian/control file and wondering what Build-Depends-Indep means, starting from the assumption that it is useless may help.

Otherwise you can follow this rule: put in Build-Depends all the packages that are absolutely necessary to build the architecture-dependent files, while the rest goes into Build-Depends-Indep.
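As a sketch of that rule, here is a debian/control fragment for a hypothetical source package `foo` that builds an architecture-dependent binary plus architecture-independent documentation (the package names are purely illustrative):

```
Source: foo
Build-Depends: debhelper (>= 7), libfoo-dev
Build-Depends-Indep: docbook-xsl, xsltproc
```

The compiler-side dependency (libfoo-dev) is needed to build the architecture-dependent binary, so it goes in Build-Depends; the documentation toolchain is only needed for the arch-independent part, so it can go in Build-Depends-Indep.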

Saturday, 26 February 2011

Use the Warnings, Luke

Browsing StackOverflow on Perl-related topics, I have two main observations:
1. Answers are typically very good, concise and useful
2. Questions are submitted with code that doesn't have 'warnings' enabled

I'd say that a considerable portion of the submitted questions would not be posted, or would at least be less generic, if their authors used 'use warnings;' in their code.

Then add a pinch of the great Perl::Critic, and possibly only half of the questions would actually be submitted.

If you're using Perl, or plan to use it, I strongly recommend:
1. Always set 'use warnings;'
2. Always submit your code to Perl::Critic (and keep a copy of Perl Best Practices handy).
3. First create the tests, then write the code. That's the only reasonable way (unless you're working on a one-liner for a quick admin task). TDD is your friend.

See more on Perl Critic here.

Tuesday, 1 February 2011

debian - cleaning up stale configuration files

As suggested in Debian Cleanup Tip #1: Get rid of useless configuration files, it's worth using grep-status to retrieve information about configuration files left behind by a package removal or upgrade.

For example, on my Squeeze VM:
$ grep-status -n -sPackage -FStatus config-files
libjack-jackd2-0


You can confirm with 'dpkg -l' that it's a package in 'rc' status:
$ dpkg -l | grep libjack-jackd2-0
rc libjack-jackd2-0 1.9.6~dfsg.1-2 JACK Audio Connection Kit (libraries)


You probably want to remove those configuration files for good; just purge the package. For example:
$ dpkg -P libjack-jackd2-0
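To find every package left in 'rc' state at once, a small awk filter over 'dpkg -l' works well. In this sketch the dpkg output is simulated with printf so the pipeline is self-contained; on a real system you would pipe `dpkg -l` directly into the same awk command:

```shell
# Print the name (second field) of every line whose status starts with 'rc',
# i.e. removed packages that still have configuration files around.
printf 'ii  bash              4.1-3           GNU Bourne Again SHell\nrc  libjack-jackd2-0  1.9.6~dfsg.1-2  JACK Audio Connection Kit (libraries)\n' \
  | awk '/^rc/ {print $2}'
# → libjack-jackd2-0
```

On a live system: `dpkg -l | awk '/^rc/ {print $2}'` (review the list before purging anything).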


See also this note about Debian configuration files, describing an important assumption Debian makes.

Wednesday, 12 January 2011

Recommended tool: Putty Session Manager

If you have more than a few entries in PuTTY, you may want to consider PuTTY Session Manager.
You can organize your sessions in folders (and start all the sessions inside a folder with a click).

Please leave a comment if you know something better!

Monday, 10 January 2011

debian, tab completion failing

Not a big deal, but since the solution to this problem wasn't immediately obvious, it's probably worth writing down.

Symptom: when you try to tab complete a command, you see a message like this:

vi /et-sh: <( compgen -d -- '/et' ): No such file or directory

Cause: you're using sh, not bash, so even if tab completion is enabled in ~/.bashrc, it won't be available to you.

Solution: change the default shell for the current user from sh to bash with
chsh -s /bin/bash

That's it.