Friday, 29 October 2010

Test reports on Hudson in 30 seconds (well, let's say quickly)

There are so many articles on how to generate test reports on Hudson for Perl modules or applications that I thought it was somehow complicated. It's not.

The underlying issue is that Hudson only understands test results in JUnit format, while Perl test suites emit TAP. So all you need to do is convert the test results from TAP to JUnit.

Of course there are some prerequisites:
  • Your perl code must have tests (duh)
  • You must have Hudson set up to run those tests
  • You need to install TAP::Harness::JUnit on the building box
  • You need to install 'prove' on the building box

Then all you have to do is add this command in the shell script you execute to run the tests:
prove --harness=TAP::Harness::JUnit
and configure the "Publish JUnit Test Result Reports" section of your project to point to the resulting XML file.

No mumbo jumbo.
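Put together, the whole "Execute shell" build step can be as small as the sketch below. The t/ directory and the JUNIT_OUTPUT_FILE variable are assumptions on my side: check which output-file mechanism the TAP::Harness::JUnit version on your building box supports, and where it writes its XML by default.

```shell
#!/bin/sh
# Hypothetical Hudson build step (sketch).
# JUNIT_OUTPUT_FILE tells TAP::Harness::JUnit where to write the XML
# when run via prove; without it the harness uses a default file name.
JUNIT_OUTPUT_FILE=junit_output.xml \
    prove --harness=TAP::Harness::JUnit t/
```

Then point the "Publish JUnit Test Result Reports" field at that XML file, relative to the workspace.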

Since TAP::Harness::JUnit is not available as a Debian package, if you need to install it properly, just follow the usual procedure to generate Debian packages from CPAN modules:

  1. wget http://search.cpan.org/CPAN/authors/id/L/LK/LKUNDRAK/TAP-Harness-JUnit-0.32.tar.gz
  2. tar -pzxvf TAP-Harness-JUnit-0.32.tar.gz
  3. dh-make-perl TAP-Harness-JUnit-0.32/
  4. cd TAP-Harness-JUnit-0.32
  5. debuild

There you go.

Saturday, 23 October 2010

A rather serious matter – deciding what to read

If you are not interested in the full post, the concept of it is: “What are the 500 books that are worth reading before I die?” What do you recommend?

Here’s the full post:

I’ve been a keen reader since I was a child, and I’ve recently become an enthusiastic Kindle user.

Infinite (well, I know they are finite, but allow me this mathematically inconsistent emphasis) opportunities have suddenly become available: I can have thousands of books on my Kindle in seconds. Kindle books cost less than their paper versions, and are often even free (what’s slightly concerning is how easily you can buy a Kindle book: you just click a button and your Amazon account is charged automatically - that’s why I set up a password to lock the device).

Anyway the point of this post is about decisions: what to read.

Let’s start by estimating how much I can read in my life, to put what follows in context. I need about one month to read a technical book of 400-500 pages, and about two weeks to read an interesting novel of 200-300 pages.

I’m not sure I should treat technical books as a separate topic, because I typically like them, so they are relevant not only for my job but for me in general (or I just can’t manage to read them, so the time spent is 0).

For sure, they require considerably more time than novels (mainly because it’s more “studying” than “reading”).

I’ll leave them out to focus more on “what I should read as a human being, rather than as engineer”, and to simplify my point.

So in a month I could read 2-3 interesting books. Holidays will probably raise the average, but first, I don’t have that many holidays, and second, there are working months in which I don’t find the time to read as much as I’d like.

Let’s say 2 books per month. That’s 24 per year; 20 with a more conservative estimate.

20 books per year means that from today until I die – depending on how things go – I could read from 0 (damn double decker on a Sunday morning) to about 800 books.

If you look at one of those “1000 books to read before you die” articles, you see that I already can’t read 200 of them (though I think I can safely drop the “Love” section for now), and still there are thousands of options out there. Furthermore, each year new books worth reading are published – let’s say between 5 and 10? This means that even if I want to keep reading books that are really worth the time, I can pick only about 600 existing books now. 500 makes it a nicer number.

So the question is: “What are the 500 books that are worth reading before I die?”

What do you suggest?

(hint: consider that 40 good books will keep me quiet for a couple of years, so there’s no need to completely drain your energies on the other 460 for the moment).

I do know which books I’ve liked so far, though: I’ll write a separate post about them.

Saturday, 16 October 2010

A forgotten note

Almost a year ago I attended "Literature and Freedom: Writers in Conversation", a discussion hosted by Tate Modern which featured, among others, Tahar Ben Jelloun.

Now I wish I had taken more notes during that discussion, but this afternoon I found one - I'd better write it down here before I lose even that piece of paper:

Behind every work of fiction lies a tragedy.

It's probably something you may easily believe after reading "This blinding absence of light", which definitely impressed me this spring, and which I approached like a long work of poetry rather than a novel. Here's a review from The Guardian.

Need some constructive criticism on your code? Ask perlcritic.

perlcritic is "a static source code analysis engine", a CPAN module which can also be used as a command.

An example:
$ perlcritic -severity 3 src/AModule.pm

Subroutine does not end with "return" at line 24, column 1. See page 197 of PBP. (Severity: 4)

Return value of eval not tested. at line 32, column 5. You can't depend upon the value of $@/$EVAL_ERROR to tell whether an eval failed.. (Severity: 3)

As you can see, you get extremely useful information about the sanity of your code and how well it complies with good coding practices. There are 5 severity levels, with 5 the most severe: running at severity 5 reports only the worst offences, while lower settings are progressively pickier.

Most Policy modules are based on Damian Conway's book Perl Best Practices (PBP), but it's not limited to it.

I run perlcritic (typically at severity 3) automatically on all the source code files, every time I run the unit tests and/or build a package, in order to get valuable feedback as soon as possible after code changes. Since the same build scripts are used by a CI tool, I'm tempted to make the build fail if level-4 or level-5 policies are violated (as I already do whenever even a single unit test fails), but I'm still not sure.
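A minimal sketch of such a build gate, assuming the code lives under lib/ (adjust the path to your layout): perlcritic exits with a non-zero status when it finds violations at the requested severity or above, so the shell's short-circuit operators are enough to fail the build.

```shell
# Hypothetical CI gate (sketch): fail the build step if any
# severity-4 or severity-5 policy is violated under lib/.
perlcritic --severity 4 lib/ || exit 1
```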

Anyway, I strongly recommend this tool to anyone writing Perl code, even if it's just a script of a few lines (which is very likely to grow in the future, as a law on the increase of code complexity over time - which I can't recall now - says ;-) ).

Personally, even if I don't agree 100% with what PBP recommends, I'd rather follow it and have a documented approach than do what I prefer (maybe just because I'm used to it) without a specific, well-documented reason.

If you just want to have a quick look, upload some perl code here, and see what feedback you get.

A note on the term constructive criticism: I chose it for the title as a direct translation of something I could have written in Italian, but then, reading its definition on Wikipedia, I think I've learned something more. Here's a quote:
Constructive criticism, or constructive analysis, is a compassionate attitude towards the person qualified for criticism. [...] the word constructive is used so that something is created or visible outcome generated rather than the opposite.

Wednesday, 6 October 2010

Merging hashes with Perl - easy but (maybe) tricky

Merging hashes with Perl is extremely easy: you can just "list" them.
For example try this:

use strict;
use warnings;

use Data::Dumper;

my %hash1 = ('car' => 'FIAT', 'phone' => 'Nokia');
my %hash2 = ('laptop' => 'Dell', 'provider' => 'Orange');

my %mergedHash = (%hash1, %hash2);

print "hash1: " . Dumper(\%hash1);
print "hash2: " . Dumper(\%hash2);
print "merged hash: " . Dumper(\%mergedHash);
You'll see that %mergedHash contains the result of merging the two hashes (if the two hashes shared a key, the value from %hash2, the last one listed, would win).

This is the simplest but not most efficient way to do it: here Perl Cookbook presents a different method, which uses less memory - interesting if you're merging big hashes (and have memory constraints).
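A sketch of that kind of looping merge (a common memory-frugal idiom; I'm not reproducing the Cookbook's exact code): it copies one key/value pair at a time into %hash1 instead of flattening both hashes into one big temporary list, at the price of modifying %hash1 in place.

```perl
use strict;
use warnings;

my %hash1 = ('car' => 'FIAT', 'phone' => 'Nokia');
my %hash2 = ('laptop' => 'Dell', 'provider' => 'Orange');

# Merge %hash2 into %hash1 one pair at a time: no temporary
# flattened list of all keys and values is built in memory.
while (my ($key, $value) = each %hash2) {
    $hash1{$key} = $value;
}
```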

Now the tricky bit is that you can get this merge even when you don't expect it...
For example add this code to the previous lines:

useHashes(%hash1, %hash2);

sub useHashes {
my(%hashInput1, %hashInput2) = @_;
print "hashInput1: " . Dumper(\%hashInput1);
print "hashInput2: " . Dumper(\%hashInput2);
}

What would you expect? Try it out.

What actually happens is that the two hashes are flattened into a single list inside the call to useHashes(), so everything works as if useHashes() had received a single argument (the merged hash) instead of two separate arguments: %hashInput1 slurps the whole list, and %hashInput2 ends up empty.
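If you actually want the function to see two distinct hashes, the usual idiom is to pass references instead, so nothing gets flattened. A sketch (useHashRefs and its return value are my own invented example, not from the post above):

```perl
use strict;
use warnings;

my %hash1 = ('car' => 'FIAT', 'phone' => 'Nokia');
my %hash2 = ('laptop' => 'Dell', 'provider' => 'Orange');

# Each reference is a single scalar, so the two hashes stay separate.
sub useHashRefs {
    my ($hashRef1, $hashRef2) = @_;
    return (scalar keys %$hashRef1, scalar keys %$hashRef2);
}

my ($size1, $size2) = useHashRefs(\%hash1, \%hash2);
```

Here $size1 and $size2 are both 2, confirming that each hash arrived intact instead of being merged into the first parameter.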