
Easy VPN setup across multiple sites

I recently had a scenario where I needed to connect servers belonging to:

- DigitalOcean, data center X
- DigitalOcean, data center Y
- A private data center

and each architecture needed to be replicated on a number of "logical" environments (e.g. 'development', 'testing', 'production').

These servers needed to "see" each other securely.

Note that virtual machines on DigitalOcean (they call them 'droplets') can belong to different data centers. When droplets use the optional private interface, there are two things to consider:
1. Traffic inside a data center is potentially visible to any other equipment in that data center.
In other words, the fact that two droplets belong to the same customer account doesn't mean their private traffic is isolated from traffic belonging to droplets on other accounts.
You are responsible for securing that traffic.
2. Droplets in different data centers cannot communicate directly via their private interfaces. This is intuitive but nevertheless important to highlight.

This of course screamed for a VPN, and I directed my attention to openvpn. Since I was already using Puppet for the rest of the system and application setup, I wanted to manage the openvpn client and server configuration with Puppet as well.

A quick search on Puppet Forge brought me to luxflux's openvpn module, which at the time was the most popular. I experimented a little with it and found it worked properly on the Debian virtual machines I was using.

You just need to do some design of the private network addressing, and you'll be fine.

The key to using this module is understanding that each VPN client you declare on the VPN server causes client certificates to be generated on the server machine. You then need to copy them to the VPN client hosts.
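
As a sketch, the copy step can be as simple as the following (illustrative only: the actual location of the generated client material depends on the openvpn module version, and 'vpn.example.com' stands in for the VPN server's FQDN):

```shell
# Illustrative only: directory layout depends on the module version.
scp -r root@vpn.example.com:/etc/openvpn/vpn.example.com/download-configs/vm1 \
    /etc/openvpn/keys/
```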

On the server side I just needed three elements:

1. Server configuration (some elements omitted/changed on purpose):

openvpn::server { $::fqdn:
  country         => '...',
  province        => '...',
  city            => '...',
  organization    => '...',
  email           => '...',
  local           => '0.0.0.0',
  proto           => 'udp',
  server          => '10.19.0.0 255.255.0.0',
  ipp             => true,
  c2c             => true,
  keepalive       => '10 120',
  management      => true,
  management_ip   => 'localhost',
  management_port => 7543,
}
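
For reference, a declaration like the one above should render an OpenVPN server configuration roughly along these lines (the exact output depends on the module's template; these are standard OpenVPN directives):

```
proto udp
local 0.0.0.0
server 10.19.0.0 255.255.0.0
ifconfig-pool-persist ipp.txt
client-to-client
keepalive 10 120
management localhost 7543
```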

2. Define the list of clients:

openvpn::client { ['vm1', 'vm2', 'vm3']:
  server => $::fqdn,
}

3. For each client, define the IP address, e.g.:

openvpn::client_specific_config { 'vm1':
  server   => $::fqdn,
  ifconfig => '10.19.28.101 10.19.28.102',
}
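
Under the hood, the ifconfig parameter should end up in a client-config-dir file for 'vm1' as an ifconfig-push directive. Assuming OpenVPN's default net30 topology, where the second address is the server-side endpoint of the client's /30, that file would contain something like:

```
ifconfig-push 10.19.28.101 10.19.28.102
```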

On the client side I created a new Puppet module. It just installs the certificates and keys generated by the openvpn module running on the server host, then configures the openvpn client to connect to the remote VPN server. (This is a fairly generic module, so it could probably be open sourced.)
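
A minimal sketch of such a client-side module follows; the class name, file paths and template are illustrative, not the actual module I wrote:

```puppet
# Illustrative client-side profile: install openvpn, deploy the
# certificates/keys copied from the VPN server, and run the client.
class profile::openvpn_client {
  package { 'openvpn':
    ensure => installed,
  }

  # Certificates and keys previously generated on the VPN server
  # and placed in this module's files directory.
  file { '/etc/openvpn/keys':
    ensure  => directory,
    source  => "puppet:///modules/profile/${::fqdn}",
    recurse => true,
    mode    => '0600',
    require => Package['openvpn'],
  }

  # Client configuration pointing at the remote VPN server.
  file { '/etc/openvpn/client.conf':
    ensure  => file,
    content => template('profile/client.conf.erb'),
    notify  => Service['openvpn'],
  }

  service { 'openvpn':
    ensure  => running,
    enable  => true,
    require => File['/etc/openvpn/keys'],
  }
}
```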

To make the architecture easier to understand, I assigned a different subnet to each "real" data center. So even though the clients all belong to the same VPN, at a glance you can see which data center hosts each of them.
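
For example, a plan along these lines carves one /24 per location out of the VPN's 10.19.0.0/16 (the subnets here are illustrative, not the ones I actually used):

```puppet
# Hypothetical addressing plan:
#   10.19.1.0/24 -> DigitalOcean, data center X
#   10.19.2.0/24 -> DigitalOcean, data center Y
#   10.19.3.0/24 -> private data center
openvpn::client_specific_config { 'vm1':
  server   => $::fqdn,
  ifconfig => '10.19.1.101 10.19.1.102',  # vm1 lives in data center X
}
```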

The VPN server sits in one of the data centers, and clients connect securely over the public interface. As an additional level of security, access to the openvpn listening port was firewalled (with iptables), so that only authorized source IP addresses could be used to build the VPN.
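
As an illustration (not my actual rules: 1194/udp is OpenVPN's default port, and 203.0.113.0/24 stands in for the authorized source range):

```shell
# Accept VPN traffic only from authorized sources, drop the rest.
iptables -A INPUT -p udp --dport 1194 -s 203.0.113.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 1194 -j DROP
```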

During this work I submitted a few patches, which the author has kindly accepted to merge:

1. Add an optional parameter to enable the client-to-client option
2. Add optional parameters to enable the openvpn management interface
3. Add an option to remove a client-specific configuration file (for example when a client is removed from the VPN).

You can see the commits here.

I hope this is useful; please let me know if you're interested in a specific or more detailed example. 

Please also see my other open source Puppet modules (puppet-asterisk and puppet-kamailio) and feel free to comment, raise issues and contribute!

A previous post about applications deployment with Puppet, related to Kamailio, can be found here.
