Multiple IPs and ENIs on EC2 in a VPC

Back in 2012, Amazon Web Services launched support for multiple private IP addresses for EC2 instances within a VPC.

This is particularly useful if you host several SSL websites on a single EC2 instance, as each SSL certificate must be hosted on its own (private) IP address. You can then associate each private IP address with an Elastic IP address to make the SSL website accessible from the internet.

Multiple IPs and Limits

This AWS blog entry briefly describes how multiple IP addresses are managed: http://aws.typepad.com/aws/2012/07/multiple-ip-addresses-for-ec2-instances-in-a-virtual-private-cloud.html

When you create a VPC, you are limited to 5 Elastic IP addresses by default. However, it is easy to request an increase by completing this form: https://aws.amazon.com/support/createCase?type=service_limit_increase&serviceLimitIncreaseType=elastic-ips

Note that a single Elastic Network Interface (ENI) can have multiple secondary IP addresses. For example, on an m1.small instance type you can have up to 4 IPs, which in Linux would be the eth0, eth0:0, eth0:1 and eth0:2 interfaces.
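For illustration only (the secondary address below is hypothetical), a secondary private IP can be brought up as an alias in /etc/network/interfaces:

    auto eth0:0
    iface eth0:0 inet static
        address 10.0.1.11
        netmask 255.255.255.0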

There is also a limit on the number of ENIs and IPs for each instance type, see the documentation at:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

Asymmetric Routing

When you add a second ENI, the AWS documentation is missing a fundamental note on how to configure the instance OS to handle the network routes.

If you attach the second ENI, associate it with an Elastic IP and bring it up (with ifup) in Linux after adding it to /etc/network/interfaces, your network will very likely be performing asymmetric routing. Try to ping the Elastic IP of eth1 and you will get no response. This is because the response packets leaving the instance do not get sent out via the correct gateway.

Asymmetric routing is explained in depth in this article http://www.linuxjournal.com/article/7291

Route configuration with additional ENIs

The fix is to add additional routes for the new ENIs. This guide assumes that so far you have followed this documentation for adding a second ENI http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#attach_eni_launch

We’re assuming the instance has an interface eth0 with the private address 10.0.1.10 from a 10.0.1.0/24 subnet, and we want to add an ENI on a different subnet, 10.0.2.0/24, with an IP address of 10.0.2.10.

The /etc/network/interfaces file should look like this after adding eth1:
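The original listing wasn't preserved; something along these lines fits the rest of this guide (eth0 on DHCP is typical for an EC2 Ubuntu instance):

    # /etc/network/interfaces
    auto eth0
    iface eth0 inet dhcp

    auto eth1
    iface eth1 inet static
        address 10.0.2.10
        netmask 255.255.255.0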

Then bring up the eth1 interface:
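    $ sudo ifup eth1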

Let’s check the route:
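For example (the output below is illustrative for the addresses assumed in this guide):

    $ ip route show
    default via 10.0.1.1 dev eth0
    10.0.1.0/24 dev eth0  proto kernel  scope link  src 10.0.1.10
    10.0.2.0/24 dev eth1  proto kernel  scope link  src 10.0.2.10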

There is one default gateway at 10.0.1.1 (which is bound to the VPC internet gateway) and it will route any traffic from eth0. However, any traffic from eth1 with a destination outside of 10.0.2.0/24 will be dropped, so we need to configure routing to the default gateway for the 10.0.2.0/24 subnet.

Firstly, add an entry “2 eth1_rt” to the routing tables file /etc/iproute2/rt_tables to create a new route table for eth1:
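    $ echo "2 eth1_rt" | sudo tee -a /etc/iproute2/rt_tables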

Next we need to add a default route to the gateway for eth1:
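Assuming 10.0.2.1 is the VPC router for the 10.0.2.0/24 subnet (AWS reserves the .1 address), the route would be:

    $ sudo ip route add default via 10.0.2.1 dev eth1 table eth1_rt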

Verify that the route is added:
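    $ ip route show table eth1_rt
    default via 10.0.2.1 dev eth1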

Finally we need to add a rule which routes traffic with a source of 10.0.2.0/24 via the eth1_rt table:
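    $ sudo ip rule add from 10.0.2.0/24 table eth1_rt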

Verify that the rule is added:
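The rule priorities shown below are illustrative; yours may differ:

    $ ip rule show
    0:      from all lookup local
    32765:  from 10.0.2.0/24 lookup eth1_rt
    32766:  from all lookup main
    32767:  from all lookup default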

Now, from your machine, try to ping the Elastic IP associated with eth1: it should work, as the asymmetric routing has been fixed!

To make the route changes permanent so that they can survive a reboot, add them to the interfaces file:
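A sketch of the eth1 stanza with the routes persisted via post-up hooks (addresses and the eth1_rt table as assumed above):

    auto eth1
    iface eth1 inet static
        address 10.0.2.10
        netmask 255.255.255.0
        post-up ip route add default via 10.0.2.1 dev eth1 table eth1_rt
        post-up ip rule add from 10.0.2.0/24 table eth1_rt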

If you wish to associate a private IP from the 10.0.1.0/24 subnet with eth1 (the same subnet as the eth0 network), just replace the gateway and subnet values with 10.0.1.1 and 10.0.1.0/24 respectively.

Systems Orchestration with MCollective

Introduction

When you have hundreds or thousands of servers, you need to be able to make quick changes to them in one go rather than ssh-ing into every server and executing repetitive commands, which is inefficient and time consuming.

Marionette Collective aka MCollective is a great tool for centralised server orchestration.

Now owned by Puppet Labs, it integrates well with Puppet, but also with Chef.

What it can do

MCollective can remotely work with several system components:

  • puppet: manage Puppet agents (run a test, enable/disable, get statistics, etc.)
  • package: install or uninstall a package
  • apt: upgrade packages, list the number of available upgrades
  • service: start, stop or restart a service
  • nettest: check ping and telnet connectivity
  • filemgr: touch or delete files
  • process: list or kill processes
  • nrpe: run NRPE commands (check_load, check_disks, check_swap)
  • …and more

How it works

The MCollective client (your desktop or management server) sends tasks over a message queue which all the MCollective agents on the servers listen to.
Tasks can be targeted at specific agents thanks to discovery filters, which can be:

  • facts: any fact returned by Facter, such as country, OS name or version, domain, IP address, MAC address, etc.
  • identity: the server’s hostname or FQDN
  • classes: the Puppet classes applied to the server

Filters can be combined and regular expressions can be used as well.

MCollective presentations

Watch an Introduction to Systems Orchestration with MCollective from PuppetConf 2013

Slideshares by the architect of MCollective, R.I. Pienaar


Vagrant MCollective framework
The easiest way to quickly try MCollective is to use the Vagrant MCollective framework below (just run 2 commands and it builds a Vagrant cluster!):
https://github.com/ripienaar/mcollective-vagrant

Installing MCollective

We’ll be installing and configuring MCollective for Ubuntu 12.04 LTS.

Setup Apt Repositories

By default MCollective works with ActiveMQ, however I’d recommend RabbitMQ over ActiveMQ.
To use the latest RabbitMQ packages, use the official RabbitMQ apt repository, as the Ubuntu one is quite old:
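The original commands were not preserved; at the time, the official instructions were along these lines (check the current RabbitMQ documentation for the exact repository URL and signing key):

    $ echo "deb http://www.rabbitmq.com/debian/ testing main" | sudo tee /etc/apt/sources.list.d/rabbitmq.list
    $ wget -qO- http://www.rabbitmq.com/rabbitmq-signing-key-public.asc | sudo apt-key add -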

We also need to use the PuppetLabs apt repository to get the latest MCollective packages:
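For Ubuntu 12.04 (precise), this was typically done via the puppetlabs-release package; roughly:

    $ wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb
    $ sudo dpkg -i puppetlabs-release-precise.deb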

Finally, update the package lists:
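    $ sudo apt-get update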

RabbitMQ Configuration

The RabbitMQ connector uses the STOMP rubygem to connect to RabbitMQ servers.

Install rabbitmq-server:
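    $ sudo apt-get install rabbitmq-server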

Enable Stomp and management plugins then restart RMQ:
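The plugins are named rabbitmq_stomp and rabbitmq_management:

    $ sudo rabbitmq-plugins enable rabbitmq_stomp
    $ sudo rabbitmq-plugins enable rabbitmq_management
    $ sudo service rabbitmq-server restart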

Download the rabbitmqadmin script to configure some settings:
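The management plugin serves the script itself; port 15672 is the management port on RabbitMQ 3.x (older releases used 55672):

    $ wget http://localhost:15672/cli/rabbitmqadmin
    $ chmod +x rabbitmqadmin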

Create the RMQ user, vhost, permissions and exchanges:
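The exact commands were not preserved; the sketch below is loosely based on the Puppet Labs MCollective deployment guide, and the vhost (/mcollective), user (mcollective) and password (changeme) are placeholders:

    $ ./rabbitmqadmin declare vhost name=/mcollective
    $ ./rabbitmqadmin declare user name=mcollective password=changeme tags=administrator
    $ ./rabbitmqadmin declare permission vhost=/mcollective user=mcollective configure='.*' write='.*' read='.*'
    $ ./rabbitmqadmin declare exchange --vhost=/mcollective name=mcollective_broadcast type=topic
    $ ./rabbitmqadmin declare exchange --vhost=/mcollective name=mcollective_directed type=direct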

Add the stomp listener to the RabbitMQ config by editing /etc/rabbitmq/rabbitmq.config:
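A minimal config using the default STOMP port (61613) would look roughly like this:

    [
      {rabbitmq_stomp, [{tcp_listeners, [61613]}]}
    ].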

Restart RabbitMQ
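    $ sudo service rabbitmq-server restart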

MCollective Agents Configuration

On any server you wish to orchestrate remotely via MCollective, you must install the mcollective-agent-* packages. Let's start with the package, service and puppet agents:
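The package names below are as published in the Puppet Labs repository at the time; verify them against the repository you are using:

    $ sudo apt-get install mcollective mcollective-package-agent mcollective-service-agent mcollective-puppet-agent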

Edit the MCollective configuration on the agents at /etc/mcollective/server.cfg with the details of the RabbitMQ/Stomp server and authentication details previously set.
Remove the connector and plugin.stomp settings and replace with:
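A sketch of the relevant rabbitmq connector settings; rabbitmq.example.com, the mcollective user and the changeme password are placeholders matching the RabbitMQ setup above:

    connector = rabbitmq
    plugin.rabbitmq.vhost = /mcollective
    plugin.rabbitmq.pool.size = 1
    plugin.rabbitmq.pool.1.host = rabbitmq.example.com
    plugin.rabbitmq.pool.1.port = 61613
    plugin.rabbitmq.pool.1.user = mcollective
    plugin.rabbitmq.pool.1.password = changeme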

Restart MCollective
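    $ sudo service mcollective restart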

MCollective Client Configuration

On your desktop or management server, install the base MCollective and ruby-stomp packages:
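    $ sudo apt-get install mcollective-client ruby-stomp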

Plus the client packages to communicate with the package, service and puppet agents:
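Again assuming the Puppet Labs package names of the time:

    $ sudo apt-get install mcollective-package-client mcollective-service-client mcollective-puppet-client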

Edit the MCollective client configuration at /etc/mcollective/client.cfg with the same settings as server.cfg configured on the agents:

Restart MCollective

Running MCollective

Use mco help to see the available commands, and for help on a specific command run mco help followed by the command name.

The easiest way to see which servers are discoverable is to run a ping:
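    $ mco ping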

Get the status of a package (you can also install/uninstall/update/purge):
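For example (openssh-server is just an example package; the exact argument order can differ between versions of the package plugin):

    $ mco package status openssh-server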

Get the status of the ssh service (you can also start/stop/restart):
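Along these lines ("ssh" being the service name on Ubuntu; argument order again depends on the plugin version):

    $ mco service status ssh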

Execute a Puppet agent run on all nodes with a concurrency of 4:
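    $ mco puppet runall 4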

Using Filters

Before using filters you need to know the facts and classes on a server:
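The inventory command shows the facts, classes and agents of a node (the hostname below is hypothetical):

    $ mco inventory web01.example.com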

Identity filter
To run mco with a server identity use:
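For example, a discovery ping limited to one node by identity (hostname is hypothetical):

    $ mco ping -I web01.example.com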

Class filter
If you have a class apache deployed on the web servers, you can restart apache on just those servers using a class filter:
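Something like the following, where apache2 is the Ubuntu service name (an assumption) and -C filters on the Puppet class:

    $ mco service restart apache2 -C apache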

Fact filter
To update the puppet package on all Ubuntu 12.04 servers using a fact filter:
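A sketch using -F fact filters; the exact fact names (operatingsystem, operatingsystemrelease vs. lsbdistrelease) depend on your Facter version:

    $ mco package update puppet -F operatingsystem=Ubuntu -F operatingsystemrelease=12.04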

Conclusion

MCollective is a very useful tool which will save sysadmins a lot of time, helping them deploy applications and maintain servers much more quickly.

There are many plugins that can be added to MCollective at http://projects.puppetlabs.com/projects/mcollective-plugins/wiki

Be sure to check out the official documentation for MCollective at http://docs.puppetlabs.com/mcollective/deploy/install.html

Packages caching with Approx

If you have a number of Ubuntu or Debian servers, especially if many of them run within a private LAN with no direct access to the internet, then you should consider having a central place to cache all packages.

Approx is my favourite tool for caching packages; it’s lightweight and very simple to configure.

Installation & configuration

First, let's install the package:
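    $ sudo apt-get install approx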

Next, let's do some configuration in /etc/approx/approx.conf:
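The original listing wasn't preserved; a minimal configuration matching the aliases described below would be:

    # /etc/approx/approx.conf
    ubuntu   http://archive.ubuntu.com/ubuntu
    secure   http://security.ubuntu.com/ubuntu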

The important part is the unique alias you give to each apt repository URL; in this case ubuntu will be the alias for the normal packages and secure for the security packages.

Now edit /etc/apt/sources.list on all the servers inside the LAN:
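A sketch for Ubuntu 12.04 (precise); approx.example.com is a placeholder for your approx server:

    deb http://approx.example.com:9999/ubuntu precise main restricted universe multiverse
    deb http://approx.example.com:9999/ubuntu precise-updates main restricted universe multiverse
    deb http://approx.example.com:9999/secure precise-security main restricted universe multiverse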

Replace the URL with the approx server address and approx's port 9999, followed by the alias specified in approx.conf.

Now you’re ready to run apt-get update and install packages from the proxy server!

Where’s the approx daemon?

Some people may get confused about which daemon runs approx: it is invoked by inetd.
So if you want to start or stop approx, you need to act on the openbsd-inetd service (on Ubuntu 12.04).

Refreshing the cache

Running an apt-get update which points to Approx will trigger approx to check for new packages.

Using approx with a proxy server

There may be a case where the server running approx doesn't have direct access to the internet and must go via a proxy server. Approx does not have proxy settings in approx.conf.
The trick is to export an http_proxy environment variable for the inetd service (assuming inetd doesn't invoke any other services that you don't want to use the proxy).

Under Ubuntu 12.04, edit inetd's defaults file /etc/default/openbsd-inetd:
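For example (the proxy address is a placeholder for your own proxy):

    # /etc/default/openbsd-inetd
    export http_proxy="http://proxy.example.com:3128"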

Restart the inetd service and you’re done!
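    $ sudo service openbsd-inetd restart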

Beware of system wide environment variables!

I’ve come across a case where someone put the http_proxy environment variable inside the system-wide environment file /etc/environment.
This caused approx to not work at all, because on the approx server an apt-get no longer fetched the packages via approx; instead it tried to connect to the approx server address directly via the proxy server, which obviously is wrong!

Make sure your proxy is running before doing an apt-get update

I’ve also come across a situation where a user attempted to install a package but the proxy server wasn't running; as a consequence, Approx created a local cache entry for a 0-byte .deb package. So periodically check for those kinds of bad files.
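One way to spot them, assuming the default cache directory of /var/cache/approx:

    $ sudo find /var/cache/approx -name '*.deb' -empty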