Infrastructure automation with Terraform

Hashicorp, the cool guys behind Vagrant, Packer etc., have done it again and come up with a simple way to build, modify and destroy cloud infrastructure. It is pure infrastructure as code which can be version controlled. But the best part is that it works with many cloud platforms: AWS, Google Cloud, Digital Ocean, Heroku etc.

Getting Started

The official documentation can be found on the Terraform website at terraform.io/docs.

Download the zip package from http://www.terraform.io/downloads.html for your OS, extract it and move the executables into /usr/local/bin (or add the folder to your $PATH).

Verify the installation by executing:
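
For example (the version printed will match whichever release you downloaded):

    $ terraform --version
    Terraform v0.2.1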

We'll be using AWS as the provider, so you will need an access key and secret key for your user account.

Create a folder where you will keep all your .tf files. You can later add this folder to Git version control.

Provisioning a single EC2 instance

Create a file called web-instance.tf with the following:
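
A minimal sketch; the access and secret keys are your AWS credentials, and the AMI ID is a placeholder for the Ubuntu 14.04 LTS image in your chosen region:

    provider "aws" {
        access_key = "YOUR_ACCESS_KEY"
        secret_key = "YOUR_SECRET_KEY"
        region     = "us-east-1"
    }

    resource "aws_instance" "web" {
        ami           = "ami-xxxxxxxx"    # Ubuntu 14.04 LTS AMI for your region
        instance_type = "t2.micro"
    }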

The provider block tells Terraform to use the aws provider with the keys and region provided.
The aws_instance resource instructs Terraform to create an instance named "web" using the Ubuntu 14.04 LTS AMI.

Before building infrastructure Terraform can give you a preview of the changes it will apply to the infrastructure:
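
The output looks roughly like this (attribute list trimmed):

    $ terraform plan
    + aws_instance.web
        ami:           "" => "ami-xxxxxxxx"
        instance_type: "" => "t2.micro"
        private_ip:    "" => "<computed>"
        public_ip:     "" => "<computed>"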

It uses Git diff syntax to show the differences with the current state of the infrastructure. Currently there's no aws_instance named "web", hence the line is prefixed with a +.
The "computed" values mean that they will only be known after the resource has been created.

To apply the .tf file, run:
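
    $ terraform apply
    aws_instance.web: Creating...

    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.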

Now check the AWS console: the instance has been successfully created!

The terraform.tfstate file which gets generated is very important, as it contains the current state of the infrastructure in AWS. For this reason it's important to version control it in Git if others are working on the infrastructure (though not at the same time, since concurrent changes will conflict).

Modifying the infrastructure

Edit your web-instance.tf file and change the instance type from t2.micro to t2.small, then check what will change:
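
The plan output will look roughly like this:

    $ terraform plan
    -/+ aws_instance.web
        instance_type: "t2.micro" => "t2.small" (forces new resource)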

The value for instance_type has changed; apply the new changes:
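
    $ terraform apply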

Instead of stopping the existing instance and resizing it, Terraform destroys it and creates a new aws_instance resource.

Destroying the infrastructure

Before destroying infrastructure, Terraform needs to generate a destroy plan which lists the resources to destroy, written to a file here named destroy.tplan:
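
    $ terraform plan -destroy -out=destroy.tplan
    - aws_instance.web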

The line with a - confirms that the aws_instance resource named web will be destroyed. Let's apply the destruction plan:
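
    $ terraform apply destroy.tplan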

Provisioning a multi-tiered VPC

Terraform can provision a VPC using most of the available VPC resources, though a few are still missing.

Here we provision 2 web instances, 2 application instances and a NAT instance within a 2-tier VPC, complete with appropriate security groups, load balancers, route tables, an internet gateway, an elastic IP etc.

The gist can be viewed here.

Conclusion

Terraform is a ready-to-use, out-of-the-box tool that can build infrastructure with only a couple of lines. It's very practical for quickly building a proof-of-concept infrastructure.

Using the AWS provider, it's missing a couple of resources such as Elastic Network Interfaces, Access Control Lists, security group egress rules and VPC DHCP options, and it does not support user data (it can only execute remote SSH commands). However, at version 0.2.1 these are still very early days, and no doubt we will be seeing lots of improvements to Terraform in the future!

Automated Nagios monitoring with Puppet exported resources

One of the coolest things Puppet can do is create exported resources. In plain words, it means that you can include a manifest on all your nodes, which then gets customised and applied for each node thanks to the node facts.
A popular usage of exported resources is automating Nagios monitoring. Instead of manually creating a Nagios configuration with the basic checks such as load, disk usage etc., then duplicating it for each server and changing the hostname, an exported resource can do it all for you by including a single manifest.

PuppetDB configuration

Exported resources require storing configurations for the Puppet nodes, so we need PuppetDB installed on the PuppetMaster.

Create /etc/puppet/puppetdb.conf:
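
Assuming PuppetDB runs on the master itself (replace the hostname with your own):

    [main]
    server = puppetmaster.example.com
    port = 8081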

Add to /etc/puppet/puppet.conf under [main]:
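
    storeconfigs = true
    storeconfigs_backend = puppetdb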

Create /etc/puppet/routes.yaml:
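
    master:
      facts:
        terminus: puppetdb
        cache: yaml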

Start PuppetDB and restart the PuppetMaster:
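
On Ubuntu with the stock packages, something like:

    $ sudo service puppetdb start
    $ sudo service puppetmaster restart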

Nagios Server Configuration

Create a Nagios module with the following manifests:

nagios/manifests/init.pp
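
A minimal sketch which just pulls in the other classes:

    class nagios {
      include nagios::install
      include nagios::service
      include nagios::import
    }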

nagios/manifests/install.pp
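
Something along these lines, assuming your self-built package is named nagios4:

    class nagios::install {
      # Self-built Nagios 4 package (see the note below)
      package { 'nagios4':
        ensure => installed,
      }
    }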

Note that at the time of writing, Nagios 4 isn't available as a package for Ubuntu 12.04 LTS, so we're assuming you built your own package which installs it under /etc/nagios4.

nagios/manifests/service.pp
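
A sketch assuming the nagios4 service name and the /etc/nagios4 layout:

    class nagios::service {
      service { 'nagios4':
        ensure => running,
        enable => true,
      }

      # The collected nagios_* .cfg files are written unreadable to the
      # nagios user (see the bug note below), so fix the permissions on
      # every run
      exec { 'fix-permissions':
        command => '/bin/chmod -R o+r /etc/nagios4/conf.d',
      }
    }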

There is a bug in Puppet where the exported resources, when created, do not have the correct permissions to allow the nagios user on the server to read them, so we declare a fix-permissions exec resource.

nagios/manifests/import.pp
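
A sketch; the collectors realize every exported nagios_host and nagios_service and reload Nagios when new ones appear:

    class nagios::import {
      Nagios_host <<||>> {
        notify => Service['nagios4'],
      }

      Nagios_service <<||>> {
        notify => Service['nagios4'],
      }
    }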

The exported resource collector <<||>> (not to be confused with the <||> spaceship operator, which collects virtual resources) is what realizes all the @@ exported virtual resources. How it works is described further down.

Nagios NRPE nodes

For all the nodes which you want to monitor with Nagios, we need the Nagios NRPE server installed:

nagios/manifests/nrpe.pp
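
Using the stock Ubuntu package:

    class nagios::nrpe {
      package { 'nagios-nrpe-server':
        ensure => installed,
      }

      service { 'nagios-nrpe-server':
        ensure  => running,
        enable  => true,
        require => Package['nagios-nrpe-server'],
      }
    }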

Then create the export manifest, where the @@nagios virtual exported resources are declared:

nagios/manifests/export.pp
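
A sketch with a single load check; the target paths and NRPE command name are examples to adapt:

    class nagios::export {
      @@nagios_host { $::fqdn:
        ensure  => present,
        alias   => $::hostname,
        address => $::ipaddress,
        use     => 'generic-host',
        target  => '/etc/nagios4/conf.d/hosts.cfg',
      }

      @@nagios_service { "check_load_${::hostname}":
        check_command       => 'check_nrpe!check_load',
        use                 => 'generic-service',
        host_name           => $::fqdn,
        service_description => 'check_load',
        target              => '/etc/nagios4/conf.d/services.cfg',
      }
    }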

To include Nagios on all nodes to monitor, just add to the default node in manifests/nodes.pp:
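
    node default {
      include nagios::nrpe
      include nagios::export
    }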

How it works

When you declare an exported virtual resource on a node, the exported configurations are stored in PuppetDB after the puppet agent run. Then when you run the puppet agent on the Nagios server, it collects all the nodes' exported resources from PuppetDB and subsequently creates the Nagios .cfg files. Beware that if you have a lot of nodes that use exported resources, it can lead to long catalogue compilation times, so consider extending the run interval of the puppet agent.

Therefore all nodes must trigger a puppet run before the server can import the configurations.

Note that if you remove the nagios::export class from a node, it will not remove the exported configurations from PuppetDB; the resources will still be collected on the Nagios server. You need to keep the virtual exported resources and set ensure to absent. Or, if you're completely removing the node, on the PuppetMaster you can deactivate it with:
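
    $ puppet node deactivate node01.example.com    # replace with the node's certname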

Exported resources can have many other uses, such as managing sshkey known_hosts entries or dynamically adding/removing load balancer members for Apache, for example.

Iteration in Puppet using the future parser

People often ask me how to iterate through an array in Puppet, when you can do it so easily in a template.

Puppet is a declarative, domain-specific language, therefore it cannot do the kind of loops you are probably used to in most languages.

Define iteration

For a while it has been possible to sort of iterate in Puppet manifests by using a define resource:
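
    define foo {
      notify { "foo ${name}": }
    }

    foo { ['one', 'two', 'three']: }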

Which produces (output trimmed to the notice messages):
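
    Notice: foo one
    Notice: foo two
    Notice: foo three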

The name of the foo resource takes an array where each element calls the foo define.

Define resources are commonly used for Apache configurations, for example (see http://docs.puppetlabs.com/learning/definedtypes.html).

However, since every resource name must be unique, it is impossible to call a define twice using the same value in the array.

Also, if you want to iterate through an array inside a define, you'd have to call another define, which can get messy.

Future Parser

Since Puppet version 3.2, experimental features are available (see http://docs.puppetlabs.com/puppet/3/reference/experiments_overview.html), and they include an array iteration feature!

Important note:

The future parser is an experimental feature and is not officially supported by Puppet Labs. It is not recommended for production environments, so enable it at your own risk and make sure you have gone through the new language restrictions at http://docs.puppetlabs.com/puppet/3/reference/experiments_future.html#new-language-restrictions
Compatibility with non-capitalized variables, and using variables in templates without the @ prefix (or scope.lookupvar), are two areas that need attention, since quite a few Puppet Forge modules still use them.

To enable the future parser on your puppetmaster, edit /etc/puppet/puppet.conf and add:
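
    [main]
    parser = future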

Restart the puppetmaster and check the syslog for any problems compiling node catalogues; this is where future parser compatibility issues will show up.

To test array iteration with a puppet apply, it’s as simple as:
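
    $ puppet apply --parser future -e '["one", "two", "three"].each |$item| { notify { $item: } }'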

If you're coding your Puppet modules in Geppetto, set the "Puppet target version" to "future" in the Preferences, to avoid broken syntax highlighting.

Happy iterating!