Provisioning a custom Amazon VPC NAT instance with Puppet

If your VPC has a private subnet with instances that need to access the internet, then you need a NAT instance.

You may be using the default Amazon-built AMI ami-vpc-nat, which by default allows all traffic from the private instances to go out via the NAT.

But chances are you want greater control over the NATing rules. For example, you may want to allow the private instances to access only the repositories needed for OS upgrades.
You may also want to use your favorite Linux OS instead of the default Amazon Linux OS.

NATing can be configured using iptables or ufw, but there's a great Puppetlabs firewall module which makes NATing easy to configure: http://forge.puppetlabs.com/puppetlabs/firewall

Initial VPC setup

Create the VPC, Security Group and Route Table as explained in the Amazon documentation at:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

Launch Instance

Launch an instance using your favorite Linux OS. In this example we'll be using Ubuntu 12.04 LTS.

Launch it into your public subnet. Assign an Elastic IP address. Then update the Route Table as explained in the documentation link.

Important: remember to disable the Change Source / Dest Check option on the instance.

Enable IP Forwarding

To enable NATing you need to enable IP forwarding in the OS. This Puppet manifest snippet will enable it:
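The snippet below is a minimal sketch of such a resource (the resource title and path list are my own choices, not from the original post):

```puppet
# Turn on IP forwarding at runtime; the 'unless' check makes the exec idempotent
exec { 'enable-ip-forwarding':
  command => 'sysctl -w net.ipv4.ip_forward=1',
  unless  => 'grep -q ^1$ /proc/sys/net/ipv4/ip_forward',
  path    => ['/sbin', '/usr/sbin', '/bin', '/usr/bin'],
}
```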

To make the change persist across reboots, also change the setting in sysctl.conf.
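One way to persist it, sketched here with the built-in augeas type (the resource title is illustrative):

```puppet
# Persist the setting in /etc/sysctl.conf so it survives a reboot
augeas { 'persist-ip-forwarding':
  context => '/files/etc/sysctl.conf',
  changes => 'set net.ipv4.ip_forward 1',
}
```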

You can put those two manifest resources in a module called sysctl.

Install Puppet plugin

On the Puppetmaster server, install the module:
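Assuming the standard module tool is available, this is a one-liner:

```shell
puppet module install puppetlabs-firewall
```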

You must enable plugin sync on all Puppet agents and the master. Add the following to puppet.conf
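The original snippet is not shown here; the standard way to enable plugin sync in puppet.conf is:

```
[main]
pluginsync = true
```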

To clear any unmanaged firewall rules on the instance, add the following to your site.pp or any similar top-scope file.
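The firewall module's documented approach is to purge unmanaged rules with the resources metatype, along these lines:

```puppet
# Purge any firewall rules not managed by Puppet
resources { 'firewall':
  purge => true,
}
```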

NATing Manifest

Here's what the NATing manifest would look like if you want to allow NATing only for the Ubuntu apt repositories:
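The manifest itself did not survive here; below is a sketch of what it might contain, using the module's firewall type (the 10.0.1.0/24 private subnet, the eth0 interface, the class name and the rule numbers are all assumptions):

```puppet
class nat_apt {
  # Masquerade traffic leaving the public interface from the private subnet
  firewall { '100 masquerade private subnet':
    table    => 'nat',
    chain    => 'POSTROUTING',
    source   => '10.0.1.0/24',
    outiface => 'eth0',
    jump     => 'MASQUERADE',
  }
  # Only forward HTTP/HTTPS from the private subnet, enough for apt
  firewall { '101 forward apt traffic':
    chain  => 'FORWARD',
    source => '10.0.1.0/24',
    proto  => 'tcp',
    dport  => [80, 443],
    action => 'accept',
  }
}
```

To lock this down further to just the Ubuntu mirrors, you would add their addresses as a destination parameter on the FORWARD rule.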

Remember to include the sysctl ip_forward settings and the firewall module, which are required for the firewall resources to work.

Conclusion

The Puppetlabs firewall module makes it quick and easy to add new firewall rules and ensures greater security with refined NATing rules.

If your NAT instance is critical in allowing outgoing traffic for your production systems, consider implementing NAT high availability described in this blog entry: /aws-high-availability-on-nat-instances/

PCI Compliance tips for Sys Admins

Trying to achieve PCI compliance for your infrastructure can be a bit daunting, as it involves a lot of documentation and configuration on the servers. Having gone through several PCI audits, I'm going to list some useful resources and tools that can help system administrators prepare their infrastructure for PCI compliance.

If you have any other suggestions, please feel free to comment!

Hardening Servers

It’s a requirement to harden the servers with industry-accepted system hardening standards.

For Debian-based Linux distributions, there's a guide from the CIS at http://benchmarks.cisecurity.org/tools2/linux/CIS_Debian_Benchmark_v1.0.pdf. There's also a shorter, generic Linux checklist from SANS at http://www.sans.org/score/checklists/linuxchecklist.pdf

Puppet

PCI compliance requires applying many configuration changes to your servers: hardening settings, installing an anti-virus, centralised logging etc… To avoid repeating these tasks on every server, it is essential to develop infrastructure-as-code modules in Puppet (or Chef) to maintain the settings on the servers.

It is generally acceptable to show the Puppet modules to the auditor to demonstrate what settings are applied to the PCI servers.

Tip: Puppet lets you create “environments”. By classifying nodes into a pci environment, you can have all PCI modules applied to the nodes within the PCI zone.
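In puppet.conf on an agent in the PCI zone, this could look like the following (the environment name is illustrative):

```
[agent]
environment = pci
```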

More information on Puppet: http://puppetlabs.com/puppet/puppet-open-source

Amazon Web Services

The reason why I mention AWS is because a PCI Compliance package is now available with AWS.

The AWS EC2 security settings in a VPC (Virtual Private Cloud) using Security Groups and Access Control Lists, plus the ability to design the network the way you like, make it possible to isolate your PCI zone within a VPC.

How the Security Groups, Access Control Lists, Subnets, Route Tables etc… are designed within a VPC can be tricky to describe to the auditor. The best you can do is dump all the settings using the ec2 api tools or the new aws cli, then take an example instance and match it with its Security Groups, Subnets and ACL(s).
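For example, with the aws cli (the instance ID is a placeholder):

```shell
# Dump the firewall and network settings for the auditor
aws ec2 describe-security-groups > security-groups.json
aws ec2 describe-network-acls > network-acls.json
aws ec2 describe-route-tables > route-tables.json

# Then pick an example instance and match it with its groups and subnet
aws ec2 describe-instances --instance-ids i-0123456789abcdef0
```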

It is a requirement for PCI that changes to the firewall (Security Groups) are logged. Using the ec2 api tools, you can write a passive audit script. I will describe how to achieve this in a separate post.

More information on the AWS PCI compliance package: http://blogs.aws.amazon.com/security/post/Tx2BOQ6RM0ACYGT/2013-PCI-Compliance-Package-available-now

Splunk

PCI requires a good system to monitor logs, and Splunk is by a mile the best centralised logging software; it is one of the most useful tools for PCI compliance.

By forwarding at least the syslog and auth.log from your Linux servers, you can generate and export many reports, in particular for requirement 10.
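With a Splunk universal forwarder this can be as simple as an inputs.conf stanza (the pci index name is an assumption):

```
[monitor:///var/log/syslog]
index = pci

[monitor:///var/log/auth.log]
index = pci
```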

Splunk has its own audit trail, which covers another requirement.

If Splunk is used to index logs from non-PCI servers as well, you can keep a separate index for PCI logs and, thanks to user roles in Splunk, allow only PCI users to view PCI logs.

There is also a PCI Compliance for Splunk app which comes at an extra cost. I have not yet had a chance to try it out.

More information on how Splunk can help for PCI Compliance: http://www.splunk.com/view/pci-compliance/SP-CAAACPS

OSSEC

OSSEC is a Host Intrusion Detection System and a File Integrity Monitor, the latter being a PCI requirement. It integrates well with Splunk via the Splunk for OSSEC app, which can create some cool pie charts, generate reports for the different alerts etc… You can configure Splunk to email alerts only for specific alert levels.

More information on how OSSEC can help for PCI compliance: http://www.ossec.net/files/ossec-PCI-Solution-2.0.pdf

Auditd for Linux

Even though the logs are sent to Splunk for indexing, their integrity needs to be maintained on the servers. For Linux, auditd can provide detailed audit trails for anything on the system. A good start is to add syslog and auth.log to auditd to ensure they are not tampered with. Auditd can also be configured to audit all files under /etc and executable paths such as /bin, /usr/bin etc…
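A sketch of audit.rules entries along those lines (the key names are my own):

```
# Watch the log files for writes and attribute changes (-p wa)
-w /var/log/syslog -p wa -k pci-logs
-w /var/log/auth.log -p wa -k pci-logs

# Audit configuration and executable paths
-w /etc/ -p wa -k pci-etc
-w /bin/ -p wa -k pci-bin
-w /usr/bin/ -p wa -k pci-bin
```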

Manual page for auditd on Ubuntu: http://manpages.ubuntu.com/manpages/precise/man8/auditd.8.html

Mod Security

Web servers are required to have a web application firewall installed. Mod Security can run on Apache, Nginx, and IIS web servers.

It is mandatory that the latest version of the Core Rule Set is also used with Mod Security. The Linux distribution packages often lag behind, so you will have to compile Mod Security yourself and integrate the CRS manually.

Mod Security logs to the error log and will initially be very noisy; fortunately, individual rule IDs can be ignored in the Virtual Host configurations.
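For example, with Apache the SecRuleRemoveById directive can be scoped to a single vhost (the hostname and rule ID are illustrative):

```apache
<VirtualHost *:443>
    ServerName shop.example.com
    # Silence a rule that false-positives on this application only
    SecRuleRemoveById 960015
</VirtualHost>
```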

More information on Mod Security: http://www.modsecurity.org
Core Rule Set: https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project

Windows Servers

If your infrastructure has a couple of Windows servers, the good news is that all of the software mentioned previously (except auditd) is compatible and can run on Windows!

Load testing on EC2 using Bees with machine guns

You may be familiar with the common tools for load testing websites such as Apache’s ab, Siege, JMeter etc… The trouble with running benchmarks from your machine is that all requests come from a single source IP address, which isn’t sufficient for testing things like load balancing. Many load balancers, such as AWS’s Elastic Load Balancers, will send requests from the same source to the same balanced node (within a short period of time), making it difficult to check that you’re achieving round-robin load balancing, with a fairly even number of requests served on all your balanced nodes. So we need a tool that will distribute the load testing.

Enter bees with machine guns…

Bees with machine guns is “A utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications)”.
In other words, with just a couple of commands you simulate traffic originating from several different sources hitting the target. Each source is a “bee”, an EC2 micro instance that load tests the target using Apache’s ab. And because we launch multiple bees, we should see proper load balancing thanks to their different source IPs.

Installation

Install the python package using pip:
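Assuming pip is already available:

```shell
pip install beeswithmachineguns
```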

Configuration

Bees with machine guns uses boto to communicate with EC2. Configure your ~/.boto file with your AWS credentials
and set your region:
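A minimal ~/.boto might look like this (the credentials and region are placeholders):

```
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

[Boto]
ec2_region_name = us-east-1
ec2_region_endpoint = ec2.us-east-1.amazonaws.com
```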

Ensure you have a security group in EC2 that allows incoming SSH on port 22.

Choose an AMI which is available in your chosen region and has Apache’s ab tool installed (or you can install it remotely with a single command, as shown below).

Prepare bees

bees up launches the EC2 micro instances. You must specify:
  • -s the number of bees/instances to launch
  • -g the name of the security group
  • --key the name of your AWS private ssh keypair
  • -i the AMI
  • --zone the availability zone where to launch the micro instances
  • --login the default username the AMI uses for ssh (ubuntu for Ubuntu AMIs)
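Putting it together (the security group, keypair name, AMI ID and zone are placeholders):

```shell
bees up -s 4 -g bees-sg --key my-keypair -i ami-xxxxxxxx --zone us-east-1a --login ubuntu
```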

Wait a couple of minutes until the bees have loaded their guns, meaning the EC2 instances are ready.

Installing ab

If the AMI doesn’t have Apache’s ab tool installed, you can install it remotely. First, gather all the public IPs of the bees:
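The report subcommand lists the bees and their addresses:

```shell
bees report
```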

Install cluster ssh and execute the installation of apache2-utils on all bees IP addresses:
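Something along these lines (the IP addresses are placeholders for the ones from the report):

```shell
sudo apt-get install clusterssh
cssh -l ubuntu 54.0.0.1 54.0.0.2 54.0.0.3 54.0.0.4

# Then type once into the cssh console to run on every bee:
sudo apt-get install -y apache2-utils
```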

Bees attack

Now we’re ready to start the load testing by instructing the bees to hit the target.
Here we’re going to instruct the hive to send 10,000 requests to the target URL (-u), performing 100 concurrent requests:
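With a placeholder target URL, the attack looks like:

```shell
bees attack -n 10000 -c 100 -u http://www.example.com/
```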

Note: the target URL must be a static path ending with a /

Bees down

Once you are done with the load testing, remember to terminate your EC2 instances by issuing:
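This terminates the whole hive:

```shell
bees down
```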

Conclusion

Bees with machine guns is a cheap, quick and easy way of performing distributed load testing.

Beware that bees with machine guns amounts to a distributed denial of service attack, so ensure the target URL is yours, or you could get into serious trouble!