PCI Compliance tips for Sys Admins

Trying to achieve PCI compliance for your infrastructure can be a bit daunting, as it involves a lot of documentation and configuration on the servers. Having gone through several PCI audits, I’m going to list some useful resources and tools that can help system administrators prepare their infrastructure for PCI compliance.

If you have any other suggestions, please feel free to comment!

Hardening Servers

It’s a requirement to harden the servers with industry-accepted system hardening standards.

For Debian based Linux distributions, there’s a guide from the CIS at http://benchmarks.cisecurity.org/tools2/linux/CIS_Debian_Benchmark_v1.0.pdf. There’s also a generic Linux one from the SANS which is a bit shorter at http://www.sans.org/score/checklists/linuxchecklist.pdf


Puppet

PCI compliance requires applying many configuration changes to your servers: hardening settings, installing an anti-virus, centralised logging etc… To avoid repeating these tasks on every server, it is essential to develop infrastructure-as-code modules in Puppet (or Chef) to maintain the settings on the servers.

It is generally acceptable to show the Puppet modules to the auditor to demonstrate what settings are applied to the PCI servers.

Tip: Puppet lets you create “environments”. By classifying nodes into a pci environment, you can get all PCI modules applied to the nodes within the PCI zone.
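As a sketch (assuming open-source Puppet 3 with config-file environments; the paths and environment name are illustrative), an environment can be declared in puppet.conf on the puppetmaster:

```ini
# /etc/puppet/puppet.conf on the puppetmaster (illustrative paths)
[pci]
modulepath = /etc/puppet/environments/pci/modules
manifest   = /etc/puppet/environments/pci/manifests/site.pp
```

Each node in the PCI zone then sets environment = pci in the [agent] section of its own puppet.conf (or via your node classifier).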

More information on Puppet: http://puppetlabs.com/puppet/puppet-open-source

Amazon Web Services

The reason I mention AWS is that a PCI Compliance package is now available with AWS.

The AWS EC2 security settings in a VPC (Virtual Private Cloud), using Security Groups and Access Control Lists, plus the ability to design the network the way you like, make it possible to isolate your PCI zone within a VPC.

How the Security Groups, Access Control Lists, Subnets, Route Tables etc… are designed within a VPC can be tricky to describe to the auditor. The best you can do is dump all the settings using the EC2 API tools or the new AWS CLI, then take an example instance and match it with its Security Groups, Subnets and ACL(s).
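For example (a sketch using the aws cli; the instance ID is a placeholder), you can dump each component to a file and hand the set to the auditor:

```shell
# Dump the VPC network settings (assumes the aws cli is configured with your credentials)
aws ec2 describe-security-groups > security-groups.json
aws ec2 describe-network-acls    > network-acls.json
aws ec2 describe-subnets         > subnets.json
aws ec2 describe-route-tables    > route-tables.json

# Then take an example instance and show which of the above apply to it
aws ec2 describe-instances --instance-ids i-12345678 > example-instance.json
```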

It is a requirement for PCI that changes to the firewall (Security Groups) are logged. Using the ec2 api tools, you can write a passive audit script. I will describe how to achieve this in a separate post.

More information on the AWS PCI compliance package: http://blogs.aws.amazon.com/security/post/Tx2BOQ6RM0ACYGT/2013-PCI-Compliance-Package-available-now


Splunk

PCI requires a good system to monitor logs, and Splunk is the best centralised logging software by a mile; it is one of the most useful tools for PCI compliance.

By forwarding at least syslog and auth.log from your Linux servers, you can generate and export many reports, for requirement 10 in particular.
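As a sketch (assuming a Splunk universal forwarder on the Linux servers; the pci index name is an assumption), the inputs.conf monitoring stanzas would look like:

```ini
# inputs.conf on the Splunk universal forwarder
[monitor:///var/log/syslog]
index = pci
sourcetype = syslog

[monitor:///var/log/auth.log]
index = pci
sourcetype = linux_secure
```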

Splunk has its own audit trail, which satisfies another requirement.

If Splunk is used to index logs from non-PCI servers as well, you can create a separate index for PCI logs and, thanks to user roles in Splunk, only allow PCI users to view PCI logs.

There is also a PCI Compliance for Splunk app which comes at an extra cost. I have not yet had a chance to try it out.

More information on how Splunk can help for PCI Compliance: http://www.splunk.com/view/pci-compliance/SP-CAAACPS


OSSEC

OSSEC is a Host Intrusion Detection System and a File Integrity Monitor, the latter being a PCI requirement. It integrates well with Splunk through the Splunk for OSSEC app, which can create some cool pie charts, generate reports for the different alerts etc… You can configure Splunk to only email an alert for specific alert levels.

More information on how OSSEC can help for PCI compliance: http://www.ossec.net/files/ossec-PCI-Solution-2.0.pdf

Auditd for Linux

Even though the logs are sent to Splunk for indexing, their integrity needs to be maintained on the servers. For Linux, auditd can provide detailed audit trails for anything on the system. A good start is to add syslog and auth.log to auditd to ensure they are not tampered with. Auditd can also be configured to audit all files under /etc and executable paths such as /bin, /usr/bin etc…
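A minimal sketch of such rules in /etc/audit/audit.rules (the -k key names are illustrative):

```
# Watch the log files for writes and attribute changes
-w /var/log/syslog   -p wa -k pci-logs
-w /var/log/auth.log -p wa -k pci-logs

# Watch configuration and executable paths
-w /etc/     -p wa -k pci-config
-w /bin/     -p wa -k pci-bin
-w /usr/bin/ -p wa -k pci-bin
```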

Manual page for auditd on Ubuntu: http://manpages.ubuntu.com/manpages/precise/man8/auditd.8.html

Mod Security

PCI requires web servers to have a web application firewall installed. Mod Security can run on Apache, Nginx, and IIS web servers.

It is mandatory that the latest version of the Core Rule Set is also used with Mod Security. The Linux distribution packages often lag behind, so you will have to compile Mod Security yourself and integrate the CRS manually.

Mod Security logs to the error log and will initially be very noisy; fortunately, specific rule IDs can be ignored in the Virtual Host configurations.
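For example, to whitelist a rule in a Virtual Host (the rule ID shown is just an illustration; pick the IDs from your own error log):

```apache
<IfModule security2_module>
    # Ignore a noisy rule by its ID within this Virtual Host
    SecRuleRemoveById 960015
</IfModule>
```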

More information on Mod Security: http://www.modsecurity.org
Core Rules Set: https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project

Windows Servers

If your infrastructure has a couple of Windows servers, the good news is that all of the software mentioned previously (except auditd) is compatible and can run on Windows!

Load testing on EC2 using Bees with machine guns

You may be familiar with the common tools for load testing websites such as Apache’s ab, Siege, JMeter etc… The trouble with running benchmarks from your own machine is that all requests come from a single source IP address, which isn’t sufficient for testing things like load balancing. Many load balancers, such as AWS’s Elastic Load Balancers, will send requests from the same source to the same balanced node (within a short period of time), making it difficult to check that you’re achieving round-robin load balancing, with a fairly even number of requests served by all your balanced nodes. So we need a tool that distributes the load testing.

Enter bees with machine guns…

Bees with machine guns is “A utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications)”.
In other words, with just a couple of commands, you simulate traffic originating from several different sources hitting the target. Each source is a “bee”, an EC2 micro instance, which load tests the target using Apache’s ab. Because we launch multiple bees, we should be able to see proper load balancing thanks to the bees’ different source IPs.


Install the python package using pip:
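The package is available on PyPI:

```shell
pip install beeswithmachineguns
```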


Bees with machine guns uses boto to communicate with EC2, so configure your ~/.boto file with your AWS credentials and set your region:
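A minimal ~/.boto sketch (the region shown is an example):

```ini
[Credentials]
aws_access_key_id = <your access key>
aws_secret_access_key = <your secret key>

[Boto]
ec2_region_name = eu-west-1
ec2_region_endpoint = ec2.eu-west-1.amazonaws.com
```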

Ensure you have a security group in EC2 that allows incoming SSH on port 22.

Choose an AMI which is available in your chosen region and has Apache’s ab tool installed (or you can install it remotely with a single command, as shown below).

Prepare bees

bees up launches the EC2 micro instances. You must specify:
  • -s the number of bees/instances to launch
  • -g the name of the security group
  • --key the name of your AWS private ssh keypair
  • -i the AMI
  • --zone the availability zone where to launch the micro instances
  • --login the default username the AMI uses for ssh (ubuntu for Ubuntu AMIs)
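Putting it together (the security group, keypair name and AMI ID here are placeholders):

```shell
bees up -s 4 -g pci-bees --key mykeypair -i ami-xxxxxxxx --zone eu-west-1a --login ubuntu
```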

Wait for a couple of minutes until the bees have loaded their guns, meaning the ec2 instances are ready.

Installing ab

If the AMI doesn’t have Apache’s ab tool installed, you can remotely install it.  First gather all the public IPs of the bees:
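bees records the instances it launched in ~/.bees, so one way to gather the public IPs (a sketch, assuming the aws cli is configured and ~/.bees holds one instance ID per line) is:

```shell
aws ec2 describe-instances --instance-ids $(tr '\n' ' ' < ~/.bees) \
    --query 'Reservations[].Instances[].PublicIpAddress' --output text
```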

Install clusterssh and execute the installation of apache2-utils on all the bees’ IP addresses:
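For example (the IP addresses are placeholders):

```shell
sudo apt-get install clusterssh

# One xterm per bee; whatever you type is sent to all of them
cssh -l ubuntu 54.216.1.10 54.216.1.11 54.216.1.12 54.216.1.13

# Then, in the cssh console:
sudo apt-get install -y apache2-utils
```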

Bees attack

Now we’re ready to start the load testing by instructing the bees to hit the target.
Here we’re going to instruct the hive to send 10,000 requests to the -u target URL performing 100 concurrent requests:
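For example (the target URL is a placeholder):

```shell
bees attack -n 10000 -c 100 -u http://www.example.com/
```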

Note: the target URL must be a static path ending with a /

Bees down

Once you are done with the load testing, remember to terminate your ec2 instances by issuing:
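The command terminates every instance recorded in ~/.bees:

```shell
bees down
```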


Bees with machine guns is a cheap, quick and easy way to run distributed load testing.

Beware that bees with machine guns effectively performs a distributed denial-of-service attack, so ensure the target URL is yours or you could get into serious trouble!

Fast Puppet or Chef development using Vagrant

If you are developing infrastructure-as-code using Puppet modules or Chef cookbooks, then you must check out Vagrant.

Vagrant has been around for a few years but is today one of the favourite DevOps tools for automated provisioning of development environments, or for testing infrastructure-as-code before it gets pushed to production environments.

In this example we’ll focus on using Puppet as the provisioner. It uses puppet apply, which saves you from having to set up a puppetmaster and certificates. (If you prefer, you can provision with a puppetmaster.)


Vagrant uses VirtualBox for provisioning VMs, so first you need to install it:
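On a Debian/Ubuntu workstation, for example:

```shell
sudo apt-get install virtualbox
```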

Then go to http://downloads.vagrantup.com to download and install the latest version of Vagrant.

Adding a box

Next you need to import a “box”, which is a ready-to-use base image of your favourite Linux distribution; copy the URL of a box from http://www.vagrantbox.es or (my preference) use Canonical’s official Ubuntu 12.04 LTS box. You must also give the box a base name.
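For example, adding the Ubuntu 12.04 LTS box under the base name precise64 (the box URL was the official one at the time of writing):

```shell
vagrant box add precise64 http://files.vagrantup.com/precise64.box
```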


Next prepare an initial vagrant configuration:
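Assuming the box was added under the base name precise64:

```shell
vagrant init precise64
```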

This creates a configuration file named Vagrantfile; open it with your favourite editor:
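A minimal Vagrantfile sketch pointing Vagrant at the box and the Puppet provisioner (the box name precise64 is an assumption):

```ruby
Vagrant.configure("2") do |config|
  # The base box added earlier
  config.vm.box = "precise64"

  # Provision with puppet apply using manifests/default.pp
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "default.pp"
  end
end
```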

This is the minimum required to get Vagrant to boot an Ubuntu 12.04 LTS VM and provision it with Puppet.

By default the provisioner applies the Puppet manifests declared inside manifests/default.pp, so create a manifests folder and a default.pp manifest file:
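From the directory containing the Vagrantfile:

```shell
mkdir -p manifests
touch manifests/default.pp
```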

In this example we’re going to install Apache and set up a VirtualHost using the apache module from the Puppet Forge, which can be installed this way:
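Assuming the puppetlabs-apache module from the Puppet Forge:

```shell
puppet module install puppetlabs-apache
```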

Inside ~/vagrant/manifests/default.pp, include the apache module:
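A minimal sketch (the vhost name and docroot are illustrative, using the puppetlabs-apache module’s apache::vhost defined type):

```puppet
include apache

apache::vhost { 'example.local':
  port    => 80,
  docroot => '/var/www/example',
}
```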

Boot the VM

Now we’re ready to boot the VM using vagrant up:
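From the same directory as the Vagrantfile:

```shell
vagrant up
```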

The VM is created within a minute! Then you’ll start to see the verbose output of apache and the vhost being installed.

ssh into the box
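Vagrant wraps the ssh details for you:

```shell
vagrant ssh
```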

Perform some checks:
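For instance (run inside the VM; the checks are illustrative):

```shell
# Apache should be running and serving the vhost
service apache2 status
curl -I http://localhost/
```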

All good!

Other Vagrant commands

vagrant suspend
Stops the execution of the VM and saves its state.

vagrant resume
Boots up the VM after a suspend.

vagrant destroy
Destroys the VM.

vagrant provision
Re-executes the Puppet or Chef provisioner. This is probably the command you will run the most when developing modules/cookbooks.

vagrant reload
Shuts down then starts the VM. If you change your Puppet module or manifest paths you will need to do a reload (or destroy/up), as those two paths are mounted into the box using NFS.

AWS provisioner

Vagrant can also be used to provision AWS instances instead of local VirtualBox VMs.

Install the plugin:
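The plugin is installed through Vagrant’s plugin system:

```shell
vagrant plugin install vagrant-aws
```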

Next you will need to create a dummy box, set your AWS credentials, choose an AMI etc… Follow the steps explained in the plugin’s Quick Start at https://github.com/mitchellh/vagrant-aws


Vagrant is a great tool for quickly testing deployments. Vagrantfiles can be customised to do many things; check out the official documentation at http://docs.vagrantup.com/v2/ to see what all the options are.