PCI Compliance tips for Sys Admins

Trying to achieve PCI compliance for your infrastructure can be a bit daunting, as it involves a lot of documentation and configuration on the servers. Having gone through several PCI audits, I’m going to list some useful resources and tools that can help system administrators prepare their infrastructure for PCI compliance.

If you have any other suggestions, please feel free to comment!

Hardening Servers

It’s a requirement to harden the servers with industry-accepted system hardening standards.

For Debian-based Linux distributions, there’s a guide from CIS at http://benchmarks.cisecurity.org/tools2/linux/CIS_Debian_Benchmark_v1.0.pdf. There’s also a generic Linux checklist from SANS, which is a bit shorter, at http://www.sans.org/score/checklists/linuxchecklist.pdf.

Puppet

PCI compliance requires applying many configuration changes to your servers: hardening settings, installing an anti-virus, centralised logging, etc. To avoid repeating these tasks on every server, it is essential to develop infrastructure-as-code modules in Puppet (or Chef) to maintain the settings on the servers.
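
As a minimal sketch, a hardening module might look like this (the module name, the service, and the file source are all made up for illustration):

    # modules/pci_hardening/manifests/init.pp (hypothetical module name)
    class pci_hardening {

      # Disable a service not allowed in the PCI zone
      service { 'avahi-daemon':
        ensure => stopped,
        enable => false,
      }

      # Enforce a hardened SSH configuration shipped with the module
      file { '/etc/ssh/sshd_config':
        ensure => file,
        owner  => 'root',
        group  => 'root',
        mode   => '0600',
        source => 'puppet:///modules/pci_hardening/sshd_config',
        notify => Service['ssh'],
      }

      service { 'ssh':
        ensure => running,
        enable => true,
      }
    }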

It is generally acceptable to show the Puppet modules to the auditor to demonstrate what settings are applied to the PCI servers.

Tip: Puppet lets you create “environments”; by classifying nodes into a pci environment, you can have all of the PCI modules applied to the nodes within the PCI zone.
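
For example, using the config-file environments of the Puppet versions of the time (the paths are illustrative):

    # /etc/puppet/puppet.conf on the Puppet master
    [pci]
      modulepath = /etc/puppet/environments/pci/modules
      manifest   = /etc/puppet/environments/pci/manifests/site.pp

    # /etc/puppet/puppet.conf on each node in the PCI zone
    [agent]
      environment = pci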

More information on Puppet: http://puppetlabs.com/puppet/puppet-open-source

Amazon Web Services

The reason I mention AWS is that a PCI compliance package is now available for AWS.

The AWS EC2 security settings in a VPC (Virtual Private Cloud), using Security Groups and Access Control Lists, plus the ability to design the network the way you like, make it possible to isolate your PCI zone within a VPC.

How the Security Groups, Access Control Lists, Subnets, Route Tables, etc. are designed within a VPC can be tricky to describe to the auditor. The best you can do is dump all the settings using the ec2 api tools or the new aws cli, then take an example instance and match it with its Security Groups, Subnet, and ACL(s).
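
For example, with the aws cli, dumping the settings for the auditor could look like this (the instance ID is made up):

    #!/bin/bash
    # Dump the VPC network configuration for the auditor's review
    aws ec2 describe-security-groups > security-groups.json
    aws ec2 describe-network-acls    > network-acls.json
    aws ec2 describe-subnets         > subnets.json
    aws ec2 describe-route-tables    > route-tables.json

    # Then pick an example instance and show which Security Groups
    # and Subnet it is attached to
    aws ec2 describe-instances --instance-ids i-12345678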

It is a requirement for PCI that changes to the firewall (Security Groups) are logged. Using the ec2 api tools, you can write a passive audit script. I will describe how to achieve this in a separate post.
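
In the meantime, a very rough sketch of the idea: snapshot the Security Group definitions periodically (e.g. from cron) and log any difference. Paths and the log tag below are illustrative:

    #!/bin/bash
    # Rough sketch: detect and log Security Group changes by diffing
    # periodic snapshots of the configuration.
    CURRENT=/var/tmp/security-groups.json
    PREVIOUS=/var/tmp/security-groups.previous.json

    [ -f "$CURRENT" ] && mv "$CURRENT" "$PREVIOUS"
    aws ec2 describe-security-groups > "$CURRENT"

    if [ -f "$PREVIOUS" ] && ! diff -q "$PREVIOUS" "$CURRENT" > /dev/null; then
        logger -t sg-audit "Security Group configuration changed"
        diff "$PREVIOUS" "$CURRENT" | logger -t sg-audit
    fi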

More information on the AWS PCI compliance package: http://blogs.aws.amazon.com/security/post/Tx2BOQ6RM0ACYGT/2013-PCI-Compliance-Package-available-now

Splunk

PCI requires a good system for monitoring logs. Splunk is the best centralised logging software by a mile, and it is one of the most useful tools for PCI compliance.

By forwarding at least syslog and auth.log from the Linux servers, you can generate and export many reports, for requirement 10 in particular.
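
With the Splunk Universal Forwarder installed on each server, that is just a couple of monitor stanzas. A minimal sketch (the pci index is an assumption, and ties in with the index setup further down):

    # /opt/splunkforwarder/etc/system/local/inputs.conf
    [monitor:///var/log/syslog]
    sourcetype = syslog
    index = pci

    [monitor:///var/log/auth.log]
    sourcetype = linux_secure
    index = pci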

Splunk has its own audit trail, which covers another requirement.

If Splunk is used to index logs from non-PCI servers as well, you can have a separate index for the PCI logs and, thanks to user roles in Splunk, only allow PCI users to view them.
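
As a sketch of how that could be configured (the index and role names are made up):

    # indexes.conf on the indexer: a dedicated index for PCI logs
    [pci]
    homePath   = $SPLUNK_DB/pci/db
    coldPath   = $SPLUNK_DB/pci/colddb
    thawedPath = $SPLUNK_DB/pci/thaweddb

    # authorize.conf: a role restricted to the pci index
    [role_pci]
    importRoles = user
    srchIndexesAllowed = pci
    srchIndexesDefault = pci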

There is also a PCI Compliance for Splunk app which comes at an extra cost. I have not yet had a chance to try it out.

More information on how Splunk can help for PCI Compliance: http://www.splunk.com/view/pci-compliance/SP-CAAACPS

OSSEC

OSSEC is a Host Intrusion Detection System and a File Integrity Monitor, the latter being a PCI requirement. It integrates well with Splunk through the Splunk for OSSEC app, which can create some cool pie charts, generate reports for the different alerts, etc. You can configure Splunk to email alerts only for specific alert levels.
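
As an illustration of the file integrity side, monitoring is driven by the syscheck section of ossec.conf; a minimal sketch (the directories are just examples):

    <!-- /var/ossec/etc/ossec.conf (inside <ossec_config>) -->
    <syscheck>
      <!-- Check file integrity every 12 hours -->
      <frequency>43200</frequency>
      <directories check_all="yes">/etc,/usr/bin,/usr/sbin</directories>
      <directories check_all="yes">/bin,/sbin</directories>
    </syscheck>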

More information on how OSSEC can help for PCI compliance: http://www.ossec.net/files/ossec-PCI-Solution-2.0.pdf

Auditd for Linux

Even though the logs are sent to Splunk for indexing, their integrity needs to be maintained on the servers. For Linux, auditd can provide detailed audit trails for anything on the system. A good start is to add syslog and auth.log to auditd to ensure they are not tampered with. Auditd can also be configured to audit all files under /etc and executable paths such as /bin, /usr/bin, etc.
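
For example, rules along these lines could go in /etc/audit/audit.rules (the -k values are arbitrary keys used later to search the audit trail):

    # Watch the log files for writes and attribute changes
    -w /var/log/syslog -p wa -k syslog
    -w /var/log/auth.log -p wa -k authlog

    # Watch configuration and executable paths for tampering
    -w /etc/ -p wa -k etc
    -w /bin/ -p wa -k binaries
    -w /usr/bin/ -p wa -k binaries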

Manual page for auditd on Ubuntu: http://manpages.ubuntu.com/manpages/precise/man8/auditd.8.html

ModSecurity

PCI requires the web servers to have a web application firewall installed. ModSecurity can run on Apache, Nginx, and IIS web servers.

It is mandatory that the latest version of the Core Rule Set (CRS) is also used with ModSecurity. The Linux distribution packages often lag behind, so you will have to compile ModSecurity yourself and integrate the CRS manually.
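
The build boils down to something like the following; the version number, flags, and paths are placeholders and will vary with your distribution and CRS version:

    # Build ModSecurity from source against Apache
    tar xzf modsecurity-apache_2.x.y.tar.gz
    cd modsecurity-apache_2.x.y
    ./configure --with-apxs=/usr/bin/apxs2
    make
    sudo make install

    # Then load the module and the CRS in the Apache configuration
    # (mod_security2 also needs mod_unique_id enabled)
    LoadModule security2_module /usr/lib/apache2/modules/mod_security2.so
    <IfModule security2_module>
        Include /etc/modsecurity/modsecurity.conf
        Include /etc/modsecurity/crs/modsecurity_crs_10_setup.conf
        Include /etc/modsecurity/crs/base_rules/*.conf
    </IfModule>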

ModSecurity logs to the error log and will initially be very noisy; fortunately, specific rule IDs can be ignored in the Virtual Host configurations.
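
For example (the rule ID below is only an illustration):

    <VirtualHost *:80>
        # ... usual site configuration ...

        # Silence a rule that produces false positives on this site
        SecRuleRemoveById 960015
    </VirtualHost>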

More information on ModSecurity: http://www.modsecurity.org
Core Rule Set: https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project

Windows Servers

If your infrastructure has a couple of Windows servers, the good news is that all of the software mentioned previously (except auditd) is compatible and can run on Windows!

AWS has landed Down Under!

Today I attended the Amazon Web Services Customer Appreciation Day (http://aws.amazon.com/apac/cad-anz/), where the VP of AWS announced the launch of the new Sydney region.

There was a big audience (I’m guessing over 1000) listening to the keynotes and client testimonials and taking part in breakout sessions covering many of the services AWS provides.

The launch of a region in Australia is great news for Australian businesses, which now have access to AWS services with fantastic latency. I ran some tests and got an average of 5ms (compared to approximately 110ms to the previously closest region, Singapore). In other words, the average time for servers to respond to requests is more than 20 times faster!
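
If you want to reproduce a rough comparison yourself, one quick way is to time a TCP connect to each regional EC2 endpoint:

    # Compare TCP connect times to the Sydney and Singapore EC2 endpoints
    for region in ap-southeast-2 ap-southeast-1; do
        echo -n "$region: "
        curl -o /dev/null -s -w '%{time_connect}s\n' "https://ec2.$region.amazonaws.com/"
    done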

What’s even better news is that the prices are (almost) the same as the Singapore region.

I foresee lots of businesses soon taking advantage of the new AWS Sydney.

Contact me if you need help to get your infrastructure onto AWS.

AWS micro instance in a VPC

Amazon Web Services has finally announced the availability of the EC2 micro instance running inside a Virtual Private Cloud (VPC).

This enables users to take advantage of the AWS free usage tier inside a VPC (you can run a micro instance 24/365 for your first year for free!).

The micro instance is a server with limited resources (613MB of memory and bursts of up to 2 ECUs of CPU), but it’s great for some non-production servers, such as admin servers which don’t use too many resources.
It’s also suitable for development servers.

Remember that an EC2 instance can be upgraded from micro to a more powerful instance type at any time.
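
With the aws cli, that looks something like this (the instance ID is made up, and the instance must be EBS-backed, which micro instances are):

    # The instance must be stopped before its type can be changed
    aws ec2 stop-instances --instance-ids i-12345678
    aws ec2 modify-instance-attribute --instance-id i-12345678 \
        --instance-type Value=m1.small
    aws ec2 start-instances --instance-ids i-12345678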

Personally, I run my Puppet master and an Approx package cache on micro EC2 instances.