Load testing on EC2 using Bees with machine guns

You may be familiar with the common tools for load testing websites, such as Apache’s ab, Siege, JMeter etc… The trouble with running benchmarks from your own machine is that all requests come from a single source IP address, which isn’t sufficient for testing things like load balancing. Many load balancers, such as AWS’s Elastic Load Balancers, will send requests from the same source to the same balanced node (within a short period of time), making it difficult to verify that you’re achieving round-robin load balancing, with a fairly even number of requests served by each balanced node. So we need a tool that distributes the load testing.

Enter bees with machine guns…

Bees with machine guns is “A utility for arming (creating) many bees (micro EC2 instances) to attack (load test) targets (web applications)”.
In other words, with just a couple of commands you can simulate traffic originating from several different sources hitting the target. Each source is a “bee”: an EC2 micro instance that load tests the target using Apache’s ab. Because we launch multiple bees, we should be able to see proper load balancing thanks to the bees’ different source IPs.

Installation

Install the python package using pip:
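The package is published on PyPI as beeswithmachineguns:

```shell
pip install beeswithmachineguns
```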

Configuration

Bees with machine guns uses boto to communicate with EC2. Configure your ~/.boto file with your AWS credentials
and set your region:
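A minimal ~/.boto looks like this (the keys are placeholders and the region values are just an example):

```
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY

[Boto]
ec2_region_name = eu-west-1
ec2_region_endpoint = ec2.eu-west-1.amazonaws.com
```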

Ensure you have a security group in EC2 that allows incoming ssh on port 22.

Choose an AMI which is available in your chosen region and which has Apache’s ab tool installed (or you can install it remotely with a single command, as I’ll show below).

Prepare bees

The bees up command launches the EC2 micro instances. You must specify:
  • -s the number of bees/instances to launch
  • -g the name of the security group
  • --key the name of your AWS private ssh key pair
  • -i the AMI
  • --zone the availability zone in which to launch the micro instances
  • --login the default username the AMI uses for ssh (ubuntu for Ubuntu AMIs)
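Putting it together, a launch command might look like this (the security group, key pair name, AMI ID and zone are placeholders):

```shell
bees up -s 4 -g public --key mykeypair -i ami-xxxxxxxx --zone eu-west-1a --login ubuntu
```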

Wait for a couple of minutes until the bees have loaded their guns, meaning the ec2 instances are ready.

Installing ab

If the AMI doesn’t have Apache’s ab tool installed, you can install it remotely. First, gather the public IP addresses of all the bees:
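One way to list them (my assumption; the original command isn’t shown here) is the built-in report command, which prints the status and address of each bee:

```shell
bees report
```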

Install cluster ssh and run the installation of apache2-utils against all the bees’ IP addresses:
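A sketch, assuming Ubuntu bees and placeholder IP addresses:

```shell
# install cluster ssh on your local machine (Debian/Ubuntu)
sudo apt-get install clusterssh

# open one broadcast terminal connected to every bee
cssh -l ubuntu 54.0.0.1 54.0.0.2 54.0.0.3 54.0.0.4

# then type once in the broadcast window, and it runs on all bees:
sudo apt-get install -y apache2-utils
```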

Bees attack

Now we’re ready to start the load testing by instructing the bees to hit the target.
Here we’re going to instruct the hive to send 10,000 requests to the target URL (-u), with 100 of them running concurrently:
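With a placeholder target URL, the command is:

```shell
bees attack -n 10000 -c 100 -u http://your-elb-hostname.example.com/
```

-n is the total number of requests, -c the concurrency level and -u the target URL.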

Note: the target URL must be a static path ending with a /

Bees down

Once you are done with the load testing, remember to terminate your ec2 instances by issuing:
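The command is simply:

```shell
bees down
```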

Conclusion

Bees with machine guns is a cheap, quick and easy way to perform distributed load testing.

Beware that bees with machine guns is effectively a distributed denial-of-service attack, so ensure the target URL is yours or you could get into serious trouble!

AWS High Availability on NAT instances

If your Amazon Web Services VPC has instances in a private subnet that require access to the internet, you may be using a NAT instance with the VPC route table (containing a route for 0.0.0.0/0 pointing to the NAT instance). Have you considered that this creates a single point of failure?

Consider the following scenario: you have all your instances mirrored across two availability zones for high availability. Even with a NAT instance in each availability zone, each private subnet’s 0.0.0.0/0 route points to a single NAT instance, which remains a single point of failure.

So how can you reduce this risk? If most of your outgoing traffic consists of simple HTTP requests, such as upgrading operating system packages, one solution is to use proxy servers. By running Squid (or Tinyproxy) on an instance in the public subnet of each availability zone, fronted by an internal Elastic Load Balancer, you provide high availability for your outgoing traffic.
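On the instances in the private subnet, you would then point the standard proxy environment variables at the internal ELB. A sketch (the hostname is a placeholder; 3128 is Squid’s default port):

```shell
# send outbound HTTP(S) through the internal ELB fronting the proxies
export http_proxy=http://internal-proxy.example.internal:3128
export https_proxy=$http_proxy

# package managers and most CLI tools honour these variables, e.g.:
#   sudo -E apt-get update
```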

AWS NAT High Availability with Heartbeat

However, if you need NATing for production-critical applications where a proxy server isn’t enough for outgoing traffic, AWS has come up with a simple shell script that achieves NAT high availability. The two NAT instances (one in each availability zone) ping each other; if one doesn’t respond, the surviving NAT instance takes over the 0.0.0.0/0 route for the availability zone in which the NAT has failed, then attempts to reboot the unresponsive instance. The script uses the ec2 api tools with IAM roles, and requires each availability zone to use its own route table for its private subnets.

This is what a high-level diagram would look like:


The steps are described in the article at http://aws.amazon.com/articles/2781451301784570
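The core logic can be sketched as follows (a sketch of the idea only, not the official script; the instance IDs, route table ID and IP address are placeholders):

```shell
#!/bin/bash
# Count successful ping replies from the peer NAT instance.
nat_ping_count() {
  ping -c 3 -W 1 "$1" | grep -c 'time='
}

# Take over the peer's default route, then try to reboot it.
# Relies on the ec2 api tools and an IAM role allowing
# ReplaceRoute and RebootInstances.
take_over_nat() {
  rtb="$1"; my_id="$2"; peer_id="$3"
  ec2-replace-route "$rtb" -r 0.0.0.0/0 -i "$my_id"
  ec2-reboot-instances "$peer_id"
}

# The monitor loop would then look something like:
#   while true; do
#     [ "$(nat_ping_count 10.0.1.5)" -eq 0 ] && take_over_nat rtb-0peer i-0local i-0peer
#     sleep 5
#   done
```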

Some improvement tips

Ensure you execute the script with bash rather than sh, otherwise the conditional expressions will fail.
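On Ubuntu, /bin/sh points to dash, which doesn’t support bash’s [[ ]] conditional expressions. A quick way to see the difference:

```shell
# bash understands [[ ]]; dash (Ubuntu's /bin/sh) rejects it
bash -c '[[ "a" == "a" ]]' && echo "bash: [[ ]] works"
sh -c '[[ "a" == "a" ]]' 2>/dev/null || echo "sh: [[ ]] not supported here"
```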

It is preferable to run the script as an upstart service rather than a cronjob, so you can easily stop the service when you need to perform maintenance and reboot a NAT instance. Puppet also works better with scripts running as a service, since it can ensure they are always running.
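A minimal upstart job (the file name and script path are hypothetical):

```
# /etc/init/nat-monitor.conf
description "NAT failover monitor"

start on runlevel [2345]
stop on runlevel [!2345]

respawn
exec /usr/local/bin/nat_monitor.sh
```

You can then stop it during maintenance with sudo service nat-monitor stop.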

Conclusion

There are many tweaks that can be done to improve the script. AWS will reportedly be launching a highly available NAT service in the future but for now this script does the job.

AWS micro instance in a VPC

Amazon Web Services have finally announced the availability of the ec2 micro instance running inside a Virtual Private Cloud (VPC).

This enables users to take advantage of the AWS free usage tier inside a VPC (you can run a micro instance 24/365, free, for your first year!).

The micro instance is a server with limited resources (613MB of memory and up to 2 EC2 Compute Units), but it’s great for non-production servers, such as admin servers, which don’t use many resources.
It’s also suitable for development servers.

Remember that an ec2 instance can be upgraded from micro to a more powerful instance type at any time.
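For example, with the ec2 api tools an EBS-backed instance can be resized roughly like this (the instance ID and target type are placeholders):

```shell
ec2-stop-instances i-0example
ec2-modify-instance-attribute i-0example --instance-type m1.small
ec2-start-instances i-0example
```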

Personally, I run my Puppet master and an Approx package cache on micro ec2 instances.