How to wing the AWS certification exam

A couple of months ago I undertook the AWS Solutions Architect – Associate exam, which I happily passed with a score of 85%.

Whilst I went into the exam with almost no preparation at all, I’ve put together some tips to help you best prepare for it.

Please note that when you undertake the exam, you are required to sign an NDA, which forbids you from sharing the contents of the exam questions.

The certification

AWS certifications are valid for 2 years. They are useful for testing your knowledge and boosting your credentials, and they give you access to the Amazon Partner Network.

The next level after the associate exam is the professional exam. The main differences between the two are:

Associate level

  • Technical
  • Troubleshooting
  • Common scenarios

Professional level

  • Much more in depth
  • Complex scenarios

The associate exam duration is 80 minutes, whilst the professional exam duration is 170 minutes. You are taken into the exam room (you cannot bring anything at all with you), and the questions are multiple choice. At the end of the exam you are immediately presented with your results on the screen.

If you fail the exam, you’ll have to wait 30 days before you can try again.

Preparation

The exam questions are well written. It’s not an exam you can just study for and hope you’ll pass; you need plenty of hands-on experience, and the best experience you can get is in your profession.

Some tips to best prepare yourself:

  • Practice, practice, practice!
  • Read the AWS whitepapers aws.amazon.com/whitepapers
  • Sign up to Cloud Academy cloudacademy.com
  • Sign up to Linux Academy linuxacademy.com
  • Read the AWS sample questions and discuss them with your colleagues
  • Undertake the AWS practice exam (US$20: 20 questions / 30 mins)

Cloud Academy and Linux Academy have online courses and lots of quizzes. The official AWS practice exam is useful to undertake last, as you get to practice against the timer, which can be distracting.

AWS Solutions Architect – Associate exam

The scoring breakdown for the exam I undertook is:

  • Designing highly available, cost efficient, fault tolerant, scalable systems
  • Implementation / Deployment
  • Security
  • Troubleshooting

My impressions are:

  • It’s not easy; the questions are well composed for architects with plenty of experience
  • Some of the questions are long
  • AWS states one year minimum experience, but it really depends on how many services you have been exposed to. Despite having used AWS extensively since 2008, I found some of the questions challenging
  • The questions are high level but also hands-on
  • The exam covers most main AWS services

Whilst each exam is different, the AWS services covered are roughly:

  • 75% EC2 (ELB, EBS, AMI…), VPC & IAM roles
  • 25% other services (Storage Gateway, Route 53, CloudFront, SQS, RDS, SES, DynamoDB etc…)

Exam gotchas

For the Solutions Architect exam:

  • There are architecture questions for totally different scenarios
  • Be mindful of cost effective vs best design vs default architecture
  • Security is very important to know (e.g. security groups, ACLs, stateful/stateless etc…)
  • Good practice with troubleshooting is essential
  • Some questions can be easily answered by elimination

Exam tips

Some tips for when you sit the exam:

  • Prepare yourself
  • Take your time
  • Don’t pay too much attention to the timer
  • Read the questions carefully
  • Mark questions for review later
  • Leave at least 10-15 mins to review

The AWS certification exam can be stressful but also fun. Good luck if you intend to undertake it!


Infrastructure automation with Terraform

HashiCorp, the cool guys behind Vagrant, Packer etc…, have done it again and come up with a simple way to build, modify and destroy cloud infrastructure. It is pure infrastructure as code which can be version controlled. But the best part is that it can work with many cloud platforms: AWS, Google Cloud, DigitalOcean, Heroku etc…

Getting Started

The official documentation can be found at terraform.io/docs.

Download the zip package for your OS from http://www.terraform.io/downloads.html, extract it and move the executables into /usr/local/bin (or add the folder to your $PATH).

Verify the installation by executing:
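    # prints the usage and the list of available subcommands
    $ terraform
    # prints the installed version
    $ terraform --version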

We’ll be using AWS as the provider, so you will need an access key and secret key for your user account.

Create a folder where you will keep all your .tf files. You can later add this folder to Git version control.

Provisioning a single EC2 instance

Create a file called web-instance.tf with the following:
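A minimal sketch of what web-instance.tf might look like (the keys and AMI ID are placeholders; substitute your own credentials and the Ubuntu 14.04 LTS AMI for your region):

    provider "aws" {
        access_key = "ACCESS_KEY_HERE"
        secret_key = "SECRET_KEY_HERE"
        region     = "us-east-1"
    }

    resource "aws_instance" "web" {
        # placeholder: use the Ubuntu 14.04 LTS AMI for your region
        ami           = "ami-XXXXXXXX"
        instance_type = "t2.micro"
    }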

The provider block tells Terraform to use the aws provider with the keys and region provided.
The aws_instance resource instructs Terraform to create an instance named “web” using the Ubuntu 14.04 LTS AMI.

Before building the infrastructure, Terraform can give you a preview of the changes it will apply:
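    $ terraform plan

With the file above, the preview should look something like this (output abridged):

    + aws_instance.web
        ami:           "" => "ami-XXXXXXXX"
        instance_type: "" => "t2.micro"
        ...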

It uses Git diff syntax to show the differences with the current state of the infrastructure. Currently there’s no aws_instance named “web”, hence the line has a +.
The “computed” values mean that they will only be known after the resource has been created.

To apply the .tf, run:
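    $ terraform apply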

Now check the AWS console: the instance has been successfully created!

The terraform.tfstate file which gets generated is very important, as it contains the current state of the infrastructure in AWS. For this reason it’s important to version control it in Git if others are working on the infrastructure (though not at the same time).

Modifying the infrastructure

Edit your web-instance.tf file and change the instance type from t2.micro to t2.small, then check what will change:
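    $ terraform plan

    -/+ aws_instance.web
        instance_type: "t2.micro" => "t2.small" (forces new resource)
        ...

(output abridged)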

The value for instance_type has changed; apply the new changes:
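    $ terraform apply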

Note that instead of stopping the existing instance and resizing it, Terraform destroys the old instance and creates a new aws_instance resource.

Destroying the infrastructure

Before destroying infrastructure, Terraform needs to generate a destroy plan listing the resources to destroy, written here to a file named destroy.tplan:
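    $ terraform plan -destroy -out destroy.tplan

    - aws_instance.web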

The line prefixed with - confirms that the aws_instance resource named web will be destroyed. Let’s apply the destruction plan:
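    $ terraform apply destroy.tplan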

Provisioning a multi-tiered VPC

Terraform can provision a VPC using most of the available VPC resources; a few are still missing.

Here we provision 2 web instances, 2 application instances and a NAT instance within a 2-tier VPC, complete with appropriate security groups, load balancers, route tables, internet gateway, elastic IP etc…

The gist can be viewed here.

Conclusion

Terraform is an out-of-the-box, easy-to-use tool for building infrastructure with only a couple of lines of configuration. It’s very practical for quickly building a proof-of-concept infrastructure.

Using the AWS provider, it’s missing a couple of resources such as Elastic Network Interfaces, Access Control Lists, security group egress rules and VPC DHCP options, plus it does not support user data (it can only execute remote SSH commands). However, at version 0.2.1 it is still very early days, and no doubt we will be seeing lots of improvements to Terraform in the future!

Recurring Schedule Auto Scaling with EC2

In my previous post, I described how to get started with auto scaling on EC2 using CloudWatch metrics to scale automatically.

Auto scaling on a recurring schedule is another method of scaling, by setting a minimum and maximum number of instances at a specified time. For example, you can set a maximum number of instances to run during business hours, then reduce it outside of business hours when there is less web traffic.

Another useful application of recurring schedule auto scaling is processing one-off jobs: you launch instances to run a script at a specified time, then when the job has completed, the instance terminates. This is the case I’m going to cover today.

Prerequisites

Install and configure the awscli as explained in this post.

We’ll be creating our auto scaling group within EC2-Classic.

Since we’ll be using a user data script, there is no need to build your own AMI, as you can install and configure packages in the script.

IAM Role

As the instance needs to be able to terminate itself, create an IAM instance profile and role to allow the instance to authenticate, use the CLI and execute the terminate instance command:
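Something like the following should do it (the profile and role names used throughout are just examples):

    $ aws iam create-instance-profile --instance-profile-name self-terminate-profile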

Note the IAM ARN which is returned. We’ll need it later.

Next, put the following JSON statement in a file (/tmp/role.json) to allow the EC2 service to assume the role:
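    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "ec2.amazonaws.com" },
          "Action": "sts:AssumeRole"
        }
      ]
    }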

Then attach it when creating the role:
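    $ aws iam create-role --role-name self-terminate \
        --assume-role-policy-document file:///tmp/role.json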

Finally add the role to the instance profile:
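    $ aws iam add-role-to-instance-profile \
        --instance-profile-name self-terminate-profile \
        --role-name self-terminate

For the terminate call itself to be authorised, the role also needs a permissions policy granting ec2:TerminateInstances (attachable with aws iam put-role-policy, for example).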

User data script

The user data script is passed to the EC2 instance and executes when the instance boots for the first time.

This is where you can install packages and execute your job before the instance terminates itself. The awscli package must be installed.
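A minimal sketch of such a script, assuming an Ubuntu AMI (the job and the region are placeholders):

    #!/bin/bash
    # install awscli plus whatever packages the job needs
    apt-get update
    apt-get -y install awscli

    # ... run your one-off job here ...

    # look up this instance's own ID and terminate it (region is an example)
    INSTANCE_ID=$(ec2metadata --instance-id)
    aws ec2 terminate-instances --instance-ids "$INSTANCE_ID" --region us-east-1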

ec2metadata is Ubuntu’s utility for collecting EC2 information about the running instance.

Launch Configuration

Create the launch configuration, specifying the user data script previously created plus the AMI, ARN, SSH key, security group and instance type:
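For example (the names, AMI, key and security group are placeholders; the instance profile is the one created earlier):

    $ aws autoscaling create-launch-configuration \
        --launch-configuration-name one-off-job-lc \
        --image-id ami-XXXXXXXX \
        --instance-type t1.micro \
        --key-name my-key \
        --security-groups my-sg \
        --iam-instance-profile self-terminate-profile \
        --user-data file://user-data.sh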

Auto Scaling Group

Create the auto scaling group; note that min and max must be zero instances, as we’re not launching any instances immediately:
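    # the availability zone is an example
    $ aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name one-off-job-asg \
        --launch-configuration-name one-off-job-lc \
        --min-size 0 --max-size 0 \
        --availability-zones us-east-1a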

By default the auto scaling group will replace any unhealthy (terminated) instances, so disable that behaviour:
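One way to do this is to suspend the ReplaceUnhealthy process:

    $ aws autoscaling suspend-processes \
        --auto-scaling-group-name one-off-job-asg \
        --scaling-processes ReplaceUnhealthy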

Recurring Schedule

Now specify the schedule for when to launch the instance(s) using the Unix crontab recurrence format (it must be in the UTC timezone).
This is the scheduled group action to launch one instance every day at 1pm UTC:
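    # the action name is arbitrary
    $ aws autoscaling put-scheduled-update-group-action \
        --auto-scaling-group-name one-off-job-asg \
        --scheduled-action-name launch-job-instance \
        --recurrence "0 13 * * *" \
        --min-size 1 --max-size 1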

Then at 1pm UTC, the instance will launch, run the user data script and finally self-terminate!

Check the scaling activities with:
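    $ aws autoscaling describe-scaling-activities \
        --auto-scaling-group-name one-off-job-asg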

Even though the instance will self-terminate when the job has finished, the auto scaling group will still show min-size and max-size 1; as a consequence, it will not launch an instance at the next recurrence. So we must reset the sizes to 0 after the job is likely to have finished.

This scheduled action makes sure all instances are terminated and the auto scaling group is reset every day at midnight UTC:
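    $ aws autoscaling put-scheduled-update-group-action \
        --auto-scaling-group-name one-off-job-asg \
        --scheduled-action-name reset-job-asg \
        --recurrence "0 0 * * *" \
        --min-size 0 --max-size 0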

Recurring Schedule + CloudWatch Metrics Auto Scaling

All the recurring schedule does is adjust the min and max number of instances in the auto scaling group. You can combine scheduled actions with traditional auto scaling using CloudWatch metrics!

Cleanup

Delete the auto scaling group and launch configuration:
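    $ aws autoscaling delete-auto-scaling-group \
        --auto-scaling-group-name one-off-job-asg --force-delete
    $ aws autoscaling delete-launch-configuration \
        --launch-configuration-name one-off-job-lc

The --force-delete flag removes the group even if it still contains running instances.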

Notes

The instance self-termination is optional: you could skip the IAM part, just issue a shutdown command in the user data script, and wait for the cleanup recurring schedule to terminate the instance.

The recent addition of Auto Scaling management to the Console does not support defining recurring schedule auto scaling.