Immutable Infrastructure with AWS and Ansible – Part 1 – Setup

Introduction

Immutable infrastructure is a very powerful concept that brings stability, efficiency, and fidelity to your applications through automation and the use of successful patterns from programming.  The general idea is that you never make changes to running infrastructure.  Instead, you ensure that all infrastructure is created through automation; to make a change, you simply create a new version of the infrastructure and destroy the old one.  Chad Fowler was one of the first to describe this concept on his blog, and I believe it resonates with anyone who has spent a significant amount of time doing system administration:

“Why? Because an old system inevitably grows warts…”

They start as one-time hacks during outages. A quick edit to a config file saves the day. “We’ll put it back into Chef later,” we say, as we finally head off to sleep after a marathon fire fighting session.

Cron jobs spring up in unexpected places, running obscure but critical functions that only one person knows about. Application code is deployed outside of the normal straight-from-source-control process.

The system becomes finicky. It only accepts deploys in a certain manual way. The init scripts no longer work unless you do something special and unexpected.

And, of course the operating system has been patched again and again (in the best case) in line with the standard operating procedures, and the inevitable entropy sets in. Or, worse, it has never been patched and now you’re too afraid of what would happen if you try.

The system becomes a house of cards. You fear any change and you fear replacing it since you don’t know everything about how it works.  — Chad Fowler – Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components

Requirements

To begin performing immutable infrastructure provisioning, you’ll need a few things first.  You need some type of “cloud” infrastructure.  This doesn’t necessarily mean you need a virtual server somewhere in the cloud; what you really need is the ability to provision cloud infrastructure with an API.  A permanent virtual server running in the cloud is the very opposite of immutable, as it will inevitably grow the warts Chad mentions above.

Amazon Web Services

For this series, we’ll use Amazon Web Services as our cloud provider.  Their APIs and services are frankly light years ahead of the competition.  I’m sure you could provision immutable infrastructure on other public cloud providers, but it wouldn’t be as easy, and you might not have access to the wealth of features and services available that can make your infrastructure provisioning non-disruptive with zero downtime.  If you’ve never used AWS before, the good news is that you can get access to a “free” tier that gives you limited amounts of compute resources per month for 12 months.  750 hours a month of t2.micro instance usage should be plenty if you are just learning AWS in your free time, but please be aware that if you aren’t careful, you can incur additional charges that aren’t covered in your “free” tier.

Ansible

The second thing we’ll need is an automation framework that allows us to treat infrastructure as code.  Ansible has taken the world by storm due to its simplicity and rich ecosystem of modules that are available to talk directly to infrastructure.  There is a huge library of Ansible modules for provisioning cloud infrastructure.  The AWS specific modules cover almost every AWS service imaginable, and far exceed those available from other infrastructure as code tools like Chef and Puppet.
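As a small taste of what those modules look like (we'll build real playbooks in the next article), here is a hypothetical play using the ec2 module to launch an instance.  The region, key name, and especially the AMI ID are placeholders, not values you should copy verbatim:

```yaml
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Launch a t2.micro instance (AMI ID is a placeholder)
      ec2:
        region: us-east-1
        key_name: immutable
        instance_type: t2.micro
        image: ami-12345678
        group: default
        wait: yes
```

Because the module talks to the EC2 API directly, there is no agent to install on the target; the play runs from your workstation.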

OS X or Linux

The third thing we’ll need is an OS X or Linux workstation to do the provisioning from.  As we get into the more advanced sections, I’ll demonstrate how to provision a dedicated orchestrator that can perform provisioning operations on your behalf, but in the short term, you’ll need a UNIX-like operating system to run things from.  If you’re running Windows, you can download VirtualBox from Oracle and Ubuntu Linux from Canonical, then install Ubuntu Linux in a VM.  The following steps will get your workstation set up properly to begin provisioning infrastructure in AWS:

Mac OS X Setup

  1. Install Homebrew by executing the following command:
    ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
  2. Install Ansible with Homebrew by executing the following command:
    brew install ansible
    (Note: I’m actually running Ansible 2.0.0.2 now; use Ansible 2.0+ as it’s the future 🙂 )
  3. Install the AWS Command Line Interface (CLI) with Homebrew by executing the following command:
    brew install awscli
  4. Install wget with Homebrew by executing the following command:
    brew install wget

Linux Setup

  1. Install Ansible by executing the following command:
    sudo pip install ansible
  2. Install the AWS Command Line Interface (CLI) by executing the following command:
    sudo pip install awscli
  3. Install wget using your package manager.

Generic Workstation Setup

These steps need to be followed whether you’re running a Mac or Linux for your workstation.

  1. Install a good text editor.  My favorite is Sublime Text 2, but you can use whatever you want.
  2. Install the yaegashi.blockinfile Ansible role from Ansible Galaxy.  This is a very useful role that will allow us to add blocks of text to configuration files, rather than simply changing single lines.  Type the following command to install it:
    sudo ansible-galaxy install yaegashi.blockinfile
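Once the role is installed, its blockinfile module can manage a marked block of text inside a file.  A sketch of a task using it follows; the file path and settings are illustrative, not part of this series' configuration:

```yaml
- hosts: all
  roles:
    - yaegashi.blockinfile
  tasks:
    - name: Add a managed block of settings (path is illustrative)
      blockinfile:
        dest: /etc/example.conf
        block: |
          max_connections = 100
          timeout = 30
```

The module wraps the block in marker comments, so subsequent runs update the same block in place instead of appending duplicates.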

Amazon Setup

There are a few things you’ll need to begin provisioning infrastructure in your AWS account.  First, you’ll need to make sure the default security group in your VPC allows traffic from your workstation.  This is necessary because Ansible will configure your EC2 compute instances over SSH, and needs network connectivity to them from your workstation.

  1. Login to your AWS Console and select VPC from the bottom left of the dashboard.
  2. Click on Security Groups on the bottom left hand side under Security.
  3. Select/highlight the security group named “default”, and select the Inbound Rules tab.  Click the Edit button, then click Add another rule; for the rule type, select “ALL Traffic”, and insert your workstation’s Internet IP address, with a /32 at the end to indicate the CIDR netmask.  If you don’t know your workstation’s true Internet IP address, you can find it with a service like checkip.amazonaws.com.
  4. Click Save to Save the Inbound Rules.
  5. Go back to the AWS Console dashboard, and click “Identity & Access Management.”  It is located towards the middle of the second column, under Security & Identity.
  6. Click on Users on the left, then click “Create New Users.”  Enter a username for yourself, and leave the checkbox selected to Generate an access key for each user.  Click the Create button.
  7. Your AWS credentials will be shown on the next screen.  It’s important to save these credentials, as they will not be shown again.
  8. Using your text editor, create a file named ~/.boto, which should include the credentials you were just given, in the following format (substitute your own keys for the placeholders):
    [Credentials]
    aws_access_key_id = YOUR_ACCESS_KEY_ID
    aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
  9. At the command line, execute the following command, and input the same AWS credentials, along with the AWS region you are using:
    aws configure
    For most of you, this will be either “us-east-1” or “us-west-1”.  If you’re not in the US, consult the AWS Regions and Endpoints documentation to determine what your EC2 region is.
  10. Click on Groups, then click “Create New Group”.
  11. Name the group PowerUsers, then click Next.
  12. In the Attach Policy step, search for “PowerUser” in the filter field, and check the box next to “PowerUserAccess”, then click “Attach Policy”.
  13. Click Next to Review, and save your group.
  14. Select/Highlight the PowerUsers group you’ve just created, and click Actions, then “Add Users to Group”.
  15. Select the user account you just created, and add that user to the group.
  16. Now, we’ll need to create an IAM policy that gives zero access to any of our resources.  The reason for this is that we’ll be provisioning EC2 instances with an IAM policy attached, and if those instances get compromised, we don’t want them to have permission to make any changes to our AWS account.  Click Policies on the left hand side (still under Identity & Access Management), then click Get Started.
  17. Click Create Policy.
  18. Select “Create Your Own Policy” from the list.
  19. Give the policy a name, “noaccess”, and a description, then paste the following policy document, which denies all actions on all resources:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Action": "*",
          "Resource": "*"
        }
      ]
    }
  20. Click Validate Policy at the bottom.  It should show “This policy is valid.”
  21. Click Create Policy, then click Roles on the left-hand side of the screen.
  22. Click Create New Role, then type in a role name, “noaccess”.
  23. Under the Select Role Type screen, select “Amazon EC2”.
  24. On the Attach Policy screen, filter for the “noaccess” policy we just created, and check the box next to it to select it.
  25. On the Review screen, click the Create Role button at the bottom right.
  26. Now, go back to the main screen of the AWS console, and click EC2 in the top left.
  27. Click “Key Pairs” under the Security section on the left.
  28. Click “Create Key Pair”, then give the Key Pair a name.
  29. The private key will now be downloaded by your browser.  Save this key in a safe place, like your ~/.ssh folder, and make sure it can’t be read by other users by changing the mode on it:
    mv immutable.pem ~/.ssh
    chmod 600 ~/.ssh/immutable.pem
  30. Run ssh-agent, and add the private key to it, by executing the following commands:
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/immutable.pem
  31. Next, install pip using the following command:
    sudo easy_install pip
  32. Then, install boto using the following command:
    sudo pip install boto
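Two of the steps above lean on concepts worth a moment of unpacking: the /32 suffix in step 3 is a CIDR netmask that matches exactly one address (your workstation), and the chmod 600 in step 29 gives the owner read/write access and everyone else nothing, which SSH requires before it will use a private key.  A quick illustration using only the Python standard library (the IP address is a documentation example, not a real host):

```python
from ipaddress import ip_network
import os
import stat
import tempfile

# A /32 CIDR netmask covers exactly one address.
print(ip_network("203.0.113.25/32").num_addresses)  # 1

# Mode 600 = owner read/write only, demonstrated on a throwaway temp file.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
os.remove(path)
```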

The setup of your environment is now complete.  To test and ensure you can communicate with the AWS EC2 API, execute the following command:
aws ec2 describe-instances

You should see a JSON document in response.  If you haven’t launched any instances yet, it will simply contain an empty Reservations list.
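The describe-instances response is JSON with a top-level Reservations list, each reservation containing an Instances list.  Here is a quick standard-library sketch of pulling instance IDs out of such a document; the sample document below is illustrative, not real output from my account:

```python
import json

# Illustrative sample of the describe-instances response shape
sample = """
{
  "Reservations": [
    {"Instances": [
      {"InstanceId": "i-0abc1234", "State": {"Name": "running"}}
    ]}
  ]
}
"""

doc = json.loads(sample)
ids = [inst["InstanceId"]
       for reservation in doc["Reservations"]
       for inst in reservation["Instances"]]
print(ids)  # ['i-0abc1234']
```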

In the next article, we’ll begin setting up our Ansible playbook and provisioning a test system.
