More About Infrastructure as Code in AWS
Ansible enables you to automate cloud deployments. You can use Ansible to manage applications and services using automation playbooks. Each playbook defines a set of configurations that can be applied consistently across cloud environments.
In this post, we’ll explain how to use Ansible modules with AWS, and quickly walk you through the process of automating Ansible playbook deployments with Amazon EC2 and GitHub. We’ll also explain how NetApp Cloud Volumes ONTAP can help simplify storage when using Infrastructure as Code in AWS.
In this article, you will learn:
- Why use Ansible for AWS
- How to use Ansible modules with AWS
- Automate Ansible playbooks with Amazon EC2 and GitHub
- Ansible AWS with Cloud Volumes ONTAP
Why Use Ansible for AWS?
Ansible is an open source tool that you can use to automate your AWS deployments. You can use it to define, deploy, and manage applications and services using automation playbooks. These playbooks enable you to define configurations once and deploy those configurations consistently across environments.
Another benefit of using Ansible is ensuring safe automation. Misconfigurations are a major vulnerability in cloud environments, but automation can help you ensure that only permitted configurations are deployed. However, you don’t want everyone on your team to be able to automatically deploy anything they want.
To prevent this, Ansible offers the Ansible Tower. Ansible Tower is a web-based UI that you can use to define role-based access controls (RBAC), monitor deployments, and audit events. It enables you to set and authorize user actions on a granular level. Ansible Tower also includes features for encrypting credentials and data.
Ansible modules supporting AWS
When using Ansible, there are dozens of modules you can choose from that support AWS services. These modules include functionality for:
- CloudFormation, CloudTrail, and CloudWatch
- DynamoDB, ElastiCache, and Relational Database Service (RDS)
- Elastic Compute Cloud (EC2)
- Identity and Access Management (IAM) and Security Groups
- AWS Lambda
- Simple Storage Service (S3)
- Virtual Private Cloud (VPC)
For additional ways to automate AWS infrastructure, see our article about deploying Terraform on AWS.
How to Use Ansible Modules with AWS
There are a few modules used in most Ansible AWS deployments that you should know how to use. The most common of these modules are introduced below, with some instructions to get you started.
To authenticate with AWS-related modules, you need to specify your access and secret keys as either module arguments or environment (ENV) variables.
To store as module arguments:
This method involves storing your keys in a vars_file. This file should be encrypted with ansible-vault for security.
Keep in mind, if you store keys as arguments, you need to reference your arguments for each service. You can see an example of this below:
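As a sketch, assuming a vault-encrypted vars file and the amazon.aws collection (the file path and bucket name here are hypothetical), a playbook might reference the keys like this:

```yaml
# vars/aws_keys.yml (encrypt with: ansible-vault encrypt vars/aws_keys.yml)
#   aws_access_key: AKIA...        <- placeholder
#   aws_secret_key: "..."          <- placeholder

- hosts: localhost
  vars_files:
    - vars/aws_keys.yml
  tasks:
    - name: Create an S3 bucket, passing the keys as module arguments
      amazon.aws.s3_bucket:
        name: example-bucket          # placeholder bucket name
        state: present
        access_key: "{{ aws_access_key }}"
        secret_key: "{{ aws_secret_key }}"
```

Note that the keys must be passed to every AWS-related task; they are not picked up globally the way environment variables are.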
To store as environment variables:
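Alternatively, export the keys in your shell before running Ansible; the Boto-based AWS modules read these standard variable names (the key values below are placeholders):

```shell
# Placeholder credentials -- substitute your own keys
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
# Optionally pin a default region as well
export AWS_REGION='us-east-1'
```

With the variables exported, AWS-related tasks need no key arguments at all.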
Dynamic Host Inventory
After your hosts are provisioned, you need to establish communications. You can do this manually, but it creates significantly more work. A better alternative is to use the EC2 Dynamic Inventory script.
This script enables you to dynamically select hosts regardless of where they were created. It queries AWS and automatically groups the hosts it discovers in your inventory.
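The legacy ec2.py script has since been superseded by the amazon.aws.aws_ec2 inventory plugin; a minimal plugin configuration (the region and tag key shown are assumptions) looks roughly like this:

```yaml
# aws_ec2.yml -- dynamic inventory configuration
# Use with: ansible-inventory -i aws_ec2.yml --graph
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
keyed_groups:
  # Build groups such as tag_class_webserver from each host's "class" tag
  - key: tags.class
    prefix: tag_class
```

Pointing `-i` at this file makes every playbook run resolve hosts against the live EC2 API.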
Tags, groups, and variables
After using the Dynamic Inventory script above, hosts are also automatically grouped according to how they are tagged in EC2.
For example, hosts tagged with a class of ‘webserver’ can be automatically targeted with the following host pattern in a playbook:
- hosts: tag_class_webserver
You can leverage this functionality to group systems by function and simplify management. You can also enhance this functionality by including ‘group_vars’. These are variables in Ansible that you can assign as subcategories of classes.
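For instance, assuming your web servers carry the tag class=webserver, a group variables file and a playbook that consumes it might look like this (the path and values are illustrative):

```yaml
# group_vars/tag_class_webserver.yml would contain, e.g.:
#   http_port: 80

- hosts: tag_class_webserver
  tasks:
    - name: Report the port shared by every host in the group
      ansible.builtin.debug:
        msg: "Serving on port {{ http_port }}"
```

Every host that falls into the tag group automatically inherits the variables in that file.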
Autoscaling with Ansible Pull
To autoscale your resources, you can either use the built-in Amazon autoscaling features or you can use Ansible modules. These modules can configure your autoscaling policies and give you finer control.
One module you can use is ansible-pull. Pull is a command-line tool that you can use to fetch and run playbooks. To apply this to autoscaling you can create images with a built-in ansible-pull invocation. Then, when a host comes online, it will automatically pull your autoscaling playbook. This eliminates the need to wait until the next Ansible command cycle occurs.
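One way to bake such an invocation into an image, sketched here with Ansible's cron module (the repository URL and playbook name are placeholders), is to register ansible-pull to run at boot:

```yaml
- hosts: localhost
  tasks:
    - name: Run ansible-pull whenever the host boots
      ansible.builtin.cron:
        name: ansible-pull-on-boot
        special_time: reboot
        job: "ansible-pull -U https://github.com/<org>/<repo>.git <playbook>.yml"
```

A host launched from an image prepared this way configures itself the moment it comes online.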
Ansible vs CloudFormation
CloudFormation is a native Amazon service that you can use to define your cloud resource stack as a JSON or YAML document. It provides essentially the same functionality as Ansible but has a much steeper learning curve. Because of this, it is often easier to use Ansible modules.
In some cases, however, users may want to use both CloudFormation and Ansible. There are modules for this as well: for example, modules that abstract the application of CloudFormation templates. These enable you to use Ansible to build images and then launch those images with CloudFormation.
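A task along these lines, using the amazon.aws.cloudformation module (the stack name, region, and template path are assumptions), applies a template from within a playbook:

```yaml
- hosts: localhost
  tasks:
    - name: Create or update a stack from a local template
      amazon.aws.cloudformation:
        stack_name: example-stack      # placeholder
        state: present
        region: us-east-1              # placeholder
        template: files/stack.json     # placeholder path
```

Running the task again with a changed template updates the stack in place, which makes it easy to drive CloudFormation from the same playbooks that manage the rest of your infrastructure.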
Ansible AWS Tutorial: How to Automate Ansible Playbook Deployment with Amazon EC2 and GitHub
In the walkthrough below, you’ll learn how to automate an Ansible playbook deployment using EC2 and GitHub. This is a good way to get familiar with how Ansible interacts with AWS services like EC2. However, before you get started, you should be familiar with both AWS and Ansible separately. This walkthrough was adapted from a longer tutorial.
Before you get started, make sure that you have the following prerequisites:
- An active AWS account
- An EC2 key pair
- An EC2 instance running Amazon Linux 2
- A security group with SSH and HTTPS access
- A GitHub repository
- Set up webhook processing
To begin, you need to configure your Ansible deployment to use GitHub webhooks. This requires setting up processing for webhooks on your EC2 instance. To do this, you need to route requests to an Express server using NGINX as a reverse proxy.
Use SSH to access your EC2 instance. Then install the EPEL repository using Amazon’s Extras Library for Amazon Linux 2:
sudo amazon-linux-extras install epel
Update your packages:
sudo yum update -y
Install Ansible, NGINX, and Git:
sudo yum install ansible -y
sudo yum install nginx -y
sudo yum install git -y
- Install Node.js and set up your Express server
Once webhooks are enabled, you need to prepare Node.js and your Express server.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
. ~/.bashrc   # load nvm into the current shell
nvm install node
Now, you need to select a location for your Express server. The example below creates a directory called server and installs Express into it:
mkdir server && cd server
npm install express
In your server file, the webhook handler invokes ansible-pull to fetch and run your playbook (substitute your own GitHub user, repository, and playbook names):
exec("ansible-pull -U git@github.com:<GitHubUser>/<repo-name>.git <playbook>.yml")
Finally, run the Express server (assuming your entry point is named server.js):
node server.js
- Set up a deployment key for your repository
With your server running, you are ready to set up your deployment key. You will use this deployment key later in the procedure.
Create an SSH key on your instance (for example, with ssh-keygen, accepting the default id_rsa file name), and then start the SSH agent with the following command:
eval "$(ssh-agent -s)"
You should get output similar to:
Agent pid 1111
- Configure NGINX to route traffic
Next, you need to set your NGINX configuration to listen on port 80 and route traffic to the port that your Express server listens on. For details, see the full tutorial.
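A minimal reverse-proxy snippet, assuming the Express server listens on port 8080 (both the file path and the port are assumptions), might look like this:

```nginx
# /etc/nginx/conf.d/webhook.conf -- hypothetical file name
server {
    listen 80;
    location / {
        # Forward incoming webhook traffic to the Express server
        proxy_pass http://127.0.0.1:8080;
    }
}
```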
To start NGINX, use the following commands:
systemctl start nginx
systemctl enable nginx
- Set up GitHub to configure your webhooks
Finally, you are ready to configure your webhooks on GitHub.
Log into your GitHub account, and navigate to your repository’s Settings > Deploy keys.
Click Add deploy key.
Go to the .ssh directory where your public key is stored and open the id_rsa.pub file. Copy its contents (a single line beginning with ssh-rsa) and paste it into the Key field.
Navigate to Webhooks in the Settings menu and click Add webhook.
Enter the public IP address of your EC2 instance in the Payload URL field.
Check the Response section to verify that your Express server received the request.
Ansible AWS with Cloud Volumes ONTAP
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP supports capacity of up to 368TB, and supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, and storage efficiencies.
Cloud Manager is completely API-driven and is highly geared toward automating cloud operations. Deploying Cloud Volumes ONTAP and Cloud Manager through infrastructure-as-code automation helps address the DevOps challenges organizations face when configuring enterprise cloud storage solutions. When implementing infrastructure as code, Cloud Volumes ONTAP and Cloud Manager go hand in hand with Terraform to achieve the level of efficiency expected in large-scale cloud storage deployments in AWS.