
How to Integrate Your DevOps Tools with Cloud Volumes Service

June 5, 2019


NetApp® Cloud Volumes Service for AWS is an enterprise-ready storage solution that helps cloud customers achieve on-premises levels of storage performance in the cloud. Cloud Volumes Service is colocated with Amazon Web Services (AWS) data centers, so you can connect your environments through virtual private gateways (VPGs) or AWS Direct Connect. That critical feature relieves you of the burden of building high availability and scaling for storage and offloads it to NetApp. Your DevOps team can then spend less time managing the storage layer and more time provisioning high-speed storage for file and database services.

Cloud Volumes Service helps DevOps teams build more quickly and efficiently by letting them integrate their existing tools with the service through its RESTful API.

What Does Cloud Volumes Service Bring to DevOps?

Many DevOps shops rely solely on Amazon Elastic Block Store (Amazon EBS) and its various IOPS offerings, but those offerings can only go so far. Cloud Volumes Service not only offers speeds up to 20 times faster than Amazon EBS, but is also highly customizable to suit frequently changing storage needs.

The service connects directly from a NetApp edge location to a single Virtual Private Cloud (VPC) through a VPG, or to an AWS account through Direct Connect. It operates as storage as a service, and you can tap it for NAS-like shares, volumes, and mounts on demand.

Some users simply want to use Cloud Volumes Service for file services. Of course, you can launch a pair of Microsoft Windows or Linux servers, attach high-cost, high-IOPS Amazon EBS volumes, and maintain high availability yourself. But that approach carries risk, sometimes extreme risk, because you are left to maintain a single, consistent source of truth for your data across those servers.

Cloud Volumes Service eliminates that concern by dynamically creating highly available file services over the standard NFS and SMB protocols.

Furthermore, Cloud Volumes Service eliminates issues that DevOps engineers face when trying to automate stacks, such as:

  • When you use Amazon Elastic Compute Cloud (Amazon EC2) snapshots, refreshing storage on servers is slow, particularly when you restore large snapshots to multiple environments in parallel. Beyond the time it takes AWS to complete the snapshot before you can restore it, every restored volume is provisioned at full size, which drives up cost.
  • When you run tests through your continuous integration/continuous delivery (CI/CD) pipelines, applications typically need file services; having a quick, complete copy at the ready makes those test environments far cheaper and easier to treat as ephemeral. Using NetApp Snapshot™ technology to create rapid clones saves time and money.
  • Executing database schema changes or upgrades can be a lengthy and difficult process, not to mention risky. The ability to clone and replace test databases with actual data can give your DevOps team greater confidence that changes in the application environments will be successful, and the team won’t have to completely build and maintain additional databases in the long term.
  • Maintaining high availability in the cloud comes with challenges, and restoring failed servers is no small task. Cloud Volumes Service eliminates this concern because it is already built for high availability, and the same data can be shared with other applications.

DevOps Integrations with Cloud Volumes Service Through the RESTful API

You can use your preferred automation tools to create environments while still reaping the benefits of Cloud Volumes Service. Whether you use desired-state configuration management tools such as Puppet or Chef, or orchestration tools like HashiCorp Terraform and Red Hat Ansible, the service’s RESTful API gives those tools a direct interface to the service. Through the API, the tools can provision or attach to storage without manual intervention. Let’s look at an example with HashiCorp Terraform.
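Because the API is standard HTTPS with JSON, you can also try it directly from the command line before wiring it into a tool. The sketch below lists existing cloud volumes with curl; the endpoint URL and the api-key/secret-key headers mirror the ones used in the clone script later in this post, and the credential variables are placeholders for your own tenancy:

# List the cloud volumes visible to your API credentials
curl -s -X GET "https://cv.us-west-2.netapp.com:8080/v1/FileSystems" \
  -H "accept: application/json" \
  -H "api-key: $CVS_API_KEY" \
  -H "secret-key: $CVS_SECRET_KEY"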

Note: Some resources are omitted for brevity. Terraform configurations and methods vary widely; this code is intended to help you get started.

Creating a Volume from a Database Snapshot Copy

The idea behind this Terraform code and script is to create a volume from a Snapshot copy of a production database, create a Postgres server, and then mount that Cloud Volume on the server. The resulting volume could be used for a database refresh or for testing.

variables.tf

variable "env" {
description = "This determines the environment such as dev or prod."
default = "dev"
}
variable key_name {
description = "Instance launch ec2 keypair name"
default = {
dev = "notcritical"
prod = "critical"
}
}
variable subnet_id {

}

variable instances {
default =
dev.postgres_db_ami = "ami-randostringabc1234"
dev.postgres_db_count = 1
dev.postgres_db_type = "t3.large"
dev.postgres_db_subnet_id = "subnet.abc123"
dev.postgres_db_availability_zone = "us-west-2b"
}
variable cvs-api-key {}

variable cvs-secret-key {}

... Omitted vars
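The declarations above only define the variables. Environment-specific values are typically supplied in a terraform.tfvars file or through -var flags at plan time. A minimal sketch with placeholder values follows; the API credentials are better injected from pipeline secrets than committed to a tfvars file:

terraform.tfvars

env       = "dev"
subnet_id = "subnet-abc123"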

database-server.tf
# Create the Postgres server

resource "aws_instance" "postgres-db" {
ami = "${lookup(var.instances, format("%s.postgres_db_ami", var.env)}"
availability_zone = "${lookup(var.instances, format("%s.postgres_db_availability_zone", var.env)}"
count = "${lookup(var.instances, format("%s.postgres_db_count", var.env)}"
instance_type = "${lookup(var.instances, format("%s.postgres-db.type", var.env))}"
key_name = "${lookup(var.key_name, var.env)}"
subnet_id = "${lookup(var.instances, format("%s.postgres_db_subnet_id", var.env)}"
vpc_security_group_ids = ["${aws_security_group.postgres-db.id}"]
source_dest_check = false
iam_instance_profile = "${aws_iam_instance_profile.postgres-db.id}"
user_data = "${data.template_file.database-server.rendered}"
tags {
Name = "${var.env}-postgres-db.mydomain.io"
deployment = "${var.env}"
Type = "postgres-db"
}

lifecycle {
ignore_changes = ["source_dest_check", "ebs_optimized", "ami", "user_data"]
}
}

# Render the user-data template that mounts the cloned volume on the Postgres server
data "template_file" "database-server" {
  template = "${file("database-server.tpl")}"

  vars {
    # NFS export address of the cloned volume. This assumes clone-database-snapshot.sh
    # returns the address as "mountip" in its JSON output; adjust to match your environment.
    serverip  = "${data.external.task_definition.result.mountip}"
    mountname = "${var.env}-refresh-db"
  }
}

# Clone the production database snapshot through the Cloud Volumes Service API
data "external" "task_definition" {
  program = ["bash", "./clone-database-snapshot.sh"]

  query = {
    apikey    = "${var.cvs-api-key}"    # Supplied by a pipeline job variable
    secretkey = "${var.cvs-secret-key}" # Supplied by a pipeline job variable
    apiurl    = "https://cv.us-west-2.netapp.com:8080/v1"
    name      = "${var.env}-refresh-db"
  }
}

... Other resources omitted for brevity

database-server.tpl

#!/bin/bash
# Install the NFS client utilities
sudo yum install -y nfs-utils
# Create the mount directory
sudo mkdir -p /db_data_volume
# Mount the cloned volume; serverip and mountname are injected by the template_file data source
sudo mount -t nfs -o rw,hard,nointr,bg,nfsvers=4,tcp \
  "${serverip}":/"${mountname}" /db_data_volume

clone-database-snapshot.sh

#!/bin/bash
# clone-database-snapshot.sh: invoked by the Terraform "external" data source,
# which passes the query as a JSON object on stdin and expects JSON on stdout (requires jq).
eval "$(jq -r '@sh "APIKEY=\(.apikey) SECRETKEY=\(.secretkey) APIURL=\(.apiurl) NAME=\(.name)"')"

# Create a new volume from the production database snapshot
RESPONSE=$(curl -s -X POST "$APIURL/FileSystems" \
  -H "accept: application/json" -H "Content-Type: application/json" \
  -H "api-key: $APIKEY" -H "secret-key: $SECRETKEY" \
  -d '{
    "snapshotId": "e19c5b72-MYPRODSNAPSHOT-a24702907fad",
    "name": "'"$NAME"'",
    "creationToken": "another-refresh-volume",
    "region": "us-west-2",
    "serviceLevel": "medium"
  }')

# Return the export address to Terraform; the jq filter below is illustrative,
# so adjust it to match the mount details in your API response.
echo "$RESPONSE" | jq '{mountip: (.mountPoints[0].server // "")}'

Using code like this, you can run a Terraform stack that restores and attaches to volumes on demand, and then destroys the environment after testing is complete.
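For example, a CI job might drive the whole lifecycle with the standard Terraform commands. This is a minimal sketch: the -var flags and the CVS_API_KEY/CVS_SECRET_KEY environment variables are stand-ins for however your pipeline injects credentials.

# Build the test environment from the cloned production snapshot
terraform init
terraform apply -auto-approve -var "env=dev" \
  -var "cvs-api-key=$CVS_API_KEY" -var "cvs-secret-key=$CVS_SECRET_KEY"

# Run your test suite against the refreshed database here

# Tear everything down when testing is complete
terraform destroy -auto-approve -var "env=dev" \
  -var "cvs-api-key=$CVS_API_KEY" -var "cvs-secret-key=$CVS_SECRET_KEY"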

Meeting DevOps Performance Demands

NetApp Cloud Volumes Service is a full-featured storage solution for DevOps engineers who want higher performance, faster provisioning, high availability, deduplication, reduced cost, and increased flexibility in their cloud environments. The service’s robust API lets you plug it into your tools, CI/CD pipelines, and automation workflows while taking advantage of its full set of capabilities.

Try It Out Now

Request a demo to get started now, and see how NetApp Cloud Volumes Service for AWS can fit your DevOps needs.
