Programming with Cloud Volumes Service APIs to Manage Performance, Protection, and Cost

One of the amazing things about the cloud is that it can be provisioned and managed entirely via software using application programming interfaces (APIs). This allows DevOps and site reliability engineers to create compute instances, configure networking, and expand or contract resources to meet business and application requirements. One missing piece has been how to programmatically control persistent storage and enable increased performance when needed, and lower costs when workloads are lighter.

This is one of many problems that the NetApp® Cloud Volumes Service for AWS can solve. This fully managed service for AWS customers provides high-performance shared storage over NFSv3 and/or SMB 2.1, 3.0, and 3.1.1 protocols to support Linux and Windows Amazon EC2 instances. RESTful APIs mean that every feature can be controlled with an API call, including provisioning volumes, creating snapshots and clones, and changing performance non-disruptively. The ability to change performance while clients are mounted and actively reading and writing is a unique feature of the service that can greatly reduce the total cost of storage. This blog considers examples where costs are reduced by more than 70%.

Before we get into some examples, take a look at the online documentation and Getting Started guide on how to use the APIs, and how to find your Access key and Secret API key. The guide gives several examples for listing volumes, creating volumes, creating snapshots and clones, and updating performance. There are also links to simple bash scripts, which can help you to quickly write code that is specific to your needs. Modern programming languages all fully support calling RESTful APIs, including Python, Golang, Java, C, C#, C++, and more.

Creating Volumes

You can create a cloud volume in seconds by using APIs, and you can fully define the protocols you need, export policies, and set performance. You set performance for a cloud volume by using service levels and allocating capacity to the volume; together, these two settings determine both the throughput you get and the price you pay.

The following table from Cloud Central describes the three service levels.

  Service level   Cost per GB per month   Throughput per TB allocated
  Standard        $0.10                   16MB/s
  Premium         $0.20                   64MB/s
  Extreme         $0.30                   128MB/s

You can choose from the three service levels and change the service level on the fly without needing to re-provision volumes.

Choosing a service level:

  • The Standard service level offers very economical cloud storage, at just $0.10 per GB per month. It enables throughput up to 16MB/s for each TB allocated. This level is ideal as a low-cost solution for infrequently accessed data. If you need to increase performance, you can increase the allocation (for example, 10TB provides 160MB/s) and/or choose a higher service level.
  • The Premium service level delivers a good mix of cost and performance. At a cost of $0.20 per GB per month, it offers 4 times the performance of Standard, with 64MB/s for each TB allocated. This is a good fit for many applications where data capacity and performance needs are balanced.
  • The Extreme service level provides the best performance. At a cost of $0.30 per GB per month, it enables up to 128MB/s for each TB allocated, and cloud volumes can scale to deliver several GB/s for reads and writes. Extreme is the best fit for high-performance workloads.
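
The rate arithmetic behind these three levels is easy to encode in a couple of shell helpers. These are hypothetical convenience functions for planning allocations, not part of the service; they simply apply the published per-TB throughput and per-GB prices:

```shell
#!/bin/bash
# Hypothetical helpers that encode the published service-level rates:
# standard = 16MB/s per TB at $0.10/GB, premium = 64MB/s per TB at $0.20/GB,
# extreme = 128MB/s per TB at $0.30/GB. Integer math, whole dollars.

cv_throughput_mbps() {   # args: service_level allocated_TB
  case $1 in
    standard) echo $(( $2 * 16 ));;
    premium)  echo $(( $2 * 64 ));;
    extreme)  echo $(( $2 * 128 ));;
  esac
}

cv_monthly_cost_usd() {  # args: service_level allocated_GB
  case $1 in
    standard) echo $(( $2 / 10 ));;
    premium)  echo $(( $2 / 5 ));;
    extreme)  echo $(( $2 * 3 / 10 ));;
  esac
}
```

For example, `cv_throughput_mbps standard 10` returns 160, matching the 10TB Standard example above.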

When you create a volume, you provide the name, export path, protocols, policies, service level, and capacity you want to allocate. The following example uses a POST call to create a volume at the Standard service level, with an allocated capacity of 100GB and exported using NFSv3:

curl -s -H accept:application/json -H "Content-type: application/json" -H api-key:<api_key> -H secret-key:<secret_key> -X POST <api_url>/v1/FileSystems -d '
{
"name": "Test",
"creationToken": "grahams-test-volume3",
"region": "us-west",
"serviceLevel": "standard",
"quotaInBytes": 100000000000,
"exportPolicy": {"rules": [{"ruleIndex": 1,"allowedClients": "","unixReadOnly": false,"unixReadWrite": true,"cifs": false,"nfsv3": true,"nfsv4": false}]},
"labels": ["test"]
}'

Changing Performance on the Fly

If you realize that you need more performance, you can update the volume with an API call. The change happens in seconds and is nondisruptive to clients. The following example uses a PUT call to change the service level to Extreme and the allocated capacity to 500GB:

curl -s -H accept:application/json -H "Content-type: application/json" -H api-key:<api_key> -H secret-key:<secret_key> -X PUT <api_url>/v1/FileSystems/cdef5090-aa5e-c2cf-6bba-f77d259a37f8 -d '
{
"creationToken": "grahams-test-volume3",
"region": "us-west",
"serviceLevel": "extreme",
"quotaInBytes": 500000000000
}'

It’s just as easy to lower the performance, and therefore the cost, by using APIs. The following example lowers the service level back to Standard:

curl -s -H accept:application/json -H "Content-type: application/json" -H api-key:<api_key> -H secret-key:<secret_key> -X PUT <api_url>/v1/FileSystems/cdef5090-aa5e-c2cf-6bba-f77d259a37f8 -d '
{
"creationToken": "grahams-test-volume3",
"region": "us-west",
"serviceLevel": "standard",
"quotaInBytes": 500000000000
}'

Now that you’ve learned how to change performance, you can use it in scripts, for example to raise performance when a job starts and lower it when the job finishes, or to schedule changes to performance and cost over time.

Scripting Performance Changes

Using the example script, you can quickly increase performance before running an intensive task such as machine learning, and then lower the cost when the task finishes.

#! /bin/bash
# script to increase cloud volume performance for a machine learning app and then lower costs when finished.

./ -m arcadian-pedantic-shaw -l extreme -a 30000 -c us-west-1.conf
./
./ -m arcadian-pedantic-shaw -l standard -a 10000 -c us-west-1.conf

This script increases performance to 3.8GB/s (128MB/s * 30TB) to accelerate the machine learning task.

Note: If run at this level all the time, the volume would cost $9,000/month (30,000GB at $0.30/GB).

When the task finishes, you can drop the performance to 160MB/s (16MB/s * 10TB), which still meets the I/O needs to review the resulting data, but at significantly lower cost.

Note: This performance level would cost $1,000/month (10,000GB at $0.10/GB).

The cost savings vary, but if you run the machine learning application for 20% of the time and adjust the Cloud Volumes service level you could save about $6,400/month.
$9,000 – (($9,000*0.2)+($1,000*0.8)) = $6,400, which is a saving of more than 70%.
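
This blended-cost arithmetic is easy to verify in the shell (whole-dollar integer math):

```shell
#!/bin/bash
# Check the blended-cost estimate: 20% of the month at the Extreme rate
# ($9,000/month for 30TB) and 80% at the Standard rate ($1,000/month for
# 10TB), compared with running Extreme all month.
extreme_cost=9000
standard_cost=1000
extreme_pct=20
blended=$(( (extreme_cost * extreme_pct + standard_cost * (100 - extreme_pct)) / 100 ))
savings=$(( extreme_cost - blended ))
savings_pct=$(( savings * 100 / extreme_cost ))
echo "blended=\$$blended savings=\$$savings (${savings_pct}%)"
# → blended=$2600 savings=$6400 (71%)
```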

Also, by using the Extreme performance level when needed, you can achieve additional savings by running fewer Amazon EC2 instances for a shorter time to complete the machine learning task.

Scheduling Changes

By using a scheduler such as cron in Linux, you can define when to increase and decrease performance to control costs while meeting business needs. This can be very useful when applications such as databases need fast performance for a few hours to process weekly reports or for user home directories, for which you can lower costs during evenings and weekends.

Example crontab file:

# Runs at extreme performance for 14 hrs every week to accelerate order processing
# Increase the performance of cloud volume ‘arcadian-pedantic-shaw’ every Thursday at 8am

0 8 * * Thu /opt/cvs-api/ -m arcadian-pedantic-shaw -l extreme -a 20000 -c us-west-1.conf

# Decrease the performance of cloud volume ‘arcadian-pedantic-shaw’ every Thursday at 10 pm

0 22 * * Thu /opt/cvs-api/ -m arcadian-pedantic-shaw -l standard -a 10000 -c us-west-1.conf

This schedule would save more than 75% versus always running 20TB at the Extreme service level ($6,000/month), because the volume runs at the Extreme rate for only 14 of the 168 hours in a week:
($6,000 × 14/168) + ($1,000 × 154/168) ≈ $500 + $917 = $1,417/month
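
You can verify this estimate in the shell by weighting the two monthly rates by hours per week: 14 hours at the Extreme rate ($6,000/month for 20TB), the remaining 154 at the Standard rate ($1,000/month for 10TB). Integer division rounds down, so the result is within a dollar of the exact figure.

```shell
#!/bin/bash
# Blend the two monthly rates by hours per week: 14h Extreme, 154h Standard.
extreme_monthly=6000
standard_monthly=1000
extreme_hours=14
week_hours=168
blended=$(( (extreme_monthly * extreme_hours + standard_monthly * (week_hours - extreme_hours)) / week_hours ))
savings_pct=$(( (extreme_monthly - blended) * 100 / extreme_monthly ))
echo "blended=\$$blended savings=${savings_pct}%"
# → blended=$1416 savings=76%
```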

Data Protection

In addition to controlling the performance and cost of cloud volumes, you can also programmatically create snapshots of cloud volumes.

With the Cloud Volumes Service snapshot policies, you can define when snapshots are made and how many to retain (with the UI or the API). It can be useful to make snapshots of a dataset before tasks like updating applications or running a new algorithm, to give you point-in-time recovery in case you have an issue or want to run different algorithms against the original dataset.
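
As a sketch of what an on-demand snapshot call could look like in a script: the function name and the /Snapshots sub-resource path below are assumptions modeled on the FileSystems calls earlier in this post, so check the Getting Started guide for the exact endpoint before using it.

```shell
#!/bin/bash
# Sketch: create a named snapshot of a volume before starting a job.
# The /Snapshots path is an assumption modeled on the FileSystems
# examples; verify it against the API documentation. API_URL, API_KEY,
# and SECRET_KEY are placeholders; DRY_RUN=1 prints instead of sending.

create_snapshot() {  # args: volume_id snapshot_name
  local body
  body=$(printf '{"name":"%s"}' "$2")
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "POST ${API_URL}/v1/FileSystems/$1/Snapshots ${body}"
  else
    curl -s -H accept:application/json -H "Content-type: application/json" \
         -H "api-key:${API_KEY}" -H "secret-key:${SECRET_KEY}" \
         -X POST "${API_URL}/v1/FileSystems/$1/Snapshots" -d "$body"
  fi
}
```

Embedding a job label and date in the snapshot name, for example `create_snapshot <volume_id> "pre-ml-$(date -u +%Y%m%d)"`, makes point-in-time copies easy to find later.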

Using the example script, you can quickly create a snapshot before starting a job or to make a consistency point from which to create a backup.

#! /bin/bash
# script to create a snapshot of cloud volume ‘arcadian-pedantic-shaw’ before running ML job

./ -m arcadian-pedantic-shaw -c us-west-1.conf

You can also revert volumes from point-in-time snapshots by using APIs.

Using the example script, you can revert volume vol3 to the latest snapshot:

./ -m vol3 -s last -c us-west-2.conf

You can also revert to an older snapshot by selecting its unique ID:

./ -m vol3 -s a6518730-eaff-cc24-d020-52e25ea91c1b -c us-west-2.conf

Deleting Volumes

It takes only a few seconds to create or delete a cloud volume. This makes it practical to use cloud volumes as high-performance shared scratch space for ephemeral workloads.

For example, using AWS CloudFormation, you can call AWS APIs to create hundreds of Amazon EC2 instances, and use cloud volume APIs to create high-performance shared volumes, which are then mounted by all the instances. These instances can be used to run compute- and storage-intensive jobs against a new dataset. When the job finishes, the instances are automatically terminated and the cloud volumes are deleted to save costs.
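
The create-use-delete lifecycle in that pattern can be sketched as a wrapper that guarantees cleanup. The three functions below are hypothetical stand-ins for the create, job, and delete calls; the point of the shape is that the scratch volume is deleted even when the job exits with an error:

```shell
#!/bin/bash
# Sketch of the ephemeral-scratch pattern: create a volume, run the job,
# always delete the volume afterwards. create_volume and delete_volume are
# hypothetical stand-ins for the API calls shown elsewhere in this post.

create_volume() { echo "created $1"; }
delete_volume() { echo "deleted $1"; }

with_scratch_volume() {  # args: job_command [job_args...]
  local vol="scratch-$$"
  local rc
  create_volume "$vol"
  "$@" "$vol"           # run the job against the scratch volume
  rc=$?
  delete_volume "$vol"  # clean up even if the job returned nonzero
  return $rc
}
```

A job then runs as `with_scratch_volume my_ml_job`, where my_ml_job is whatever command mounts the volume and does the work.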

You can use the example script to delete volumes. The following example deletes the cloud volume ‘test’ in region eu-west1.

./ -m test -c eu-west1.conf

Of course, deleting a volume or reverting a snapshot is destructive, so proceed with caution.

Note: Cloud volume API keys are unique to each user and are available only to privileged AWS IAM users.


The NetApp Cloud Volumes Service for AWS is a fully managed service for AWS customers. Every feature that is available through the web user interface is available as a RESTful API, for programming tasks such as creating volumes, cloning, making snapshots, and changing performance levels.

RESTful APIs can be called by modern programming languages, so it’s easy to include cloud volumes in custom scripts and for partners to integrate into their applications.

To learn more, watch this short video, “Using Service Levels to Meet Business Needs and Lower Costs in NetApp Cloud Volumes Service.”