Cloud Automation

Automating Storage Volume Provisioning with Ansible Automation Scripts and Cloud Volumes ONTAP


The fast-paced DevOps model has revolutionized how enterprises develop and release new products and services to their customers. But at a certain scale, provisioning the volumes that are needed to supply that process can be a challenge for storage admins to handle manually. This challenge can be tackled with cloud automation.

In this article, we will show how IT storage admins and managers can give users the independence and flexibility to create volumes in NetApp Cloud Volumes ONTAP easily, quickly, and automatically while keeping standards and best practices. This can be enabled through NetApp Cloud Manager APIs combined with Ansible API scripts that are activated via a dedicated provisioning portal.

We will also show an example of how to create Ansible automation scripts that embed the enterprise’s IT policies and best practices.

Faster Development = More Storage Demands

Enterprises act on massive operational levels, and as such they need to scale in order to meet demands that can change drastically over time. In order to reduce the workload and allow for faster provisioning of multiple volumes simultaneously, providing the user/requestor with the flexibility to create volumes is key.

But without the appropriate knowledge of the storage system, storage volumes, company best practices and storage policies, users can make mistakes. These errors can lead to data resilience/reliability issues, data security issues, wasted spending, or even data loss.

The solution for that in Cloud Volumes ONTAP is NetApp Cloud Manager. Cloud Manager provides the single-page view and management for your storage management. Cloud Manager can manage NetApp’s cloud products and physical storage systems using the same browser window. For developers, Cloud Manager is especially useful in cloud storage automation through APIs or using Terraform and Ansible. Cloud Manager also plays a role in managing hybrid and multicloud architectures, and DevOps processes such as automated data cloning and Kubernetes persistent volume provisioning.

Below we’ll see the role of enterprise automation in an enterprise deployment, using Ansible automation scripts to create volumes with the Cloud Manager API calls. For repetitive tasks such as volume creation, this can be more efficient compared to using a web browser, and provide a consistent, secure method for volume creation.

Scripted Volume Provisioning

Using API Calls

Our chosen tool for automated provisioning is Ansible, whose automation scripts, or playbooks, are mature and widely used in the DevOps realm for automating server and service provisioning. Note that any scripting language that can call HTTP APIs could be used here as well.

To manage this we will use the Cloud Manager API. The solution we will discuss uses a simple web portal where the user specifies, for example, the required size, location, intended use (selected from a predefined list), and the IP address(es) of the system(s) that will access the storage. The portal then generates an Ansible playbook from a template, populating all the variables, and executes the playbook to create the volume.
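The "generate from a template and populate" step can be sketched in a few lines of Python; the template fragment and field names here are illustrative stand-ins, not the portal's actual code:

```python
# Hypothetical sketch of the portal's playbook generation step: the CM_*
# placeholders mirror the capitalized variables in the playbook's vars
# section. The template fragment and field names are assumptions.
from string import Template

PLAYBOOK_TEMPLATE = Template("""\
vars:
    cm_we_id: "$CM_WE_ID"
    cm_svm_name: "$CM_SVM_NAME"
    cm_volume_name: "$CM_VOLUME_NAME"
    cm_vol_size: "$CM_VOL_SIZE"
""")

def render_playbook(user_choices):
    # substitute() raises KeyError if the portal forgot to supply a value,
    # which is safer than silently emitting an unfilled placeholder
    return PLAYBOOK_TEMPLATE.substitute(user_choices)

snippet = render_playbook({
    "CM_WE_ID": "we-1234",
    "CM_SVM_NAME": "svm_prod",
    "CM_VOLUME_NAME": "vol_ci_builds",
    "CM_VOL_SIZE": "100",
})
```

Once the rendered playbook is written to disk, the portal can simply shell out to ansible-playbook to execute it.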

Ansible has a module called "uri" which can be used to interact with a REST API. In the playbook we will authenticate with Cloud Manager, create the volume, set an export policy, and set a snapshot policy.

The Ansible Script

For this example, the playbook below targets AWS, though the same action can just as easily be performed on Google Cloud or Azure. With the capitalized variable names replaced by the user's choices, it will create the specified volume. Because the user chooses from predefined lists of options, a number of variables, such as the working environment ID, can be populated by the portal rather than discovered by Ansible logic. Find the full instructions on how to start Cloud Manager on AWS here.

Ansible Script Header

This is a standard header for an Ansible playbook, and is as follows:

name: Provides a name for the script, which can appear in debug lines.
hosts: Usually specifies the target host(s) to connect to, but since we are calling APIs it is used to reference variables that are directly associated with the host.
connection: Specifies how we connect to the target; the APIs are called by the locally running script.
gather_facts: Stops Ansible from trying to run commands on the host to gather facts.
---
- name: create netapp volume
  hosts: occm
  connection: local
  gather_facts: False

The Variables Section

As explained above, some facts can be directly associated with a host, either in the Ansible hosts file or in the host's variables file. This means that security credentials such as the username, password, or other authentication tokens do not have to be listed in the playbook.

The following are specified in the Ansible hosts file and so can simply be referenced:

cm_auth0_domain: The domain used to generate tokens from the Auth0 service.
cm_client_id: The Auth0 client ID.
cm_refresh_token: The refresh token, which is used to generate access tokens.
occm_ip: The IP address of the Cloud Manager instance.

Capitalized variables must be set by the portal when it generates the playbook. Please note that the CM_EXPORT_IPS placeholder must be replaced in the script by either a single network specification, such as "10.0.0.1/28", or multiple network specifications, such as "10.59.8.24/32", "10.0.0.0/24".
 vars:
    cm_we_id: "CM_WE_ID" 
    cm_svm_name: "CM_SVM_NAME"
    cm_export_policy: "CM_EXPORT_POLICY"
    cm_snapshot_policy: "CM_SNAPSHOT_POLICY"
    cm_volume_name: "CM_VOLUME_NAME"
    cm_vol_size: "CM_VOL_SIZE"
    cm_provider_type: "CM_PROVIDER_TYPE"
    cm_export_ips: [CM_EXPORT_IPS]
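As an aside, the substitution for the CM_EXPORT_IPS placeholder could be produced by a small helper like the following (an illustrative sketch, not the portal's actual code):

```python
# Illustrative helper: turn the networks the user entered into the quoted,
# comma-separated form that replaces the [CM_EXPORT_IPS] placeholder above.
import ipaddress

def format_export_ips(networks):
    for net in networks:
        # strict=False accepts specs like "10.0.0.1/28" that have host bits
        # set; a malformed spec raises ValueError before reaching the playbook
        ipaddress.ip_network(net, strict=False)
    return ", ".join(f'"{net}"' for net in networks)

value = format_export_ips(["10.59.8.24/32", "10.0.0.0/24"])
# substituted into the playbook as: cm_export_ips: ["10.59.8.24/32", "10.0.0.0/24"]
```

Validating the network specifications before generating the playbook gives the user immediate feedback in the portal instead of a failed playbook run.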

To authenticate with the Cloud Manager API we need to create an HTTP request body containing the authentication information, to be passed in the get-token request using the documented format. It is simplest to define the body as a variable.

    cm_auth_body:
      "grant_type": "refresh_token"
      "refresh_token": "{{cm_refresh_token}}" 
      "client_id": "{{cm_client_id}}"

We will also create the volume quote request body as a variable. The required details are either predefined or obtained from the user via the web frontend.
    cm_quote_request_body:
      "workingEnvironmentId": "{{cm_we_id}}"
      "svmName": "{{cm_svm_name}}"
      "exportPolicyInfo":
        "policyType": "custom"
        "ips": "{{cm_export_ips}}"
      "snapshotPolicyName": "{{cm_snapshot_policy}}"
      "name": "{{cm_volume_name}}"
      "providerVolumeType": "{{cm_provider_type}}"
      "capacityTier": "S3"
      "tieringPolicy": "auto"
      "verifyNameUniqueness": true
      "size":
        "size": "{{cm_vol_size}}"
        "unit": "GB"
      "enableThinProvisioning": true
      "enableDeduplication": true
      "enableCompression": true
    cm_num_disks: 5
    cm_aggr_name: "fred"


The Task Section

Now we define the tasks, which will be executed in the order they appear in the playbook.
tasks:

Get Access Token

We first request an access token, passing our authentication credentials in the request body; these are converted to JSON by the to_json filter. Make sure to notice the single space between the opening single quote and the curly braces, ' {: this stops Ansible from trying to determine the body type automatically, so the body is passed as a JSON string. The response is saved.
    - name: Obtain Access Token
      uri:
        url: https://{{cm_auth0_domain}}/oauth/token
        method: POST
        body_format: json
        return_content: yes
        body: ' {{cm_auth_body|to_json}}'
        status_code: 200,202,204
      register: token_response
      ignore_errors: no
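For readers who want to prototype the same call outside Ansible, a rough Python equivalent of the token request might look like this (the domain, client ID, and refresh token values are placeholders):

```python
# Rough, hypothetical Python equivalent of the "Obtain Access Token" task.
import json
import urllib.request

def build_auth_body(client_id, refresh_token):
    # same structure as the cm_auth_body variable in the playbook
    return {
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
    }

def get_access_token(auth0_domain, client_id, refresh_token):
    req = urllib.request.Request(
        f"https://{auth0_domain}/oauth/token",
        data=json.dumps(build_auth_body(client_id, refresh_token)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        token = json.load(resp)
    # concatenate token_type and access_token, the same string the
    # playbook later builds as token_string
    return f"{token['token_type']} {token['access_token']}"
```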

Creating an Authorization Token

Create a fact for the authorization header using the token_type and access_token fields from token_response. This token will be used for all further API calls. In Ansible, a fact behaves like a variable and can be used the same way.
    - name: Create Token String
      set_fact:
        token_string: "{{ (token_response.content|from_json).token_type }} {{ (token_response.content|from_json).access_token }}"

Get Volume Quote

Before creating a new volume, a quote must be obtained. The quote specifies the number of extra disks required and the aggregate name. The request body is the cm_quote_request_body converted to JSON. Again, the response is saved.
    - name: Get Volume Quote
      uri:
        url: "http://{{occm_ip}}/occm/api/vsa/volumes/quote"
        method: POST
        headers:
           Authorization: "{{token_string}}"
           Referer: "Ansible"
        body_format: json
        body: "{{cm_quote_request_body|to_json}}"
        status_code: 200,202,204
        timeout: 180
      register: quote_response

Volume Create Fact

From quote_response, extract the number of disks and the target aggregate name and save them as facts. Then create the dictionary fact for the actual volume create request, using the existing variables and the new facts.
    - name: Set Disks and Aggregate facts
      set_fact:
        cm_num_disks: "{{quote_response.json.numOfDisks}}"
        cm_aggr_name: "{{quote_response.json.aggregateName}}"
        cm_volume_request_body:
          "workingEnvironmentId": "{{cm_we_id}}"
          "svmName": "{{cm_svm_name}}"
          "exportPolicyInfo":
            "policyType": "custom"
            "ips": "{{cm_export_ips}}"
          "snapshotPolicyName": "{{cm_snapshot_policy}}"
          "name": "{{cm_volume_name}}"
          "iops": null
          "providerVolumeType": "{{cm_provider_type}}"
          "capacityTier": "S3"
          "tieringPolicy": "auto"
          "verifyNameUniqueness": true,
          "size":
            "size": "{{cm_vol_size}}"
            "unit": "GB"
          "enableThinProvisioning": true
          "enableDeduplication": true
          "enableCompression": true
          "maxNumOfDisksApprovedToAdd": "{{cm_num_disks}}"
          "aggregateName": "{{cm_aggr_name}}"

The Create Volume Script

Finally, the actual volume create is requested. Because the authorization token for the header and the request body have already been created, the actual API request is quite simple. The request body is converted to JSON.

It would be very simple to add some logic here based on the number of additional disks required and choose not to make the request.
    - name: Create Volume
      uri:
        url: "http://{{occm_ip}}/occm/api/vsa/volumes?createAggregateIfNotFound=true"
        method: POST
        headers:
           Authorization: "{{token_string}}"
           Referer: "Ansible"
        body_format: json
        body: "{{cm_volume_request_body|to_json}}"
        status_code: 200,202,204
        timeout: 180
      register: volume_response

Because the volume create is an asynchronous request, it takes a finite amount of time to complete. For that reason, the playbook polls an audit API in a loop until the status of the request is no longer "Received" and the volume is provisioned.

The loop will pause for 20 seconds between each API call and exit after 10 iterations, allowing approximately 3.5 minutes for the volume provisioning to complete.
    - name: Loop until volume provisioned
      uri:
        url: "http://{{occm_ip}}/occm/api/audit/{{volume_response.oncloud_request_id}}"
        method: GET
        headers:
           Authorization: "{{token_string}}"
           Referer: "Ansible"
        status_code: 200,202,204
        timeout: 180
      register: audit_response
      until: audit_response.json.0.status != "Received"
      retries: 10
      delay: 20
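The until/retries/delay construct above maps onto a simple retry loop; here is a plain-Python sketch of the same behaviour, where check_status stands in for the audit API call:

```python
# Generic sketch of Ansible's until/retries/delay polling, in plain Python.
import time

def wait_until_done(check_status, retries=10, delay=20, pending="Received"):
    """Poll check_status() until it stops returning `pending`.

    Returns the final status, or raises TimeoutError once the retry
    budget is exhausted.
    """
    for _ in range(retries):
        status = check_status()
        if status != pending:
            return status
        time.sleep(delay)
    raise TimeoutError(f"status still '{pending}' after {retries} attempts")
```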

The Final Response

Normally, little output is generated when an Ansible playbook runs. The final response status is output here to ensure that the final status is seen or logged.
    - name: Output final response
      debug: msg={{audit_response.json.0.status}}

Scripting Your Way to Enterprise-Scale Environments

To conclude, NetApp Cloud Manager API calls can be used to partially or fully automate the provisioning of storage volumes on any NetApp system, including Cloud Volumes ONTAP, using DevOps practices. Doing so can provide repeatable provisioning speed improvements as well as compliance with best practices, company policy, internal procedures, and security policies.

