
Get the Most Out of Your Oracle Databases in Cloud Volumes Service for AWS

July 16, 2019

Topics: Database | 9 minute read

Cloud adoption among enterprises is unfolding rapidly, with many adopting a cloud-first strategy for new projects and migrating their existing systems from on-premises to AWS. Oracle workloads are mission-critical for most enterprises and feature prominently in enterprise cloud migration discussions. This architecture piece provides a brief overview of Oracle databases and a reference architecture for deploying Oracle on Cloud Volumes Service (CVS) for AWS, and it explains the benefits of running Oracle databases on CVS for AWS.

If you’re running Oracle databases on-premises and are seeking a similar data management solution in the cloud, there are a few options. When moving your database to the cloud, you’ll also want to keep in mind high performance, data protection, data durability, encryption, and high availability. NetApp has created a one-stop cloud storage solution by partnering with AWS.

A One-Stop Storage Solution for Cloud

With Cloud Volumes Service for AWS, you can run a high-performance database with maximum data protection. Underlying that protection is NetApp Snapshot™ technology, which offers a crucial option for rapid, efficient database backup and restoration. By design, CVS for AWS provides nine 9s of data durability.

With consistently high performance of over 200k IOPS, Cloud Volumes Service provides shared persistent storage with high throughput and low latency. It easily meets the demands of large Oracle databases, with SLAs that guarantee performance.

Note: Oracle Database licensing on AWS is based on the size of the instance on which the database is installed. For information about Oracle Database licensing, see Licensing Oracle Software in the Cloud Computing Environment on the Oracle website.

Increase the Resilience of Oracle Databases with Snapshot Copies

You can easily create a snapshot copy of an Oracle database using NetApp Snapshot technology. Snapshot copies act as logical backups. They’re point-in-time representations of your data, with a rapid revert function that allows you to restore your database without downtime. You can create snapshot copies manually or schedule their creation using the Cloud Volumes Service API or graphical user interface (GUI); rapid revert is available only from the API.

Snapshot copies are fast, plentiful, and nondisruptive. A NetApp Snapshot copy simply manipulates block pointers, creating a “frozen” read-only view of a volume that enables your applications to access older versions of files and directory hierarchies without special programming. Snapshot copy creation takes only a few seconds (typically less than 1 second) regardless of the size of the volume or the level of activity within the environment. Since they are read-only, incremental copies, you only pay for the space consumed by new data written.  
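
To make that concrete, here is a minimal Python sketch of snapshot creation through the REST API (the full API setup appears in the code section at the end of this post). The /Snapshots sub-resource and payload fields shown here are assumptions modeled on the FileSystems examples below, so confirm the exact resource names against the CVS API documentation.

import requests

# API endpoint and credentials, as in the code section at the end of this post
BASE_URL = "https://cv.us-west-1.netapp.com:8080/v1"
HEADERS = {
    'content-type': 'application/json',
    'api-key': "Supply your CVS API key here",
    'secret-key': "Supply your CVS Secret key here"
}

def create_snapshot(file_system_id, snapshot_name):
    # Hypothetical sketch: the /Snapshots sub-resource and payload
    # fields are assumptions, not confirmed API details.
    url = BASE_URL + "/FileSystems/" + file_system_id + "/Snapshots"
    payload = {"name": snapshot_name}
    response = requests.post(url, json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Example: take a snapshot copy of the Oracle data volume before a batch load
# create_snapshot("<fileSystemId>", "pre-batch-load")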

Cloud Backup Service Expands Data Protection Capabilities

Cloud Backup Service (CBS) is now fully integrated into Cloud Volumes Service for AWS. It’s a simple and efficient way to back up Oracle databases.

Cloud Backup Service expands the data protection capabilities of Cloud Volumes Service by delivering dedicated backups for long-term recovery, archive, and compliance. Backups created with CBS are stored in Amazon S3 object storage and are independent of snapshot copies, which remain available for near-term recovery and rapid cloning.

Speed Up Time to Market With Fast Copy 

Most organizations need multiple copies of data for testing and development. Oracle landscapes are littered with system copies for a variety of uses, and creating and refreshing those copies is typically a time-consuming, tedious process. Cloud Volumes Service for AWS allows you to quickly copy and back up database files, drastically improving the process of copying, backing up, and reverting. The process takes almost no time, which ultimately leads to lower costs by way of a quicker time to market.
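
As an illustration of what that fast copy might look like through the API, the hypothetical fragment below provisions a new volume seeded from an existing snapshot copy. It reuses the BASE_URL and HEADERS from the snapshot sketch above; the snapshotId field is an assumption modeled on the create example at the end of this post, so verify it against the CVS API documentation.

def clone_volume_from_snapshot(snapshot_id, clone_name):
    # Hypothetical sketch: assumes the FileSystems create payload
    # accepts a snapshotId field that seeds the new volume's contents.
    payload = {
        "name": clone_name,
        "creationToken": clone_name,   # export path for the clone
        "region": "us-west-1",
        "serviceLevel": "basic",
        "quotaInBytes": 100000000000,
        "snapshotId": snapshot_id      # assumed field
    }
    response = requests.post(BASE_URL + "/FileSystems",
                             json=payload, headers=HEADERS)
    response.raise_for_status()
    return response.json()["fileSystemId"]

# Example: spin up a test/dev copy of production in seconds
# clone_volume_from_snapshot("<snapshotId>", "oracle-testdev")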

Data durability

With Cloud Volumes Service, data is protected not just against multiple drive failures, but also against the numerous storage media errors that can harm your data durability and data integrity. And with 99.9999999% (nine 9s) durability, based on the experience of over 300,000 customers and underpinned by the product’s SLA, you don’t have to worry that your data is going to disappear.

High availability 

Built on industry-leading hardware and software, NetApp Cloud Volumes Service is characterized by high availability and uptime, both of which are enabled by architectural features such as redundant network paths, failover, and advanced data protection.
 
Because NetApp Cloud Volumes Service for AWS sits centrally in relation to each of the Availability Zones within an Amazon Web Services (AWS) region, your service is unaffected by Availability Zone outages. You can access your data from any Availability Zone within the region without having to replicate content. This availability is covered by CVS’s SLA.

Security and encryption 

NetApp Cloud Volumes Service uses at-rest encryption, relying on the XTS-AES 256-bit encryption algorithm. CVS encrypts your data without compromising your storage application performance. NetApp manages and rotates encryption keys for you; this single-source solution can increase your organization’s overall compliance with industry and government regulations without compromising your user experience.

Average cost savings of around 70% 

When you use CVS for AWS, you control your cloud performance by dynamically adjusting service levels. If you need to increase performance, you can increase the allocated capacity (for example, at the Standard level, 10TB provides 160MB/s) and/or choose a higher service level, as the short calculation after the list below illustrates.

  • The Standard service level offers very economical cloud storage, at just $0.10 per gigabyte per month. It enables throughput up to 16MB/s for each terabyte allocated. This level is ideal as a low-cost solution for infrequently accessed data.  
  • The Premium service level delivers a good mix of cost and performance. At a cost of $0.20 per gigabyte per month, it offers 4x the performance of the Standard level, with 64MB/s for each terabyte allocated. This is a good fit for many applications where data capacity and performance needs are balanced. 
  • The Extreme service level provides the best performance. At a cost of $0.30 per gigabyte per month, it enables up to 128MB/s for each terabyte allocated, and cloud volumes can scale to deliver several GB/s for reads and writes. Extreme is the best fit for high-performance workloads.
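
Because throughput scales linearly with allocated capacity, sizing is simple arithmetic. The short Python sketch below reproduces the numbers above (prices and per-terabyte throughput exactly as listed):

# Per-terabyte throughput and per-gigabyte monthly price, from the list above
SERVICE_LEVELS = {
    "Standard": {"mbps_per_tb": 16,  "usd_per_gb_month": 0.10},
    "Premium":  {"mbps_per_tb": 64,  "usd_per_gb_month": 0.20},
    "Extreme":  {"mbps_per_tb": 128, "usd_per_gb_month": 0.30},
}

def throughput_and_cost(level, allocated_tb):
    spec = SERVICE_LEVELS[level]
    throughput_mbps = spec["mbps_per_tb"] * allocated_tb
    monthly_cost = spec["usd_per_gb_month"] * allocated_tb * 1000  # 1TB = 1,000GB
    return throughput_mbps, monthly_cost

print(throughput_and_cost("Standard", 10))  # (160, 1000.0): 160MB/s for $1,000/month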

One of the unique features of NetApp Cloud Volumes Service for AWS is the capability to change performance on the fly. If the requirement is to have the Extreme performance tier for 2 hours a day and Standard performance for the rest, Cloud Volumes Service for AWS can facilitate that process through API calls or a scheduler such as cron in Linux.
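
A minimal sketch of that on-the-fly change is shown below, assuming the FileSystems endpoint accepts a PUT that updates serviceLevel (the verb and payload are assumptions modeled on the create example at the end of this post). A cron entry could run a script like this before and after the daily peak:

import requests

BASE_URL = "https://cv.us-west-1.netapp.com:8080/v1"
HEADERS = {
    'content-type': 'application/json',
    'api-key': "Supply your CVS API key here",
    'secret-key': "Supply your CVS Secret key here"
}

def set_service_level(file_system_id, service_level):
    # Hypothetical sketch: assumes PUT /FileSystems/{id} accepts a
    # serviceLevel field, mirroring the create payload shown later.
    url = BASE_URL + "/FileSystems/" + file_system_id
    response = requests.put(url, json={"serviceLevel": service_level},
                            headers=HEADERS)
    response.raise_for_status()
    return response.json()

# Example: raise the tier before the daily peak, then drop it back afterward
# (service-level names as used by the API, e.g. "basic" in the create example below)
# set_service_level("<fileSystemId>", "extreme")
# set_service_level("<fileSystemId>", "basic")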

Detailed Architecture Design

Oracle database on single instance:

Oracle DB workload on CVS

In the architecture diagram, you can see that the Oracle database datafiles and logs are configured on Cloud Volumes Service for AWS. With the combination of backups, snapshot copies, and right-sized throughput, you can easily host your high-performance database in the cloud with maximum data protection and nine 9s of data durability.

In the diagram above, you can see: 

  • The Oracle database is configured on an Amazon EC2 instance.
  • Single or multiple cloud volumes are used as the dedicated storage for the datafiles.
  • An additional volume is dedicated to logs (archive logs, redo logs) and control files.
  • The datafile volume(s) are provisioned using the Extreme service class because that class provides the highest throughput at a manageable cost. 
  • A second volume is provisioned using the Premium service class.
  • Housing the archive logs, redo logs, and control files in the same volume is more cost-efficient than creating an individual volume for redo logs alone, and it also simplifies restoring from backups.
  • Cloud Backup Service backs data up to Amazon S3.
  • For more details on the configuration, refer to Cloud Backup Service.

The key components of the solution include: 

  • Oracle database engine 
  • Amazon EC2 instances
  • NetApp Cloud Volumes Service for AWS (storage)
  • NetApp Snapshot Technology
  • NetApp Cloud Backup Service

Oracle database high availability: 

Oracle DB workload with standby instance on CVS

In the architecture diagram above, you can see that the Oracle database data files, archive logs, redo logs, and control files are configured on Cloud Volumes Service for AWS. The setup resembles the single-instance Oracle database diagram above, except in this case it includes a standby database. That database is set up on a second EC2 instance, in a second availability zone, by replicating the primary database to the second instance. Since a single cloud volume can be used in multiple availability zones, you de facto have higher availability with a single volume connected to multiple Oracle instances. That availability increases when you have two cloud volumes.

  • In tandem with Cloud Volumes Service, the Oracle primary database is configured on an Amazon EC2 instance in the first availability zone.
  • The standby database is set up on the second EC2 instance in a second availability zone by replicating the primary database to the second instance.
  • A single cloud volume or multiple cloud volumes are used as the dedicated storage for the datafiles.
  • An additional volume is dedicated to logs (archive logs, redo logs) and control files.
  • The data volume is provisioned using the Extreme service class.
  • The other volume is provisioned using the Premium service class.
  • Housing the archive logs, redo logs, and control files in the same volume is more cost-efficient than creating an individual volume for redo logs alone, and it also simplifies restoring from backups.
  • Cloud Backup Service backs data up to Amazon S3.
  • For more details on the configuration, refer to Cloud Backup Service.

Performance 

Proven performance: 

The graph below illustrates the performance of an Oracle database on Cloud Volumes Service. We ran the benchmark with various workload mixtures and volume counts. The results were stunning.  

  • A c5.9xlarge instance can drive up to 40,000 disk IOPS, and the c5.18xlarge supports up to 80,000 disk IOPS. 
  • 250,000 file system IOPS in CVS at 2 milliseconds when using the c5.18xlarge instance.
  • 144,000 file system IOPS in CVS in fewer than 2 milliseconds when using the c5.9xlarge instance.
  • Generally speaking, 2-millisecond latencies or better are acceptable to database administrators (DBAs). CVS provides 2-millisecond latencies in most regions. Latency can vary depending on the availability zone (AZ) and the region.
  • Database administrators prefer the simplicity of a single-volume database, and 200,000 IOPS should satisfy all but the most demanding of database needs, with a simple layout. For the most demanding workloads, additional volumes enable greater throughput, as shown in the graph below. 
  • NetApp Cloud Volumes Service for AWS is uniquely positioned to take advantage of Oracle Direct NFS. The Oracle Direct NFS client spawns several network sessions to each cloud volume, according to the workload’s demands. A vast number of network sessions creates the potential for a significant amount of throughput for the database—far greater than a single network session can provide in AWS. 
  • Check out our blog on Oracle performance for more details.

    Oracle SLOB in AWS Cloud Volumes - Single AWS client (c5.18xlarge)

The Economic Benefits of Cloud Volumes Service

Cloud Volumes Service is the lowest-cost, highest-quality solution for database hosting in the cloud. You can save a significant amount of time and money by changing performance levels on demand in CVS for AWS; other cloud storage solutions recommend that you configure performance and capacity to meet peak requirements, which means paying peak prices. Database performance requirements are rarely constant; they call for a system that’s adaptable and agile. Yet other cloud solutions offer monolithic structures without quick switching between performance levels, recommending static throughput at the highest level for databases. With Cloud Volumes Service, you can use the NetApp API to change performance on the fly, or drive the service with a scheduler such as cron in Linux. That saves a lot of money.

For example, let’s say that you’re using Cloud Volumes Service and configured a volume at the Standard performance level ($0.10/GB). If you realize that you need more performance, you can update the volume with an API call or scheduler and the change happens in seconds—it’s nondisruptive to clients. It’s just as easy to revert to the lower performance tier. So instead of continually paying for peak performance, you only incur added costs for the time you used the higher performance tier.  

Think about it like this: if you have a performance-intensive workload at certain times (such as online sales transactions on Black Friday, or Uber or Lyft rides during weekend peak times), you may need a volume to perform at the Extreme level for 30TB at $0.30/GB, but only during those peak periods. If you were to run at this level all the time, it would cost $9,000/month. But with Cloud Volumes Service for AWS, when the intensive task finishes, you can quickly drop down to the Standard performance level for 160MB/s (16MB/s x 10TB) and meet the I/O needs for off-peak loads at a significantly lower cost. This performance level costs $1,000 per month (10TB at $0.10/GB). The cost savings vary, but if you run the processing-intensive workload for 20% of the time and adjust the service level accordingly, you can usually save about $6,400 each month.

Note that the formula we used to calculate savings is: $9,000 - (($9,000 x 0.2) + ($1,000 x 0.8)) = $6,400, which equals savings of more than 70%.
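
In Python, the same calculation looks like this (capacities and prices from the example above):

# Peak: 30TB at the Extreme rate ($0.30/GB); off-peak: 10TB at Standard ($0.10/GB)
peak_monthly = 30 * 1000 * 0.30       # $9,000/month if run at Extreme all month
offpeak_monthly = 10 * 1000 * 0.10    # $1,000/month at Standard

peak_fraction = 0.2                   # intensive workload runs 20% of the time
blended = peak_monthly * peak_fraction + offpeak_monthly * (1 - peak_fraction)
savings = peak_monthly - blended

print(savings)                        # 6400.0
print(savings / peak_monthly)         # ~0.71, i.e., savings of more than 70%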

Additionally, Cloud Volumes Service for AWS provides savings from: 

  • Space-efficient snapshot copies, which incur costs only for the unique data they reference. A new snapshot copy consumes as little as 4KB, yet it protects all the data in the volume.
  • High performance storage that enables you to use fewer compute instances, which saves time and results in lower EC2 costs.  
  • Support for both NFS and SMB, which enables a dataset to be shared between Linux and Windows instances.
    • Alternative solutions require an expensive and slow data copy between multiple volumes. 

Code Snippet: REST APIs

Cloud Volumes Service has REST APIs that can be called by various orchestration engines and scripting languages. Here are some example scripts that you can leverage to get started.

import requests
import json
import time

# Base URL and credentials for the Cloud Volumes Service REST API
CVAPI_BASEURL = "https://cv.us-west-1.netapp.com:8080/v1"
CVAPI_APIKEY = "Supply your CVS API key here"
CVAPI_SECRETKEY = "Supply your CVS Secret key here"

# Headers sent with every request
HEADERS = {
    'content-type': 'application/json',
    'api-key': CVAPI_APIKEY,
    'secret-key': CVAPI_SECRETKEY
}

filesystemURL = CVAPI_BASEURL + "/FileSystems"

class cvsAPI(object):
    # List all volumes (file systems) in the region
    def get_fileSystems(self):
        getResult = requests.get(url=filesystemURL, headers=HEADERS)
        print("File system listing response code : ", getResult.status_code)
        fileSystemsData = getResult.json()
        for i in fileSystemsData:
            fileSystemId = i['fileSystemId']
            name = i['name']
            print("FileSystemId : ", fileSystemId, " = VolumeName : ", name)

    # Create a volume (file system)
    def create_fileSystems(self):
        payload = {
            "name": "IAAS",
            "creationToken": "IAAS",        # export path name
            "region": "us-west-1",
            "serviceLevel": "basic",        # service level as named by the API
            "quotaInBytes": 100000000000    # 100GB allocation
        }
        postfileSystems = requests.post(filesystemURL, data=json.dumps(payload), headers=HEADERS)
        print("FileSystem created : ", filesystemURL, postfileSystems.content)
        datafilesystems = postfileSystems.json()
        time.sleep(30)  # give provisioning a moment to complete
        fileSystemId = datafilesystems['fileSystemId']
        exportname = datafilesystems['creationToken']
        self.test = fileSystemId
        self.export = exportname
        return fileSystemId, exportname

volume = cvsAPI()
volume.get_fileSystems()
volume.create_fileSystems()

Ready to Get Started?

Check out Cloud Volumes Service for AWS to learn more and sign up for a personalized demo.
