Our test indicates the performance a customer could achieve with one or more clients against a single instance of Cloud Volumes Service for AWS. The test reveals that the maximum I/O a single client can drive against a single Cloud Volumes Service instance is ~60,000 IOPS, whether for 1K, 4K, or 8K random reads, and ~50,000 down to ~40,000 IOPS for the corresponding writes. With multiple clients, however, a single instance of Cloud Volumes Service can deliver more than 200,000 IOPS.
Our sequential test used the exact same clients and procedures. In this case, the maximum I/O a single client is able to drive is roughly 4-4.5Gbps for reads or writes. As in the previous example, a single client is not able to drive the maximum capability of a Cloud Volumes Service volume, which can achieve around 2,250MBps at a 16K block size and over 3,000MBps at 32K and 64K block sizes.
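As a quick consistency check (our own arithmetic, not part of the published results), the ~60,000 random-read IOPS ceiling at an 8K block size works out to roughly the same wire rate as the 4-4.5Gbps single-client sequential ceiling, which suggests both limits are bounded by the single client's network path:

```python
# Sanity check: convert the single-client IOPS ceiling to a line rate
# and compare it with the single-client sequential ceiling (~4-4.5Gbps).
KIB = 1024

def iops_to_gbps(iops, block_bytes):
    """Convert an IOPS figure at a given block size to Gbps on the wire."""
    return iops * block_bytes * 8 / 1e9

# ~60,000 random-read IOPS at an 8K block size:
random_read_gbps = iops_to_gbps(60_000, 8 * KIB)
print(f"60k IOPS @ 8K ~= {random_read_gbps:.2f} Gbps")  # ~3.93 Gbps
```

At smaller block sizes the same IOPS ceiling consumes far less bandwidth, which is consistent with the IOPS limit, not throughput, being the binding constraint for 1K and 4K random I/O.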
For a deeper understanding of how to best run Cloud Volumes Service on AWS, check out our three-part blog series on the subject.
The graphs compare performance when using EBS and when using NetApp Cloud Volumes Service for AWS. The graph on the right shows that Oracle is able to drive 250,000 file system IOPS at 2 ms latency when using the c5.18xlarge instance and a single volume provisioned from Cloud Volumes Service, or 144,000 file system operations at below 2 ms latency using the c5.9xlarge.
The graph to the left provides more performance examples of how Oracle workloads behave on Cloud Volumes Service for AWS when
For more information, read Oracle Performance and Storage Comparison in AWS: Cloud Volumes Service, EBS, EFS.
The graph on the right shows performance when using Amazon S3 and NetApp Cloud Volumes Service for AWS (service levels Standard and Premium). It shows that Spark is able to achieve an average throughput of 3,100MB/s against a single Cloud Volumes Service volume when
Although the price of the Premium service level ($0.20/GB/month) is higher than both the Standard service level ($0.10/GB/month) and the upfront costs of Amazon S3 (capacity + egress), the increased bandwidth results in both an overall cost reduction and improved run time, making the Premium service level more cost-efficient overall.
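The cost argument can be made concrete with a back-of-the-envelope model. All figures below other than the per-GB service-level prices quoted above are hypothetical placeholders, not measured values; the point is only that when higher bandwidth shortens run time, the compute savings can outweigh a higher storage rate:

```python
def monthly_job_cost(runtime_hours, runs_per_month, compute_per_hour,
                     capacity_gb, storage_per_gb_month):
    """Total monthly cost: compute time for all runs plus storage capacity."""
    compute = runtime_hours * runs_per_month * compute_per_hour
    storage = capacity_gb * storage_per_gb_month
    return compute + storage

# Hypothetical cluster: 10 TB of data, 30 runs/month, $20/hr of compute,
# with the Premium service level halving run time via higher bandwidth.
standard = monthly_job_cost(runtime_hours=4.0, runs_per_month=30,
                            compute_per_hour=20, capacity_gb=10_000,
                            storage_per_gb_month=0.10)  # $0.10/GB/month
premium = monthly_job_cost(runtime_hours=2.0, runs_per_month=30,
                           compute_per_hour=20, capacity_gb=10_000,
                           storage_per_gb_month=0.20)   # $0.20/GB/month
print(standard, premium)  # 3400.0 3200.0
```

Under these assumed inputs, Premium comes out cheaper overall despite the 2x per-GB price, and it also finishes the job in half the time.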
API costs make up a large portion of the Amazon S3 price. GET requests for Standard Access Tier are priced at $0.0004 per 1,000, so the cost of continuously using Amazon S3 for primary analytics clusters can add up to ~$170,000 annually.
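To see how API charges alone can reach that order of magnitude, we can invert the pricing (our own back-calculation; the source does not state a request rate):

```python
GET_PRICE_PER_1000 = 0.0004  # S3 Standard Access Tier GET pricing quoted above
annual_api_cost = 170_000    # the ~$170,000 annual figure cited above

# How many GET requests does $170,000/year imply at $0.0004 per 1,000?
requests_per_year = annual_api_cost / GET_PRICE_PER_1000 * 1000
requests_per_second = requests_per_year / (365 * 24 * 3600)
print(f"{requests_per_year:.3g} GETs/yr ~= {requests_per_second:,.0f} GETs/s")
```

That works out to roughly 13,500 sustained GET requests per second, a plausible rate for primary analytics clusters reading continuously from object storage.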
Read this in-depth blog on Spark performance using Cloud Volumes Service for AWS.
The graph to the right shows the performance of a MySQL workload running on c5.18xlarge. Running against a single Cloud Volumes Service instance, we were able to generate close to 25,000 IOPS at 4 ms latency, and 22,000 IOPS at 3 ms latency.
The graphs demonstrate how Cloud Volumes Service for AWS and EFS compare when running random and sequential workloads.
Elastic File System: Maximum 250MB/s per-instance throughput
Cloud Volumes: Maximum 1GB/s per-instance throughput (512MB/s read + 512MB/s write)
Elastic File System: Maximum 7,000 IOPS per volume (as documented by AWS)
Cloud Volumes: Maximum ~200,000 IOPS per volume (as tested)
Here are the observed latencies in milliseconds between Amazon EC2 instances and Cloud Volumes Service for AWS in the regions where the service is available today.
Cloud Volumes Service for AWS was tested against competing products for moving a database designed for genomic workloads to the cloud. The sequential read benchmark had one hour to complete, with a goal of ~10TiB/hr (~2,900MiBps). The test itself comprised 2,500 files representing 2,000TiB of content.
The throughput achieved using the Cloud Volumes Service volume was 2,887MiBps, or 9.91TiB/hr, which is 2.1x the rate of the four self-managed NFS servers and 3x that of the Provisioned Throughput EFS volumes. Cloud Volumes Service for AWS achieved these results while also providing snapshot copies, which the other options either could not provide at all or could not provide without impacting performance.
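The MiBps and TiB/hr figures quoted here are the same measurement in two units; the conversion is plain arithmetic, shown here for clarity:

```python
def mibps_to_tib_per_hour(mibps):
    """MiB/s -> TiB/hr: 3,600 seconds per hour, 1,048,576 MiB per TiB."""
    return mibps * 3600 / (1024 * 1024)

print(f"{mibps_to_tib_per_hour(2_887):.2f} TiB/hr")  # -> 9.91 TiB/hr
print(f"{mibps_to_tib_per_hour(2_900):.2f} TiB/hr")  # goal: ~10 TiB/hr
```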
While the chart indicates a throughput of 2,887MiBps, the test data shows that only a handful of the 2,500 workers took longer than the rest. In fact, most of the workers achieved a throughput of roughly 3,500MiBps.
As an additional data point, the graph on the left shows the results of the second use case: a SQL-type query across the 2,500 genomic files. A lower time to completion indicates stronger performance. Cloud Volumes Service was able to access data from 100,000 individuals in less than an hour, while also providing snapshot copies, as in the previous case.