
Enterprise Applications on Google Cloud: An Introduction to Cloud Volumes Service For GCP

NetApp and Google Cloud are proud to bring you Cloud Volumes Service for Google Cloud Platform. This blog focuses on identifying the envelope of the service—that is, what you can expect from your enterprise application when it’s built on this new and robust cloud-native NAS storage service.

This introductory blog focuses generically on the category of enterprise applications that require a Linux file service; let's call it Enterprise App X. Enterprise App X is a scale-out custom application that relies heavily on NFSv3 to give each of its many compute instances access to one or more shared filesystem resources. (SMB is available too; it will be covered in a future blog. NFSv4.1 is coming soon.) As the application architect, you're not quite sure about the application's I/O needs, except that they are large and distributed. To understand those needs, let's explore what the NetApp® Cloud Volumes Service is capable of by answering the following questions:

  • How many IOPS can Application X generate against a single cloud volume?
  • How much bandwidth can Application X consume against the same volume?
  • How much bandwidth can Application X consume in total in the GCP project?
  • What response time can Application X expect?

The results documented below come from Vdbench summary files. Vdbench is a command-line utility created to help engineers and customers generate disk I/O workloads for validating storage performance. We ran the tool in a client-server configuration with a single combined master/client instance and 14 dedicated client GCE instances, giving us a scale-out load generator.

The tests were designed to identify the limits that the hypothetical Application X might encounter, as well as to expose the response-time curves up to those limits. Therefore, we ran the following scenarios (a sketch of a comparable Vdbench setup follows the list):

  • 100% 8KiB random read
  • 100% 8KiB random write
  • 100% 64KiB sequential read
  • 100% 64KiB sequential write
  • 50% 64KiB sequential read, 50% 64KiB sequential write
  • 50% 8KiB random read, 50% 8KiB random write
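
To make the setup concrete, here is a minimal sketch of how such a scale-out Vdbench run could be parameterized. It is illustrative only: the host addresses, mount path, install location, file-set sizing, and thread count are placeholders rather than the configuration behind the published results, and the exact parameter syntax should be verified against the Vdbench documentation for your version.

    #!/usr/bin/env python3
    """Generate an illustrative Vdbench parameter file for a scale-out NFS test.

    All hostnames, paths, and sizes below are placeholders, not the values used
    for the results in this blog.
    """

    CLIENTS = [f"10.128.0.{10 + i}" for i in range(15)]   # master/client plus 14 clients
    MOUNT_POINT = "/mnt/cloudvolume"                       # NFSv3 mount of the cloud volume
    VDBENCH_HOME = "/opt/vdbench"                          # Vdbench install path on every client

    lines = []

    # Host definitions: Vdbench drives the remote clients over ssh.
    lines.append(f"hd=default,vdbench={VDBENCH_HOME},user=root,shell=ssh")
    for i, ip in enumerate(CLIENTS, start=1):
        lines.append(f"hd=hd{i},system={ip}")

    # File system definition: the shared file set that every client works against.
    lines.append(f"fsd=fsd1,anchor={MOUNT_POINT}/vdbench,depth=1,width=8,files=32,size=64m")

    # One of the six scenarios: 100% 8KiB random read, spread across all hosts.
    # The other scenarios vary operation=, fileio=, and xfersize= (e.g. 64k sequential).
    lines.append("fwd=fwd1,fsd=fsd1,host=*,operation=read,fileio=random,xfersize=8k,threads=16")

    # Run definition: uncapped rate, with the file set created before the run.
    lines.append("rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=600,interval=5")

    with open("cvs_nfs_test.txt", "w") as f:
        f.write("\n".join(lines) + "\n")

    print("Wrote cvs_nfs_test.txt; run it with: "
          f"{VDBENCH_HOME}/vdbench -f cvs_nfs_test.txt -o output")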


Before going any further, let's talk about the environment we used in our tests.  

The Region


All tests were conducted in the us-central1 GCP region.

Project-Level Bandwidth


At the project level, GCE instances are currently provided approximately 13Gbps of redundant bandwidth (26Gbps usable in total) with which to access the Cloud Volumes Service. This limit may be raised in the future.

GCE Instance Bandwidth


To understand the bandwidth available to a GCE instance, first note that inbound and outbound (read and write) rates are not the same.

Per GCE instance:


  • Writes to the Cloud Volumes Service are rate limited by GCP at 3Gbps.
  • Reads from the Cloud Volumes Service are not rate limited. Although rates of roughly 3Gbps can be anticipated, testing has shown that up to 6Gbps can be achieved.

Each GCE instance must have enough bandwidth in and of itself to achieve these numbers. Why trust the documentation? Find out for yourself by using iperf3, a tool for active measurement of the maximum achievable bandwidth on IP networks.

GCE instances of type n1-highcpu-16 were used for most of the testing described in this blog. Running iperf3 between two n1-highcpu-16 instances shows that this machine type has roughly 5Gbps of bandwidth.
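
As a rough sketch of how that check can be scripted, the snippet below runs iperf3 against a peer instance that is already running an iperf3 server (started with iperf3 -s) and reads the JSON summary. The peer address, test duration, and stream count are placeholder assumptions, not the exact invocation used for the numbers above.

    #!/usr/bin/env python3
    """Measure instance-to-instance bandwidth with iperf3 using its JSON output."""
    import json
    import subprocess

    PEER = "10.128.0.12"  # placeholder: internal IP of the second n1-highcpu-16 instance

    def measure(reverse: bool = False) -> float:
        """Run a 30-second iperf3 test and return the receive rate in Gbps."""
        cmd = ["iperf3", "-c", PEER, "-t", "30", "-P", "4", "-J"]
        if reverse:
            cmd.append("-R")   # peer sends, we receive (tests the other direction)
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        result = json.loads(out.stdout)
        return result["end"]["sum_received"]["bits_per_second"] / 1e9

    if __name__ == "__main__":
        print(f"outbound: {measure():.2f} Gbps")
        print(f"inbound:  {measure(reverse=True):.2f} Gbps")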

Volume-Level Bandwidth


Volume network bandwidth is based on a combination of service level and allocated capacity. However, the total bandwidth available to all volumes is potentially constrained by bandwidth made available to the project. Bandwidth calculations work as shown in the following table.

[Table: volume bandwidth by service level and allocated capacity]

For example, although two volumes allocated 10Gbps of bandwidth each may both operate unconstrained, three volumes allocated 10Gbps each are constrained by the project and must share the total bandwidth.
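
To make that sharing behavior concrete, here is a minimal sketch of the arithmetic. It assumes the shortfall is shared proportionally across volumes; the blog states only that the volumes must share the project bandwidth, so the exact sharing behavior may differ.

    # Per-volume bandwidth under the ~26Gbps project-level limit described above.
    PROJECT_LIMIT_GBPS = 26.0

    def effective_bandwidth(allocations_gbps):
        """Return per-volume bandwidth, scaling down proportionally if the
        sum of allocations exceeds the project limit (sharing model assumed)."""
        total = sum(allocations_gbps)
        if total <= PROJECT_LIMIT_GBPS:
            return list(allocations_gbps)        # all volumes run unconstrained
        scale = PROJECT_LIMIT_GBPS / total
        return [a * scale for a in allocations_gbps]

    print(effective_bandwidth([10, 10]))         # [10, 10] -> both unconstrained
    print(effective_bandwidth([10, 10, 10]))     # ~[8.67, 8.67, 8.67] -> project constrained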

Test Results


NFSv3 Workloads: IOPS

The following graph demonstrates the amount of random I/O a customer could expect to achieve with multiple clients against a single GCP cloud volume. The test reveals that the maximum I/O in this scenario is ~242k 8KiB IOPS.

[Graph: 8KiB random IOPS against a single cloud volume, peaking at ~242k IOPS]

NFSv3 Workloads: Throughput

While the previous graph documents the random I/O potential of a single cloud volume, this next graph does the same for sequential workloads. The tests were run in a similar manner, using one to many CentOS 7.5 Linux GCE instances as the Vdbench workers. In this case, the maximum bandwidth that could be consumed represents the total available to the project. As stated in the "Project-Level Bandwidth" section, a project presently has ~26Gbps of total bandwidth, which equates to roughly 3,300MB/s, as shown below.
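
For reference, the conversion from the project-level limit to the throughput ceiling shown in the graph is straightforward (decimal units assumed):

    project_gbps = 26                    # approximate project-level limit
    mb_per_s = project_gbps * 1000 / 8   # gigabits to megabytes per second
    print(mb_per_s)                      # 3250.0, roughly the ~3,300MB/s plotted below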

[Graph: sequential throughput against a single cloud volume, reaching ~3,300MB/s]

Up to ~225,000 IOPS at < 2ms Latency

Applications may benefit from the excellent network latency seen across the board in GCP. Running from within the us-central1 region, we achieved ~168,000 8KiB random read IOPS at less than 2ms of latency and ~225,000 IOPS just past the 2ms point.

[Graph: 8KiB random read IOPS versus latency in us-central1]


Bandwidth in Summary


Collectively, the GCE instances in a GCP project have roughly 26Gbps of aggregate bandwidth to the Cloud Volumes Service (this limit may be raised in the future). Individually, each GCE instance may read at between 3Gbps and 6Gbps; writes are constrained to 3Gbps. As the application architect, you allocate enough volume bandwidth to meet the needs of Application X, whether you scale up or scale out, within the constraints of the environment just described.

Request a Demo


Sign up now to schedule your personal Cloud Volumes Service for GCP demo