
Scaling Kubernetes Persistent Volumes with Cloud Volumes ONTAP

For stateful workloads that need data persistence beyond the lifecycle of the container, a scalable and robust storage management solution is a must. But native cloud storage solutions may not be able to scale to the level many enterprises require for Kubernetes storage.

Cloud Volumes ONTAP—the data management platform from NetApp—provides a solution, offering a robust and petabyte-scale storage solution for Kubernetes deployments in the cloud. In this blog we’ll explore some of the container scalability challenges for stateful Kubernetes workloads and see how Cloud Volumes ONTAP can help solve them.


Container Storage Scaling Challenges

As is the case with any other deployment, the scalability of an application running in a container depends on the scalability of its storage layer. Here are some of the challenges that users may run into when trying to scale out Kubernetes storage.

Cloud provider size limits

While the cloud has generally been billed as offering limitless storage, that may not always be the case. When using native cloud storage for containers, the quotas and limitations of the specific services can become a bottleneck. Each native file share service on AWS, Azure, and Google Cloud imposes a maximum size limit on the amount of persistent storage you can allocate.

Volume resizing

Volume resizing is also a challenge, since it depends on the scalability limits mentioned above. Provider-specific limitations can come into the picture as well. For example, in Azure Kubernetes Service, volume resizing is not supported by the built-in storage classes that use Azure disks in the backend; customers need to opt for a custom storage class to overcome this limitation.
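
As a rough sketch of what such a workaround looks like, a custom storage class along these lines enables expansion for Azure disk-backed volumes. The class name and skuName here are illustrative; check your cluster's CSI driver documentation for the exact parameters it supports.

```yaml
# Hypothetical custom storage class for AKS that permits volume expansion.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-resizable   # illustrative name
provisioner: disk.csi.azure.com # Azure Disk CSI driver
allowVolumeExpansion: true      # the key setting the built-in classes lack
parameters:
  skuName: Premium_LRS          # example disk SKU
reclaimPolicy: Delete
```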

Storage capacity management

Storage capacity management can also become an overhead, as you need to account for both native cloud storage limits and cloud service provider-specific limitations. If multiple storage types are in use (for example, both disks and file shares), each has to be evaluated separately when planning your containerized applications' storage capacity.

This could also impact the speed and agility at which these configurations can be managed. You might need a different automation approach for different types of storage depending on the cloud service provider.

Unified administration

Last but not least is the added complexity users face when containerized workloads are deployed in multicloud or hybrid cloud environments. Such architectures can force cloud administrators to constantly switch between multiple cloud consoles and automation tools to manage the container storage layer. The process is cumbersome, with no unified approach to managing persistent storage for containerized workloads across these complex architectures.

Native Cloud Provider Container Storage Scaling Considerations

While there are multiple options for using cloud native storage services for Kubernetes persistent volumes, there are certain inherent limitations that you should take into account when using them.

Size restrictions

Disk-based block storage such as Azure disks, AWS EBS, and Google Persistent Disk can be configured as persistent volumes for containerized workloads. However, there is a limit on the number of disks that can be attached to specific VM SKUs. Each cloud provider’s service also has its own limit on the maximum size of disk that can be attached.

The maximum supported disk size, combined with the maximum number of disks supported by different VM SKUs, caps the total storage capacity a single VM can support. This capacity is currently 257 TB for Google Cloud, 336 TB for AWS, and 256 TB for Azure. This can become a bottleneck if your applications require storage to scale beyond these limits and into the petabytes.

Complexity

Using storage other than local storage for containers is not a straightforward process. To begin with, you have to deal with multiple storage classes and PV configuration files. Native cloud-specific configurations should also be taken into consideration. For example, mounting an Amazon FSx for Windows File Server or FSx for Lustre file share on an EKS cluster involves multiple manual configuration steps, including installing additional CSI drivers before configuring the persistent volume.
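
To give a sense of the boilerplate involved, the sketch below shows a statically provisioned persistent volume plus a claim that binds to it. All names, the volume ID, and sizes are illustrative; a real setup also requires the matching CSI driver to be installed on the cluster.

```yaml
# Illustrative static PV/PVC pair for a pre-created cloud disk.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com                  # cloud-specific CSI driver
    volumeHandle: vol-0123456789abcdef0      # example EBS volume ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  volumeName: demo-pv                        # bind to the PV above
```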

File share scaling limitations

AWS, Azure, and GCP provide options for using native managed file share services as persistent volumes for their respective managed Kubernetes services. However, the scalability limits of these services also apply to the persistent volumes created from them.

The limits for some of the commonly used file share services are as follows: 64 TB for Amazon FSx, 5 TB for Azure Files (standard tier), and 63.9 TB for GCP Filestore (Basic SSD). The premium/high-scale tiers for file share service in Azure and GCP can scale to a maximum size of 100 TB, but would incur additional cost.

Lifecycle management

Data residing on disks attached as persistent volumes may be rarely accessed by applications. This data could still be required at some point, say for audit, compliance, or reference purposes. But since it is not in active use, keeping it on block storage is wasteful given the cost of that storage tier. Cloud service providers offer no native solutions for efficient lifecycle management of data in container storage.

Higher costs

Native cloud block storage services charge based on the size of the provisioned disk. Infrequently accessed data residing on these disks adds to storage costs regardless of the access pattern: customers end up paying for storage they are not using on a day-to-day basis, which reduces the overall ROI of cloud storage.

Now that we’ve seen what some of the scaling constraints are when using native cloud provider services, let’s see how Cloud Volumes ONTAP can overcome these storage limitations to scale Kubernetes storage up to the petabyte scale.

Addressing Container Storage Scaling Challenges with Cloud Volumes ONTAP

Cloud Volumes ONTAP delivers the capabilities of the trusted NetApp ONTAP data management platform on top of the cloud-based block storage offered by AWS, Azure, and Google Cloud. It provides access to storage volumes over the iSCSI, NFS, and SMB protocols and can be configured as persistent storage for your containerized workloads.

Though Cloud Volumes ONTAP uses native cloud storage to create a virtual storage appliance, it provides the following additional benefits when used as persistent volumes for containers in the cloud.

On-demand capacity

Container storage capacity requirements can change on the fly. When using Cloud Volumes ONTAP as the storage layer for your persistent volumes, there is no need to predict capacity requirements and pre-provision volumes. Cloud Volumes ONTAP uses NetApp Astra Trident as the CSI-based storage provisioner to dynamically provision storage for your persistent volume claims.
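
A minimal sketch of what this looks like on the Kubernetes side: a storage class backed by the Trident CSI provisioner, and a claim that triggers dynamic provisioning. The class name, claim name, and sizes are illustrative, and the backendType parameter depends on how your Trident backend is configured.

```yaml
# Illustrative Trident-backed storage class for dynamic provisioning.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-nas
provisioner: csi.trident.netapp.io   # Astra Trident CSI provisioner
parameters:
  backendType: ontap-nas             # NFS-backed ONTAP backend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ontap-nas        # volume is created on demand
  resources:
    requests:
      storage: 50Gi
```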

Petabyte-scale storage

Cloud Volumes ONTAP provides an option to bypass the block storage size limitations of the cloud service providers through license stacking and storage tiering, so you can reach petabyte-scale storage capacity.

Block storage is freed up when data is tiered to object storage, which is virtually limitless. Stacking multiple ONTAP BYOL licenses also increases the overall available block storage capacity into the petabytes. For example, a single Cloud Volumes ONTAP license can support up to 368 TB due to the single-VM block storage limitation, but adding three licenses would take this up to 1.4 PB, giving you a petabyte-scale storage pool for containerized workloads.

Storage tiering

Cloud Volumes ONTAP helps overcome the storage lifecycle management challenges associated with native cloud storage services by providing an option to tier infrequently accessed data to low-cost object storage. This storage tiering feature is transparent to the application and does not impact its performance; the data remains accessible whenever required. At the same time, it drastically reduces storage costs by moving rarely accessed data to a cost-effective cloud storage tier.

Dynamic resizing

When using Cloud Volumes ONTAP as persistent storage, volumes can be expanded directly from the Kubernetes layer. The configuration is as simple as setting the allowVolumeExpansion flag to true in the storage class definition. This helps overcome the rigid limitations of persistent volumes based on native cloud storage that do not support resizing once provisioned. You can start small based on the application's requirements and later expand the volume as the data grows.
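
The flag in question sits in the storage class definition; a minimal sketch, with hypothetical names, might look like this:

```yaml
# Illustrative Trident-backed storage class with expansion enabled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-expandable
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-nas
allowVolumeExpansion: true   # permits growing bound claims later
```

With this in place, a bound claim can be grown simply by raising `spec.resources.requests.storage` on the PVC, for example via `kubectl edit pvc <claim-name>`.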

Centralized management

The NetApp BlueXP console, a SaaS-based service, provides a centralized control plane to manage your Cloud Volumes ONTAP volumes across hybrid and multicloud environments. No matter which cloud platform you use to deploy your containerized workloads, the storage layers for all of them can be controlled from this single dashboard.

You can also enable replication/data copy between the persistent volumes through SnapMirror® data replication directly from the BlueXP Console interface. This gives you a way to eliminate the hassle of switching between tools to manage different persistent storage layers. Plus, all of these capabilities can be carried out programmatically with RESTful API calls, with no need to use the GUI at all.

Configuration flexibility

The persistent volumes provisioned using Cloud Volumes ONTAP can be mounted over a protocol of your choice: iSCSI, NFS, or SMB. That means you aren't limited to local storage options. It also caters to both shared and non-shared storage requirements. iSCSI driver-based volumes can be used for non-shared storage, while NAS driver-based volumes can be used for shared storage, where multiple pods might need access to the same volume.
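
The shared versus non-shared distinction shows up in the claim's access mode. The sketch below contrasts the two cases; the storage class names are hypothetical and would map to SAN (iSCSI) and NAS (NFS) backends respectively.

```yaml
# Illustrative claim for a non-shared block volume (single-node access).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce            # one node at a time, typical for iSCSI
  storageClassName: ontap-san
  resources:
    requests:
      storage: 20Gi
---
# Illustrative claim for a shared file volume (many pods, many nodes).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany            # concurrent access over NFS
  storageClassName: ontap-nas
  resources:
    requests:
      storage: 20Gi
```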

Optimized file caching

Cloud Volumes ONTAP uses FlexCache® technology to enable faster reads of data. The proprietary caching technology used by FlexCache ensures that reads are served from a cache close to the client. This offers scalability in that you can cache data without having to replicate your entire data set.

Data cloning

Instant, zero-capacity data cloning via NetApp FlexClone® allows Cloud Volumes ONTAP users to scale their dev/test operations without worrying about drastically increasing the storage footprint and costs for storage. All clone copies are based on Snapshot images, and only delta data requires additional storage space.

Get More for Kubernetes with Cloud Volumes ONTAP

Scalability is just one of the benefits offered by Cloud Volumes ONTAP. It comes packed with additional features that deliver more value for your cloud storage investment.

Features like thin provisioning, deduplication, and compression help bring down the cost of persistent volume storage by up to 70%. Cloud Volumes ONTAP also ensures a higher level of data protection for persistent volumes through its dual-node high availability configuration and point-in-time backup snapshot copies.

No matter how large the scale, Cloud Volumes ONTAP delivers a great value proposition for your persistent storage requirements for Kubernetes workloads in the cloud.

Yifat Perry, Technical Content Manager