It’s the nightmare no business relying on shared file storage wants to go through: during peak usage time, screens start going dark for users all over the world. Nobody is sure what’s going on. And your users only know one thing for certain: they can’t access your site or applications.
Continuous availability of file share services accessed by line-of-business (LOB) applications is crucial to meeting SLAs. If file shares go down, users around the globe go down with them, which can have harmful operational and financial consequences. NetApp® Cloud Volumes Service for AWS is a highly available file share service in the AWS cloud, and it’s designed to help you avoid that nightmare. Read on to find out how Cloud Volumes Service ensures high availability and fault tolerance in the underlying storage layer.
High Availability for Cloud File Share Services
In modern application architectures, the use cases for cloud file shares extend beyond simple SMB or NFSv3 shares mounted to a machine. They now encompass other enterprise workloads such as SaaS applications, databases, disaster recovery (DR), big data analytics, and DevOps implementations. The major challenges organizations face with cloud-dependent file shares are infrastructure configuration, high availability, performance, and life cycle management. Of these, high availability is non-negotiable: the data that keeps the lights on for these workloads must be available at all times.
Consider databases. Irrespective of the database technology used, production workloads demand minimal downtime and no data loss. These stringent availability requirements often cannot be met by cloud-based database-as-a-service (DBaaS) solutions. For that matter, they’re often not possible for customer-hosted infrastructure-as-a-service (IaaS)-based databases, either.
Similarly, big data analytics use cases require you to process massive amounts of data to identify patterns and correlations that power groundbreaking applications, such as genomics. If your data isn’t available, the process of crunching large caches of data comes to a standstill, stalling productivity. Hence, the underlying cloud file share solution for these workloads should support the lowest possible recovery point objective (RPO), short recovery times, quick failover, and non-disruptive upgrades.
NetApp offers an enterprise-class cloud file share service with multiple layers of high availability built into it. Cloud Volumes Service both complements and uses the existing infrastructure components in AWS to provide fault tolerant, reliable, and highly available storage for your workloads in the cloud.
HA and Fault Tolerance in Cloud Volumes Service
Let’s take a look under the hood to see how fault tolerance and high availability work in Cloud Volumes Service for AWS. In the backend, the service uses the highly available NetApp ONTAP infrastructure configuration. This relies on the concept of high availability (HA) pairs, where the infrastructure is deployed across every Availability Zone (AZ) in an AWS region. High availability is ensured through the use of redundant network paths, data failover, and advanced data protection features. The deployment architecture of a highly available deployment for Cloud Volumes Service in an AWS region is given below.
As you can see from the diagram above, redundancy is built into every component: starting with the Amazon EC2 instances, spanning multiple Availability Zones in a region, and extending through the multiple NetApp gateway routers to the NetApp storage controllers in the backend.
This architecture ensures multiple paths to the data that resides in your file shares. In the event of planned or unplanned downtime for either controller, the partner controller takes over to keep data available to users until the affected controller is back online. This high availability configuration backs an SLA of 99.999999% durability for data volumes hosted in Cloud Volumes Service.
For cross-region high availability, Cloud Volumes Service can be integrated with NetApp Cloud Sync to synchronize data across different AWS regions. Cloud Sync is a secure and automated data synchronization solution that can be used to transfer data between source and target and keep them synchronized based on a predefined schedule. Thus, two copies of the data will always be available in two AWS regions, facilitating cross-regional high availability as well as disaster recovery. The copies in the secondary region can also serve other use cases, such as backup, source volumes for creating clones for DevOps test environments, and big data analytics. The reference architecture that can be used to implement cross-regional high availability is shown below.
Additionally, Cloud Volumes Service is integrated with NetApp Cloud Backup Service for additional data protection, point-in-time recovery, and long-term retention of your data.
More Reasons to Use NetApp Cloud Volumes Service
To read more about fault tolerance in Cloud Volumes Service for AWS, check out this blog. But there’s a lot more that this powerful service can do to meet your cloud file share needs. In addition to built-in high availability, Cloud Volumes Service also provides:
Multi-protocol support (NFSv3 and SMB) that enables data migration for both Windows and UNIX hosts without refactoring.
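As a quick illustration of that multi-protocol flexibility, a volume with dual-protocol access can be attached from both Linux and Windows clients using standard OS tooling. This is a minimal sketch: the server address (10.0.0.5), export path, and share name are hypothetical placeholders, so substitute the mount target details shown for your own volume.

```shell
# Linux host: mount the volume over NFSv3
# (10.0.0.5 and /example-vol are placeholder values)
sudo mkdir -p /mnt/example-vol
sudo mount -t nfs -o vers=3 10.0.0.5:/example-vol /mnt/example-vol

# Windows host: map the same volume as an SMB share
# (run from an elevated command prompt; share name is a placeholder)
net use Z: \\10.0.0.5\example-vol /persistent:yes
```

Because both protocols front the same underlying volume, files written from one host are visible to the other, so Windows and UNIX workloads can share data without migration or refactoring.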