
The Importance of Defining Your Cloud-Based Disaster Recovery Strategy

Backup and disaster recovery solutions differ depending on the vendor, but all of them use similar logic and share a common goal: to provide companies with stable and easily recoverable secondary environments in case something bad happens to their primary production environments. The expansion of cloud computing is changing how companies view DR, offering high SLAs and guaranteeing the integrity of your data to a very high degree.

In this article, we will show you what to pay attention to when defining your cloud-based disaster recovery strategy and procedures, including the frequency of synchronization and the types of backup you are going to apply to specific services, and look at what NetApp's Cloud Volumes ONTAP® (formerly ONTAP Cloud) can do to enhance your cloud-based disaster recovery efforts.

Disaster Recovery in General

Backup is a protection method intended to take offline copies of your data, which you can use later to restore your system to a state from a previous point in time, in most cases on the same environment.

While backup is oriented towards helping you recover from data corruption or service disruptions, disaster recovery is a little bit different. Disaster recovery should help you recover if your whole infrastructure, or part of it, is not available. It uses backup logic as its core mechanism, but it adds more features to it. The final outcome should be a remote replica of your environment, ready to take over if your primary site is not able to function.

Based on that, you first need to differentiate your services and determine whether they need only backup or, if they are business critical, whether they need to be included in the disaster recovery strategy. Although you can build a physical disaster recovery site yourself, the public cloud offers great options for this use case. This is mainly because, instead of investing in new hardware, software, physical infrastructure, and everything else needed to set up an enterprise-level secondary site, the cloud offers everything you need to fully deploy and test disaster recovery scenarios using out-of-the-box tools paid for on a pay-as-you-go basis.

You can choose to utilize a recovery solution from almost any cloud vendor out there, including AWS disaster recovery or Azure Site Recovery. Another advantage of having a solution for disaster recovery in the cloud in place is that these cloud-based disaster recovery resources are mostly left idle: only if the worst-case scenario takes place do you have to put these services into action, and only then will you be charged for their use.

By defining your cloud-based disaster recovery strategy you can protect your workloads no matter where they are stored: on-premises, in the cloud, or in hybrid or multi-cloud environments. Additionally, you can choose from a number of ways to execute disaster recovery procedures. You can simply back up your data to cloud storage and, if the need ever emerges, create cloud machines from that data and deploy them in a working environment. This is the most cost-effective solution, but the whole logic for it relies on user-defined scripts or other additional tools, making it difficult and inefficient to administer.
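The script-based approach described above can be sketched in a few lines of Python. Here the `upload` callable is a hypothetical stand-in for whatever cloud SDK call you would actually use (for example, an S3 object upload); the directory-walking logic is the only part shown concretely:

```python
from pathlib import Path
from typing import Callable

def backup_to_cloud(source_dir: str, upload: Callable[[Path, str], None]) -> int:
    """Walk a directory tree and ship every file to cloud storage.

    `upload` is any callable that copies one local file to a remote
    object key (e.g. a thin wrapper around an S3 upload call).
    Returns the number of files shipped.
    """
    count = 0
    root = Path(source_dir)
    for path in root.rglob("*"):
        if path.is_file():
            # The remote key mirrors the local layout under source_dir.
            key = str(path.relative_to(root))
            upload(path, key)
            count += 1
    return count
```

In practice the `upload` hook would also need retry logic, integrity checks, and scheduling, which is exactly the administrative burden the article points out.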

Such scripts may also not be able to comply with your stated recovery time objective (RTO) SLA.
Another option is a more advanced disaster recovery plan that keeps compute power at a secondary site in place but turned off until a user action occurs; however, such a solution is inefficient, as all of that equipment will sit idle 99.99% of the time.

Finally, the most advanced disaster recovery procedure includes orchestrated environments that are completely synchronized with each other, typically offering zero data loss when failover happens and fast automatic recovery. This is what NetApp offers with its AWS high availability configuration for Cloud Volumes ONTAP.

Carrying Out the Plan

After defining what you want to protect and how you are going to recover, you need to plan additional parameters. Scheduling defines how often your disaster recovery sets will replicate to the secondary location, which directly affects your recovery point objective (RPO), or in other words, how much data loss your service can tolerate during failover.

Of course, for dynamic workloads you would want to schedule replication to take place as frequently as possible; for data sets that don’t change often your schedule will be less intensive.
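The relationship between the replication schedule and the RPO can be sketched directly. This is a simplifying model (it assumes asynchronous, scheduled replication and ignores transfer time, so the worst-case data loss is one full sync interval):

```python
from datetime import timedelta

def worst_case_data_loss(replication_interval: timedelta) -> timedelta:
    """With scheduled asynchronous replication, the worst case is a
    failure just before the next sync: you lose up to one full interval."""
    return replication_interval

def meets_rpo(replication_interval: timedelta, rpo: timedelta) -> bool:
    """A schedule satisfies an RPO only if the sync interval is no
    longer than the data loss the service can tolerate."""
    return worst_case_data_loss(replication_interval) <= rpo
```

For example, a 15-minute sync schedule satisfies a one-hour RPO, while a four-hour schedule does not.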

You will also define how many point-in-time copies you want to keep and for how long; keeping more copies increases your cloud storage consumption, and therefore your costs.
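A back-of-the-envelope sketch of how retention settings drive storage consumption. The linear model (one full copy plus an incremental delta per retained point in time) and the numbers used in the example are illustrative assumptions, not vendor figures:

```python
def retention_storage_gb(base_copy_gb: float, delta_per_copy_gb: float,
                         copies_kept: int) -> float:
    """Rough capacity for a snapshot-style retention chain: one full
    copy plus the changed-data delta for each retained point in time."""
    return base_copy_gb + delta_per_copy_gb * copies_kept

def monthly_storage_cost(total_gb: float, price_per_gb_month: float) -> float:
    """Simple linear storage cost: capacity times unit price."""
    return total_gb * price_per_gb_month
```

With a 1 TB base copy, 20 GB of daily change, and 30 retained copies, you would hold roughly 1,600 GB; at an assumed $0.03/GB-month that is about $48 per month.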

Finally, a disaster recovery strategy needs to define the proper method for taking copies of your machines, one that is aligned with the services you are using. For some services, such as Active Directory Domain Services, you can choose to save only the system state, which is enough data to recover the primary environment and is storage efficient.

For other services, such as file servers, you can replicate plain copies of shared files and folders and recover them by attaching the data to a new machine. However, when replicating complex services such as databases or enterprise applications, you will want to be sure that you create application snapshots that are consistent and that the application will work properly after recovery.

This requires your replication solution to be completely aware of the applications running inside the machine, which is the most reliable way of replicating enterprise workloads. This opens the door to more effective backup and disaster recovery scenarios where you can recover an entire service, or even part of it, such as individual Exchange mailboxes.

Disaster Recovery with Cloud Volumes ONTAP

To replicate applications from a primary site to a cloud-based secondary site for disaster recovery, companies will most likely choose an enterprise-grade solution able to efficiently manage replication between data environments, keeping storage costs optimized while protecting environmental consistency. Cloud Volumes ONTAP is NetApp's premier data management software for hybrid storage environments, running on top of Azure storage or AWS storage services and extending their possibilities with features that make it easier to overcome possible replication obstacles.

SnapMirror® is Cloud Volumes ONTAP's built-in replication technology. You can leverage SnapMirror for disaster recovery by replicating your storage from the primary to the secondary site while offloading the replication process from hosts to storage. This way you ensure a fully synchronized mirror site of your environment in the cloud, waiting for failover if it's ever needed. With SnapMirror, this solution is completely flexible, whether your secondary site is on Azure or AWS. The system tracks changes and moves data at the block level, keeping network bandwidth to a minimum and saving time.

SnapMirror's efficiency comes from two points: it only replicates the delta, not all the data, and since all the replicated data is also compressed and deduplicated, the data stays compact and each sync consumes less network traffic, lowering costs. Synchronization cycles can also run according to your own pre-defined schedules, which saves further time.
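The transfer savings can be illustrated with a toy estimate. The compression and deduplication ratios below are placeholder assumptions for illustration, not measured SnapMirror figures:

```python
def sync_transfer_gb(changed_gb: float, compression_ratio: float,
                     dedupe_ratio: float) -> float:
    """Estimate data on the wire for one sync cycle: only the changed
    blocks move, shrunk further by compression and deduplication.
    Ratios are fractions of data remaining (e.g. 0.5 means 2:1)."""
    return changed_gb * compression_ratio * dedupe_ratio
```

For example, 100 GB of changed blocks with assumed 2:1 compression (0.5) and 20% dedupe savings (0.8) would move roughly 40 GB, instead of re-sending the full data set.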

NetApp Cloud Volumes ONTAP’s data compression, deduplication, and thin provisioning storage efficiencies make it possible to minimize the amount of storage you consume and avoid unnecessary storage costs for DR.

But Cloud Volumes ONTAP’s most impressive savings for disaster recovery storage come with the use of data tiering. Available for use with both AWS and Azure, data tiering can tier an entire DR copy to a capacity tier on Amazon S3 or Azure Blob and automatically bring the site back up to the performance tier on Amazon EBS or Azure Premium or Standard disks when it is needed. That can cut storage costs for the secondary site down to as low as $0.03 per GB per month.
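A rough comparison of the two tiers' monthly costs. The $0.03/GB-month capacity-tier figure comes from the article; the $0.10/GB-month performance-tier price is a ballpark assumption added for illustration:

```python
def dr_storage_cost(copy_size_gb: float, months: int,
                    capacity_price: float = 0.03,      # per the article
                    performance_price: float = 0.10    # assumed ballpark
                    ) -> dict:
    """Compare the cost of keeping a DR copy on an object-storage
    capacity tier vs. block-storage performance disks."""
    return {
        "capacity_tier": copy_size_gb * capacity_price * months,
        "performance_tier": copy_size_gb * performance_price * months,
    }
```

A 1 TB DR copy parked on the capacity tier for a month would cost about $30 under these assumptions, several times less than keeping it on performance disks that sit idle.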

Another Cloud Volumes ONTAP feature that aids in DR is cloning with NetApp FlexClone® technology. FlexClone allows you to take a snapshot and make your data volume writable with zero capacity penalty in almost zero time. These flexible and writable copies of environments can be used to provision DR test environments and to validate DR procedures without affecting production environments and ongoing replication processes.

With NetApp's efficient Snapshot® technology you can keep multiple copies on hand and pick the point in time you want to recover to, preserving application data and ensuring a completely consistent application state.

To predict the costs of running Cloud Volumes ONTAP you can use NetApp's Azure calculator or AWS calculator to analyze virtual machine scale, storage size, performance levels, and storage efficiency rates. It's easy to see how Cloud Volumes ONTAP's enterprise-grade features can keep your cloud storage costs for disaster recovery as low as possible.

Summary

While cloud vendors will provide you with the ability to run disaster recovery, those cloud-based disaster recovery solutions by default are not storage efficient.

NetApp Cloud Volumes ONTAP's industry-leading storage efficiency features use less space and help you spend less on DR storage, no matter which major cloud provider you choose. To try Cloud Volumes ONTAP as part of your disaster recovery strategy, register for a free 30-day trial on Azure or on AWS.
