Blog

Reducing Application Downtime with Disaster Recovery Testing


Disaster recovery (DR) planning was once considered a luxury, available only to companies with huge budgets. That’s no longer the case.

Today, as disasters can happen at any time, a DR plan is considered a must-have for every organization. Yet simply having a DR plan isn’t enough. You must test it to make sure it works. And testing can lead to application downtime, which disrupts operations.

This post will cover a few tactics for streamlining the disaster recovery testing process to ensure your recovery plan is strong without interrupting the flow of business.

Cloud-based Disaster Recovery

Some organizations have DR plans in place but are terrified of testing them. In a 2014 survey of 400 IT professionals in the UK, 17% of respondents said they were afraid to put their DR plans to the test because of the possibility of prolonged downtime.

Cloud-based DR is an excellent way to avoid that downtime. With cloud-based DR, physical and virtual servers are replicated off-site to the cloud. The service brings replicated environments back online in the cloud, without waiting for on-premises computing power to be restored, so the organization can keep operating.

Temporary Environments

Estimates of the cost of downtime vary widely. A 2013 survey by the Ponemon Institute put the cost of a minute of downtime at $7,900, while IDC’s 2014 report on the subject put the same minute at $1,666. Even at the lower estimate, few companies can comfortably absorb 60 seconds’ worth of downtime.
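To see how quickly those per-minute figures add up, here is a small sketch that applies the two survey estimates cited above to a hypothetical one-hour outage. The function name and the outage duration are illustrative, not from either report.

```python
# Illustrative outage-cost arithmetic using the two survey figures cited above.
PONEMON_2013_PER_MINUTE = 7_900   # USD per minute (Ponemon Institute, 2013)
IDC_2014_PER_MINUTE = 1_666       # USD per minute (IDC, 2014)

def outage_cost(minutes, per_minute_rate):
    """Return the estimated cost of an outage lasting `minutes`."""
    return minutes * per_minute_rate

# A hypothetical one-hour outage under each estimate:
print(outage_cost(60, IDC_2014_PER_MINUTE))      # 99960
print(outage_cost(60, PONEMON_2013_PER_MINUTE))  # 474000
```

Even under the cheaper estimate, an hour offline approaches six figures, which is why untested DR plans are such a gamble.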

Testing in a temporary environment means that your live environment remains unaffected. The testing team configures the temporary environment to mimic an emergency situation. The risk of testing in a temporary environment is that you might miss an important step because you’re not really in a disaster. It’s crucial to make the temporary environment as realistic as possible so that you don’t skip over something that could happen in the event of an actual calamity.
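One way to keep a temporary-environment drill honest is to script the recovery steps and record which ones pass. The sketch below is a minimal, hypothetical drill runner; the step names and checks are stand-ins for your own provisioning, restore, and validation tooling.

```python
# Minimal sketch of a DR drill runner for a temporary (staging) environment.
# All step names and checks here are hypothetical; a real drill would call
# your own restore scripts, health checks, and DNS/network validations.

def drill(steps):
    """Run each (name, check) pair and report which steps passed."""
    results = {}
    for name, check in steps:
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crashed check counts as a failed step
    return results

# Example: three stand-in checks for a staged recovery.
staging_steps = [
    ("restore_database_from_backup", lambda: True),   # stand-in for a real restore
    ("start_application_servers",    lambda: True),   # stand-in for a health check
    ("verify_dns_failover",          lambda: False),  # a step the drill caught failing
]

report = drill(staging_steps)
failed = [name for name, ok in report.items() if not ok]
print(f"{len(failed)} step(s) failed: {failed}")
```

Scripting the drill this way also documents the recovery procedure itself, which makes it easier to keep the temporary environment realistic from one test to the next.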

Unified Storage

The term “unified storage” refers to a storage system that makes it possible to run and manage different storage entities or protocols from one device. Unified storage offers enterprises several advantages over separate, single-purpose products.

First, there are fewer hardware requirements for unified storage. It doesn’t rely upon separate storage platforms. Also, unified storage is easier for IT administrators to manage because it’s a single product. Finally, unified storage can enable failover testing without costly network downtime.

Since its introduction, unified storage has evolved dramatically. Many companies trust it for all types of storage, including server and desktop virtualization, general purpose file services, and enterprise applications. Some unified storage solutions offer a range of options for business continuity.

Depending on the vendor, IT administrators may have the option of continuous mirroring to another data center, or mirroring at periodic intervals.
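The choice between continuous and periodic mirroring is largely a recovery-point trade-off: with periodic mirroring, any data written since the last mirror operation is at risk. The sketch below illustrates that arithmetic; the change rate and interval are assumed example figures, not vendor specifications.

```python
# Illustrative recovery-point arithmetic for periodic vs. continuous mirroring.
# The change rate and mirror interval below are assumed example figures.

def data_at_risk_mb(change_rate_mb_per_min, mirror_interval_min):
    """Worst-case unreplicated data if a disaster strikes just before
    the next periodic mirror operation."""
    return change_rate_mb_per_min * mirror_interval_min

print(data_at_risk_mb(50, 15))  # mirroring every 15 min: up to 750 MB at risk
print(data_at_risk_mb(50, 0))   # continuous mirroring: effectively 0 MB
```

Continuous mirroring narrows that window toward zero, at the cost of more bandwidth and tighter coupling between sites, which is why vendors offer both modes.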

NetApp builds its DR solutions on the Data ONTAP operating system, an enterprise-grade unified scale-out storage platform. Data ONTAP enables IT administrators to leverage their DR infrastructure to perform failover testing without downtime. With this solution, testing teams no longer have to sacrifice business continuity to ensure their DR plan is robust; they get the best of both worlds.
