According to Gartner, a hybrid storage appliance is:
“A physical or virtual storage caching system that destages local or on-site data storage to the cloud, instead of to back-end disks. In every other way, it behaves as a storage system. The term ‘hybrid’ describes the fact that the appliance combines local storage with cloud storage, usually public cloud storage that is geographically distant from the appliance.”
Virtual storage appliances (VSAs) have found widespread acceptance now that workloads have become heavily virtualized and the cloud has become the driving force in IT. And though VSAs have been around for some time, breakthrough improvements have made it possible to bridge the gap between on-premises resources and the public cloud.
These improvements have ensured that storage appliances can replace the traditional storage subsystems and make use of the almost infinite cloud storage resources for everyday workloads.
In this article, we look at some of the best practices for configuring and deploying a virtual storage appliance in Azure.
Configuring Virtual Storage Appliances
There are a number of factors that have to be addressed when configuring your VSA for deployment on Azure.
The configurations you make to the VSA’s 1) Compute, 2) Storage, 3) Networking, 4) Use cases, 5) Security, and 6) Administration and availability will all affect your overall performance and the costs associated with your deployment.
Therefore, it is critical to follow the best practices in each of these areas.
1. Compute
One of the biggest factors driving virtual storage appliance performance is the compute resource assigned to the appliance.
As the storage appliance acts as a liaison between on-premises workloads and cloud resources, meeting the minimum requirements recommended by the storage vendor is a necessity. This is true both for on-premises hypervisor resources and for sizing IaaS VMs in Azure.
For example, NetApp Cloud Volumes ONTAP (formerly ONTAP Cloud) recommends using at least DS3 or DS3_v2 machines in Azure. If you want to use the iSCSI protocol, you need to provision a DS4 machine or above. Also, compute sizing determines the storage limits of the storage appliance.
This can be a complicated point, so make sure you are familiar with all the details of IaaS VM sizing.
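As a quick sanity check, VM sizing against vendor minimums can be expressed as a simple lookup. This is a minimal sketch: the vCPU and memory figures come from Azure's public DSv2-series specs, but the minimum values passed in below are illustrative assumptions, not official vendor figures.

```python
# vCPU/RAM figures for a few Azure DSv2-series sizes (public Azure specs).
AZURE_SIZES = {
    "Standard_DS3_v2": {"vcpus": 4, "memory_gib": 14},
    "Standard_DS4_v2": {"vcpus": 8, "memory_gib": 28},
    "Standard_DS5_v2": {"vcpus": 16, "memory_gib": 56},
}

def meets_minimum(size_name: str, min_vcpus: int, min_memory_gib: int) -> bool:
    """Return True if the chosen VM size satisfies the given minimums."""
    size = AZURE_SIZES.get(size_name)
    if size is None:
        raise ValueError(f"Unknown VM size: {size_name}")
    return size["vcpus"] >= min_vcpus and size["memory_gib"] >= min_memory_gib

# e.g. an iSCSI deployment assumed to need at least a DS4-class machine:
print(meets_minimum("Standard_DS3_v2", min_vcpus=8, min_memory_gib=28))  # False
print(meets_minimum("Standard_DS4_v2", min_vcpus=8, min_memory_gib=28))  # True
```

A check like this is easy to fold into a deployment script so that undersized VMs are caught before the appliance is provisioned.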
Other things to look out for during compute-sizing with an on-premises infrastructure are:
- VM generation (in the case of Hyper-V) or virtual machine version (VMware ESX)
- Memory provisioning type (dynamic or static)
- Data disk type (fixed or dynamic)
These factors might affect the performance of the storage appliance.
2. Storage
The essential part of choosing a storage account in Azure is to follow the vendor guidelines; this will help you understand the limits of the storage account, especially when using multiple appliances with the same storage account.
One number to note is the maximum storage capacity offered by the storage appliance, because it will affect how you scale your storage accounts in Azure.
For example, Cloud Volumes ONTAP supports a maximum storage capacity of 31 TB, while a storage account in Azure supports up to 500 TB, which limits the number of instances you can place in a single storage account.
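The scaling arithmetic here is simple, using the two limits quoted above (31 TB per appliance instance, 500 TB per storage account):

```python
# How many fully provisioned appliance instances fit in one Azure
# storage account, given the limits quoted in the text.
STORAGE_ACCOUNT_LIMIT_TB = 500   # per-account capacity limit
APPLIANCE_MAX_TB = 31            # per-instance maximum capacity

instances_per_account = STORAGE_ACCOUNT_LIMIT_TB // APPLIANCE_MAX_TB
print(instances_per_account)  # 16
```

In other words, once you approach roughly 16 fully provisioned instances, you need to plan for an additional storage account.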
Follow best practices when you allocate storage for snapshot overhead and assign enough storage in restore scenarios. It is important to plan for IOPS needs based on the workload that the storage appliance supports.
For example, IO-intensive workloads such as SQL will require an iSCSI target with Premium disks as the backend storage. On the other hand, backups can use Standard disks. These are key requirements to understand so that you can choose a storage account accordingly.
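That tiering decision can be captured in a small helper. This is an illustrative mapping only, reflecting the guidance above (IO-intensive workloads on Premium disks, backup on Standard); the workload names are assumptions for the sketch, not an official classification.

```python
def recommended_disk_tier(workload: str) -> str:
    """Map a workload type to an Azure managed-disk tier.

    Assumption: only the named IO-intensive workloads need Premium;
    everything else (e.g. backup, archive) defaults to Standard.
    """
    premium_workloads = {"sql", "oltp", "iscsi-database"}
    if workload.lower() in premium_workloads:
        return "Premium SSD"
    return "Standard HDD"

print(recommended_disk_tier("SQL"))     # Premium SSD
print(recommended_disk_tier("backup"))  # Standard HDD
```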
Also, determine whether you should enable built-in deduplication and compression features (such as those in Windows Server), as they might interfere with similar features provided natively by the storage appliance.
3. Networking
In terms of bandwidth, the network requirements rely heavily on the workload, use case, and delta changes.
If the storage appliance is on-premises, you need to consider delta changes and how throttling might affect the handling of data changes. This also heavily depends on the type of workload being served.
For example, the block-based iSCSI protocol might require a much more consistent and reliable connection than workloads related to disaster recovery and backup. Also, for IOPS-driven workloads such as SQL, it might be necessary to have a dedicated connection to Azure via ExpressRoute, which provides a faster, more reliable, secure, and SLA-driven connection.
Apart from the networking resources on-premises, you also need to ensure that your Azure networking resources, such as VNets, are configured to align with all the guidelines laid down by the storage vendor.
For example, NetApp OnCommand Cloud Manager specifies that you need to have a VNet with one or more subnets that allow Internet access. NSG rules, as specified by the vendor, should also be followed so that you can plug any security holes. An example of this would be using predefined NSG rules when you deploy Cloud Volumes ONTAP via the OnCommand Cloud Manager.
Also important to keep in mind are the costs associated with egress network activity. These costs can increase rapidly when it comes to storage appliances.
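A back-of-the-envelope estimate helps keep egress costs visible during planning. The per-GB rate below is a placeholder assumption, not a real Azure price; check the current Azure bandwidth pricing page for actual rates.

```python
def monthly_egress_cost(gb_per_day: float, rate_per_gb: float,
                        days: int = 30) -> float:
    """Estimate monthly egress cost from a daily transfer volume.

    rate_per_gb is an assumed rate; substitute the current Azure
    bandwidth pricing for your region.
    """
    return gb_per_day * days * rate_per_gb

# e.g. 50 GB/day of delta replication at a hypothetical $0.08/GB:
print(round(monthly_egress_cost(50, 0.08), 2))  # 120.0
```

Even modest daily delta volumes add up over a month, which is why egress-heavy use cases such as frequent restores deserve their own cost line.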
4. Use Cases
Most of the configuration and deployment decisions for a storage appliance depend on the use case the appliance will serve.
Storage appliances with Azure can be used in a variety of scenarios, ranging from data analytics to business continuity factors, such as backup and disaster recovery. It is important to understand how your use cases or application workloads will influence the performance of the storage appliance.
These use cases warrant careful planning and proof of concept. In the case of Cloud Volumes ONTAP, this includes use of tools such as OnCommand Cloud Manager to help simplify the configuration and deployment of cloud instances, with the help of a central console for the management of all instances.
5. Security
Security for cloud-based storage appliances should be tackled from a few different directions.
The first to address is the appliance’s security, ensuring that access to the storage appliance is restricted and based on assigned roles and permissions defined by an identity management solution, such as Azure Active Directory.
This precaution will limit the type of changes that authorized users can make and also provide logging and auditing capabilities. Also, unrequired services can be shut down as an added precaution. For example, if you are running a Linux-based appliance, you might want to restrict SSH access or disable it if you only allow web-based access.
The next thing to consider is the security of data at rest. Follow the storage vendor’s best practices: encrypt all data with an encryption standard such as AES-256 and utilize a key management system, such as Azure Key Vault. Also, do not overlook the importance of encrypting data in transit using SSL/TLS certificates.
Finally, ensure that the storage account keys are regenerated from time to time so that any change of administrator does not put the whole appliance at risk.
6. Administration and Availability
Operational best practices must be maintained for the storage appliances. Ensure that the configuration of the appliance is backed up. Also, if the storage appliance serves as an iSCSI target or an NFS/SMB repository, have a backup strategy that maintains multiple copies of this data (depending on your RTO).
Monitoring the storage appliance is another aspect to keep in mind; alerts and notifications must be set up to trigger whenever the storage appliance enters an error state.
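As a minimal sketch of such an alert, the check below flags an appliance when used capacity crosses a threshold. The 85% threshold is an assumption; in practice you would wire this into Azure Monitor or the vendor’s own alerting rather than a standalone function.

```python
def capacity_alert(used_tb: float, total_tb: float,
                   threshold: float = 0.85) -> bool:
    """Return True when used capacity crosses the alert threshold.

    threshold=0.85 (85% full) is an assumed policy for this sketch.
    """
    return used_tb / total_tb >= threshold

print(capacity_alert(27.0, 31.0))  # True  (~87% used)
print(capacity_alert(20.0, 31.0))  # False (~65% used)
```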
Administration of the appliances also includes management of the existing volumes. Avoid over-provisioning, and make use of automatic shutdown tasks to help reduce costs.
Also, storage appliances need to be updated periodically and preferably through a central unified console. For example, OnCommand Cloud Manager takes care of updating Cloud Volumes ONTAP appliances based on set policies.
Finally, it is important for business continuity to have a highly available architecture in place in order to avoid costly downtime.
Use of VSAs has become a necessity in this data-driven world, where local storage coupled with public cloud storage can greatly reduce your TCO. But it is important to ensure that the storage vendor’s best practices are followed in order to see the desired results in terms of performance and TCO.
Researching case studies and receiving support from the storage vendor while deploying and configuring the storage appliances are also valuable resources as you set up your VSA to be safe, reliable, and efficient.
When configuring your virtual storage appliance for optimal performance in Azure, using NetApp Cloud Volumes ONTAP with the help of the OnCommand Cloud Manager's unified management console can make this difficult process much easier both to set up and to maintain.
Want to get started? Try out Cloud Volumes ONTAP today with a 30-day free trial.