As more enterprises adopt a cloud-first approach to their digital transformation journeys, performance has moved up on the agenda. Setting—and meeting—the right performance standards for the database tier of applications has become a key factor in choosing a cloud service provider.
The shared-storage architecture for databases in the cloud allows seamless access to a single set of data from a cluster of multiple low-cost servers. But its features stretch far beyond ease of use: They ultimately offer your enterprise peace of mind by providing consistently high performance, continuity, and security.
Consistent performance. Databases can be quite demanding in terms of IOPS requirements, because they’re linked to the core business transactions of an enterprise. But depending on the type of application and customer base, the requirements can vary widely. To meet those diverse demands, the underlying database storage should have multiple storage performance tiers.
Business continuity. All organizations need to prioritize protecting data from unexpected outages and data corruption, particularly their critical data stored in databases. With stringent recovery point objectives (RPOs) and recovery time objectives (RTOs) that are linked to SLAs and financial penalties, the underlying storage system should ensure business continuity through disaster recovery mechanisms such as backups and snapshots.
Storage efficiency and scalability. Databases can quickly grow in size. The cloud database storage system should be robust enough to scale to meet demand while maintaining performance standards. It should also manage the available space efficiently: Because cloud storage is charged on a pay-per-use basis, keeping the footprint minimal helps control costs.
Data security. Enterprise databases contain sensitive information that is bound by privacy and compliance standards. The storage system used by databases in the cloud should be able to secure both data at rest and data in transit as applications access it.
High availability. Databases in the cloud should be resilient against storage failures. The underlying storage system should have capabilities that support real-time replication and multiple paths to the data to ensure high database availability in the cloud.
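The business continuity requirement above is easy to state but worth making concrete: a snapshot schedule satisfies an RPO only if the gap between the current time and the most recent snapshot never exceeds that RPO. A minimal sketch of that check (the schedule and timestamps are hypothetical):

```python
from datetime import datetime, timedelta

def meets_rpo(snapshot_times, now, rpo):
    """Return True if the newest snapshot falls within the RPO window,
    i.e. no more than `rpo` worth of changes could be lost right now."""
    if not snapshot_times:
        return False
    latest = max(snapshot_times)
    return now - latest <= rpo

# Hypothetical hourly snapshots checked against a 1-hour RPO.
now = datetime(2020, 1, 1, 12, 30)
snapshots = [datetime(2020, 1, 1, h) for h in range(9, 13)]  # 09:00..12:00
print(meets_rpo(snapshots, now, timedelta(hours=1)))  # → True: newest copy is 30 minutes old
```

The same check with a 15-minute RPO would fail, which is exactly the signal that the snapshot cadence, not the storage, is the limiting factor.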
Why—and How—to Move Your Database to the Cloud
Organizations choose to adopt cloud-based shared storage for their databases for a number of reasons. Some use cases include:
Organizations that are adopting a cloud-first strategy and want to build applications whose components are born and operated entirely in the cloud
Data center exit events that require critical databases in on-premises storage to be migrated to the cloud with minimal downtime
Refreshing a legacy storage layer for databases by adopting a best-in-class, high-performance cloud storage option
Improving scalability by taking advantage of the virtually unlimited capacity offered by cloud storage on a pay-as-you-go basis
Implementing a disaster recovery strategy to meet an organization’s RPOs and RTOs for databases
You can take two paths to the cloud: by hosting your databases in virtual machines, which is known as the infrastructure-as-a-service (IaaS) model, or by adopting a database-as-a-service (DBaaS) offering from one of the major cloud service providers. You might consider DBaaS if you don’t want to be involved in managing infrastructure and prefer that your cloud service provider take care of it.
DBaaS or IaaS?
With a DBaaS, you can pick your preferred database platform and spin it up in a matter of seconds. AWS offers services, such as Amazon Relational Database Service (Amazon RDS) and Amazon Aurora, that cater to popular database platforms such as Microsoft SQL Server, MySQL, and PostgreSQL. Although DBaaS offers a number of advantages in terms of ease of deployment and management, there are a few challenges to consider:
DBaaS instances often have scalability limitations, which vary depending on the service being used. Amazon RDS, for instance, lets a database scale to only 16TB. This limitation is a matter of concern if you expect your data to grow exponentially over the years.
Interoperability in hybrid and multicloud architectures can be a challenge. For example, migrating data from an on-premises environment to the cloud or between clouds could get complicated if you’re using a DBaaS.
Existing databases might not be compatible with the DBaaS. Is data migration as simple as a backup and restore, or will you need to rearchitect before moving to DBaaS?
The performance of the DBaaS depends on the infrastructure and storage used by service providers in the back end. You might wind up having to opt for higher-priced storage tiers—or you might even resort to overprovisioning storage to get optimal performance during peak hours.
DBaaS has a different operating model from that of traditional database deployments. Database administrators face a learning curve, which might not be desirable in time-sensitive migration scenarios.
Fortunately, the IaaS model for shared database storage, delivered through Cloud Volumes Service, addresses these concerns while still letting you take advantage of the resilience, scalability, and flexibility offered by the cloud.
Ready to Use, Right out of the Box
NetApp Cloud Volumes Service offers a ready-to-use cloud-based file service for customers who opt for the IaaS model for database deployment.
Configurable performance. Cloud Volumes Service offers three performance tiers: Standard, Premium, and Extreme. Together, they offer a range of IOPS and throughput to match your database performance needs. You have the flexibility to change these tiers dynamically. You could start with one tier and then later switch over to a higher performance tier if your workload demands increase. And there’s no downtime.
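To make the tier trade-off concrete, services of this kind typically allocate a throughput budget in proportion to provisioned capacity, with a higher per-TB rate at each tier. The per-TB figures below are illustrative assumptions for the sketch, not published service limits:

```python
# Illustrative per-TB throughput allocations (MB/s per provisioned TB).
# These numbers are assumptions for the sketch, not official figures.
TIER_MBPS_PER_TB = {"standard": 16, "premium": 64, "extreme": 128}

def allocated_throughput_mbps(tier, provisioned_tb):
    """Throughput budget for a volume, proportional to provisioned capacity."""
    return TIER_MBPS_PER_TB[tier] * provisioned_tb

# Under these assumptions, a 4TB Premium volume gets the same budget
# as a 2TB Extreme volume, so tier and capacity are both levers.
print(allocated_throughput_mbps("premium", 4))  # → 256
```

This is why the ability to switch tiers without downtime matters: as workload demands grow, you can raise the per-TB rate instead of overprovisioning capacity.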
Data security. Cloud Volumes Service protects data at rest at the volume level by using AES 256-bit encryption. This protection is also FIPS 140-2 compliant, using a key that is accessible only to the storage system. Data in transit is protected through SMB protocol capabilities.
Scalability on demand. Volume capacity can be increased on demand and can scale up to 100TB. Administrators can perform this scaling from a simple UI, without having to configure the storage layer or compromise data integrity and security.
Quick onboarding and management. Cloud Volumes Service is available as a fully managed service on all leading cloud platforms, including AWS. For example, you can manage Cloud Volumes Service for AWS from the NetApp Cloud Central portal, where you can easily create a volume and make it available to your servers in just a few clicks. The same goes for activities such as creating NetApp Snapshot™ copies, creating clone copies, and managing export policies.
Migration. Cloud Volumes Service is integrated with NetApp Cloud Sync technology to allow fast synchronization of data from on-premises NFS shares or any other repository. This synchronization helps expedite data migration to the cloud.
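After a bulk synchronization like the one described above, it is common practice to verify that the source and target trees match before cutting over. A generic sketch of that verification using content checksums (independent of any particular sync tool, and not part of the Cloud Sync product itself):

```python
import hashlib
import os

def tree_digest(root):
    """Map each file's path (relative to root) to a SHA-256 digest of its content."""
    digests = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            with open(path, "rb") as f:
                digests[rel] = hashlib.sha256(f.read()).hexdigest()
    return digests

def verify_migration(source_root, target_root):
    """True when every source file exists in the target with identical content."""
    src, dst = tree_digest(source_root), tree_digest(target_root)
    return all(dst.get(path) == digest for path, digest in src.items())
```

For large database files you would hash in fixed-size chunks rather than reading each file whole, but the cutover logic is the same: migrate, verify, then repoint the database servers.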
Business continuity. Snapshot technology creates read-only, point-in-time backups of your data, meaning you can feel more secure about your disaster recovery strategy. Volumes can be re-created from these Snapshot copies and connected to target database servers to resume business as usual.
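Point-in-time recovery from read-only snapshots amounts to restoring the newest copy taken before the corruption occurred, then reconnecting the restored volume. A hypothetical sketch of that selection step:

```python
from datetime import datetime

def pick_restore_point(snapshot_times, corruption_time):
    """Return the newest snapshot strictly earlier than the corruption,
    or None if no usable snapshot exists."""
    usable = [t for t in snapshot_times if t < corruption_time]
    return max(usable) if usable else None

# Hypothetical 6-hourly snapshots; corruption detected at 14:00.
snapshots = [datetime(2020, 1, 1, h) for h in (0, 6, 12, 18)]
print(pick_restore_point(snapshots, datetime(2020, 1, 1, 14)))
# → 2020-01-01 12:00:00 (restore this copy, then reconnect the volume)
```

The gap between the corruption time and the chosen snapshot is the data you stand to lose, which is why this selection logic and the RPO discussion earlier are two views of the same constraint.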
Platform compatibility. Cloud Volumes Service supports all leading database platforms, including Microsoft SQL Server, Oracle, MySQL, PostgreSQL, and MongoDB, with ensured performance and availability standards. For example, using SMB 3.0 for Microsoft SQL Server helps build zero-downtime clusters with features such as multi-NIC support and an RDMA capability.
An Essential Tool for Database Recovery
Your critical data lives in your databases. That means it’s no less critical that your cloud storage provider offers a secure, resilient, and scalable service. Hosting databases by using Cloud Volumes Service for AWS offers a familiar, intuitive experience, with reinforced scalability, flexibility, security, and best-in-class performance for your database in the cloud.