5 Points to Consider when Running Databases in the Cloud

Databases hold data that is highly critical to any organization. Your cloud infrastructure for databases needs to take care of each and every component that your database needs, and that includes storage, CPU, RAM, network, security, compliance, licenses, data protection and backup, and more. Plus, you need to make sure all of that is cost-effective for your company.

The good news is that there is a way to do all those things by running your databases in the cloud. In this article, we will look into 5 best practices or options for running robust databases in the cloud and also take a look at how Cloud Volumes ONTAP can enhance databases in the cloud.

1. Understand the Advantages of Cloud Databases

There are many advantages to running a database in the cloud versus a traditional on-premises database. When running databases in the cloud, there are two deployment models to consider: PaaS (Platform-as-a-Service) and IaaS (Infrastructure-as-a-Service).

In the PaaS model, the public cloud provider offers a managed database service, also referred to as Database-as-a-Service (DBaaS), where most administrative tasks, such as backups, patching, infrastructure setup, SLA-backed availability, and scaling up and down, are taken care of by the cloud provider. The Database-as-a-Service AWS offers, Amazon Relational Database Service (Amazon RDS), and Azure database services such as SQL Database, Azure Database for MySQL, and Azure Database for PostgreSQL can get your database up and running in a few clicks, and also provide database management consoles for managing and monitoring your databases. Depending on your database needs, Database-as-a-Service allows you to choose between multiple size and performance options from a list of pre-configured instance types. There are some limitations with these DBaaS offerings: for example, you do not have control over the file system that your database uses, there are limits on the maximum amount of storage available, and license and storage costs are higher.
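As a rough sketch of the "few clicks" experience from the command line, a managed MySQL instance could be provisioned through the AWS CLI along these lines (all identifiers and the password below are placeholders, and the commands assume a configured AWS CLI with appropriate permissions):

```shell
# Create a managed MySQL instance on Amazon RDS with 7-day automated backups.
aws rds create-db-instance \
    --db-instance-identifier example-mysql-db \
    --db-instance-class db.t3.medium \
    --engine mysql \
    --master-username admin \
    --master-user-password 'ChangeMe123!' \
    --allocated-storage 100 \
    --backup-retention-period 7

# Poll the provisioning status until it reports "available".
aws rds describe-db-instances \
    --db-instance-identifier example-mysql-db \
    --query 'DBInstances[0].DBInstanceStatus'
```

Note how the pre-configured instance type (`db.t3.medium` here) is the main lever for size and performance, which is exactly the trade-off described above: convenient, but constrained to the provider's catalog.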

With the IaaS model, a DB can run on the infrastructure made available by the cloud service provider. The benefit to this setup is having everything under your control while still taking advantage of public cloud-based infrastructure. Some of the major benefits of running your own DB using cloud infrastructure include:

  • The flexibility to manage your databases on-premises and in the cloud from a single window.
  • Control over when updates and patches are applied to your databases.
  • No lock-in, so you can move your database between clouds or between the public cloud and on-premises.
  • A wider choice of database software than the managed services on AWS or Azure support.

2. Ensure Business Continuity

To ensure business continuity, it is important that the deployment of the database is highly available and resilient to disasters and infrastructure failures. When you are running databases in the cloud and need both high availability and compliance, you should always consider setting up your DBaaS in a Multi-AZ configuration.

With Amazon RDS, you can configure high availability and failover support for your databases. When you set up your DBaaS in Multi-AZ mode, Amazon RDS automatically maintains copies of your database, and it automatically switches from the primary node to the standby when it detects a problem. This can be extended to geographically separated regions using cross-region read replicas. Apart from this, it is also necessary to ensure that RTO and RPO levels are in line with your business objectives, especially if you are looking at restoring DB instances across regions.
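As an illustrative sketch, enabling Multi-AZ on an existing instance and adding a cross-region read replica could look roughly like this with the AWS CLI (instance identifiers, account ID, and regions below are placeholders):

```shell
# Convert an existing RDS instance to Multi-AZ; a synchronous standby is
# created in another Availability Zone and used for automatic failover.
aws rds modify-db-instance \
    --db-instance-identifier example-mysql-db \
    --multi-az \
    --apply-immediately

# Create a cross-region read replica for disaster recovery. The source is
# referenced by ARN because it lives in a different region.
aws rds create-db-instance-read-replica \
    --db-instance-identifier example-mysql-replica \
    --source-db-instance-identifier arn:aws:rds:us-east-1:123456789012:db:example-mysql-db \
    --region eu-west-1
```

Whether the replica's region and replication lag satisfy your RTO/RPO targets is the business-objective check the paragraph above calls for.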

But keep in mind: running Amazon RDS in Multi-AZ mode does come with extra costs. For example, the standby node must run on the same instance type as the primary, and that expense could be avoided by building your own Multi-AZ deployment on Amazon EC2 instances. Multi-AZ is also possible when you are running your own DB on cloud resources, and in that case, you can control the configuration of your standby instances as well.

Azure SQL Database offers two high-availability architectural models: the Standard and Premium models. In the Standard model, redundant standby nodes take over the compute service in case of infrastructure failures or critical failures in the SQL process, ensuring 99.99% availability. The Premium model, on the other hand, offers availability in the form of Always-On Availability Groups, with the option of placing replicas in different availability zones.
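As a minimal sketch, a Premium-tier database with replicas spread across availability zones could be provisioned with the Azure CLI roughly as follows (the resource group, server, and database names are placeholders):

```shell
# Create a Premium-tier Azure SQL database with zone-redundant replicas,
# so that standby copies are placed in different availability zones.
az sql db create \
    --resource-group example-rg \
    --server example-sqlserver \
    --name example-db \
    --edition Premium \
    --zone-redundant true
```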

The high-availability model for Azure Database for MySQL and PostgreSQL automatically creates a new node and attaches the data storage to it when a service disruption such as a hardware failure is detected. When Azure Database for MySQL and PostgreSQL are deployed with geo-redundant backup storage, you can geo-restore DB instances in a different region, even if the region holding your primary databases is offline, within 12 hours and with an RPO of less than 1 hour.

With NetApp, you can achieve high availability for your databases by using Cloud Volumes ONTAP’s high availability configuration.

3. Ensure Database Security

Ensuring cloud database security is a top priority. If you choose the DBaaS deployment, you need to ensure that the recommended practices for a multi-tenant solution are followed to protect against other tenants and external attacks. AWS, for example, recommends using VPCs to run your Amazon RDS DB instances, AWS IAM policies for stricter role-based control over RDS resources, firewall rules via security groups to control connections to your RDS DB instances, SSL connections to RDS DB instances for encryption in transit, and RDS encryption for data at rest.
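A hedged sketch of two of those recommendations, security-group firewall rules and encryption at rest, might look like this with the AWS CLI (all IDs, names, and the password are placeholders):

```shell
# Security group that only admits MySQL traffic (port 3306) originating from
# the application tier's own security group, not from arbitrary addresses.
aws ec2 create-security-group \
    --group-name example-db-sg \
    --description "RDS access from app tier only" \
    --vpc-id vpc-0abc1234567890

aws ec2 authorize-security-group-ingress \
    --group-id sg-0db1234567890 \
    --protocol tcp \
    --port 3306 \
    --source-group sg-0app9876543210

# Encryption at rest must be chosen at creation time via --storage-encrypted.
aws rds create-db-instance \
    --db-instance-identifier example-encrypted-db \
    --db-instance-class db.t3.medium \
    --engine mysql \
    --master-username admin \
    --master-user-password 'ChangeMe123!' \
    --allocated-storage 100 \
    --storage-encrypted \
    --vpc-security-group-ids sg-0db1234567890
```

Note that RDS storage encryption cannot be toggled on an existing unencrypted instance, which is why it appears in the creation command rather than a later modification.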

Azure recommends use of Firewall Rules, SSL with client application connections and Virtual Network Rules to secure MySQL and PostgreSQL databases. Apart from these in-transit security and cloud database access management rules, the data is stored on Azure storage which is secured via 256-bit AES encryption and thus provides solid encryption for at-rest data. In addition to encrypting data at rest with Transparent Data Encryption (TDE) and data in transit with TLS security, Azure SQL also offers Advanced Threat Protection (ATP), which provides a set of advanced SQL security capabilities.
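On the Azure side, a firewall rule and SSL enforcement for an Azure Database for MySQL server could be sketched roughly as follows (resource names and the IP range are placeholders; the exact commands vary between the single-server and flexible-server offerings, so treat this as an assumption-laden illustration):

```shell
# Allow connections only from a specific public address range.
az mysql server firewall-rule create \
    --resource-group example-rg \
    --server-name example-mysql \
    --name allow-office-range \
    --start-ip-address 203.0.113.0 \
    --end-ip-address 203.0.113.255

# Require SSL for all client connections to protect data in transit.
az mysql server update \
    --resource-group example-rg \
    --name example-mysql \
    --ssl-enforcement Enabled
```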

4. Leverage Database Automation

One of the most important tasks of a DB admin while managing cloud DB instances is the use of automation to speed up tasks like deploying and managing multiple DB instances. Administrative tasks such as database creation, backup and restore management, and monitoring can easily be automated using built-in tools or custom database scripts that DBAs create.

AWS CLI commands are a very common way of managing Amazon RDS instances and performing tasks such as creating DB instances, managing security groups, and defining backup retention policies. Parameter Groups and Option Groups are very good tools for deploying database instances with a consistent configuration. For example, an Option Group can be used to ensure that all DB instances associated with it have Transparent Data Encryption (TDE) enabled for SQL Server databases. Similarly, Azure Automation can be used to manage Azure SQL instances, either via Azure SQL Database PowerShell cmdlets or via ready-made runbooks in the runbook gallery. Azure CLI can be used to customize and automatically configure MySQL and PostgreSQL DB instances.
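The Option Group example above could be scripted along these lines (the group name and description are placeholders; the TDE option applies to SQL Server Enterprise Edition):

```shell
# Create an option group scoped to SQL Server Enterprise Edition 15.00.
aws rds create-option-group \
    --option-group-name example-tde-group \
    --engine-name sqlserver-ee \
    --major-engine-version 15.00 \
    --option-group-description "Enforce TDE on SQL Server instances"

# Add the Transparent Data Encryption option; every DB instance that is
# associated with this option group will then have TDE enabled.
aws rds add-option-to-option-group \
    --option-group-name example-tde-group \
    --options OptionName=TDE \
    --apply-immediately
```

Because the configuration lives in the group rather than on individual instances, new instances pick up a consistent setup simply by referencing the group at creation time.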

Both AWS and Azure provide native tools to help in the cloud database monitoring process. Amazon RDS provides metrics to Amazon CloudWatch and Enhanced Monitoring. Azure provides several metrics for MySQL and PostgreSQL which can be used to set up alerts on the Azure Portal. Azure SQL offers several metrics that can be fed into Azure Storage, Azure Event Hubs and Log Analytics. Intelligent Insights can proactively monitor the Azure SQL database using AI to provide detailed analyses.
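As one hedged example of wiring those metrics into alerting, a CloudWatch alarm on sustained RDS CPU usage might be created like this (the instance identifier and SNS topic ARN are placeholders):

```shell
# Alarm when average CPU on the RDS instance stays above 80% for two
# consecutive 5-minute periods; notifications go to an SNS topic.
aws cloudwatch put-metric-alarm \
    --alarm-name example-db-high-cpu \
    --namespace AWS/RDS \
    --metric-name CPUUtilization \
    --dimensions Name=DBInstanceIdentifier,Value=example-mysql-db \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:example-dba-alerts
```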

Automation and monitoring tools like AWS CloudWatch and Azure Log Analytics can be used with IaaS deployments.

5. Balancing Costs and Performance

As database instances grow, the cost of managed instances and storage grows with them. It is important to understand the tips and tricks that keep TCO under control. One way to keep costs down is with data tiering. Data tiering allows you to keep all your infrequently used, cold data on inexpensive storage, while hot, frequently used data remains on more performant drives. With Cloud Volumes ONTAP on AWS or Azure, you can set up data tiering that uses block storage on Amazon EBS or Azure managed disks as a performance tier and object storage on Amazon S3 or Azure Blob as a capacity tier. With data tiering, Cloud Volumes ONTAP controls and automates the movement of snapshot backups, secondary backups, and inactive data from the performance tier to the capacity tier, and it can automatically bring that data back to the performance tier if it becomes active again.


With so many improvements in cloud services for database workloads, major organizations are ready to make the shift to running databases in the cloud. You still need to weigh all the different options to determine the right solution for your database before you make the move, but NetApp offers many benefits for running databases in the cloud.

Cloud Volumes ONTAP can help make that decision a little easier, with solutions for databases such as highly available storage, state-of-the-art data replication with SnapMirror® and SnapVault® technology, FlexClone® data cloning technology, data tiering, cost-saving storage efficiencies such as data deduplication and compression, RBAC, data encryption, and Snapshot™, SnapCenter® and SnapRestore® functions.

To explore how NetApp can help set up your databases in the cloud, try a 30-day free trial on AWS or a 30-day Azure free trial of Cloud Volumes ONTAP today.