Because of the innovations of driven developers, cloud architects, and engineers, cloud solutions now approach the performance levels of the traditional data center. With every new release, the border between the data center and cloud gets a little bit thinner. They’ve grown nearly identical in many regards, with the clear exception of one area wherein the cloud excels: cost.
In this blog post, we’ll take a look at the cost demands of running on-premises data centers as opposed to using a cloud-based storage service, such as AWS. Hopefully, this post will give you clarity about how to best manage your company’s storage—and budget.
The More You Use, the More You Pay: Do Data and Cost Growth Correlate?
The costs of maintaining an on-premises data center are rising, just as data growth is accelerating every year. In today's data-saturated environment, companies need to invest in new storage systems frequently, which can be quite expensive, even before taking hidden costs into account.
What are those hidden costs? As data center storage and server use grow, so do costs for cooling, floor space, and electricity. New cooling equipment has to be purchased periodically. Keeping a data center running smoothly also takes a lot of elbow grease from the people who manage it: system engineers and storage architects have to be on call at all times in case of an issue.
Harder still is predicting how much budget will need to be invested to cover future storage needs.
Cloud Costs Flip the Narrative
Cloud costs are an entirely different story. With pay-as-you-go cloud storage, a company has exactly one cost associated with that storage: the amount on each month's bill. Costs for storage in the public cloud are lower overall because all the equipment needed to serve your data is owned and operated by the cloud provider. The cloud's automation and orchestration capabilities also free up system engineers and architects from toiling over customizations, which saves on labor costs.
NetApp Cloud Volumes Service for AWS Pricing
Pricing is per 1TB (or 1TB increment) across three service levels:
- Standard performance tier
- Premium performance tier
- Extreme performance tier
The difference between purchasing on-premises storage and consuming public cloud storage comes down to the difference between capex and opex models. Capex, or capital expenditure, is incurred when a company spends money to invest in new equipment. Opex, or operating expenses, recur regularly as part of the day-to-day operation of the company. Buying storage hardware is capex; consuming storage in the cloud is mostly opex, since bills are paid monthly and based on usage: a cost of operating in the cloud rather than of owning physical hardware.
The purchase of a highly performant on-premises storage system can cost as much as $100,000. Let's compare that cost to the cloud: If we (a) consider the price of 1TB of on-premises storage with performance comparable to Cloud Volumes Service's Premium performance tier (3 IOPS/GB) and then (b) factor in the housing and maintenance costs for that unit, we can clearly see that the cost of on-premises storage is almost 10x the price of 1TB of NetApp Cloud Volumes Service.
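To make the capex-vs-opex comparison above concrete, here is a minimal sketch that amortizes a one-time storage purchase over its useful life, adds estimated running costs, and compares the result to a pay-per-use cloud bill. Only the $100,000 system price comes from this post; the lifespan, overhead, and per-TB cloud rate are hypothetical placeholders, not real NetApp pricing.

```python
# Hypothetical capex-vs-opex cost model. Only the $100,000 purchase price
# comes from the post; every other figure is a placeholder assumption.

def on_prem_monthly_cost(purchase_price=100_000, lifespan_months=60,
                         monthly_overhead=2_500):
    """Amortize the capex purchase over its lifespan, then add assumed
    recurring costs for power, cooling, floor space, and labor."""
    return purchase_price / lifespan_months + monthly_overhead

def cloud_monthly_cost(used_tb, price_per_tb=300):
    """Opex model: pay only for the capacity actually consumed."""
    return used_tb * price_per_tb

if __name__ == "__main__":
    onprem = on_prem_monthly_cost()        # fixed, regardless of usage
    cloud = cloud_monthly_cost(used_tb=5)  # scales with consumption
    print(f"on-prem: ${onprem:,.2f}/month")
    print(f"cloud:   ${cloud:,.2f}/month")
```

The key structural difference shows up in the function signatures: the on-premises cost is fixed whether the system is full or nearly empty, while the cloud cost tracks usage.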
How Does the Cost Difference Between On-Premises vs. Cloud Affect Your Workloads?
When architecting a storage infrastructure environment on-premises for enterprise-level workloads, you have to make a number of predictions to figure out how much storage space and server resources will wind up being used. Many times, this prediction will be wrong, either because too much storage was purchased for fear of running out or because you purchased too little and now you need to request more money for additional hardware resources.
Figuring out how fast storage will be consumed by an enterprise workload is also quite a problem. You can try to manage application speed with Quality of Service (QoS), but this requires constant monitoring and frequent rule changes. In the cloud's opex model, it takes just a few clicks to order additional storage and add more IOPS to bump up your QoS. The cloud makes it possible to start small and grow as needed: costs start small, too, and rise only as your storage needs do.
Some workloads need to be highly resilient. File services and SaaS applications are two such workloads; they likely need to be spread among multiple data centers. To achieve this on-premises, you need more than one data center and twice as much equipment: everything in the primary data center has to be mirrored in the secondary location. Cloud-native file services, on the other hand, generally come with high resiliency built in, with data spread across more than one location and no duplicate infrastructure costs.
Performance is also a cost factor in deciding between on-premises and cloud storage. If you want to make your application faster on-premises, you need to buy a performant storage system. Such systems don't come cheap, and they're pure capex. Most likely, you will also wind up buying a machine with more storage space than you need: you may need it in the future, but then again, you may not. In the cloud, you pay only for the storage you use, and you choose how performant that storage needs to be by selecting the appropriate service level. If you need more responsive storage, you opt for the tier that offers the most IOPS. A higher tier costs more than one with lower IOPS, but it's always going to cost less than rolling in a new physical machine.
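The tier-selection trade-off above can be sketched as a small function that picks the cheapest service level meeting an IOPS requirement at a given capacity. The Premium ratio of 3 IOPS/GB appears earlier in this post; the Standard and Extreme ratios and all per-GB prices below are hypothetical assumptions, not published rates.

```python
# Choose the cheapest service level that satisfies an IOPS requirement.
# Premium's 3 IOPS/GB is cited in the post; the other ratios and all
# per-GB monthly prices are hypothetical placeholders.

TIERS = {
    # name: (iops_per_gb, price_per_gb_per_month_usd)
    "standard": (1, 0.10),
    "premium":  (3, 0.20),
    "extreme":  (6, 0.30),
}

def cheapest_tier(capacity_gb, required_iops):
    """Return (tier_name, monthly_cost) for the least expensive tier
    whose IOPS-per-GB ratio delivers the required IOPS at this size."""
    candidates = [
        (name, capacity_gb * price)
        for name, (ratio, price) in TIERS.items()
        if capacity_gb * ratio >= required_iops
    ]
    if not candidates:
        raise ValueError("no tier meets the IOPS requirement at this size")
    return min(candidates, key=lambda c: c[1])
```

For example, a 1024GB volume needing 2,000 IOPS would skip Standard (1024 IOPS at 1 IOPS/GB) and land on Premium, since paying for Extreme would buy headroom the workload doesn't need.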
New ways of using data can make or break the bank. The Internet of Things (IoT) is producing extraordinary amounts of data, but building a Big Data analytics environment on-premises is highly complex, and the resulting infrastructure is inflexible and often poorly utilized. The total cost of ownership is extremely high. Data scientists want the freedom to manage and analyze data using various devices; they need to deploy GPUs at scale and multiple Hadoop clusters. All of these requirements are ultimately very expensive and take up a lot of time—too much time to really be effective. Using cloud solutions for Big Data analytics has been proven to shorten the time it takes to deploy a cluster and develop a new pipeline from a couple of months to just a few minutes, with almost the same performance.
The Best of Both Worlds: NetApp Cloud Volumes Service for AWS
For companies that are ready to stop bleeding all that capex on big, new storage boxes, but are nervous about potential performance or reliability loss in the cloud, NetApp offers Cloud Volumes Service for AWS.
Cloud Volumes Service for AWS is a fully managed service based on NetApp hardware. It's available natively through AWS. Companies can order completely managed cloud-native file services that provide NAS volumes over NFS and SMB with all-flash performance. It's like having the data center without the cost of the building. NetApp Cloud Volumes Service can be accessed through a user-friendly GUI for use in file services, enterprise workloads, DevOps, or even databases and Big Data analytics.
Cloud Volumes Service for AWS is an enterprise-class data management service with data encryption at rest and cross-region replication support for backup/DR procedures. NetApp Cloud Volumes supports both SMB and NFS file protocols. NetApp™ Snapshot technology is also built into Cloud Volumes Service, providing backup at no additional cost. Users can quickly and easily migrate their data to Cloud Volumes Service using Cloud Sync or other third-party migration tools, regardless of where that data originated or lives. Cloud Volumes Service's cloning capabilities shorten development time, making your team more productive at a quarter of the price of the data center.
On-Premises Performance Without the Big Price Tag
Cloud Volumes Service for AWS outperforms the other cloud file services available today, both in the performance it offers and in its price tag. It outstrips comparable offerings when running speed-intensive applications such as Hadoop (which you can read more about in this article).