Aug 24, 2017 10:30:00 AM
In an earlier blog I wrote about the bias for change in the enterprise, and how risky it can be to stay where you are. Enterprises I speak with generally acknowledge this fact, but they respond to the pressure in myriad ways. The most common cloud-related response for entrenched IT is something I call divergence – where established and mission-critical software operations stay in-house, but new, upcoming or next-gen projects are developed at service providers or otherwise in the cloud. Businesses that are based in or built on the cloud from Day 1 don't usually exhibit this behavior.
If anything, their tendency is to pull large operations back in-house to save on ingress/egress costs. But companies whose management or corporate direction defines running in the cloud as a strategic initiative face this task all the time. Essentially: how do I get from here to there, and quickly?
The Bias of Change
Having a bias towards change as an integral part of a company or departmental culture is essential, of course. But that doesn't bridge the divergence chasm once it's established. Directors commonly have in-house applications that have been built over many years on enterprise-level databases, extensive middleware and powerful front-ends. Then they have a contender application, still in testing or development, but hosted with an *aaS provider or built in the cloud, with bells, whistles and functionality still being added.
The two efforts are parallel, only not quite. The skillsets required to build, revise and operate the two paths are vastly, generationally different. Directors or CIOs who worry about these deployments think about one thing and one thing only – how do I bridge the two paths, and get people to jump at the right time with minimal stress?
It keeps them up at night, trust me. However daunting these shifts may seem, the solution centers on minimizing analysis paralysis. We've all heard of it.
As data scientists and engineers, we LOVE to analyze all facets of a problem, from as many different angles as possible. A rather simple question, such as "is it better to move application X to the cloud?", could require years of research, testing and modeling to consider all the logical ramifications.
Just listing all the variables, including costs, opex vs. capex breakdowns, and all the various hardware and software permutations, is exhausting. And businesses generally don't have infinite time to get these projects done.
Staying in Motion with NetApp Cloud Volumes ONTAP (formerly ONTAP Cloud)
This is where NetApp comes in. If you accept the idea of minimizing paralysis as a guiding principle, then bridging the divergence is most easily done when a business has a common data platform that spans the enterprise (in-house) operations as well as the cloud providers, service providers and hyperscalers.
Common servers or networking matter less overall, because of hypervisor abstraction layers, because of the relatively smaller costs, because of hyperscaler elasticity and parallelism, and frankly, because these technologies are evolving so fast. But if you can move your application data fluidly from established in-house IT infrastructure directly to your cloud/SP infrastructure, with storage efficiencies preserved, two-way synchronization and security locked down at both ends, you create a non-blocking system and solution architecture.
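To make the idea of fluid, non-blocking data movement concrete, here's a toy sketch. This is not NetApp's implementation or any Cloud Volumes ONTAP API – it's a hypothetical, minimal Python illustration of the underlying concept: efficient synchronization that copies only changed data between an "in-house" location and a "cloud" location, so neither side blocks the other.

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path: Path) -> str:
    # Hash file contents so unchanged files can be skipped entirely.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync(source: Path, target: Path) -> list[str]:
    """Mirror files from source into target, copying only what changed.

    Returns the relative paths that were actually copied, which makes
    the 'copy only the delta' behavior easy to observe.
    """
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = target / rel
        # Copy only if the file is new or its contents differ.
        if not dst.exists() or checksum(src) != checksum(dst):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # preserves timestamps/metadata
            copied.append(str(rel))
    return copied
```

Running `sync` twice in a row copies nothing the second time – that "only move the delta" property, applied at enterprise scale with deduplication and compression preserved in flight, is what makes a common data platform a bridge rather than a bottleneck.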
No paralysis equals getting things done. So yes, traditional enterprises have a valid strategy in getting to the cloud by use of divergence, but the strategy needs a bridge to fully complete the motion.
Luckily, NetApp has been building that into our products for years. Cloud Volumes ONTAP, AltaVault, Cloud Sync, the new NetApp HCI – they all accelerate your comfort level in the cloud, reducing risk and easing your transition. Thousands of our customers are using these products to move data quickly and decisively.