
Conquering Cloud Collaboration Obstacles within a Distributed Enterprise

July 23, 2017

Topics: Cloud Volumes ONTAP

The volumes of data that businesses produce are growing at a phenomenal rate.

This data can be a business's greatest asset but it can also be its greatest liability if not managed with care. 

The consequences of mismanaging this data can range from loss of intellectual property to the possibility of fines and other legal action.

Losing intellectual property may threaten the future of the business itself, while exposure to legal action can seriously hinder operations.

IT departments are often caught between a rock and a hard place in this sense: they need to be able to control access to the data, ensure that they comply with the relevant regulations, and prevent data from leaking outside of the organization.

Such tasks are made even more complex if the organization is global, as each country has its own regulations with which the company must comply.

At the same time, the end users want to be able to collaborate in a seamless manner and to access the data from any location at any time. If they don't have a better option, they will find their own ways to work around the constraints of their day-to-day jobs.

There are many cloud-based collaboration tools which make shadow data sharing within the business a real threat. The end users may not be aware that their actions are exposing the business to risk, but they are: there may not be any data classification, policies, or data loss prevention controls in place when they employ these solutions.

To overcome these obstacles, there must be an efficient enterprise cloud collaboration strategy in place.

This article will show you how to successfully create that kind of strategy in order to avoid the pitfalls mentioned above. Using a three-phase approach, you will understand the key steps to allow your users to be able to collaborate in an efficient, frictionless manner.

Phase One: Understand the Business and the Constraints

There are two key steps in this phase:

  • Listening to the customer and understanding their needs.
  • Understanding your company's own constraints.

It is easy to make assumptions about how the business works when creating the cloud collaboration strategy. If the strategy is created in isolation it will most likely be product-driven and aligned with how the IT department thinks the business wants to work.

The strategy needs to be co-created with the business. By involving the business and tapping into its insights, the strategy will be more closely aligned with what the company actually needs.

When it is time to implement, there will be less resistance to the changes, as the business will understand what you are delivering and will feel that it has been part of the journey to produce the strategy.

Understanding your constraints when developing the strategy means reviewing the regulations that are applicable to your organization, and then defining the controls you’ll implement to ensure compliance.

These controls will typically include data classification, identifying data owners, and enacting data protection methods.

1. Data Classification

Data classification can be an extremely complex and time-consuming process to implement. Broken down into steps, the key points are:

  • Identifying the data types within the organization and where that data is hosted. Such places include distributed file shares, cloud collaboration tools such as SharePoint and Jive, code repositories such as TFS and GitHub, and locally saved data.
  • Creating a process to classify data. That means defining protection methods based on document classification.
  • Creating a process and controls to ensure that data is stored in the correct location based on its classification.
  • Determining your recovery time objectives and understanding the importance of each data set so that, in the event of a disaster, recovery can be prioritized.
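
As a minimal sketch of what a classification scheme might capture, the mapping below ties illustrative labels to allowed storage locations, protection requirements, and recovery priority. Every name and value here is an assumption for the example, not a prescribed standard:

    # Hypothetical classification scheme: labels, locations, and recovery
    # priorities are illustrative assumptions.
    CLASSIFICATION_SCHEME = {
        "PUBLIC": {
            "allowed_locations": ["sharepoint", "github", "file-share"],
            "encryption_required": False,
            "recovery_priority": 3,  # restored last after a disaster
        },
        "INTERNAL": {
            "allowed_locations": ["sharepoint", "tfs", "file-share"],
            "encryption_required": False,
            "recovery_priority": 2,
        },
        "CONFIDENTIAL": {
            "allowed_locations": ["file-share"],
            "encryption_required": True,  # at rest and in transit
            "recovery_priority": 1,       # restored first
        },
    }

    def is_storage_allowed(label: str, location: str) -> bool:
        """Check whether data with a given label may live in a location."""
        return location in CLASSIFICATION_SCHEME[label]["allowed_locations"]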

2. Identifying Data Owners

The IT department cannot be responsible for all data within the business. Owners must be identified for each data set; they are responsible for authorizing access to the data and for its accuracy, integrity, and lifecycle management.

3. Data Protection Methods

Protection methods range from access-level controls to encryption, depending on the data classification. Classifying the data allows the IT department to identify where extra layers of protection have to be deployed. These additional controls include encryption at rest, using capabilities such as NetApp Storage Encryption (NSE) or AWS S3 server-side encryption, and encryption in transit, using TLS or AWS client-side encryption.
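
As a concrete illustration of enforcing encryption in transit on AWS, the sketch below applies the standard deny-non-TLS bucket policy to a hypothetical S3 bucket using boto3; the bucket name is an assumption for the example:

    import json

    import boto3  # assumes AWS credentials are configured in the environment

    BUCKET = "example-collab-bucket"  # hypothetical bucket name

    # Standard AWS pattern: deny every request that does not arrive over TLS.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))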

Keep in mind that backing up your data effectively is also important to make sure recovery will be possible in case of a disaster. Once you have documented your strategy and gained approval from the business stakeholders, you are ready to implement it.

Phase Two: Implement the Strategy

This is the most exciting and challenging part of the project. It can seem like a daunting and never-ending task, but by delivering incremental value throughout the process, the business will quickly start to see the benefits.

Implement Data Classification

Once you have defined your data classifications, you need to implement them. This is typically done via tools or by a manual process.

The end state is that every piece of data carries a classification. This is typically done via a protective marking on the data: for example, adding a header or watermark to the document, or a classification label to its metadata.
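
As a minimal sketch of metadata-based marking, the snippet below stamps a classification label into a Word document's core properties using the python-docx library; the file name and label are assumptions for the example:

    from docx import Document  # pip install python-docx

    def mark_document(path: str, classification: str) -> None:
        """Stamp a protective marking into the document's core properties."""
        doc = Document(path)
        # 'category' and 'content_status' are standard Office core properties
        # that downstream tools can read without parsing the document body.
        doc.core_properties.category = classification
        doc.core_properties.content_status = classification
        doc.save(path)

    mark_document("quarterly-forecast.docx", "CONFIDENTIAL")  # hypothetical file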

Both the automated and the manual approach have their challenges, and in the majority of cases an organization will end up using both methods:

  • Auto classification via tools either classifies the data when the user opens or creates it, or scans the data at rest and then applies a classification. The main challenge with this approach is the complexity of creating the rules: certain data patterns are easy to detect, but others can be extremely complex (see the rule sketch after this list).
  • Manual classification is when the end user is responsible for setting the data classification. The drawback to this approach is that people make mistakes and interpret things in different ways, meaning there will always be errors in the way the data is classified.
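
Here is a minimal sketch of rule-based auto classification, assuming a simple two-pattern rule set; real products ship with far larger rule libraries and use validation logic to reduce false positives:

    import re

    # Illustrative detection rules; the labels and patterns are assumptions
    # for the example, not a prescribed standard.
    RULES = [
        ("CONFIDENTIAL", re.compile(r"\b(?:\d[ -]?){13,16}\b")),  # card-like numbers
        ("CONFIDENTIAL", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),   # US SSN pattern
    ]

    def classify(text: str, default: str = "INTERNAL") -> str:
        """Return the first classification whose pattern matches the text."""
        for label, pattern in RULES:
            if pattern.search(text):
                return label
        return default

    print(classify("Card: 4111 1111 1111 1111"))     # -> CONFIDENTIAL
    print(classify("Minutes of the staff meeting"))  # -> INTERNAL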

Data classification will never be perfect; there will always be a margin of error. But having the data classified at least reduces the risk of data being shared incorrectly, compared to having no classification process in place.

The data classification will feed into the Data Loss Prevention (DLP) controls that can now be deployed.

Implement Data Loss Prevention Controls

Data loss prevention tools allow the detection and prevention of data breaches and unauthorized sharing of data.

There are a few different approaches to deploying DLP within your organization, especially if you have deployed Office 365 (O365) or Google's G Suite Enterprise Edition. The main options, with their advantages and disadvantages, are outlined below.

Native O365 Capabilities

Advantages:
  • Simplified support experience
  • Integrated into O365
  • Depending on the O365 plan, DLP tools may be available at no extra cost
  • Reporting and dashboards available
  • Consistent policy engine across SharePoint, Exchange, and OneDrive

Disadvantages:
  • O365 DLP capabilities do not block the uploading of data to SharePoint or OneDrive; instead, they issue alerts on or restrict access to the data once it is in SharePoint or OneDrive
  • Custom policies can be complex to create

Native G Suite Capabilities

Advantages:
  • Simplified support experience
  • Integrated into G Suite
  • Over 700 predefined templates
  • Reporting and dashboards available
  • Consistent policy engine across Gmail and Drive

Disadvantages:
  • Only available in the G Suite Enterprise edition
  • Custom policies can be complex to create

Third-Party DLP Tools

Advantages:
  • Ability to stop data being uploaded into collaboration areas
  • Vendor separation
  • Ability to create and deploy more complex policies
  • Product maturity

Disadvantages:
  • Additional cost
  • Dependence on the vendor's ability to keep up with collaboration and product changes
  • An extra agent on client devices

CASB

Advantages:
  • Ability to detect cloud usage within the environment
  • Vendor separation
  • Granular policies
  • Flexible deployment options
  • Additional capabilities, e.g., encryption and tokenization

Disadvantages:
  • Additional cost
  • Some CASB vendors require integration with third-party DLP tools
  • The CASB market is moving toward a SaaS model, which can be a challenge in some compliance scenarios
  • The vendor landscape is changing rapidly

Implement Collaboration Zones

Giving the business well-defined methods of collaboration reduces the risk of users trying to find their own solutions.

There are a number of different approaches that can be implemented: internal web-based collaboration, cloud-based collaboration, and network-optimized file sharing. Each has its own advantages and disadvantages.

  • Internal web-based collaboration is made possible by products such as SharePoint or Jive, which are used to deploy an on-premises collaboration zone that allows the business to share and collaborate. The infrastructure needs to be carefully planned to ensure that its performance meets the business's expectations. It's important to review the integration options with your storage platform to lower the complexity of managing the underlying infrastructure: for example, NetApp Storage Solutions for SharePoint.

    A common complaint with centralized collaboration zones is the latency from remote sites. Consider network upgrades, quality of service rules, or network accelerators if the performance is unacceptable.

  • Cloud-based collaboration uses products such as SharePoint Online, Microsoft Teams, Slack, or Huddle to provide a feature-rich method of collaboration from any location.
    These solutions can face the same network performance challenges as internal collaboration sites, so ensure that your internet breakout point can cope with the extra traffic.

    Depending on the data classification being hosted on these platforms, additional security controls may have to be deployed. Encryption is one way to address that, either by using native capabilities of the application or via a cloud access security broker (CASB), which can provide transparent encryption capabilities.

    Design these security controls upfront before the users start utilizing the platform. Also, review how the data will be backed up if you require additional capabilities beyond what is available natively.

    If, for example, you are using Office 365, a solution such as Cloud Control for Microsoft Office 365 can make backing up and restoring your data much easier.

  • Network-optimized file sharing is useful in several situations. Certain file types are not suitable for sharing within web-based collaboration zones. You may also find that you need to manage the data centrally but that performance at remote locations is not acceptable.

    In both these scenarios network-optimized file sharing is a good fit: these products utilize the existing storage infrastructure capabilities and optimize how data is replicated between sites. A typical use case for this is CAD files, as versioning, replication, and distributed file locking must all be carefully managed.

As organizations move to a hybrid cloud model they will also need to be able to replicate data into the cloud for collaboration.

Cloud vendors provide some native methods for doing this, but they are often optimized for moving data into the cloud. For example, AWS Storage Gateway provides an on-premises file server that integrates with S3.

Storage vendors are also extending their capabilities into the cloud. This approach can simplify the operational management of the storage platforms, as the same management tools are used for both on-premises and cloud-based storage.

They also provide additional capabilities that are not available natively from the cloud provider, which can be powerful in helping the business derive value from the data it generates.

AWS Encryption

As an example, NetApp Cloud Sync can be used to optimize the transfer of data into S3, data which can then be consumed by QuickSight to provide business intelligence reports for the company.

Once the data is in the cloud, it must be secured. AWS provides a number of approaches depending on an organization's specific compliance requirements: server-side encryption with AWS S3-managed keys (SSE-S3), server-side encryption with AWS KMS-managed keys (SSE-KMS), and server-side encryption with customer-provided keys (SSE-C).

Learn more: The Ultimate AWS Encryption Guide

Typically, organizations will use either SSE-S3 or SSE-KMS unless they have stringent compliance rules that require them to utilize SSE-C. If an organization has very sensitive data that requires separation, they may also apply additional encryption independent of AWS.
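
As a minimal sketch of the server-side options, the following uploads one object with SSE-S3 and another with SSE-KMS using boto3; the bucket, object keys, and KMS key alias are assumptions for the example:

    import boto3  # assumes AWS credentials are configured in the environment

    s3 = boto3.client("s3")
    BUCKET = "example-collab-bucket"  # hypothetical bucket name

    # SSE-S3: S3 manages the encryption keys transparently.
    s3.put_object(
        Bucket=BUCKET,
        Key="reports/forecast.xlsx",
        Body=b"example payload",
        ServerSideEncryption="AES256",
    )

    # SSE-KMS: keys live in AWS KMS, adding audit trails and fine-grained
    # control over who can decrypt.
    s3.put_object(
        Bucket=BUCKET,
        Key="reports/forecast-sensitive.xlsx",
        Body=b"example payload",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/example-collab-key",  # hypothetical KMS key alias
    )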

There are numerous vendors on the AWS Marketplace who provide encryption capabilities that can be applied on top of the services AWS provides.

Phase Three: User Training

Do you want to avoid dissatisfaction, frustration, technical problems, and general chaos?

If so, training the users in the systems and planning the user adoption process together is a necessary step. People are resistant to change: it’s your job to bring them along with you on the journey. Otherwise, they won’t buy into the new solutions.

The user adoption process has a number of key steps. Before you make any changes, the journey has to start with a clear communication plan. This plan will help keep the business informed as you move forward with the adoption. If people know what is coming, no one will feel shut out as changes start to be implemented.

Next, the new systems have to be tested with key users. This will uncover any problems in a safe way before they can cause harm. Once the system tests well, phased implementation can begin. At this stage, it's important to avoid big-bang cutovers when possible. Also, find champions within the company who will evangelize the adoption on your behalf. These thought leaders will help you as you begin to show the benefits of the new system.

As issues crop up, and they undoubtedly will, you need to be able to address them promptly and effectively. You don't want to risk losing support this late in the process. Finally, gathering feedback and acting on it to adjust along the way is an ongoing step.

Summary

What does successful information sharing look like?

Success is when the business can collaborate in a seamless and frictionless manner, one where users can concentrate on deriving benefits from the data rather than worrying about how they are going to access it or share that information with colleagues at different sites.

The collaboration tools should become transparent to the user; if you achieve this, then you know your efforts have been successful. Even if you work in a highly regulated and secure environment, with the right level of planning you can still design and implement a collaboration strategy that brings value to the business and simplifies the end-user experience.

What will failure look like?

Imagine shadow data sharing using cloud-based services in an unofficial, unmonitored manner, employing any number of different collaboration tools and islands of data.

This alternative not only prevents the business from gaining the full benefit of its data, it also poses a security threat to the IT department and to the health of the company itself. Failure looks like putting far too much at risk.

Want to get started? Try out Cloud Volumes ONTAP today with a 30-day free trial.
