Blog

Kubernetes Shared Storage: The Basics and a Quick Tutorial

Kubernetes offers a storage architecture that is quite complex, but very powerful. It lets you define generic storage units called “volumes” and use them to save data created by containers, and share data between containers.

In this post we’ll review the basics of Kubernetes storage, and provide a quick tutorial on how to set up a Kubernetes pod that shares data between two containers, and exposes it via a Kubernetes service. In addition, we’ll show how NetApp Cloud Volumes ONTAP can help set up persistent storage volumes dynamically and efficiently.

How does Kubernetes storage work?

Kubernetes storage is based on the concept of Volumes. A Volume is an abstracted storage unit that containers running in the Kubernetes cluster can use to store data, and to share data between them.

Kubernetes contains a wide range of storage plugins that let you connect to storage services from AWS, Azure, Google Cloud Platform, VMware, and also on-premises hardware.

Volumes and Persistent Volumes

There are regular Volumes, which are ephemeral: they are torn down when their parent pod shuts down. There are also Persistent Volumes, which exist as independent cluster resources outside the lifecycle of any single pod, and can provide long-term storage even after the pods accessing them shut down.

Users provision storage by issuing PersistentVolumeClaims that specify what type of storage they need. Administrators define StorageClass objects that specify which storage resources are available. The Kubernetes cluster searches for a suitable Volume based on its StorageClass, and performs binding between a claim and a target volume.
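As a rough sketch, a statically provisioned Persistent Volume and a claim that would bind to it might look like the following. The names, the 5Gi size, and the hostPath backend are illustrative choices, not requirements:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv              # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:                     # simple local-disk backend, for demonstration only
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc             # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual      # must match the volume's class for binding
  resources:
    requests:
      storage: 5Gi

Because the claim's storageClassName, access mode, and requested size all match the volume, Kubernetes can bind the two together.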

Static vs. Dynamic Provisioning

Kubernetes offers static provisioning, which means the cluster has a fixed set of Volumes or Persistent Volumes. A claim for storage is bound to one of the available Volumes if it meets the user’s criteria; if not, the claim remains unbound until a matching Volume becomes available.

A more advanced option is dynamic provisioning, in which storage Volumes are created automatically in response to a claim issued by a user. While this is quite powerful, Kubernetes only goes as far as allocating the storage. It does not handle backups, high availability, testing, or other capabilities needed in production environments.
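Dynamic provisioning is driven by a StorageClass that names a provisioner. For example, on AWS a class backed by the EBS CSI driver might look like the sketch below; the class name is illustrative, and the provisioner and parameters depend on your platform:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                    # illustrative name
provisioner: ebs.csi.aws.com        # platform-specific CSI provisioner
parameters:
  type: gp3                         # EBS volume type
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Any PersistentVolumeClaim that sets storageClassName: fast-ssd then causes Kubernetes to create a new backing volume on demand, rather than searching for a pre-provisioned one.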

Kubernetes Shared Storage: Quick Tutorial

This tutorial shows how to allow two containers running in the same pod to share data via a non-persistent volume.

The following tutorial steps are summarized - see the full tutorial and code here.

Step 1. Define a Kubernetes pod with two containers

We create a YAML file called two-containers.yaml that defines a pod with two containers and a volume called ‘shared-data’.

We have two containers: the first running an NGINX server and the second running the Debian OS. Each has the shared volume mounted on a directory, specified using mountPath.


The YAML file runs both containers, and instructs the second container to continually write a timestamp to an index.html file in the mounted storage volume. We’ll want to see that the first container is able to access this file and see the timestamps.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: first
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: second
      image: debian
      volumeMounts:
        - name: shared-data
          mountPath: /pod-data
      command: ["/bin/sh"]
      args:
        - "-c"
        - >
          while true; do
            date >> /pod-data/index.html;
            echo Hello from the second container >> /pod-data/index.html;
            sleep 1;
          done

Step 2. Create the pod and try to read the shared file from the first container


We’ll create the pod from the YAML file as follows:

$ kubectl create -f two-containers.yaml
pod "two-containers" created
Let’s ask the first container to read the index.html file from its mounted version of the shared volume:
$ kubectl exec -it two-containers -c first -- /bin/bash
root@two-containers:/# tail /usr/share/nginx/html/index.html

You should see the timestamps written by the second container.


Step 3. Define a Kubernetes service to enable external access


Let’s see how to expose the shared data to the world. We’ll use this command to create a Kubernetes service that opens the pod for external access. Then, the NGINX web server will be able to deliver our index.html file to external users.

$ kubectl expose pod two-containers --type=NodePort --port=80
service "two-containers" exposed
Use the kubectl describe service command to check which node port the service assigned to port 80. You can then use a command like the following (substituting your own node port) to view the index.html file and see the timestamps as they are written by the second container.

$ curl http://localhost:31944/
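If you prefer a one-liner, the assigned node port can also be extracted with kubectl’s JSONPath output; the service name below matches the pod we exposed above:

$ kubectl get service two-containers -o jsonpath='{.spec.ports[0].nodePort}'

This prints just the node port number, which you can plug into the curl command.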

Next Steps in Kubernetes Storage

After this basic tutorial, you’re ready to explore more advanced aspects of Kubernetes storage. We recommend these resources:

1. Learn about the basics of Kubernetes persistent storage
2. See the official tutorial for configuring a pod with persistent storage
3. Learn how to use NFS in Kubernetes 

Kubernetes Shared Storage with Cloud Volumes ONTAP

NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP supports a capacity of up to 368TB, and serves various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, cloud automation, and more.

In particular, Cloud Volumes ONTAP provides persistent shared storage for Kubernetes storage, with enterprise-grade features like backup and high availability.
