Amazon Web Services provides several convenient options for setting up Kubernetes clusters. In this post we’ll explain how clusters work and provide quick tutorials for two options—running Kubernetes clusters directly on EC2 and via the Elastic Kubernetes Service (EKS). We’ll also show how NetApp Cloud Volumes ONTAP can help provision persistent Kubernetes storage on AWS.
In this article:
- What is a Kubernetes cluster
- AWS Kubernetes deployment options
- Quick tutorial #1: Running Kubernetes cluster on EC2
- Quick tutorial #2: Deploying a Kubernetes cluster using EKS
- AWS Kubernetes Clusters with Cloud Volumes ONTAP
What is a Kubernetes Cluster?
A Kubernetes cluster is a self-sustained unit made up of node machines that run one or more pods. A pod is a group of one or more containers that fulfills a certain function, and provides convenient options for those containers to communicate and share data. For a complete overview of Kubernetes, see our introduction to Kubernetes.
A cluster consists of the following components:
- API server—provides a customizable REST interface that can be accessed by other Kubernetes resources.
- Scheduler—runs containers in the cluster, according to the policies you define in cluster configuration, which can include details about resources required by applications in the cluster, and metrics that should be evaluated to provision those resources.
- Controller manager—monitors the state of the cluster and compares it to the desired state. For example, if the cluster needs to be running three pods and there are currently only two running, the controller adds another pod.
- kubelet—an agent that runs on each node and communicates with the cluster.
- kube-proxy—a network proxy that runs on each node and maintains the network rules that let pods and services communicate with each other and with the rest of the cluster.
- etcd—persistent storage that holds the cluster configuration.
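On a live cluster, most of these components are visible as pods; a quick way to check, assuming kubectl is already configured against your cluster:

```shell
# Control-plane components (API server, scheduler, controller manager,
# kube-proxy, etcd in many setups) typically run in the kube-system namespace:
kubectl get pods --namespace kube-system

# The kubelet on each machine registers it as a node:
kubectl get nodes -o wide
```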
Kubernetes Deployment Options on AWS
Amazon Web Services provides three main options for deploying Kubernetes:
1. Running Kubernetes directly on Amazon EC2 machines
2. Using the Amazon Elastic Kubernetes Service (EKS)
3. Using kops—an open source provisioning system built for AWS, provided as part of the Kubernetes project.
In this post we’ll focus on the first two options. To learn more about kops see the official documentation.
Also check out our post on how to use NetApp Cloud Manager and Trident for Kubernetes deployments with enterprise-grade persistent storage.
Quick Tutorial #1: Running a Kubernetes Cluster on AWS EC2
Here is how to create a Kubernetes cluster directly on Amazon EC2 machines:
1. Install Kubernetes on EC2 machines
Make sure you have an AWS Access Key ID and Secret Access Key.
To set up the cluster, run the startup script on your local workstation; you will also need the kubectl command line tool to manage the cluster (see the kubectl official documentation). The startup script creates a Kubernetes directory on your workstation, from which you can move the Kubernetes deployment to your EC2 machines.
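The exact startup script depends on the Kubernetes release you use; a minimal sketch based on the Kubernetes project's legacy kube-up flow might look like this (the variable names and download URL are assumptions to verify against your release's documentation):

```shell
# Credentials the startup script will use — substitute your own values:
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...

# Tell the script to provision on AWS, then download a release
# and run its cluster startup script:
export KUBERNETES_PROVIDER=aws
curl -sS https://get.k8s.io | bash
```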
2. Scale the cluster
You cannot use kubectl to add EC2 machines to the cluster. To scale up, you should use an EC2 Auto Scaling group; one is created automatically by the startup script.
You can define the number of nodes you need using the desired capacity parameter of the Auto Scaling group, like this (substitute my-group with the name of your group):
aws autoscaling set-desired-capacity \
--auto-scaling-group-name my-group --desired-capacity 2
3. Shut down the cluster
To shut down the cluster, run the following command on your workstation. Ensure the environment variables you used previously are still exported.
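With the legacy kube-up flow, shutdown is the mirror of startup; a sketch, assuming the release was unpacked into a kubernetes/ directory on your workstation (the script path is an assumption):

```shell
# The environment variables from the setup step must still be exported:
export KUBERNETES_PROVIDER=aws

# Tear down the cluster and the AWS resources the startup script created:
./kubernetes/cluster/kube-down.sh
```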
Quick Tutorial #2: Deploying a Kubernetes Cluster Using Amazon EKS
What is Amazon EKS?
Amazon Elastic Kubernetes Service (Amazon EKS) lets you deploy and manage Kubernetes on AWS, without having to run Kubernetes directly on EC2 machines, like we showed above. EKS is certified by the Kubernetes project, so existing applications, tools and plugins from the Kubernetes ecosystem should work correctly.
Kubernetes Cluster Setup
This tutorial shows how to create an Amazon Virtual Private Cloud (VPC) and use the EKS console to create a Kubernetes cluster within that VPC.
1. Grant EKS permissions
The AWS Identity and Access Management (IAM) user you will use to perform the operations below needs permission to call Amazon EKS API operations. Below is an example of how to add this permission to your IAM user.
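As a sketch, you could attach an inline policy granting EKS API access to the user (the user name, policy name, and broad `eks:*` action here are placeholders; scope them down for production):

```shell
# Grant the IAM user permission to call EKS API operations:
aws iam put-user-policy \
  --user-name my-eks-admin \
  --policy-name eks-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*"
    }]
  }'
```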
2. Create a Virtual Private Cloud (VPC)
You’ll need to set up a VPC for each Kubernetes cluster you create with EKS. This ensures the cluster runs in its own isolated, secured private network within AWS. To create one easily, you can use this CloudFormation template. Open CloudFormation in the Amazon Console, click Create new stack, and provide the URL for this template. Give the VPC a name, leave all other options at their defaults, and create the stack.
Make a note of the SecurityGroups, VpcId and SubnetIds, so you can fill these in during the EKS cluster setup.
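If you prefer the CLI, the stack outputs can be retrieved directly once creation finishes (assuming you named the stack eks-vpc; the name is a placeholder):

```shell
# List the stack outputs — SecurityGroups, VpcId and SubnetIds —
# which you will need during EKS cluster setup:
aws cloudformation describe-stacks \
  --stack-name eks-vpc \
  --query "Stacks[0].Outputs"
```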
3. Create cluster in EKS console
Open the Amazon EKS console and select Create Cluster. Select a name for the cluster, your Kubernetes version and the EKS service role you defined in IAM.
You will also be asked for a VPC, subnets and security groups—fill in the VPC name and the values you obtained in the previous step.
Other options during cluster creation include:
- Endpoint private access—defines whether the Kubernetes API should be accessible through a private VPC endpoint.
- Endpoint public access—specifies if the Kubernetes API server endpoint can receive requests from outside the cluster VPC.
- Logging—there are several log types, and you can enable or disable each one individually. All logs are disabled by default.
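The console steps above can also be expressed with the AWS CLI; a sketch with placeholder values for the role ARN, subnet IDs and security group (substitute the values recorded from your CloudFormation stack):

```shell
# Create an EKS cluster inside the VPC created earlier:
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config \
    subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc,endpointPublicAccess=true
```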
4. Wait for provisioning and run worker nodes
Cluster provisioning in EKS takes 10 to 15 minutes. When it completes, the console displays your API server endpoint and certificate authority. Make a note of these, as you will need them in your kubectl configuration.
You can now run worker nodes in your cluster—see these instructions.
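Rather than editing kubectl configuration by hand, the AWS CLI can write the endpoint and certificate authority into your kubeconfig for you (the cluster name and region are placeholders):

```shell
# Add the new cluster to your local kubeconfig:
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Verify kubectl can reach the cluster:
kubectl get svc
```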
AWS Kubernetes Clusters with Cloud Volumes ONTAP
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP supports capacities of up to 368TB, and serves a variety of use cases such as file services, databases, DevOps or any other enterprise workload.
In particular, Cloud Volumes ONTAP integrates with Kubernetes, and lets you easily provision persistent storage for your Kubernetes clusters on AWS.