Kubernetes for Developers: Overview, Insights, and Tips

Kubernetes took the development and cloud world by storm. In the few years since it was released as open source, Kubernetes has become the most widely used container orchestrator, mainly due to its simplicity, declarative syntax, and ubiquitous presence on almost every cloud provider.

In this post we’ll take a closer look at what is so appealing about Kubernetes for developers, including an overview of its basic features for deployment, monitoring, and security, as well as some of the NetApp solutions that can make it even more effective.

What Is Kubernetes?

Kubernetes is an open-source container orchestration tool designed to manage distributed applications with automated scaling of nodes and containers, fault tolerance, and ease of use. It was originally created by Google, but is currently managed by the Cloud Native Computing Foundation. Initially designed to run Docker containers, Kubernetes now supports other container runtimes as well, such as rkt.

How Developers Deploy Kubernetes

Working with distributed applications is challenging. The many moving parts involved increase the chances of failure. Besides all the infrastructure needs (such as machines and networks), there are other factors to juggle on the operational and development side. For example, how do users deploy each application while keeping it reliable? What should happen in case of a failure? How will services discover each other? And what about security?

Deployment Description File

Kubernetes simplifies those questions by addressing the infrastructure needs and proposing solutions for the operational and development concerns. Kubernetes uses a rich deployment description file (see the image below) designed to cover many different scenarios. In a single file, the operator can define how many instances the application should have, all of its networking needs, any secrets (sensitive values shared only with specific resources, which we’ll discuss more below) that need to be made available, and how the deployment will roll out a new version when one is deployed.

Kubernetes Deployment Description File Example
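
As a rough, minimal sketch of what such a file can look like (the application name my-app, its image, and the secret my-app-secrets below are hypothetical placeholders):

```yaml
# Hypothetical deployment description: a Deployment named my-app with three
# replicas, an environment variable pulled from a Secret, and a rolling update
# policy. All names and the image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                # how many instances to keep running
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate      # how new versions are rolled out
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DB_PASSWORD          # value injected from a Secret
              valueFrom:
                secretKeyRef:
                  name: my-app-secrets
                  key: db-password
```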

Update Process Policies

Officially, there are two types of update strategy in Kubernetes: Recreate and RollingUpdate. The Recreate strategy stops the old version and then starts the new one; because of that, the application is offline while the versions are being switched. The RollingUpdate strategy, on the other hand, keeps the application available the whole time by spinning up instances of the new version while spinning down instances of the old one and gradually shifting user traffic from one to the other.
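
A minimal sketch of how a RollingUpdate strategy can be tuned inside a Deployment spec (the surge and unavailability values below are illustrative choices, not required defaults):

```yaml
# Excerpt from a Deployment spec: roll out at most one extra Pod at a time
# and never take more than one existing Pod down during the update.
spec:
  strategy:
    type: RollingUpdate        # use "Recreate" to stop everything first
    rollingUpdate:
      maxSurge: 1              # extra Pods allowed above the desired count
      maxUnavailable: 1        # Pods that may be unavailable during the update
```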

Load Balancing

An application designed to work in a distributed environment should not depend on any single component, since every part is subject to failure. Kubernetes tries to improve service reliability by giving direct control over load balancers and the number of instances. When a user creates an application deployment, they also declare a load balancer (called a Service) and how many instances the application should have. Kubernetes will try to keep the desired number of instances up, creating a new one if an existing instance fails. If autoscaling is enabled on the cluster, it is even possible to define the minimum and maximum number of instances an application should have and how Kubernetes should increase or decrease the current number.

Nginx deployed on Kubernetes

The example above shows NGINX, the open-source web server, reverse proxy, and load balancer, being deployed with Kubernetes. Line 6 specifies the number of instances the cluster must keep running, while line 7 configures the load balancer to route traffic to every pod labeled nginx-example (defined in line 16).
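
As a sketch of how the autoscaling mentioned above can be declared, a Horizontal Pod Autoscaler defines the minimum and maximum number of instances and the metric used to scale (the target name nginx-example and the 70% CPU target below are illustrative assumptions):

```yaml
# Hypothetical autoscaling rule: keep between 2 and 10 replicas of the
# nginx-example Deployment, scaling on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-example
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-example
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```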

Kubernetes Security Secrets

Although deployment files can use environment variables to fine-tune and configure a container, some values should not be written into the file itself. For Kubernetes security, properties such as database passwords, SSL keys, and other sensitive data should be stored in a special vault. In Kubernetes, this mechanism is called Secrets, and it can store single values or even whole files. Using Secrets, the same application deployment file can be used in many different environments (dev, stage, production) without any sensitive information stored in it.
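
A minimal sketch of a Secret holding a single value, which a Deployment can then reference as an environment variable (the names my-app-secrets and db-password are hypothetical, matching the deployment sketch above):

```yaml
# Hypothetical Secret with a single key. With stringData, Kubernetes
# base64-encodes the value on creation; never commit real credentials.
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
type: Opaque
stringData:
  db-password: "change-me"
```

The deployment sketch earlier reads this value through a secretKeyRef, so the password never appears in the deployment file itself.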

Monitoring and Security for Kubernetes 

Monitoring Kubernetes and its running applications is straightforward: DaemonSets are applications that run on every available cluster node and can be used to monitor the cluster, its nodes, and even the applications running on each node. Many monitoring tools run as a DaemonSet to deliver metrics to a centralized environment. Kubernetes suggests using at least two built-in sources: (1) the kubelet, which acts as a bridge between the master and the nodes, watching PodSpecs and collecting statistics and current status; and (2) Container Advisor (cAdvisor), which works at the container level and delivers resource usage and performance analysis metrics.
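
As a sketch of how a monitoring agent is typically deployed on every node, a DaemonSet schedules one Pod per node (the agent name and image below are hypothetical placeholders for whatever monitoring tool is in use):

```yaml
# Hypothetical DaemonSet: one copy of a metrics agent on every node,
# delivering node and container metrics to a central backend.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metrics-agent
spec:
  selector:
    matchLabels:
      app: metrics-agent
  template:
    metadata:
      labels:
        app: metrics-agent
    spec:
      containers:
        - name: metrics-agent
          image: example.com/metrics-agent:1.0.0
          resources:
            limits:
              memory: 200Mi   # keep the agent's footprint small on each node
```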

As a popular solution, Kubernetes is available on all major cloud providers, giving developers access to Kubernetes no matter which cloud they’re working in. You can either deploy your own cluster or use a managed offering such as Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS). Deploying on your own, even using cloud resources, can be challenging; on the other hand, using a cloud provider’s managed version can increase lock-in to that platform. One last option is a multi-cloud solution, such as NetApp Kubernetes Service.

So far, we’ve talked about stateless applications: software that loses no data when it is turned off, because no data is stored on it. Stateful applications, such as databases and message brokers, need persistent storage to keep their data safe. More than that, they need to be appropriately configured to avoid data loss and data corruption while still delivering excellent performance.

Kubernetes Development for Stateful Applications

Stateful distributed applications cannot work in the same manner as stateless apps: you can’t simply increase the number of instances to accommodate current demand, because the stored data can’t be shared without risking data corruption or data loss. Each stateful application handles distribution in its own way, and that needs to be studied with care.

The simplest solution, generally, is to use a master/slave architecture. In this case, a master instance of the application is responsible for receiving connections for both read and write operations, while slave instances can (in some applications) serve read operations. If the master instance goes down, one of the slaves automatically takes over and becomes the new master.

It is possible to have more than one node accepting write operations, but the increased complexity can cancel out the benefits if it is not configured well. Complexity and performance can be fine-tuned by relaxing or tightening the consistency constraints.

Kubernetes provides Persistent Volumes to handle persistent storage for applications. They can be provisioned statically or dynamically. Static volumes are created for the application ahead of time and are harder to maintain, since the operator must know all of the application’s future storage needs from the start. Dynamic volumes are created on demand, so the cluster allocates the resources as they are needed. While dynamic volumes are used in most cases, a static volume can come in handy when an application has specific I/O needs, such as a relational database.
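
A minimal sketch of dynamic provisioning: a PersistentVolumeClaim requests storage from a StorageClass, and the cluster creates a matching Persistent Volume on demand (the class name fast-storage and the 20Gi size are illustrative assumptions):

```yaml
# Hypothetical claim: request 20Gi of dynamically provisioned storage from
# a StorageClass named fast-storage; a Pod then mounts the claim as a volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-data
spec:
  accessModes:
    - ReadWriteOnce            # a single node may mount the volume read-write
  storageClassName: fast-storage
  resources:
    requests:
      storage: 20Gi
```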

Persistent volumes are tied to the infrastructure the cluster runs on; thus, they depend on the storage options the providers make available. Each cloud provider offers at least one solution for persistence; NetApp has created a useful open-source tool called NetApp Trident. Trident carries out dynamic provisioning of persistent storage for Kubernetes and works with NetApp Cloud Volumes ONTAP, which provides the storage volumes needed on AWS, Azure, or Google Cloud. Persistent volumes allocated by Cloud Volumes ONTAP get additional benefits such as high availability, data protection, file services, and more.
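
As a rough sketch of how this fits together, a StorageClass can point at the Trident provisioner so that claims like the one above are fulfilled by Trident-managed storage (the provisioner name and backendType parameter below follow Trident’s documented conventions, but check the current Trident documentation for your setup):

```yaml
# Hypothetical StorageClass backed by NetApp Trident's CSI provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: csi.trident.netapp.io   # Trident's CSI driver
parameters:
  backendType: "ontap-nas"           # type of Trident backend to provision from
```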

Conclusion

In this post, we’ve highlighted some points for developers to consider when using Kubernetes. The container orchestrator has features that make working in a distributed environment much easier, without the need to reinvent the wheel.

We also discussed stateful applications, a topic that is rarely touched on in most coverage of Kubernetes. While stateless applications can easily be started and stopped, stateful software requires a more refined deployment, defining where the data will be stored, when it will be removed (if needed), and what kind of failover it will provide.

Finally, we introduced NetApp Trident, a dynamic storage provisioner that enables the use of Cloud Volumes ONTAP. Trident is just one of the DevOps benefits of deploying with Cloud Volumes ONTAP. This enterprise-grade solution offers thin provisioning and other storage efficiencies that cut down persistent volume storage costs, along with easy replication, fast cloning of persistent volumes, and high availability, all without requiring significant knowledge of the underlying technology.

If you’re still looking for more information on this subject, read more about Kubernetes use cases on our Kubernetes hub.
