More about Kubernetes on AWS
- AWS ECS in Depth: Architecture and Deployment Options
- EKS vs AKS: Head-to-Head
- AWS ECS vs EKS: 6 Key Differences
- Kubernetes on AWS: 3 Container Orchestration Options
- AWS EKS Architecture: Clusters, Nodes, and Networks
- EKS vs GKE: Managed Kubernetes Giants Compared
- AWS ECS vs Kubernetes: An Unfair Comparison?
- AWS Kubernetes Cluster: Quick Setup with EC2 and EKS
What is Google Kubernetes Engine (GKE)?
Google Kubernetes Engine (GKE) is a managed service for deploying and scaling containerized applications in the cloud. It is offered as a cloud service on the Google Cloud Platform, and is also available on-premises and on the AWS cloud via the Google Anthos multi-cloud framework.
GKE simplifies cluster creation and offers load balancing, networking, security, auto scaling, and other features required for Kubernetes in production.
GKE was launched in 2015 and is the veteran managed Kubernetes service. According to a recent survey, over 90% of Google Cloud users who run Kubernetes use GKE to manage their clusters.
What is Amazon Elastic Kubernetes Service (EKS)?
Amazon’s Elastic Kubernetes Service (EKS) is a managed Kubernetes service. Unlike Amazon’s Elastic Container Service (ECS), a proprietary orchestrator created by Amazon, EKS is fully compatible with native Kubernetes. It is certified by the Cloud Native Computing Foundation (CNCF), and Amazon is a regular contributor to the Kubernetes open source codebase.
EKS manages the deployment, ongoing operation, networking, and scaling of Kubernetes clusters, automating tasks like upgrades and node provisioning. It also offers built-in security and encryption.
EKS integrates with Amazon CloudWatch and CloudTrail for logging and auditing, and can use AWS Identity and Access Management (IAM) for user account and role management.
This is part of our series of articles about Kubernetes storage.
In this article, you will learn:
- EKS vs GKE: Feature Comparison
- EKS vs. GKE: Kubernetes Pricing Comparison
- Kubernetes Storage with NetApp
EKS vs GKE: Feature Comparison
Here is a breakdown showing how EKS and GKE compare on the main Kubernetes management features.
Autoscaling
Both GKE and EKS let you scale cluster nodes up and down easily through the user interface.
GKE offers a highly automated solution—users just need to specify the VM size they need and the range of nodes in the node pool, and the rest is managed by Google Cloud. GKE also allows you to further customize autoscaling—it provides pre-configured Cluster Autoscaler, an open source project that lets you scale nodes based on actual workloads.
EKS also provides autoscaling, but requires some manual work: you can set up functionality similar to GKE's cluster autoscaling, but unlike on GKE, it is not enabled by default and must be configured yourself.
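As a rough sketch of what that EKS setup involves, the steps below create a node group with scaling bounds using eksctl and then deploy the upstream Cluster Autoscaler; cluster and node group names are placeholders:

```shell
# Create a managed node group with min/max bounds; --asg-access grants
# the IAM permissions the Cluster Autoscaler needs to resize the group.
eksctl create nodegroup \
  --cluster my-cluster \
  --name autoscaling-workers \
  --nodes 3 --nodes-min 1 --nodes-max 10 \
  --asg-access

# Deploy the Cluster Autoscaler itself from the upstream example manifest
# (it discovers the node group's Auto Scaling Group via tags):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
```

The deployed manifest still needs the cluster name set in its `--node-group-auto-discovery` flag, which is one example of the extra configuration GKE handles for you.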
- Read our guide to setting up AWS Kubernetes clusters
- Learn more about multi-cloud EKS deployment with EKS Anywhere
Kubernetes Networking and Security
Kubernetes deployments are still new to many operations teams, and can be highly complex and dynamic, making security a challenge. Regulating access to network resources via container network interfaces (CNI) and role-based access control (RBAC) are two important ways to enforce security controls.
GKE deploys Kubernetes RBAC by default and limits network access to cluster endpoints and the Kubernetes API. The API server can be assigned a private internal IP address rather than a public one, protecting against attackers who might otherwise gain access to a cluster through a publicly exposed API server. Additionally, a Classless Inter-Domain Routing (CIDR) allowlist helps protect against compromised cluster credentials and similar scenarios.
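As an illustrative sketch, a private GKE cluster with a CIDR allowlist can be created with gcloud roughly as follows; the cluster name and CIDR ranges are placeholders:

```shell
# Private nodes (no public IPs) plus an allowlist restricting which
# external networks may reach the control plane endpoint.
gcloud container clusters create my-private-cluster \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.32/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
```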
EKS takes RBAC a step further by making it mandatory, thereby maintaining standard core security controls in each Kubernetes cluster. It does, however, apply a permissive pod security policy by default. These Kubernetes-native security controls become increasingly important as workloads migrate between clusters. Note that on EKS, customers must install and manage the Calico CNI and its upgrades themselves as a prerequisite for network policy enforcement.
On the other hand, EKS uses managed node groups, which are convenient for users but create a security risk: they expose a public IP address when sending traffic out of Amazon's virtual private cloud (VPC). These addresses are protected with security group rules and access control lists (ACLs), but they remain prone to misconfiguration and pose a security risk unless the nodes are placed on a private subnet.
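One way to avoid that exposure is to place the managed node group on private subnets. A hypothetical eksctl cluster configuration might look like this (names, region, and instance type are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
managedNodeGroups:
  - name: private-workers
    instanceType: m5.large
    desiredCapacity: 3
    # Nodes get no public IPs and are placed on private subnets;
    # outbound traffic leaves through a NAT gateway instead.
    privateNetworking: true
```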
Development Tools
GKE provides a feature called Cloud Code, an extension for Visual Studio Code and IntelliJ that lets developers deploy, debug, and manage their clusters directly from the IDE. The tool integrates directly with Cloud Run and Cloud Run for Anthos.
Related content: read our detailed guide to Google Anthos
EKS fully supports standard Kubernetes tooling, so any generic functionality available through kubectl or the Kubernetes API works as expected. However, Amazon does not provide dedicated Kubernetes development tools beyond this generic functionality.
Logging and Monitoring
GKE integrates directly with all monitoring tools on the Google Cloud platform. It also has a modern, well-designed interface that allows you to check logs, track resource usage, and set alerts.
EKS supports logging and monitoring, using a separately installed product called CloudWatch Container Insights. While the integration works well and provides comprehensive metrics, the Container Insights user interface is somewhat inconvenient. You may want to set up a third-party monitoring and logging solution.
Serverless Computing
GKE offers Cloud Run, which lets you deploy highly scalable workloads that support “per request scaling”, meaning they scale down to zero once a request has been executed. For certain types of workloads this can be very efficient and save dramatically on compute costs. In addition, Google provides Cloud Run for Anthos, which lets you achieve the same functionality in your own Kubernetes clusters, even those managed on-premises.
EKS integrates with Amazon's serverless container platform, Fargate. You can run containers as container instances instead of full virtual machines, and pay only for the virtual CPUs (vCPUs) and memory actually used by your workloads. Note that with Fargate you need to use Amazon's Application Load Balancer (ALB), which adds some complexity to the setup.
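As an illustration, a Fargate profile that tells EKS to schedule a namespace's pods onto Fargate can be created with eksctl roughly like this; cluster, profile, and namespace names are placeholders:

```shell
# Pods created in the "serverless" namespace of this cluster will be
# scheduled onto Fargate instead of EC2 worker nodes.
eksctl create fargateprofile \
  --cluster my-cluster \
  --name serverless-workloads \
  --namespace serverless
```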
Availability
GKE provides two availability options: zonal deployments, which run the Kubernetes control plane in a single availability zone, and regional deployments, which replicate the control plane across multiple zones within a region. Zonal deployments guarantee 99.5% uptime, while regional deployments guarantee 99.95%.
Amazon EKS guarantees 99.95% uptime for the Kubernetes API endpoint of each EKS cluster.
EKS vs. GKE: Kubernetes Pricing Comparison
Let’s review the differences in pricing between EKS and GKE.
Amazon EKS Pricing
Amazon EKS charges $0.10 per hour ($72 per month) for each Kubernetes cluster you create. You can use a single cluster to run multiple applications, separated by Kubernetes namespaces and IAM security policies.
There are three ways to run EKS. Pricing for worker nodes depends on the method you choose:
EKS on Amazon EC2
You are billed for the resources created to run your Kubernetes worker nodes. By default, you pay on-demand EC2 instance prices (see pricing page). It is also possible to use spot instances and receive discounts of up to 90%. This requires a complex configuration including Cluster Autoscaler and EC2 Auto Scaling Groups.
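To make the cost structure concrete, here is a back-of-the-envelope sketch of the monthly bill; the hourly node price is an illustrative placeholder, not an actual EC2 rate:

```python
HOURS_PER_MONTH = 730  # approximate hours in a month

def eks_monthly_cost(node_count, hourly_node_price, spot_discount=0.0):
    """Rough monthly EKS bill: the flat cluster fee plus EC2 worker nodes.

    hourly_node_price is an illustrative placeholder -- look up the actual
    rate for your instance type and region on the EC2 pricing page.
    """
    cluster_fee = 0.10 * HOURS_PER_MONTH  # $0.10/hour per cluster
    node_cost = (node_count * hourly_node_price
                 * HOURS_PER_MONTH * (1 - spot_discount))
    return cluster_fee + node_cost

# Three on-demand nodes at a hypothetical $0.10/hour each:
eks_monthly_cost(3, 0.10)                      # ≈ $292/month
# The same nodes as spot instances at a 90% discount:
eks_monthly_cost(3, 0.10, spot_discount=0.9)   # ≈ $95/month
```

The example also shows why spot instances matter: at steep discounts, the fixed cluster fee becomes the dominant cost of a small cluster.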
EKS on Amazon Fargate
With Fargate, you do not need to configure and manage server infrastructure for worker nodes. Fargate allows you to specify and pay for resources actually consumed by your workloads—billing is according to vCPU and memory resources used between the start of the container image download and pod termination, with a minimum of 1 minute (see pricing page).
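The Fargate billing model described above can be sketched as follows; the rates are parameters you would take from the Fargate pricing page, and the example values are illustrative only:

```python
def fargate_pod_cost(vcpus, memory_gb, runtime_hours,
                     vcpu_hour_rate, memory_gb_hour_rate):
    """Fargate bills for vCPU-hours and GB-hours actually consumed,
    with a one-minute minimum per pod. Rates vary by region -- pass in
    the current values from the Fargate pricing page."""
    billed_hours = max(runtime_hours, 1 / 60)  # one-minute billing minimum
    return (vcpus * vcpu_hour_rate * billed_hours
            + memory_gb * memory_gb_hour_rate * billed_hours)

# A pod with 1 vCPU and 2 GB of memory running for 30 minutes,
# at hypothetical rates of $0.04/vCPU-hour and $0.004/GB-hour:
fargate_pod_cost(1, 2, 0.5, vcpu_hour_rate=0.04, memory_gb_hour_rate=0.004)
```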
EKS on AWS Outposts
AWS Outposts lets you run Kubernetes clusters on-premises using a managed physical appliance provided by Amazon. The service is priced per appliance, and starts from $5,407 per month.
GKE Pricing
GKE, like EKS, charges a cluster management fee of $0.10 per hour per cluster. One cluster per region per billing account is free of charge. The cluster management fee does not apply to Anthos GKE clusters.
There are four ways to run GKE, with different pricing models.
GKE on Google Compute Engine
When running on Compute Engine, beyond the cluster management fee, you pay on-demand prices for VMs used to run your containers (see pricing page).
GKE on Preemptible VMs
You can also run GKE on preemptible VMs (Google’s name for spot instances), which run on spare Compute Engine capacity and offer discounts of up to 80%. GKE lets you create a “preemptible Kubernetes node pool”, which runs on preemptible instances.
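A preemptible node pool can be added to an existing cluster with a single gcloud command, sketched below with placeholder names:

```shell
# Adds a pool of preemptible nodes to the cluster; workloads that
# tolerate interruption can be scheduled here at a steep discount.
gcloud container node-pools create preemptible-pool \
  --cluster my-cluster \
  --preemptible \
  --num-nodes 3
```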
GKE Autopilot Mode
Another way to run GKE is Autopilot mode, in which Google manages Day 2 Kubernetes operations, and implements best practices including security. Autopilot mode costs $0.10 per hour per cluster, and in addition, has special pricing for resources used to run worker pods: $0.0445 per vCPU-hour for compute, $0.0049225 per GB-hour for memory, and $0.0000548 per GB-hour for ephemeral storage.
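Using the rates quoted above, a rough per-hour Autopilot cost for a given pod resource footprint can be estimated as:

```python
# Rates quoted above for GKE Autopilot worker pod resources:
CLUSTER_FEE = 0.10        # $ per hour per cluster
VCPU_RATE = 0.0445        # $ per vCPU-hour
MEMORY_RATE = 0.0049225   # $ per GB-hour of memory
STORAGE_RATE = 0.0000548  # $ per GB-hour of ephemeral storage

def autopilot_hourly_cost(vcpus, memory_gb, ephemeral_gb):
    """Hourly cost of an Autopilot cluster for a given resource footprint."""
    return (CLUSTER_FEE
            + vcpus * VCPU_RATE
            + memory_gb * MEMORY_RATE
            + ephemeral_gb * STORAGE_RATE)

# Example: pods requesting a total of 2 vCPUs, 8 GB memory, 10 GB storage:
autopilot_hourly_cost(2, 8, 10)   # ≈ $0.23/hour
```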
GKE on Google Anthos
Google Cloud Anthos is a hybrid cloud solution that lets you run GKE on premises—either on bare metal servers or on top of VMware—as well as on Google or other public clouds. There are two pricing options for Anthos:
- Pay-as-you-go pricing—costing $0.01233 per vCPU hour on Google Cloud and AWS, and $0.10274 per vCPU hour on-premises.
- Subscription pricing—costing $6 per vCPU per month on Google Cloud and AWS, and $50 on-premises.
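A quick calculation from the rates above shows where the break-even point between the two options lies:

```python
def breakeven_vcpu_hours(monthly_subscription, payg_hourly_rate):
    """Monthly vCPU-hours above which the subscription beats pay-as-you-go."""
    return monthly_subscription / payg_hourly_rate

# Using the Anthos rates quoted above:
breakeven_vcpu_hours(6, 0.01233)    # Google Cloud / AWS: ≈ 487 vCPU-hours
breakeven_vcpu_hours(50, 0.10274)   # on-premises: ≈ 487 vCPU-hours
```

In both environments the break-even point is roughly 487 vCPU-hours per month, i.e. a vCPU in use about two-thirds of the time, so sustained workloads favor the subscription while bursty ones favor pay-as-you-go.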
Kubernetes Storage with NetApp Cloud Volumes ONTAP
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure, and Google Cloud. Cloud Volumes ONTAP supports capacity of up to 368TB, and supports various use cases such as file services, databases, DevOps, or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.
In particular, Cloud Volumes ONTAP provides dynamic Kubernetes Persistent Volume provisioning for persistent storage requirements of containerized workloads.
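As an illustrative sketch, dynamic provisioning with NetApp Trident (the CSI driver behind Cloud Volumes ONTAP's Kubernetes integration) follows the standard StorageClass and PersistentVolumeClaim pattern; the class name, backend type, and claim details here are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold
provisioner: csi.trident.netapp.io   # NetApp Trident CSI driver
parameters:
  backendType: "ontap-nas"           # hypothetical backend choice
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  storageClassName: ontap-gold
```

When the claim is created, Trident provisions a matching ONTAP volume on demand, so containerized workloads get persistent storage without manual volume management.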