Google Cloud Containers: Top 3 Options for Containers on GCP

How Can You Run Containers in Google Cloud?

Google provides several technologies you can use to run containers in Google Cloud environments. Here are the most commonly used services:

  • Google Kubernetes Engine (GKE)—a managed Kubernetes service that lets you run Kubernetes clusters on Google Cloud infrastructure. You can use the Standard option, which lets you configure nodes, or the Autopilot option, which automatically manages the entire cluster and node infrastructure.
  • Google Anthos—a hybrid and cloud-agnostic container environment management platform. The service lets you replace virtual machines (VMs) with container clusters to create a unified environment across public cloud and on-premises data centers.
  • Google Cloud Run—a serverless compute platform for managing your container resources. Cloud Run can scale deployments to meet traffic demands and integrates with various tools in your containerization stack, including Docker.

Google Cloud provides additional tools to support flexible deployment and CI/CD pipelines, such as:

  • Knative, an open source project for building and running serverless, Kubernetes-based workloads in the cloud or on-premises
  • Google Cloud Code for authoring and debugging code
  • Google Cloud Build for CI/CD
  • Google’s Artifact Registry for image and package management

This is part of our series of articles about Google Cloud Storage.


Google Kubernetes Engine

Google Kubernetes Engine (GKE) is a managed Kubernetes service run by Google Cloud Platform (GCP). It lets you host highly scalable, highly available container workloads. There is also a GKE Sandbox option, which is useful if you have to run workloads that are exposed to security threats: the sandbox lets you run them in an isolated environment.

GKE clusters can be deployed as regional or multi-zonal clusters to safeguard workloads from cloud outages. GKE also has many out-of-the-box security features, including data encryption and vulnerability scanning for container images, the latter enabled through integration with the Container Analysis service.

The amount of responsibility, control, and flexibility you need over your clusters determines which mode of operation to use in GKE. GKE clusters offer two modes of operation:

  • Autopilot—manages the whole cluster and node infrastructure for you. Autopilot offers a hands-off Kubernetes experience, letting you focus on your workloads and pay only for the resources needed to run your applications. Autopilot clusters are pre-configured with an optimized cluster configuration that is ready for production workloads.

  • Standard—offers node configuration flexibility and full control over your node and cluster infrastructure. For clusters created in Standard mode, you decide the configurations required for your production workloads, and you pay for the nodes that you use.
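
To make the distinction concrete, here is a minimal sketch, assuming the google-cloud-container Python client library, that lists the clusters in a project and reports which ones run in Autopilot mode; the project ID is a placeholder.

```python
# Minimal sketch using the google-cloud-container Python client
# (pip install google-cloud-container). The project ID is a placeholder.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Using "-" as the location lists clusters across all locations in the project.
parent = "projects/my-project-id/locations/-"
response = client.list_clusters(parent=parent)

for cluster in response.clusters:
    # Autopilot clusters expose an autopilot.enabled flag; all other clusters
    # run in Standard mode, where you manage the node pools yourself.
    mode = "Autopilot" if cluster.autopilot.enabled else "Standard"
    print(f"{cluster.name} ({cluster.location}): {mode}")
```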

As with other managed Kubernetes services, GKE provides automated upgrades, repair of faulty nodes, and on-demand scaling. It can also be integrated with GCP monitoring services for detailed visibility into the behavior of deployed applications.

If you aim to host HPC, graphics-intensive, or machine learning workloads, you can augment GKE with specialized hardware accelerators such as TPUs and GPUs during deployment.

Google Anthos

Google Cloud Anthos is a cloud-agnostic, hybrid container environment. It lets organizations utilize container clusters rather than cloud virtual machines (VMs), which makes it possible to run workloads in a uniform manner across public clouds and on-premises data centers.

Not all organizations want to get rid of their existing infrastructure. This multi-cloud platform gives organizations the option of using cloud technology, including Kubernetes clusters and containers, with their current on-premises hardware.

Anthos provides a consistent set of services and a consistent design for both in-cloud and on-premises deployments. This gives an organization the freedom to choose where to deploy applications, as well as to migrate workloads from one environment to another.

Google Cloud Anthos is built from several systems, but at its core is a container cluster managed by Google Kubernetes Engine. To support hybrid environments, Google Cloud Anthos combines the managed Google Kubernetes Engine container service with a GKE on-premises environment that packages the same set of security and management features.

You can also register existing non-GKE clusters with Anthos. GKE on AWS addresses multi-cloud scenarios: a compatible GKE environment in AWS can be created, updated, or deleted through a management service in the Anthos UI. In addition, Anthos Service Mesh and Anthos Config Management help with security management, policy automation, and visibility into applications running across multiple clusters, which simplifies management.
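
As a rough illustration of working with registered clusters, the sketch below uses the GKE Hub (fleet) Python client, google-cloud-gke-hub, to list the clusters already registered as Anthos fleet memberships in a project; the project ID is a placeholder, and registering clusters in the first place is usually done through the console, gcloud, or infrastructure-as-code tooling.

```python
# Rough sketch using the GKE Hub (fleet) Python client
# (pip install google-cloud-gke-hub). The project ID is a placeholder.
from google.cloud import gkehub_v1

client = gkehub_v1.GkeHubClient()

# Memberships represent clusters (GKE or non-GKE) registered to the fleet.
parent = "projects/my-project-id/locations/global"
for membership in client.list_memberships(parent=parent):
    # For GKE clusters, the endpoint carries a resource link back to the
    # underlying cluster; non-GKE clusters leave it empty.
    link = membership.endpoint.gke_cluster.resource_link or "(non-GKE cluster)"
    print(f"{membership.name}: {link}")
```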

Learn more in our detailed guide to Google Anthos

Google Cloud Run

Serverless computing is helpful for resource management: IT teams use it to run code without provisioning or managing servers, and developers run functions that consume resources only as required.

Google Cloud Run combines the idea of serverless computing with containerization to offer developers a convenient alternative. Cloud Run relies on the portability of containers to scale deployments and meet traffic demands, without requiring developers to modify the underlying technology to fit the managed compute platform.

With Cloud Run, developers can use their preferred programming languages, including Python, Go, Java, Node.js, and Ruby, along with any OS libraries. Cloud Run is fairly easy to learn, and IT teams can generally ramp up development quickly using the service.

Cloud Run is suitable for IT teams that want to use containers while also benefiting from serverless advantages. The service can also be more cost-effective: it is pay-per-use and includes a free tier.
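
For example, here is a minimal sketch, assuming the Cloud Run Admin API v2 Python client (google-cloud-run), of deploying a container image as a Cloud Run service; the project, region, image, and service names are placeholders.

```python
# Minimal sketch using the Cloud Run Admin API v2 Python client
# (pip install google-cloud-run). Project, region, image, and service
# names are placeholders.
from google.cloud import run_v2

client = run_v2.ServicesClient()

service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[
            run_v2.Container(image="us-docker.pkg.dev/my-project/my-repo/hello:latest")
        ],
    ),
)

# create_service returns a long-running operation; result() waits until the
# service is ready and returns the deployed Service resource.
operation = client.create_service(
    parent="projects/my-project/locations/us-central1",
    service=service,
    service_id="hello-service",
)
deployed = operation.result()
print("Deployed at:", deployed.uri)
```

Note that by default the deployed service requires authenticated invocations; allowing unauthenticated access is a separate IAM step (granting the Cloud Run invoker role) that is omitted here.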

Google Cloud Tools for Containers

The following tools may be useful when working with containers in Google Cloud.

Google Cloud Build

Cloud Build is a continuous integration service that uses Docker containers with pre-installed software versions and tools to run serverless command-line builds. It works with any source control repository service that Google Cloud Platform can connect to, including Bitbucket, Google's Cloud Source Repositories, and GitHub.

The only prerequisite is to create a JSON or YAML file in the repository containing the build directives. The files are straightforward, and each build step has at least two parameters:

  • The container to be used
  • The build instructions (arguments) to run in that container

Each step starts the container specified in the configuration file, runs the arguments passed to that container (as if from the command line), and then destroys the container. This lets you run several build steps with different tools.

You can also run custom containers with customized tools and build steps, as in the sketch below. In this scenario, the only additional parameters you need to define are the entrypoint for the container to execute commands, and the location of the container in the relevant container registry.
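
The same per-step parameters (the container image in name and its arguments in args, plus an optional entrypoint for custom containers) apply whether you write them in a cloudbuild.yaml file or submit them through the API. As a rough sketch, the following uses the Cloud Build Python client (google-cloud-build) to submit an equivalent build directly; the project, image, and script names are placeholders.

```python
# Sketch of submitting a build with the Cloud Build Python client
# (pip install google-cloud-build). Project, image, and script names
# are placeholders. In practice you would also attach a `source`
# (e.g. a Cloud Storage archive or a connected repository) so the
# steps have files to work with.
from google.cloud.devtools import cloudbuild_v1

client = cloudbuild_v1.CloudBuildClient()

build = cloudbuild_v1.Build(
    steps=[
        # Standard builder step: the container to use plus its arguments.
        cloudbuild_v1.BuildStep(
            name="gcr.io/cloud-builders/docker",
            args=["build", "-t", "us-docker.pkg.dev/my-project/my-repo/app:latest", "."],
        ),
        # Custom container step: also specify the entrypoint to run inside it.
        cloudbuild_v1.BuildStep(
            name="us-docker.pkg.dev/my-project/my-repo/build-tools:latest",
            entrypoint="bash",
            args=["-c", "./run_tests.sh"],
        ),
    ],
    images=["us-docker.pkg.dev/my-project/my-repo/app:latest"],
)

operation = client.create_build(project_id="my-project", build=build)
result = operation.result()  # waits for the build to finish
print("Build status:", result.status)
```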

Artifact Registry

Artifact Registry lets you store artifacts and build dependencies centrally, within an integrated experience in Google Cloud. Artifact Registry extends the capabilities of Container Registry and is the recommended container registry for Google Cloud.

Artifact Registry offers a single place for managing and storing your packages and Docker images. It allows you to:

  • Integrate Artifact Registry with your current CI/CD tools or Google Cloud CI/CD services. This includes storing artifacts from Cloud Build, deploying artifacts to Google Cloud runtimes (such as Cloud Run, Google Kubernetes Engine, Compute Engine, and App Engine), and controlling access with consistent credentials via Identity and Access Management.
  • Safeguard your container software supply chain. This includes managing container metadata, scanning for container vulnerabilities using Container Analysis, and enforcing deployment policies with Binary Authorization.
  • Safeguard repositories via a VPC Service Controls security perimeter.
  • Create multiple regional repositories within a single Google Cloud project. You can group images by team or development stage, and control access at the repository level.

Artifact Registry integrates with Cloud Build and various CI/CD systems to store the packages produced by your builds. It also lets you store any trusted dependencies you use for builds and deployments.
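
As a small illustration, the sketch below uses the Artifact Registry Python client (google-cloud-artifact-registry) to list the Docker images stored in a repository; the project, region, and repository names are placeholders, and creating the repository itself can be done through the console, gcloud, or the same client library.

```python
# Sketch using the Artifact Registry Python client
# (pip install google-cloud-artifact-registry). Project, region, and
# repository names are placeholders.
from google.cloud import artifactregistry_v1

client = artifactregistry_v1.ArtifactRegistryClient()

# List the Docker images stored in a hypothetical repository.
parent = "projects/my-project/locations/us-central1/repositories/web-team-images"
for image in client.list_docker_images(parent=parent):
    # image.uri is the pullable address, e.g.
    # us-central1-docker.pkg.dev/my-project/web-team-images/my-app@sha256:...
    print(image.uri, list(image.tags))
```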

Google Cloud Containers with NetApp Cloud Volumes ONTAP

NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, storage efficiencies, Kubernetes integration, and more.

In particular, Cloud Volumes ONTAP supports Kubernetes Persistent Volume provisioning and management requirements of containerized workloads.

Learn more about how Cloud Volumes ONTAP helps to address the challenges of containerized applications in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.

Yifat Perry, Technical Content Manager