Google Kubernetes Engine (GKE) is a managed environment for deploying, managing, and scaling containerized applications on Google Cloud infrastructure. The environment GKE provides consists of multiple machines (specifically, Compute Engine instances) grouped together to form a cluster. GKE draws on the same reliable infrastructure and design principles that run popular Google services, and it provides the same benefits: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates, and more.
This guide is designed to help you get started with Google Kubernetes Engine (GKE). We will use Cloud Shell to set up a GKE cluster and host a multi-container application. This guide walks through the steps of:
- Creating clusters
- Administering clusters
- Configuring and expanding clusters
- Deploying workloads to clusters
- Deploying applications
In this section, we will enable the Kubernetes Engine API. Along with this, we will set up Cloud Shell or a local shell for deploying the Kubernetes cluster.
Choosing a Shell
To complete this tutorial, you can use either Cloud Shell or a local Linux terminal.
Cloud Shell is a shell environment for managing resources hosted on Google Cloud. Cloud Shell comes preinstalled with the gcloud command-line tool and kubectl command-line tool. The gcloud tool provides the primary command-line interface for Google Cloud, and kubectl provides the primary command-line interface for running commands against Kubernetes clusters.
To launch Cloud Shell, perform the following steps:
- Go to the Google Cloud Console (https://console.cloud.google.com) and sign in with your Google account.
- From the upper-right corner of the console, click Activate Cloud Shell.
This launches Google Cloud Shell.
If you prefer using your local shell, you must install the gcloud tool and kubectl tool in your environment.
To install gcloud and kubectl, perform the following steps:
- Install the Cloud SDK, which includes the gcloud command-line tool.
- After installing Cloud SDK, install the kubectl command-line tool by running the following command:
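The kubectl component can be installed through the Cloud SDK's component manager:

```shell
# Install kubectl as a Cloud SDK component
gcloud components install kubectl
```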
Setting a default Project
To set a default project, run the following command from Cloud Shell:
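Replace PROJECT_ID below with the ID of your own project (a placeholder here):

```shell
# Make PROJECT_ID the default project for gcloud commands
gcloud config set project PROJECT_ID
```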
To set a default compute zone, run the following command:
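For example, using us-central1-a as the zone (substitute any zone you prefer):

```shell
# Make us-central1-a the default compute zone
gcloud config set compute/zone us-central1-a
```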
Now we are ready to launch a cluster and deploy an application.
Creating a GKE cluster
A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster. You deploy applications to clusters, and the applications run on the nodes.
Creating a single-zone cluster
To create a cluster with the gcloud command-line tool, use one of the following gcloud container clusters commands.
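A minimal sketch, assuming a cluster named my-cluster in zone us-central1-a (both placeholders; substitute your own names):

```shell
# Create a single-zone cluster with the default node count
gcloud container clusters create my-cluster --zone us-central1-a

# Or control the node count explicitly with --num-nodes
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3
```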
Creating a multi-zonal cluster
To create a multi-zonal cluster, set --zone to the compute zone for the cluster control plane, and set --node-locations to a comma-separated list of compute zones where nodes are created (the list must include the control plane's zone). Use one of the following commands.
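A sketch with assumed cluster name and zones (substitute your own):

```shell
# Control plane in us-central1-a; nodes replicated across three zones
gcloud container clusters create my-multi-zone-cluster \
    --zone us-central1-a \
    --node-locations us-central1-a,us-central1-b,us-central1-c
```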
Viewing your clusters
To view a specific cluster, run the following command:
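Assuming the cluster name used earlier (a placeholder):

```shell
# List all clusters in the current project
gcloud container clusters list

# Show the details of one cluster
gcloud container clusters describe my-cluster --zone us-central1-a
```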
Setting a default cluster for gcloud
To set a default cluster for gcloud commands, run the following command:
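Assuming a cluster named my-cluster (a placeholder):

```shell
# Make my-cluster the default for gcloud container commands
gcloud config set container/cluster my-cluster
```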
Configuring cluster access for kubectl
To run kubectl commands against a cluster created in Cloud Console, from another computer, or by another member of the project, you need to generate a kubeconfig entry in your environment.
Generate a kubeconfig entry by running the following command:
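Assuming the cluster name and zone used earlier (placeholders):

```shell
# Fetch credentials and write a kubeconfig entry for the cluster
gcloud container clusters get-credentials my-cluster --zone us-central1-a
```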
Upgrading the cluster
You can manually upgrade your cluster using the Cloud Console or the gcloud command-line tool.
To upgrade to the latest version, run the following command:
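A sketch, assuming the placeholder cluster name from earlier; the control plane is upgraded first, then the nodes:

```shell
# Upgrade the control plane to the latest available version
gcloud container clusters upgrade my-cluster --master --zone us-central1-a

# Then upgrade the nodes (omit --master) to match the control plane
gcloud container clusters upgrade my-cluster --zone us-central1-a
```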
Resizing a Cluster
You can resize a cluster to increase or decrease the number of nodes in that cluster.
To increase the size of your cluster, run the following command:
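For example, growing the default node pool to five nodes (cluster name and zone are placeholders):

```shell
gcloud container clusters resize my-cluster --num-nodes 5 --zone us-central1-a
```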
To decrease the size of your cluster, run the following command:
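For example, shrinking the same node pool back to two nodes:

```shell
gcloud container clusters resize my-cluster --num-nodes 2 --zone us-central1-a
```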
Auto-scaling a Cluster
To enable auto-scaling for an existing node pool, run the following command:
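A sketch, assuming the default node pool and example bounds of 1 to 5 nodes (all placeholders):

```shell
gcloud container clusters update my-cluster \
    --enable-autoscaling --min-nodes 1 --max-nodes 5 \
    --zone us-central1-a --node-pool default-pool
```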
Deleting a Cluster
To delete an existing cluster, run the following command:
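Assuming the placeholder cluster name used throughout:

```shell
# Permanently delete the cluster and its nodes
gcloud container clusters delete my-cluster --zone us-central1-a
```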
Deploying workloads to clusters
To deploy and manage your containerized applications and other workloads on your Google Kubernetes Engine (GKE) cluster, you use the Kubernetes system to create Kubernetes controller objects. These controller objects represent the applications, daemons, and batch jobs running on your clusters.
You can create these controller objects using the Kubernetes API or by using kubectl, a command-line interface to Kubernetes installed by gcloud. Typically, you build a representation of your desired Kubernetes controller object as a YAML configuration file, and then use that file with the Kubernetes API or the kubectl command-line interface.
Types of workloads
Kubernetes provides different kinds of controller objects that correspond to different kinds of workloads you can run. Certain controller objects are better suited to representing specific types of workloads. The following sections describe some common types of workloads and the Kubernetes controller objects you can create to run them on your cluster, including:
- Stateless applications: A stateless application does not preserve its state and saves no data to persistent storage — all user and session data stays with the client.
Some examples of stateless applications include web frontends like Nginx, web servers like Apache Tomcat, and other web applications.
- Stateful applications: A stateful application requires that its state be saved or persistent. Stateful applications use persistent storage, such as persistent volumes, to save data for use by the server or by other users.
Examples of stateful applications include databases like MongoDB and message queues like Apache ZooKeeper.
- Batch jobs: Batch jobs represent finite, independent, and often parallel tasks which run to their completion.
Some examples of batch jobs include automatic or scheduled tasks like sending emails, rendering video, and performing expensive computations.
- Daemons: Daemons perform ongoing background tasks in their assigned nodes without the need for user intervention.
Examples of daemons include log collectors like Fluentd and monitoring services.
Deploy a stateless application
Stateless applications are applications which do not store data or application state to the cluster or to persistent storage. Instead, data and application state stay with the client, which makes stateless applications more scalable. For example, a front-end application is stateless: you deploy multiple replicas to increase its availability and scale down when demand is low, and the replicas have no need for unique identities.
Kubernetes uses the Deployment controller to deploy stateless applications as uniform, non-unique Pods. Deployments manage the desired state of your application: how many Pods should run your application, what version of the container image should run, what the Pods should be labelled, and so on. The desired state can be changed dynamically through updates to the Deployment’s Pod specification.
Creating a Deployment
The following is an example of a simple Deployment manifest file. This Deployment creates three replicated Pods labelled app=my-app that run the hello-app image stored in Container Registry:
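A sketch of such a manifest, written here to a file named my-deployment.yaml (the file name, Deployment name, and the public hello-app sample image are assumptions; substitute your own):

```shell
cat <<'EOF' > my-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # three replicated Pods
  selector:
    matchLabels:
      app: my-app              # manage Pods labelled app=my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0   # sample image in Container Registry
        ports:
        - containerPort: 8080
EOF
```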
You can declaratively create and update Deployments from manifest files using kubectl apply. This method also retains updates made to live resources without merging the changes back into the manifest files.
To create a Deployment from its manifest file, run the following command:
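Assuming the manifest was saved as my-deployment.yaml (a hypothetical name):

```shell
kubectl apply -f my-deployment.yaml
```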
To get detailed information about the Deployment, run the following command:
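Assuming the Deployment is named my-app (a placeholder):

```shell
# Show rollout status, replica counts, events, and Pod template details
kubectl describe deployment my-app
```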
Deploying a stateful application
Stateful applications save data to persistent disk storage for use by the server, by clients, and by other applications. An example of a stateful application is a database or key-value store to which data is saved and retrieved by other applications.
Persistent storage can be dynamically provisioned, so that the underlying volumes are created on demand. In Kubernetes, you configure dynamic provisioning by creating a StorageClass. In GKE, a default StorageClass allows you to dynamically provision Compute Engine persistent disks.
Kubernetes uses the StatefulSet controller to deploy stateful applications as StatefulSet objects. Pods in StatefulSets are not interchangeable: each Pod has a unique identifier that is maintained no matter where it is scheduled.
Stateful applications are different from stateless applications, in which client data is not saved to the server between sessions.
Requesting persistent storage in a StatefulSet
Applications can request persistent disk storage with a PersistentVolumeClaim.
Normally, PersistentVolumeClaim objects have to be created by the user in addition to the Pod. However, StatefulSets include a volumeClaimTemplates array, which automatically generates the PersistentVolumeClaim objects. Each StatefulSet replica gets its own PersistentVolumeClaim object.
kubectl apply uses manifest files to create, update, and delete resources in your cluster. This is a declarative method of object configuration. This method retains writes made to live objects without merging the changes back into the object configuration files.
The following is a simple example of a StatefulSet governed by a Service that has been created separately:
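A sketch of such a manifest; the bracketed names are the placeholders explained below, while the image, mount path, and storage size are not specified in the original and are assumed here for illustration:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: [STATEFULSET_NAME]
spec:
  serviceName: [SERVICE_NAME]      # the Service created separately
  replicas: 3
  selector:
    matchLabels:
      app: [APP_NAME]
  template:
    metadata:
      labels:
        app: [APP_NAME]
    spec:
      containers:
      - name: [CONTAINER_NAME]
        image: nginx:1.21                        # assumed example image
        ports:
        - containerPort: 80
          name: [PORT_NAME]
        volumeMounts:
        - name: [PVC_NAME]
          mountPath: /usr/share/nginx/html       # assumed mount path
  volumeClaimTemplates:            # generates one PersistentVolumeClaim per replica
  - metadata:
      name: [PVC_NAME]
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi             # assumed storage request
```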
- [STATEFULSET_NAME] is the name you choose for the StatefulSet.
- [SERVICE_NAME] is the name you choose for the Service.
- [APP_NAME] is the name you choose for the application run in the Pods.
- [CONTAINER_NAME] is the name you choose for the containers in the Pods.
- [PORT_NAME] is the name you choose for the port opened by the StatefulSet.
- [PVC_NAME] is the name you choose for the PersistentVolumeClaim.
In this file, the kind field specifies that a StatefulSet object should be created with the specifications defined in the file. This example StatefulSet produces three replicated Pods, and opens port 80 for exposing the StatefulSet to the Internet.
Deploying a containerized web application
This tutorial shows you how to package a web application in a Docker container image, and run that container image on a Google Kubernetes Engine cluster as a load-balanced set of replicas that can scale to the needs of your users.
Build the container image
GKE accepts Docker images as the application deployment format. To build a Docker image, you need to have an application and a Dockerfile.
The application is packaged as a Docker image, using the Dockerfile that contains instructions on how the image is built. You will use this Dockerfile to package your application.
To download the hello-app source code, run the following commands:
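A sketch, assuming the hello-app sample lives in Google's kubernetes-engine-samples repository (the repository layout may change over time):

```shell
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/hello-app
```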
To build the container image of this application and tag it for uploading, run the following command:
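The image tag below embeds your project ID so it can later be pushed to Container Registry:

```shell
# PROJECT_ID must match your Google Cloud project
export PROJECT_ID="$(gcloud config get-value project)"

# Build the image from the Dockerfile in the current directory and tag it v1
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
```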
You can run the docker images command to verify that the build was successful:
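The newly built image should appear in the listing:

```shell
# The output should include a gcr.io/<your-project>/hello-app:v1 entry
docker images
```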
Upload the container image
You need to upload the container image to a registry so that GKE can download and run it.
First, configure the Docker command-line tool to authenticate to Container Registry (you need to run this only once):
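This registers gcloud as a Docker credential helper:

```shell
gcloud auth configure-docker
```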
You can now use the Docker command-line tool to upload the image to your Container Registry:
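Assuming PROJECT_ID is still set from the build step:

```shell
docker push gcr.io/${PROJECT_ID}/hello-app:v1
```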
Run your container locally
To test your container image using your local Docker engine, run the following command:
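A sketch, assuming the hello-app container listens on port 8080:

```shell
# Map local port 8080 to the container's port 8080
docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/hello-app:v1

# In another terminal, verify that the application responds
curl http://localhost:8080
```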
Deploy your application
To deploy and manage applications on a GKE cluster, you must communicate with the Kubernetes cluster management system. You typically do this by using the kubectl command-line tool.
Kubernetes represents applications as Pods, which are units that represent a container (or group of tightly-coupled containers). The Pod is the smallest deployable unit in Kubernetes. In this tutorial, each Pod contains only your hello-app container.
The kubectl create deployment command below causes Kubernetes to create a Deployment named hello-web on your cluster. The Deployment manages multiple copies of your application, called replicas, and schedules them to run on the individual nodes in your cluster. In this case, the Deployment will be running only one Pod of your application.
Run the following command to deploy your application:
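Assuming the v1 image tag pushed earlier:

```shell
kubectl create deployment hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1
```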
To see the Pod created by the Deployment, run the following command:
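For example:

```shell
# Lists Pods in the default namespace, including the one hello-web Pod
kubectl get pods
```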
Expose your application to the Internet
By default, the containers you run on GKE are not accessible from the Internet because they do not have external IP addresses. To explicitly expose your application to traffic from the Internet, run the following command:
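A sketch, assuming the application serves on port 8080 inside the container:

```shell
# Create a LoadBalancer Service: external port 80 -> container port 8080
kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080
```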
To get the external IP address of your application, run the following command:
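The EXTERNAL-IP column may show "pending" for a minute or two while the load balancer is provisioned:

```shell
kubectl get service hello-web
```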
Scale up your application
You add more replicas to your application’s Deployment resource by using the kubectl scale command. To add two additional replicas to your Deployment (for a total of three), run the following command:
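For example:

```shell
# Scale the Deployment from 1 replica to 3
kubectl scale deployment hello-web --replicas=3
```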
You can see the new replicas running on your cluster by running the following commands:
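For example:

```shell
# The Deployment should report 3/3 replicas ready
kubectl get deployment hello-web

# Three hello-web Pods should be listed
kubectl get pods
```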
Now, you have multiple instances of your application running independently of each other and you can use the kubectl scale command to adjust capacity of your application.
The load balancer you provisioned in the previous step will start routing traffic to these new replicas automatically.
Deploy a new version of your app
GKE’s rolling update mechanism ensures that your application remains up and available even as the system replaces instances of your old container image with your new one across all the running replicas.
You can create an image for the v2 version of your application by building the same source code and tagging it as v2 (or you can change the “Hello, World!” string to “Hello, GKE Version 2” before building the image):
Now build the image:
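Assuming PROJECT_ID is still set and you are in the hello-app source directory:

```shell
docker build -t gcr.io/${PROJECT_ID}/hello-app:v2 .
```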
Push the image to the Google Container Registry:
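For example:

```shell
docker push gcr.io/${PROJECT_ID}/hello-app:v2
```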
Now, apply a rolling update to the existing deployment with an image update:
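A sketch, assuming the container created by kubectl create deployment was named hello-app (the default derived from the image name):

```shell
# Trigger a rolling update by pointing the container at the v2 image
kubectl set image deployment/hello-web hello-app=gcr.io/${PROJECT_ID}/hello-app:v2
```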
Visit your application again at http://[EXTERNAL_IP], and observe the changes you made take effect.
In this guide, you have seen how easily you can build a GKE cluster on Google Cloud. GKE offers a platform for managing your containers across a variety of operating environments, significantly reducing the time needed to build, deploy, and scale them. Built on the open source Kubernetes orchestration system, GKE provides the functionality you need to make the most of containerization with your existing IT resources.