Deploying microservices to Google Cloud Platform

Duration: 1 hour

Explore how to deploy microservices to Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP).

What you’ll learn

You will learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on Google Kubernetes Engine (GKE).

Kubernetes is an open source container orchestrator that automates many tasks that are involved in deploying, managing, and scaling containerized applications. If you would like to learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.

There are different cloud-based solutions for running your Kubernetes workloads. With a cloud-based infrastructure, you can focus on developing your microservices without worrying about low-level infrastructure details for deployment. Using a cloud helps you easily scale and manage your microservices in a high-availability setup.

Google Cloud Platform offers a managed Kubernetes service called Google Kubernetes Engine (GKE). Using GKE simplifies the process of running Kubernetes on Google Cloud Platform without needing to install or maintain your own Kubernetes control plane. It provides a hosted Kubernetes cluster that you can deploy your microservices to. In this guide, you will use GKE with a Google Container Registry (GCR). GCR is a private registry that is used to store and distribute your container images. Because GKE is hosted on Google Cloud Platform, fees might be associated with running this guide. See the official GKE pricing documentation for more details.

The two microservices you will deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the name of the pod in the HTTP header, which makes replicas easy to distinguish from each other. The inventory microservice adds the properties from the system microservice to the inventory. This demonstrates how communication can be established between pods inside a cluster.
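As a rough illustration of that in-cluster communication, any pod in the cluster can reach the system microservice through its Kubernetes Service name instead of a pod IP. The service name and port in the following example match the kubernetes.yaml file that is used later in this guide; the actual client code in the project might differ:

curl http://system-service:9080/system/properties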

Additional prerequisites

Before you begin, make sure that you have the following accounts and tools:

  • Google account: To run this guide and use Google Cloud Platform, you will need a Google account. If you do not have an account already, navigate to the Google account sign-up page to create a Google account.

  • Google Cloud Platform account: Visit the Google Cloud Platform console to link your Google account to Google Cloud Platform.

  • Google Cloud SDK - CLI: You will need to use the gcloud command-line tool that is included in the Google Cloud SDK. See the official Cloud SDK: Command Line Interface - Quickstart documentation and complete the “Before you begin” section to set up the Google Cloud Platform CLI for your platform. To verify that the gcloud tool is installed correctly, run the following command:

    gcloud info
  • kubectl: You need the Kubernetes kubectl command-line tool to interact with your Kubernetes cluster. If kubectl is not already installed, use the Google Cloud Platform CLI to download and install kubectl with the following command:

    gcloud components install kubectl
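To verify that kubectl is installed correctly, check the client version:

kubectl version --client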

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-cloud-google.git
cd guide-cloud-google

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Before you begin, make sure you have all the necessary prerequisites.

Setting up your Google Cloud project

Initializing the Google Cloud SDK

To create a Google Cloud project, first initialize the Google Cloud SDK. The gcloud init command starts an interactive setup that creates or modifies the configuration for gcloud, such as setting your user account and specifying the project to use:

gcloud init

Follow the prompt to log in with your Google Cloud Platform account. This authorizes Google Cloud SDK to access Google Cloud Platform with your account credentials.

If you have existing projects, do not use them. Instead, create a new project for this guide. If you don’t have existing projects, you will be automatically prompted to create a new one.

You will need to specify a Project ID for your project. Enter a Project ID that is unique within Google Cloud and matches the pattern that is described in the prompt.

If the Project ID is available to use, you will see the following output:

Your current project has been set to: [project-id].
...
Your Google Cloud SDK is configured and ready to use!

Make sure that billing is enabled for your project so that you can use its Google Cloud services. Follow the Modify a Project’s Billing Settings documentation to enable billing for your Google Cloud project.
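If you prefer the command line, the gcloud beta billing commands can list your billing accounts and link one to your project. This is only a sketch of that alternative: it requires the gcloud beta components and an existing billing account, and [billing-account-id] is a placeholder that you replace with the ID shown by the first command. The console steps in the linked documentation are the path this guide assumes:

gcloud beta billing accounts list
gcloud beta billing projects link [project-id] --billing-account=[billing-account-id]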

Enabling Google Cloud APIs for your project

To run this guide, you need to use certain Google Cloud services, such as the Compute Engine API, Cloud Build API, and the Kubernetes Engine API.

You will use the Compute Engine API to set the default Compute Engine region and zone where the resources for your cloud deployments will be hosted.

The Cloud Build API allows you to build container images and push them to a Google Container Registry. Your private container registry manages and stores the container images that you build in later steps.

To deploy your application to Google Kubernetes Engine (GKE), you will need to enable the Kubernetes Engine API. The container images that you build will run on a Google Kubernetes Engine cluster.

Enable the necessary Google Cloud APIs for your project by using the gcloud services enable command. To see a list of Google Cloud APIs and services that are available for your project, run the following command:

gcloud services list --available

You will see an output similar to the following example:

NAME                                                  TITLE
abusiveexperiencereport.googleapis.com                Abusive Experience Report API
cloudbuild.googleapis.com                             Cloud Build API
composer.googleapis.com                               Cloud Composer API
compute.googleapis.com                                Compute Engine API
computescanning.googleapis.com                        Compute Scanning API
contacts.googleapis.com                               Contacts API
container.googleapis.com                              Kubernetes Engine API
containeranalysis.googleapis.com                      Container Analysis API
containerregistry.googleapis.com                      Container Registry API

The NAME field is the value that you need to pass into the gcloud services enable command to enable an API.

Run the following command to enable the Compute Engine API, Cloud Build API, and Kubernetes Engine API:

gcloud services enable compute.googleapis.com cloudbuild.googleapis.com container.googleapis.com
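To confirm that the APIs were enabled, list the services that are currently enabled for your project:

gcloud services list --enabled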

Setting the default region and zone

A Compute Engine region is a geographical location that is used to host your Compute Engine resources. Each region is composed of multiple zones. For example, the asia-east1 region is divided into multiple zones: asia-east1-a, asia-east1-b, and asia-east1-c. Some resources are limited to specific regions or zones, and other resources are available across all regions. See the Global, Regional, and Zonal Resources documentation for more details.

If resources are created without specifying a region or zone, these new resources run in the default location for your project. The metadata for your resources is stored at this default Google Cloud location.

Run the following command to see the list of available zones and their corresponding regions for your project:

gcloud compute zones list

You will see an output similar to the following example:

NAME                       REGION                   STATUS
us-west1-b                 us-west1                 UP
us-west1-c                 us-west1                 UP
us-west1-a                 us-west1                 UP
europe-west1-b             europe-west1             UP
europe-west1-d             europe-west1             UP
europe-west1-c             europe-west1             UP
asia-east1-b               asia-east1               UP
asia-east1-a               asia-east1               UP
asia-east1-c               asia-east1               UP
southamerica-east1-b       southamerica-east1       UP
southamerica-east1-c       southamerica-east1       UP
southamerica-east1-a       southamerica-east1       UP
northamerica-northeast1-a  northamerica-northeast1  UP
northamerica-northeast1-b  northamerica-northeast1  UP
northamerica-northeast1-c  northamerica-northeast1  UP

The values in the NAME and REGION fields are what you will later substitute for [zone] and [region], respectively.

To set the default Compute Engine region and zone, run the gcloud config set compute command. Remember to replace [region] and [zone] with a region and a zone that are available for your project. Make sure that your zone is within the region that you set.

gcloud config set compute/region [region]
gcloud config set compute/zone [zone]
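For example, if you chose the us-west1 region and the us-west1-a zone, the commands look like the following. You can review your current configuration at any time with the gcloud config list command:

gcloud config set compute/region us-west1
gcloud config set compute/zone us-west1-a
gcloud config list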

Uploading images to a container registry

The starting Java project, which you can find in the start directory, is a multi-module Maven project. It is made up of the system and inventory microservices. Each microservice exists in its own directory, start/system and start/inventory. Both of these directories contain a Dockerfile, which is necessary for building the container images. If you’re unfamiliar with Dockerfiles, check out the Containerizing microservices guide.

Navigate to the start directory and run the following command:

cd start
mvn package

Now that your microservices are packaged, build your container images by using Google Cloud Build. Instead of installing Docker locally to containerize your application, you can use Cloud Build’s gcloud builds submit --tag command to build a Docker image from a Dockerfile and push that image to a container registry. Using Cloud Build is similar to running the docker build and docker push commands locally, except that the build runs in Google Cloud.
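For reference, a roughly equivalent local workflow would look like the following, assuming Docker is installed and authenticated to your registry with gcloud auth configure-docker. In this guide, Cloud Build performs these steps for you, so you do not need to run them:

docker build -t gcr.io/[project-id]/system:1.0-SNAPSHOT .
docker push gcr.io/[project-id]/system:1.0-SNAPSHOT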

Run the gcloud builds submit --tag command from each directory that contains a Dockerfile: build the system image from the start/system directory and the inventory image from the start/inventory directory.

Navigate to the start/system directory.

Build the system image and push it to your container registry by using Cloud Build. Your container registry is located at gcr.io/[project-id]. Replace [project-id] with the Project ID that you previously defined for your Google Cloud project. To get the Project ID for your project, run the gcloud config get-value project command.

gcloud builds submit --tag gcr.io/[project-id]/system:1.0-SNAPSHOT
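If you don’t want to look up and paste the Project ID manually, you can substitute it inline with command substitution in a POSIX shell:

gcloud builds submit --tag gcr.io/$(gcloud config get-value project)/system:1.0-SNAPSHOT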

If the system image builds and pushes successfully, you will see the following output:

DONE
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

ID                                    CREATE_TIME                DURATION  SOURCE                                                                                  IMAGES                                     STATUS
30a71b4c-3481-48da-9faa-63f689316c3b  2020-02-12T16:22:33+00:00  1M37S     gs://[project-id]_cloudbuild/source/1581524552.36-65181b73aa63423998ae8ecdfbaeddff.tgz  gcr.io/[project-id]/system:1.0-SNAPSHOT    SUCCESS

Navigate to the start/inventory directory.

Build the inventory image and push it to your container registry by using Cloud Build:

gcloud builds submit --tag gcr.io/[project-id]/inventory:1.0-SNAPSHOT

You will see the following output:

DONE
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

ID                                    CREATE_TIME                DURATION  SOURCE                                                                                  IMAGES                                       STATUS
edbf9f6f-f01b-46cf-a998-594ad2df9bb3  2020-02-12T16:25:49+00:00  1M11S     gs://[project-id]_cloudbuild/source/1581524748.42-445ddab4cd3b4ba18e28a965e3942cea.tgz  gcr.io/[project-id]/inventory:1.0-SNAPSHOT   SUCCESS

To verify that the images are built, run the following command to list all existing container images for your project:

gcloud container images list

Your system and inventory images should appear in the list of all container images:

NAME
gcr.io/[project-id]/inventory
gcr.io/[project-id]/system
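You can also check that the expected 1.0-SNAPSHOT tag was pushed for each image:

gcloud container images list-tags gcr.io/[project-id]/system
gcloud container images list-tags gcr.io/[project-id]/inventory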

Provisioning a Kubernetes cluster on GKE

To create your GKE cluster, use the gcloud container clusters create command. When the cluster is created, the command outputs information about the cluster. You might need to wait while your cluster is being created.

Replace [cluster-name] with a name that you want for your cluster. The name for your cluster must contain only lowercase alphanumeric characters and -, and must start with a letter and end with an alphanumeric character.

gcloud container clusters create [cluster-name] --num-nodes 1

When your cluster is successfully created, you will see the following output:

NAME            LOCATION   MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
[cluster-name]  [zone]     1.13.11-gke.23  35.203.77.52  n1-standard-1  1.13.11-gke.23  1          RUNNING

Since a zone was not specified in the gcloud container clusters create command, your cluster was created in the default zone that you previously set in the gcloud config set compute/zone command.

The --num-nodes option creates a cluster with a certain number of nodes in the Kubernetes node pool. By default, if this option is excluded, three nodes are assigned to the node pool. You created a single-node cluster since this application does not require a large amount of resources.
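If you later decide that one node is not enough, you can resize the node pool without re-creating the cluster, for example:

gcloud container clusters resize [cluster-name] --num-nodes 3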

Run the following command to check the status of the available node in your GKE cluster:

kubectl get nodes

The kubectl get nodes command outputs information about the node. When the node is ready, its STATUS is Ready:

NAME                                           STATUS   ROLES    AGE   VERSION
gke-[cluster-name]-default-pool-be4471fe-qnl6  Ready    <none>   46s   v1.14.10-gke.17

Deploying microservices to GKE

Now that your container images are built and you created a Kubernetes cluster, you can deploy the images using a Kubernetes resource definition.

A Kubernetes resource definition is a YAML file that contains a description of all your deployments, services, or any other resources that you want to deploy. All resources can also be deleted from the cluster by using the same YAML file that you used to deploy them. The kubernetes.yaml resource definition file is provided for you in the start directory. If you are interested in learning more about the Kubernetes resource definition, check out the Deploying microservices to Kubernetes guide.

Navigate to the start directory.

Update the kubernetes.yaml file in the start directory.

Replace [project-id] with your Project ID. You can get the Project ID for your project by running the gcloud config get-value project command.
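If you would rather make this substitution from the command line, a one-line sed command like the following works on Linux; on macOS, use sed -i '' instead of sed -i:

sed -i "s/\[project-id\]/$(gcloud config get-value project)/g" kubernetes.yaml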

kubernetes.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: gcr.io/[project-id]/system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: gcr.io/[project-id]/inventory:1.0-SNAPSHOT
        ports:
        - containerPort: 9081
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  type: NodePort
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
    nodePort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  type: NodePort
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 9081
    targetPort: 9081
    nodePort: 32000

The image is the name and tag of the container image that you want to use for the container. The kubernetes.yaml file references the images that you pushed to your registry for the system and inventory repositories.

The service that is used to expose your deployments has a type of NodePort. This type means you can access these services from outside of your cluster via a specific port. You can expose your services in other ways, such as using a LoadBalancer service type or by using an Ingress. In production, you would most likely use an Ingress.
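As a quick way to try the LoadBalancer approach without editing kubernetes.yaml, you can expose a deployment directly with kubectl. This sketch creates an additional service, named system-lb here for illustration, for which GCP provisions an external IP address; you would need to delete it again before tearing down the cluster:

kubectl expose deployment system-deployment --type=LoadBalancer --name=system-lb --port=9080 --target-port=9080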

Deploying your application

To deploy your microservices to Google Kubernetes Engine, you need Kubernetes to create the resources that are defined in the kubernetes.yaml file.

Navigate to the start directory and run the following command to deploy the resources defined in the kubernetes.yaml file:

kubectl apply -f kubernetes.yaml

You will see the following output:

deployment.apps/system-deployment created
deployment.apps/inventory-deployment created
service/system-service created
service/inventory-service created

Run the following command to check the status of your pods:

kubectl get pods

If all the pods are healthy and running, you will see an output similar to the following example:

NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          15s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          15s

Making requests to the microservices

To try out your microservices, you need to allow TCP traffic on your node ports, 31000 and 32000, for the system and inventory microservices.

Create a firewall rule to allow TCP traffic on your node ports:

gcloud compute firewall-rules create sys-node-port --allow tcp:31000
gcloud compute firewall-rules create inv-node-port --allow tcp:32000
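You can confirm that the rules were created by listing the firewall rules for your project:

gcloud compute firewall-rules list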

Take note of the EXTERNAL-IP in the output of the following command. It is the hostname that you will later substitute into [hostname]:

kubectl get nodes -o wide
NAME                                  STATUS   ROLES    AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke-[cluster-name]-default-pool-be4   Ready    <none>   14m   v1.13.11-gke.23   10.162.0.2    35.203.106.216
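If you prefer to capture the external IP address in a shell variable instead of copying it from the table, a jsonpath query like the following should work:

EXTERNAL_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
echo $EXTERNAL_IP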

To access your microservices, point your browser to the following URLs, substituting the appropriate [hostname] value:

  • http://[hostname]:31000/system/properties

  • http://[hostname]:32000/inventory/systems

The first URL returns the system properties of the container JVM in JSON format. The second URL returns an empty list, which is expected because no system properties are stored in the inventory yet.

Point your browser to the http://[hostname]:32000/inventory/systems/system-service URL. When you visit this URL, the system properties of the system microservice are automatically stored in the inventory. Go back to http://[hostname]:32000/inventory/systems and you see a new entry for system-service.
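You can also make the same requests from the command line with curl:

curl http://[hostname]:31000/system/properties
curl http://[hostname]:32000/inventory/systems/system-service
curl http://[hostname]:32000/inventory/systems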

Testing the microservices

A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further.
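One way to wait is the kubectl wait command, which blocks until the pods with the given labels report a Ready condition or the timeout expires. The labels below match the deployments in kubernetes.yaml:

kubectl wait --for=condition=Ready pod -l app=system --timeout=120s
kubectl wait --for=condition=Ready pod -l app=inventory --timeout=120s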

pom.xml

<?xml version='1.0' encoding='utf-8'?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>io.openliberty.guides</groupId>

    <artifactId>inventory</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <!-- Default test properties -->
        <cluster.ip>localhost</cluster.ip>
        <system.kube.service>system-service</system.kube.service>
        <system.node.port>31000</system.node.port>
        <inventory.node.port>32000</inventory.node.port>
        <!-- Liberty configuration -->
        <liberty.var.http.port>9081</liberty.var.http.port>
        <liberty.var.https.port>9444</liberty.var.https.port>
    </properties>

    <dependencies>
        <!-- Provided dependencies -->
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>10.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>6.1</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <!-- For tests -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.10.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-client</artifactId>
            <version>6.2.7.Final</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-json-binding-provider</artifactId>
            <version>6.2.7.Final</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>jakarta.json</artifactId>
            <version>2.0.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>3.4.0</version>
            </plugin>
            <!-- Enable Liberty Maven plugin -->
            <plugin>
                <groupId>io.openliberty.tools</groupId>
                <artifactId>liberty-maven-plugin</artifactId>
                <version>3.10.1</version>
            </plugin>
            <!-- Plugin to run unit tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>3.2.5</version>
            </plugin>
            <!-- Plugin to run functional tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>3.2.5</version>
                <configuration>
                    <systemPropertyVariables>
                        <cluster.ip>${cluster.ip}</cluster.ip>
                        <system.node.port>${system.node.port}</system.node.port>
                        <inventory.node.port>${inventory.node.port}</inventory.node.port>
                        <system.kube.service>${system.kube.service}</system.kube.service>
                    </systemPropertyVariables>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

The default properties that are defined in the pom.xml file are:

Property              Description

cluster.ip            The IP or hostname for your cluster.
system.kube.service   The name of the Kubernetes Service wrapping the system pods, system-service by default.
system.node.port      The NodePort of the system-service Kubernetes Service, 31000 by default.
inventory.node.port   The NodePort of the inventory-service Kubernetes Service, 32000 by default.
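These defaults are passed to the tests through the maven-failsafe-plugin configuration in the pom.xml file, so any of them can be overridden with -D flags on the command line if your cluster uses different values, for example:

mvn failsafe:integration-test -Dcluster.ip=[hostname] -Dsystem.node.port=31000 -Dinventory.node.port=32000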

Running the tests

Run the Maven failsafe:integration-test goal to test your microservices, replacing [hostname] with the value that you determined in the previous section:

mvn failsafe:integration-test -Dcluster.ip=[hostname]

If the tests pass, you will see the following output for each service:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

Tearing down the environment

It is important to clean up your resources when you are finished with the guide so that you do not incur extra charges for ongoing usage.

When you no longer need your deployed microservices, you can delete all Kubernetes resources by running the kubectl delete command:

kubectl delete -f kubernetes.yaml

Delete the firewall rules for your node ports:

gcloud compute firewall-rules delete sys-node-port inv-node-port

Since you are done testing your cluster, clean up all of its related resources by using the gcloud container clusters delete command:

gcloud container clusters delete [cluster-name]

Remove the container images from the container registry:

gcloud container images delete gcr.io/[project-id]/system:1.0-SNAPSHOT gcr.io/[project-id]/inventory:1.0-SNAPSHOT

Delete your Google Cloud project:

gcloud projects delete [project-id]

Great work! You’re done!

You have just deployed two microservices running in Open Liberty to Google Kubernetes Engine (GKE). You also learned how to use kubectl to deploy your microservices on a Kubernetes cluster.

Guide Attribution

Deploying microservices to Google Cloud Platform by Open Liberty is licensed under CC BY-ND 4.0
