Deploying microservices to Kubernetes

Duration: 25 minutes


Deploy microservices in Open Liberty Docker containers to Kubernetes and manage them with the Kubernetes CLI, kubectl.

What is Kubernetes?

Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications.

Over the years, Kubernetes has become a major tool in containerized environments as containers are leveraged in every step of a continuous delivery pipeline.

Why use Kubernetes?

Managing individual containers can be challenging. A small team can easily manage a few containers for development, but managing hundreds of containers can be a headache, even for a large team of experienced developers. Kubernetes is a tool for deployment in containerized environments. It handles scheduling and deployment, as well as mass creation and deletion of containers. It provides update rollout capabilities on a scale that would otherwise be extremely tedious to manage by hand. Imagine that you updated a Docker image that now needs to propagate to a dozen containers. While you could destroy and then re-create these containers, you can also run a short one-line command to have Kubernetes make all those updates for you. Of course, this is just a simple example. Kubernetes has a lot more to offer.
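For example, a single kubectl command can roll a new image out to every pod in a deployment. The deployment name web, container name app, and image tag 2.0 in this sketch are hypothetical:

kubectl set image deployment/web app=web:2.0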

Architecture

Deploying an application to Kubernetes means deploying an application to a Kubernetes cluster.

A typical Kubernetes cluster is a collection of physical or virtual machines called nodes that run containerized applications. A cluster is made up of one control plane node (historically called the master node) that manages the cluster, and many worker nodes that run the actual application instances inside Kubernetes objects called pods.

A pod is a basic building block in a Kubernetes cluster. It represents a single running process that encapsulates a container or in some scenarios many closely coupled containers. Pods can be replicated to scale applications and handle more traffic. From the perspective of a cluster, a set of replicated pods is still one application instance, although it might be made up of dozens of instances of itself. A single pod or a group of replicated pods are managed by Kubernetes objects called controllers. A controller handles replication, self-healing, rollout of updates, and general management of pods. One example of a controller that you will use in this guide is a deployment.

A single pod or a group of replicated pods is abstracted through Kubernetes objects called services, which define a set of rules by which the pods can be accessed. In a basic scenario, a Kubernetes service exposes a node port that can be used together with the cluster IP address to access the pods that the service encapsulates.
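As an illustration, a minimal NodePort service might look like the following sketch; the names, the app label, and the port numbers are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    app: example        # routes traffic to pods carrying this label
  ports:
  - protocol: TCP
    port: 9090          # port exposed inside the cluster
    targetPort: 9090    # port the container listens on
    nodePort: 31000     # port exposed on each node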

To learn about the various Kubernetes resources that you can configure, see the official Kubernetes documentation.

What you’ll learn

You will learn how to deploy two microservices in Open Liberty containers to a local Kubernetes cluster. You will then manage your deployed microservices using the kubectl command line interface for Kubernetes. The kubectl CLI is your primary tool for communicating with and managing your Kubernetes cluster.

The two microservices you will deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the name of the pod in an HTTP header, which makes replicas easy to distinguish from each other. The inventory microservice adds the properties from the system microservice to the inventory. This process demonstrates how communication can be established between pods inside a cluster.

You will use a local single-node Kubernetes cluster.

Additional prerequisites

Before you begin, you need containerization software for building containers. Kubernetes supports various container runtimes. You will use Docker in this guide. For Docker installation instructions, refer to the official Docker documentation.

If you are using Docker Desktop (Mac or Windows), a local Kubernetes environment is pre-installed and enabled. If you do not see the Kubernetes tab, upgrade to the latest version of Docker Desktop. After you complete the Docker setup instructions for your operating system, ensure that Kubernetes (not Swarm) is selected as the orchestrator in Docker Preferences.

If you are using Linux, you will use Minikube as a single-node Kubernetes cluster that runs locally in a virtual machine. Make sure you have kubectl installed. If you need to install kubectl, see the kubectl installation instructions. For Minikube installation instructions, see the Minikube documentation.
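To confirm that the tools for your platform are installed, you can check their versions from a command-line session (run minikube version only if you installed Minikube):

kubectl version --client
minikube version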

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-kubernetes-intro.git
cd guide-kubernetes-intro

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Before you begin, make sure you have all the necessary prerequisites.

Starting and preparing your cluster for deployment

Start your Kubernetes cluster.

If you are using Docker Desktop, start your Docker Desktop environment and ensure that Kubernetes is running and that the kubectl context is set to docker-desktop.
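You can set the context from the command line:

kubectl config use-context docker-desktop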

If you are using Minikube, run the following command from a command-line session:

minikube start

Next, validate that you have a healthy Kubernetes environment by running the following command from the active command-line session.

kubectl get nodes

This command should return a Ready status for the control plane (master) node.
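The output looks similar to the following; the node name, roles, age, and version depend on your environment:

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   10d   v1.27.2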

If you are using Docker Desktop, you do not need to do any other step.

If you are using Minikube, run the following command to configure the Docker CLI to use Minikube's Docker daemon. After you run this command, you can interact with Minikube's Docker daemon and build new images directly to it from your host machine:

eval $(minikube docker-env)
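To check that the Docker CLI now points at Minikube's daemon, you can look for the Kubernetes system images (listed under k8s.gcr.io in this guide's environment) in the image list:

docker images | grep k8s.gcr.io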

Building and containerizing the microservices

The first step of deploying to Kubernetes is to build your microservices and containerize them with Docker.

The starting Java project, which you can find in the start directory, is a multi-module Maven project that’s made up of the system and inventory microservices. Each microservice resides in its own directory, start/system and start/inventory. Each of these directories also contains a Dockerfile, which is necessary for building Docker images. If you’re unfamiliar with Dockerfiles, check out the Containerizing Microservices guide, which covers Dockerfiles in depth.

Navigate to the start directory and build the applications by running the following commands:

cd start
mvn clean package

Next, run the docker build commands to build container images for your application:

docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

The -t flag in the docker build command allows the Docker image to be labeled (tagged) in the name[:tag] format. The tag for an image describes the specific image version. If the optional [:tag] tag is not specified, the latest tag is created by default.
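For example, the following hypothetical command produces an image named system with the default latest tag because no tag is specified:

docker build -t system system/.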

During the build, you’ll see various Docker messages describing what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:

docker images

Verify that the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images are listed among them, for example:

REPOSITORY                                                       TAG
inventory                                                        1.0-SNAPSHOT
system                                                           1.0-SNAPSHOT
openliberty/open-liberty                                         kernel-slim-java11-openj9-ubi
k8s.gcr.io/kube-proxy-amd64                                      v1.10.3
k8s.gcr.io/kube-scheduler-amd64                                  v1.10.3
k8s.gcr.io/kube-controller-manager-amd64                         v1.10.3
k8s.gcr.io/kube-apiserver-amd64                                  v1.10.3
k8s.gcr.io/etcd-amd64                                            3.1.12
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                           1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64                                 1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64                                1.14.8
k8s.gcr.io/pause-amd64                                           3.1

If you don’t see the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images, then check the Maven build log for any potential errors. In addition, if you are using Minikube, make sure your Docker CLI is configured to use Minikube’s Docker daemon instead of your host’s Docker daemon.

Deploying the microservices

Now that your Docker images are built, deploy them using a Kubernetes resource definition.

A Kubernetes resource definition is a YAML file that contains a description of all your deployments, services, or any other resources that you want to deploy. All resources can also be deleted from the cluster by using the same YAML file that you used to deploy them.

Create the Kubernetes configuration file in the start directory.

kubernetes.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: inventory:1.0-SNAPSHOT
        ports:
        - containerPort: 9090
        env:
        - name: SYS_APP_HOSTNAME
          value: system-service
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  type: NodePort
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  type: NodePort
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 32000

This file defines four Kubernetes resources: two deployments and two services. A Kubernetes deployment is a resource that controls the creation and management of pods. A service exposes your deployment so that you can make requests to your containers. Three key items to look at when creating the deployments are the labels, image, and containerPort fields. The labels field is how a Kubernetes service references specific deployments. The image field is the name and tag of the Docker image that you want to use for this container. Finally, the containerPort field is the port that your container exposes to access your application. For the services, the key point to understand is that they expose your deployments. The binding between deployments and services is specified by labels, in this case the app label. You will also notice that each service has a type of NodePort, which means that you can access these services from outside of your cluster at the specified port. In this case, the ports are 31000 and 32000, but the port numbers are randomized if the nodePort field is omitted.

Run the following commands to deploy the resources as defined in kubernetes.yaml:

kubectl apply -f kubernetes.yaml

When the apps are deployed, run the following command to check the status of your pods:

kubectl get pods

You’ll see an output similar to the following if all the pods are healthy and running:

NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          15s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          15s

You can also inspect individual pods in more detail by running the following command:

kubectl describe pods

You can also issue the kubectl get and kubectl describe commands on other Kubernetes resources, so feel free to inspect all other resources.
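For example, listing the services shows their types, cluster IP addresses, and exposed ports. The output looks similar to the following; the cluster IP addresses vary:

kubectl get services

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
inventory-service   NodePort    10.106.49.245   <none>        9090:32000/TCP   15s
kubernetes          ClusterIP   10.96.0.1       <none>        443/TCP          5d
system-service      NodePort    10.97.124.100   <none>        9090:31000/TCP   15s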

Next you will make requests to your services.

The default host name for Docker Desktop is localhost.

The default host name for Minikube is 192.168.99.100. Otherwise, you can find it by running the minikube ip command.

Then, run the curl command or visit the following URLs to access your microservices, substituting the appropriate host name:

  • http://[hostname]:31000/system/properties

  • http://[hostname]:32000/inventory/systems/system-service

The first URL returns the system properties and the name of the pod in an HTTP header called X-Pod-Name. To view the header, use the -I option with curl when you make a request to http://[hostname]:31000/system/properties. The second URL adds the properties from the system-service endpoint to the inventory. In general, visiting http://[hostname]:32000/inventory/systems/[kube-service] adds an entry to the inventory if kube-service is a valid Kubernetes service that can be accessed.
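For example, assuming the Docker Desktop host name localhost, the request and the relevant part of the response look similar to the following; the pod name suffix is randomly generated:

curl -I http://localhost:31000/system/properties

HTTP/1.1 200 OK
X-Pod-Name: system-deployment-6bd97d9bf6-4ccds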

Rolling update

Without continuous updates, a Kubernetes cluster is susceptible to a denial-of-service attack. Rolling updates continually install Kubernetes patches without disrupting the availability of the deployed applications. Update the kubernetes.yaml file as follows to add the rollingUpdate configuration and the readiness probes.

Replace the Kubernetes configuration file in the start directory.

kubernetes.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9090
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 9090
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: inventory:1.0-SNAPSHOT
        ports:
        - containerPort: 9090
        env:
        - name: SYS_APP_HOSTNAME
          value: system-service
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 9090
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 3
          failureThreshold: 1
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  type: NodePort
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  type: NodePort
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 9090
    targetPort: 9090
    nodePort: 32000

The rollingUpdate configuration has two attributes, maxUnavailable and maxSurge. The maxUnavailable attribute specifies the maximum number of Kubernetes pods that can be unavailable during the update process. Similarly, the maxSurge attribute specifies the maximum number of additional pods that can be created during the update process. For example, with maxUnavailable: 1 and maxSurge: 1, at most one pod is taken down at a time, and at most one extra pod exists while the update is in progress.
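You can watch an update roll out and inspect its revision history with the kubectl rollout commands:

kubectl rollout status deployment/system-deployment
kubectl rollout history deployment/system-deployment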

The readinessProbe allows Kubernetes to know whether the service is ready to handle requests. The readiness health check classes that back the /health/ready endpoint of the inventory and system services are provided for you. If you want to learn more about how to use health checks in Kubernetes, check out the kubernetes-microprofile-health guide.

Run the following command to deploy the inventory and system microservices with the new configuration:

kubectl apply -f kubernetes.yaml

Run the following command to check that your pods are ready and running:

kubectl get pods

Scaling a deployment

To use load balancing, you need to scale your deployments. When you scale a deployment, you replicate its pods, creating more running instances of your applications. Scaling is one of the primary advantages of Kubernetes because you can replicate your application to accommodate more traffic, and then descale your deployments to free up resources when the traffic decreases.

As an example, scale the system deployment to three pods by running the following command:

kubectl scale deployment/system-deployment --replicas=3

Use the following command to verify that two new pods have been created.

kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          1m
system-deployment-6bd97d9bf6-jf9rs      1/1       Running   0          25s
system-deployment-6bd97d9bf6-x4zth      1/1       Running   0          25s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          1m

Wait for your two new pods to be in the ready state, then make a curl -I request to the http://[hostname]:31000/system/properties URL, or visit it in a browser.
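For example, assuming the Docker Desktop host name localhost, the following sketch of a shell loop prints the pod name that served each of three consecutive requests:

for i in 1 2 3; do curl -s -I http://localhost:31000/system/properties | grep -i x-pod-name; done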

Notice that the X-Pod-Name header has a different value when you call it multiple times. The value changes because three pods that all serve the system application are now running. Similarly, to descale your deployments you can use the same scale command with fewer replicas.

kubectl scale deployment/system-deployment --replicas=1

Redeploy microservices

When you’re building your application, you might want to quickly test a change. To run a quick test, you can rebuild your Docker images, then delete and re-create your Kubernetes resources. Note that there is only one system pod after you redeploy because you’re deleting all of the existing pods.

kubectl delete -f kubernetes.yaml

mvn clean package
docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

kubectl apply -f kubernetes.yaml

Updating your applications in this way is fine for development environments, but it is not suitable for production. If you want to deploy an updated image to a production cluster, you can update the container in your deployment with a new image. Once the new container is ready, Kubernetes automates both the creation of a new container and the decommissioning of the old one.
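For example, the following sketch updates the system deployment in place, assuming you built and tagged a hypothetical new image system:2.0-SNAPSHOT beforehand; Kubernetes then performs the rolling update that you configured:

kubectl set image deployment/system-deployment system-container=system:2.0-SNAPSHOT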

Testing microservices that are running on Kubernetes

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
    http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>io.openliberty.guides</groupId>
    <artifactId>guide-kubernetes-intro-inventory</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <!-- Default test properties -->
        <system.kube.service>system-service</system.kube.service>
        <system.service.root>localhost:31000</system.service.root>
        <inventory.service.root>localhost:32000</inventory.service.root>
        <!-- Liberty configuration -->
        <liberty.var.http.port>9090</liberty.var.http.port>
        <liberty.var.https.port>9453</liberty.var.https.port>
    </properties>

    <dependencies>
        <!-- Provided dependencies -->
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>10.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>6.1</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>
        <!-- Java utility classes -->
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.17.0</version>
        </dependency>
        <!-- For tests -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.11.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-client</artifactId>
            <version>6.2.10.Final</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-json-binding-provider</artifactId>
            <version>6.2.10.Final</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>jakarta.json</artifactId>
            <version>2.0.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>3.4.0</version>
            </plugin>
            <plugin>
                <groupId>io.openliberty.tools</groupId>
                <artifactId>liberty-maven-plugin</artifactId>
                <version>3.10.3</version>
            </plugin>
            <!-- Plugin to run unit tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>3.5.0</version>
            </plugin>
            <!-- Plugin to run functional tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>3.5.0</version>
                <configuration>
                    <systemPropertyVariables>
                        <inventory.service.root>${inventory.service.root}</inventory.service.root>
                        <system.service.root>${system.service.root}</system.service.root>
                        <system.kube.service>${system.kube.service}</system.kube.service>
                    </systemPropertyVariables>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. Before you run the tests, wait for all pods to be in the ready state. The default properties defined in the pom.xml file are:

  • system.kube.service: Name of the Kubernetes Service wrapping the system pods, system-service by default.

  • system.service.root: The Kubernetes Service system-service root path, localhost:31000 by default.

  • inventory.service.root: The Kubernetes Service inventory-service root path, localhost:32000 by default.

Navigate back to the start directory.

If you are using Docker Desktop, run the integration tests against a cluster running with a host name of localhost:

mvn failsafe:integration-test

If you are using Minikube, run the integration tests with the IP address of your Minikube cluster:

mvn failsafe:integration-test -Dsystem.service.root=$(minikube ip):31000 -Dinventory.service.root=$(minikube ip):32000

If the tests pass, you’ll see an output similar to the following for each service respectively:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.372 s - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.714 s - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

Tearing down the environment

When you no longer need your deployed microservices, you can delete all Kubernetes resources by running the kubectl delete command:

kubectl delete -f kubernetes.yaml

If you are using Docker Desktop, nothing more needs to be done.

If you are using Minikube, perform the following steps to return your environment to a clean state.

  1. Point the Docker daemon back to your local machine:

    eval $(minikube docker-env -u)

  2. Stop your Minikube cluster:

    minikube stop

  3. Delete your cluster:

    minikube delete

Great work! You’re done!

You have just deployed two microservices that are running in Open Liberty to Kubernetes. You then scaled a microservice and ran integration tests against microservices that are running in a Kubernetes cluster.

Guide Attribution

Deploying microservices to Kubernetes by Open Liberty is licensed under CC BY-ND 4.0
