Deploying microservices to Kubernetes

Duration: 25 minutes

Deploy microservices in Open Liberty Docker containers to Kubernetes and manage them with the Kubernetes CLI, kubectl.

What is Kubernetes?

Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications.

Over the years, Kubernetes has become a major tool in containerized environments as containers are being further leveraged for all steps of a continuous delivery pipeline.

Why use Kubernetes?

Managing individual containers can be challenging. A few containers used for development by a small team might not pose a problem, but managing hundreds of containers can give even a large team of experienced developers a headache. Kubernetes is a primary tool for deployment in containerized environments. It handles scheduling and deployment, as well as mass creation and deletion of containers. It provides update rollout abilities on a large scale that would otherwise be extremely tedious to perform by hand. Imagine that you updated a Docker image that now needs to propagate to a dozen containers. While you could destroy and then re-create these containers, you can instead run a short one-line command to have Kubernetes make all those updates for you. Of course, this is just a simple example; Kubernetes has a lot more to offer.


Deploying an application to Kubernetes means deploying an application to a Kubernetes cluster.

A typical Kubernetes cluster is a collection of physical or virtual machines called nodes that run containerized applications. A cluster is made up of one master node that manages the cluster, and many worker nodes that run the actual application instances inside Kubernetes objects called pods.

A pod is a basic building block in a Kubernetes cluster. It represents a single running process that encapsulates a container or in some scenarios many closely coupled containers. Pods can be replicated to scale applications and handle more traffic. From the perspective of a cluster, a set of replicated pods is still one application instance, although it might be made up of dozens of instances of itself. A single pod or a group of replicated pods are managed by Kubernetes objects called controllers. A controller handles replication, self-healing, rollout of updates, and general management of pods. One example of a controller that you will use in this guide is a deployment.

A pod or a group of replicated pods are abstracted through Kubernetes objects called services that define a set of rules by which the pods can be accessed. In a basic scenario, a Kubernetes service exposes a node port that can be used together with the cluster IP address to access the pods encapsulated by the service.

To learn about the various Kubernetes resources that you can configure, see the official Kubernetes documentation.

What you’ll learn

You will learn how to deploy two microservices in Open Liberty containers to a local Kubernetes cluster. You will then manage your deployed microservices using the kubectl command line interface for Kubernetes. The kubectl CLI is your primary tool for communicating with and managing your Kubernetes cluster.

The two microservices you will deploy are called name and ping. The name microservice displays a brief greeting and the name of the container that it runs in, making it easy to distinguish it from its other replicas. The ping microservice pings the Kubernetes Service that encapsulates the pods running the name microservice. This demonstrates how communication can be established between pods inside a cluster.

You will use a local single-node Kubernetes cluster.


Before you begin, have the following tools installed:

First, you will need a containerization software for building containers. Kubernetes supports a variety of container types. You will use Docker in this guide. For installation instructions, refer to the official Docker documentation.


Use Docker Desktop, where a local Kubernetes environment is pre-installed and enabled. If you do not see the Kubernetes tab, then you have an older version of Docker Desktop; upgrade to the latest version.

Complete the setup for your operating system:

  • Set up Docker for Windows. On the Docker for Windows General settings page, ensure that the option Expose daemon on tcp://localhost:2375 without TLS is enabled. This is required by the dockerfile-maven part of the build.

  • Set up Docker for Mac.

  • After following one of the sets of instructions, ensure that Kubernetes (not Swarm) is selected as the orchestrator in Docker Preferences.


You will use Minikube as a single-node Kubernetes cluster that runs locally in a virtual machine. For Minikube installation instructions see the minikube installation instructions. Make sure to read the Requirements section as different operating systems require different prerequisites to get Minikube running.

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone
cd guide-kubernetes-intro

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Starting and preparing your cluster for deployment

Start your Kubernetes cluster.


Start your Docker Desktop environment.


Run the following command from a command line:

minikube start

Next, validate that you have a healthy Kubernetes environment by running the following command from the command line.

kubectl get nodes

This command should return a Ready status for the master node.


No other setup steps are required for Docker Desktop.


Run the following command to configure the Docker CLI to use Minikube’s Docker daemon. After you run this command, you will be able to interact with Minikube’s Docker daemon and build new images directly to it from your host machine:
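To see exactly which environment variables the eval applies, you can run the command without eval first. The output differs by platform; the values below are illustrative examples, not what you will necessarily see:

```shell
# Print the environment variables that point the Docker CLI at
# Minikube's Docker daemon, without applying them to this shell.
minikube docker-env

# Typical output (example values; yours will differ):
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="tcp://192.168.99.100:2376"
# export DOCKER_CERT_PATH="/home/user/.minikube/certs"
```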

eval $(minikube docker-env)

Building and containerizing the microservices

The first step of deploying to Kubernetes is to build your microservices and containerize them with Docker.

The starting Java project, which you can find in the start directory, is a multi-module Maven project that’s made up of the name and ping microservices. Each microservice resides in its own directory, start/name and start/ping. Each of these directories also contains a Dockerfile, which is necessary for building Docker images. If you’re unfamiliar with Dockerfiles, check out the Using Docker containers to develop microservices guide, which covers Dockerfiles in depth.

If you’re familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file produced by the build to build a Docker image. While that is by no means a wrong approach, we’ve set up the projects so that this process is automated as part of a single Maven build. This is done by using the dockerfile-maven plug-in, which automatically picks up the Dockerfile located in the same directory as its POM file and builds a Docker image from it. If you’re using Docker for Windows, ensure that the option Expose daemon on tcp://localhost:2375 without TLS is enabled on the Docker for Windows General settings page. This is required by the dockerfile-maven part of the build.

Navigate to the start directory and run the following command:

mvn package

The package goal automatically invokes the dockerfile-maven:build goal, which runs during the package phase. This goal builds a Docker image from the Dockerfile located in the same directory as the POM file.
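If you want to rebuild only the Docker image without rerunning the full package phase, you can also invoke the plugin goal directly. This is a sketch, assuming the standard dockerfile-maven plugin prefix resolves in your build:

```shell
# Rebuild just the Docker image for the current module, using the
# Dockerfile that sits next to its pom.xml. Run this from a module
# directory such as start/name or start/ping.
mvn dockerfile:build
```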

During the build, you’ll see various Docker messages describing what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:

docker images

Verify that the name:1.0-SNAPSHOT and ping:1.0-SNAPSHOT images are listed among them, for example:


REPOSITORY                                     TAG
ping                                           1.0-SNAPSHOT
name                                           1.0-SNAPSHOT
open-liberty                                   latest

The list also includes the supporting images that your Kubernetes environment pulled, such as the kube-system images; their exact names and tags vary by environment.

If you don’t see the name:1.0-SNAPSHOT and ping:1.0-SNAPSHOT images, then check the Maven build log for any potential errors. In addition, if you are using Minikube, make sure your Docker CLI is configured to use Minikube’s Docker daemon and not your host’s as described in the previous section.

Deploying the microservices

Now that your Docker images are built, deploy them using a Kubernetes resource definition.

A Kubernetes resource definition is a YAML file that contains a description of all your deployments, services, or any other resources that you want to deploy. All resources can also be deleted from the cluster by using the same YAML file that you used to deploy them.

Create the Kubernetes configuration file.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: name-deployment
  labels:
    app: name
spec:
  selector:
    matchLabels:
      app: name
  template:
    metadata:
      labels:
        app: name
    spec:
      containers:
      - name: name-container
        image: name:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ping-deployment
  labels:
    app: ping
spec:
  selector:
    matchLabels:
      app: ping
  template:
    metadata:
      labels:
        app: ping
    spec:
      containers:
      - name: ping-container
        image: ping:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: name-service
spec:
  type: NodePort
  selector:
    app: name
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
    nodePort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: ping-service
spec:
  type: NodePort
  selector:
    app: ping
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
    nodePort: 32000

This file defines four Kubernetes resources: two deployments and two services. A Kubernetes deployment is a resource responsible for controlling the creation and management of pods. A service exposes your deployment so that you can make requests to your containers.

Three key items to look at when creating the deployments are the label, image, and containerPort fields. The label is a way for a Kubernetes service to reference specific deployments. The image is the name and tag of the Docker image that you want to use for this container. Finally, the containerPort is the port that your container exposes so that your application can be accessed.

For the services, the key point to understand is that they expose your deployments. The binding between deployments and services is specified by the use of labels, in this case the app label. You will also notice that each service has a type of NodePort. This means you can access these services from outside of your cluster via a specific port. In this case, the ports are 31000 and 32000, but a port is assigned randomly if the nodePort field is not specified.
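Because the services select pods by the app label, you can use the same label selector with kubectl to see exactly which pods a service routes to. A quick sketch, assuming the resources above are deployed:

```shell
# List only the pods that carry the app=name label, which is the
# same selector that name-service uses to find its pods.
kubectl get pods -l app=name

# The equivalent for the ping microservice:
kubectl get pods -l app=ping
```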

Run the following commands to deploy the resources as defined in kubernetes.yaml:

kubectl apply -f kubernetes.yaml

When the apps are deployed, run the following command to check the status of your pods:

kubectl get pods

You’ll see an output similar to the following if all the pods are healthy and running:

NAME                               READY     STATUS    RESTARTS   AGE
name-deployment-6bd97d9bf6-4ccds   1/1       Running   0          15s
ping-deployment-645767664f-nbtd9   1/1       Running   0          15s

You can also inspect individual pods in more detail by running the following command:

kubectl describe pods

You can also issue the kubectl get and kubectl describe commands on other Kubernetes resources, so feel free to inspect all other resources.
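For example, you can inspect the deployments and services in the same way, assuming the resources from kubernetes.yaml are deployed:

```shell
# Summarize the two deployments and their replica counts.
kubectl get deployments

# Show details for one service, including its selector,
# cluster IP, and node port.
kubectl describe service name-service
```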

Next you will make requests to your services.


The default hostname for Docker Desktop is localhost.


The default hostname for Minikube can be found by running the minikube ip command.

Then curl or visit the following URLs to access your microservices, substituting the appropriate hostname:

  • http://[hostname]:31000/api/name

  • http://[hostname]:32000/api/ping/name-service

The first URL returns a brief greeting followed by the name of the pod that the name microservice runs in. The second URL returns pong if it received a good response from the name-service Kubernetes Service. Visiting http://[hostname]:32000/api/ping/[kube-service] in general returns either a good or a bad response depending on whether kube-service is a valid Kubernetes Service that can be accessed.
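As a sketch of this behavior, assuming Docker Desktop (hostname localhost) and the resources from kubernetes.yaml:

```shell
# A valid Kubernetes Service name returns pong:
curl http://localhost:32000/api/ping/name-service

# An unknown service name returns a bad response instead
# (unknown-service is a hypothetical name used for illustration):
curl http://localhost:32000/api/ping/unknown-service
```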

Scaling a deployment

To use load balancing, you need to scale your deployments. When you scale a deployment, you replicate its pods, creating more running instances of your application. Scaling is one of the primary advantages of Kubernetes: replicating your application allows it to accommodate more traffic, and you can later descale your deployments to free up resources when the traffic decreases.

As an example, scale the name Deployment to 3 pods by running the following command:

kubectl scale deployment/name-deployment --replicas=3

Use the following command to verify that two new pods have been created.

kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
name-deployment-6bd97d9bf6-4ccds   1/1       Running   0          1m
name-deployment-6bd97d9bf6-jf9rs   1/1       Running   0          25s
name-deployment-6bd97d9bf6-x4zth   1/1       Running   0          25s
ping-deployment-645767664f-nbtd9   1/1       Running   0          1m

Wait for your two new pods to be in the ready state, then curl or visit the http://[hostname]:31000/api/name URL. You’ll notice that the service responds with a different name when you call it multiple times because there are now three pods all serving the name application. Similarly, to descale your deployments, use the same scale command with fewer replicas.
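One way to see the load balancing in action is to call the endpoint several times in a row. A sketch, assuming Docker Desktop (hostname localhost):

```shell
# Call the name endpoint repeatedly; the reported pod name should
# vary across requests as traffic is spread over the three replicas.
for i in 1 2 3 4 5 6; do
  curl http://localhost:31000/api/name
  echo
done
```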

Redeploy microservices

When you’re building your application, you might want to quickly test a change. To do that, you can rebuild your Docker images, then delete and re-create your Kubernetes resources. Note that there will be only one name pod after you redeploy because you’re deleting all of the existing pods.

mvn package
kubectl delete -f kubernetes.yaml
kubectl apply -f kubernetes.yaml

This is not how you would want to update your applications when running in production, but in a development environment this is fine. If you want to deploy an updated image to a production cluster, you can update the container in your deployment with a new image. Then, Kubernetes will automate the creation of a new container and decommissioning of the old one once the new container is ready.
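For example, Kubernetes can perform a rolling update when you point a deployment at a new image tag. A hedged sketch (the 2.0-SNAPSHOT tag is hypothetical; it assumes you built and tagged such an image):

```shell
# Point the container in the deployment at a new image tag;
# Kubernetes gradually replaces old pods with new ones.
kubectl set image deployment/name-deployment name-container=name:2.0-SNAPSHOT

# Watch the rollout progress, and roll back if something goes wrong:
kubectl rollout status deployment/name-deployment
kubectl rollout undo deployment/name-deployment
```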

Testing microservices that are running on Kubernetes


<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>net.wasdev.wlp.maven.parent</groupId>
        <artifactId>liberty-maven-app-parent</artifactId>
        <version>RELEASE</version>
    </parent>

    <groupId>io.openliberty.guides</groupId>
    <artifactId>kube-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <!-- Plugins -->
        <version.maven-war-plugin>2.6</version.maven-war-plugin>
        <version.dockerfile-maven-plugin>1.4.10</version.dockerfile-maven-plugin>
        <version.exec-maven-plugin>1.6.0</version.exec-maven-plugin>
        <version.maven-surefire-plugin>3.0.0-M1</version.maven-surefire-plugin>
        <version.maven-failsafe-plugin>3.0.0-M1</version.maven-failsafe-plugin>
        <!-- OpenLiberty runtime -->
        <version.openliberty-runtime>RELEASE</version.openliberty-runtime>
        <http.port>9080</http.port>
        <https.port>9443</https.port>
        <!-- Default test properties -->
        <cluster.ip></cluster.ip>
        <name.kube.service>name-service</name.kube.service>
        <name.node.port>31000</name.node.port>
        <ping.node.port>32000</ping.node.port>
    </properties>

    <dependencyManagement>
        <dependencies>
           <dependency>
               <groupId>io.openliberty.features</groupId>
               <artifactId>features-bom</artifactId>
               <version>RELEASE</version>
               <type>pom</type>
               <scope>import</scope>
           </dependency>
           <dependency>
                <groupId>org.eclipse.microprofile.rest.client</groupId>
                <artifactId>microprofile-rest-client-api</artifactId>
                <version>1.0.1</version>
                <scope>provided</scope>
            </dependency>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.12</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.cxf</groupId>
                <artifactId>cxf-rt-rs-extension-providers</artifactId>
                <version>3.2.6</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.cxf</groupId>
                <artifactId>cxf-rt-rs-client</artifactId>
                <version>3.2.6</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-lang3</artifactId>
                <version>3.0</version>
                <scope>compile</scope>
            </dependency>
            <!-- Support for JDK 9 and above -->
            <dependency>
                <groupId>javax.xml.bind</groupId>
                <artifactId>jaxb-api</artifactId>
                <version>2.3.1</version>
            </dependency>
            <dependency>
                <groupId>com.sun.xml.bind</groupId>
                <artifactId>jaxb-core</artifactId>
                <version></version>
            </dependency>
            <dependency>
                <groupId>com.sun.xml.bind</groupId>
                <artifactId>jaxb-impl</artifactId>
                <version>2.3.2</version>
            </dependency>
            <dependency>
                <groupId>javax.activation</groupId>
                <artifactId>activation</artifactId>
                <version>1.1.1</version>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <profiles>
        <profile>
            <id>windowsExtension</id>
            <activation>
                <os><family>Windows</family></os>
            </activation>
            <properties>
                <kubectl.extension>.cmd</kubectl.extension>
            </properties>
        </profile>
        <profile>
            <id>nonWindowsExtension</id>
            <activation>
                <os><family>!Windows</family></os>
            </activation>
            <properties>
                <kubectl.extension></kubectl.extension>
            </properties>
        </profile>
    </profiles>

    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-war-plugin</artifactId>
                    <version>${version.maven-war-plugin}</version>
                    <configuration>
                        <failOnMissingWebXml>false</failOnMissingWebXml>
                        <packagingExcludes>pom.xml</packagingExcludes>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>net.wasdev.wlp.maven.plugins</groupId>
                    <artifactId>liberty-maven-plugin</artifactId>
                    <configuration>
                        <assemblyArtifact>
                            <groupId>io.openliberty</groupId>
                            <artifactId>openliberty-runtime</artifactId>
                            <version>RELEASE</version>
                            <type>zip</type>
                        </assemblyArtifact>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>com.spotify</groupId>
                    <artifactId>dockerfile-maven-plugin</artifactId>
                    <version>${version.dockerfile-maven-plugin}</version>
                    <executions>
                        <execution>
                            <id>default</id>
                            <goals>
                                <goal>build</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <repository>${project.artifactId}</repository>
                        <tag>${project.version}</tag>
                    </configuration>
                </plugin>
                <!-- Plugin to run unit tests -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>${version.maven-surefire-plugin}</version>
                    <executions>
                        <execution>
                            <phase>test</phase>
                            <id>default-test</id>
                            <configuration>
                                <excludes>
                                    <exclude>**/it/**</exclude>
                                </excludes>
                                <reportsDirectory>
                                    ${project.build.directory}/test-reports/unit
                                </reportsDirectory>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
                <!-- Plugin to run functional tests -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-failsafe-plugin</artifactId>
                    <version>${version.maven-failsafe-plugin}</version>
                    <executions>
                        <execution>
                            <phase>integration-test</phase>
                            <id>integration-test</id>
                            <goals>
                                <goal>integration-test</goal>
                            </goals>
                            <configuration>
                                <includes>
                                    <include>**/it/**</include>
                                </includes>
                                <systemPropertyVariables>
                                    <cluster.ip>${cluster.ip}</cluster.ip>
                                    <name.ingress.path>
                                        ${name.ingress.path}
                                    </name.ingress.path>
                                    <name.node.port>
                                        ${name.node.port}
                                    </name.node.port>
                                    <name.kube.service>
                                        ${name.kube.service}
                                    </name.kube.service>
                                    <ping.ingress.path>
                                        ${ping.ingress.path}
                                    </ping.ingress.path>
                                    <ping.node.port>${ping.node.port}</ping.node.port>
                                </systemPropertyVariables>
                            </configuration>
                        </execution>
                        <execution>
                            <id>verify-results</id>
                            <goals>
                                <goal>verify</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <summaryFile>
                            ${project.build.directory}/test-reports/it/failsafe-summary.xml
                        </summaryFile>
                        <reportsDirectory>
                            ${project.build.directory}/test-reports/it
                        </reportsDirectory>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>

    <modules>
        <module>name</module>
        <module>ping</module>
    </modules>
</project>

A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. Before you run the tests, wait for all pods to be in the ready state. The default properties defined in the pom.xml are:

cluster.ip
    IP or hostname for your cluster. The default value is appropriate when you are using Minikube.

name.kube.service
    Name of the Kubernetes Service wrapping the name pods, name-service by default.

name.node.port
    The NodePort of the Kubernetes Service name-service, 31000 by default.

ping.node.port
    The NodePort of the Kubernetes Service ping-service, 32000 by default.

Navigate back to the start directory.


Run the integration tests against a cluster running with a hostname of localhost:

mvn verify -Ddockerfile.skip=true -Dcluster.ip=localhost


Run the integration tests against a cluster running at the default Minikube IP address:

mvn verify -Ddockerfile.skip=true

You can also run the integration tests with a different cluster IP address:

mvn verify -Ddockerfile.skip=true -Dcluster.ip=

The dockerfile.skip parameter is set to true in order to skip building a new Docker image.

If the tests pass, you’ll see an output similar to the following for each service respectively:

 T E S T S
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
 T E S T S
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in

Results :

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

Tearing down the environment

When you no longer need your deployed microservices, you can delete all Kubernetes resources by running the kubectl delete command:

kubectl delete -f kubernetes.yaml


Nothing more needs to be done for Docker Desktop.


Perform the following steps to return your environment to a clean state.

  1. Point the Docker daemon back to your local machine:

    eval $(minikube docker-env -u)
  2. Stop your Minikube cluster:

    minikube stop
  3. Delete your cluster:

    minikube delete

Great work! You’re done!

You have just deployed two microservices running in Open Liberty to Kubernetes. You then scaled a microservice and ran integration tests against microservices that are running in a Kubernetes cluster.

Guide Attribution

Deploying microservices to Kubernetes by Open Liberty is licensed under CC BY-ND 4.0

