Deploying microservices to OpenShift using CodeReady Containers

Duration: 45 minutes

Explore how to deploy microservices to a local OpenShift cluster running with CodeReady Containers

What you’ll learn

You’ll learn how to deploy two microservices in Open Liberty containers to an OpenShift 4 cluster that is running locally on your computer by using CodeReady Containers. To learn how to deploy to an OpenShift 4 cluster by using operators, see the Deploying microservices to OpenShift by using Kubernetes Operators guide.

Different cloud-based solutions are available for running your Kubernetes workloads. With a cloud-based infrastructure, you can focus on developing your microservices without worrying about low-level infrastructure details for deployment. By using the cloud, you can easily scale and manage your microservices in a high-availability setup.

Kubernetes is an open source container orchestrator that automates many tasks that are involved in deploying, managing, and scaling containerized applications. To learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.

Red Hat CodeReady Containers (CRC) is a tool that you can use to quickly build and run a minimal OpenShift 4 cluster on your local computer. CRC simplifies setup and testing while providing all of the tools that are needed to develop container-based applications.

The two microservices that you’ll deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the pod name in the HTTP header, which makes pod replicas more distinguishable from each other. The inventory microservice adds the properties from the system microservice to the inventory. This process demonstrates how communication can be established between pods inside a cluster.

Additional prerequisites

Before you begin, the following tools need to be installed:

  • Docker: You need containerization software for building containers. Kubernetes supports various container types, but you’ll use Docker in this guide. For installation instructions, refer to the official Docker documentation.

  • CodeReady Containers: You need to install CRC to run OpenShift 4 locally on your computer. For installation instructions, refer to the official CodeReady Containers documentation.
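
After you install both tools, you can quickly confirm that they’re available from your command line by checking their versions. This is only a sanity check, and the exact output depends on the versions that you installed:

docker version
crc version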

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-openshift-codeready-containers.git
cd guide-openshift-codeready-containers

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Before you begin, make sure you have all the necessary prerequisites.

Starting CodeReady Containers

Setting up CodeReady Containers

Run the following command to set up your host machine for CodeReady Containers:

crc setup

Starting the virtual machine

Next, run the following command to start the CodeReady Containers virtual machine and OpenShift cluster:

crc start

When prompted, supply your user pull secret. The pull secret can be found on the page where you downloaded the latest release of CodeReady Containers.
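
If you prefer not to paste the pull secret interactively, you can point the crc start command at the downloaded pull secret file instead. The following sketch assumes that you saved the file as pull-secret.txt in your home directory:

crc start --pull-secret-file ~/pull-secret.txt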

If the cluster starts successfully, you can see output similar to the following example:

Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: jPvDv-jgRZB-qhYP4-Hmkmj

Log in as user:
  Username: developer
  Password: developer

Save this output as it might be required later in this guide.
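
If you misplace this output, you can display the cluster credentials again at any time by running the following command:

crc console --credentials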

Logging in to the cluster

To interact with the OpenShift cluster, you need to use the oc commands. CRC already includes the oc binary. To use the oc commands, run the following command to see instructions on how to configure your PATH:

crc oc-env

The resulting output differs based on your OS and environment, but you should get an output similar to the following examples.

On Mac and Linux:

export PATH="/Users/developer/.bin/oc:$PATH"
# Run this command to configure your shell session:
# eval $(crc oc-env)

Run the command that is specified in the output to configure your shell session:

eval $(crc oc-env)

On Windows:

SET PATH=C:\Users\developer\.crc\bin\oc;%PATH%
REM Run this command to configure your shell:
REM     @FOR /f "tokens=*" %i IN ('crc oc-env') DO @call %i

Run the command that is specified in the output to configure your shell session:

@FOR /f "tokens=*" %i IN ('crc oc-env') DO @call %i

Run the following command to log in and gain access to your OpenShift cluster:

oc login -u developer https://api.crc.testing:6443

If prompted, enter the developer password from the output of the crc start command that you ran in the Starting the virtual machine section. Next, create a new OpenShift project with the name my-project by running the following command:

oc new-project my-project
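
To confirm that you’re logged in as the developer user and that my-project is now your active project, you can optionally run the following commands:

oc whoami
oc project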

You’re now ready to build and deploy microservices.

Deploying microservices to OpenShift

In this section, you’ll learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on OpenShift. You’ll build and containerize the system and inventory microservices, push them to a container registry, and then deploy them to your Kubernetes cluster.

Building and containerizing the microservices

The first step of deploying to Kubernetes is to build your microservices and containerize them.

The starting Java project, which is located in the start directory, is a multi-module Maven project. It’s made up of the system and inventory microservices. Each microservice is located in its own directory: start/system for the system microservice and start/inventory for the inventory microservice. Each of these directories contains a Dockerfile, which is necessary for building the Docker images. See the Containerizing microservices guide if you’re unfamiliar with Dockerfiles.

If you’re familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file to build a Docker image. However, these projects are set up so that this process is automated as a part of a single Maven build.

Go to the start directory and build these microservices by running the following commands:

cd start
mvn package

Run the following command to download or update to the latest Open Liberty Docker image:

docker pull icr.io/appcafe/open-liberty:full-java11-openj9-ubi

Next, run the docker build commands to build container images for your application:

docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

During the build, you see various Docker messages that describe what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:

docker images

Verify that the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images are listed among them, for example:

REPOSITORY                    TAG
system                        1.0-SNAPSHOT
inventory                     1.0-SNAPSHOT
icr.io/appcafe/open-liberty   full-java11-openj9-ubi

If you don’t see the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images, check the Maven build log for any potential errors.

Pushing the images to OpenShift’s internal registry

In order to run the microservices on the cluster, you need to push the microservice images to a container image registry. You’ll use OpenShift Container Registry (OCR), which is the OpenShift integrated container image registry. After your images are pushed to the registry, you can use them in the pods that you create later in the guide.

Run the following command to authenticate your Docker client to the OCR:

oc registry login --skip-check

To use the OpenShift registry domain, add it to the list of insecure registries for your Docker Engine.

On macOS, click the Docker icon in your menu bar, select Preferences, and then click Docker Engine. If a JSON object already exists, add the following key/value pair to it:

"insecure-registries": ["default-route-openshift-image-registry.apps-crc.testing"]

If no JSON object exists, add the following object:

{
  "insecure-registries": ["default-route-openshift-image-registry.apps-crc.testing"]
}

Next, press the Apply & Restart button for the changes to take effect.

On Windows, click the Docker icon in your system tray, select Settings, and then click Docker Engine. If a JSON object already exists, add the following key/value pair to it:

"insecure-registries": ["default-route-openshift-image-registry.apps-crc.testing"]

If there’s no existing JSON object, instead add the following object:

{
  "insecure-registries": ["default-route-openshift-image-registry.apps-crc.testing"]
}

Next, press the Apply & Restart button for the changes to take effect.

On Linux, open the /etc/docker/daemon.json file, or create it if it doesn’t already exist. If the file already contains a JSON object, add the following key/value pair:

"insecure-registries": ["default-route-openshift-image-registry.apps-crc.testing"]

If the file doesn’t already contain a JSON object, instead add the following object:

{
  "insecure-registries": ["default-route-openshift-image-registry.apps-crc.testing"]
}

Next, restart Docker for the changes to take effect.

To learn more about testing insecure registries, check out the official Docker documentation. Remember to use a secure registry for production environments.
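
To confirm that Docker picked up the change after it restarts, you can check the daemon information. The following sketch uses grep, so it applies to Mac and Linux; the registry address should be listed under Insecure Registries:

docker info | grep -A 5 "Insecure Registries"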

You can store your Docker credentials in a custom external credential store, which is more secure than using a Docker configuration file. If you’re using a custom credential store for securing your registry credentials, or if you’re unsure where your credentials are stored, use the following command:

echo $(oc whoami -t) | docker login -u developer --password-stdin $(oc registry info)

Because the Windows command prompt doesn’t support the command substitution that is displayed for Mac and Linux, run the following commands:

oc whoami
oc whoami -t
oc registry info

Replace the square brackets in the following docker login command with the results from the previous commands:

docker login -u [oc whoami] -p [oc whoami -t] [oc registry info]

This command authenticates your credentials against the internal registry so that you’re able to push and pull images.

You can view the registry address by running the following command:

oc registry info

The output is similar to the following:

default-route-openshift-image-registry.apps-crc.testing

Ensure that you’re logged in to OpenShift and the registry. On Mac and Linux, run the following commands to tag your applications:

docker tag system:1.0-SNAPSHOT $(oc registry info)/$(oc project -q)/system:1.0-SNAPSHOT
docker tag inventory:1.0-SNAPSHOT $(oc registry info)/$(oc project -q)/inventory:1.0-SNAPSHOT

On Windows, run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following docker tag commands with the results from the previous commands:

docker tag system:1.0-SNAPSHOT [oc registry info]/[oc project -q]/system:1.0-SNAPSHOT
docker tag inventory:1.0-SNAPSHOT [oc registry info]/[oc project -q]/inventory:1.0-SNAPSHOT

Finally, push your images to the registry. On Mac and Linux, run the following commands:

docker push $(oc registry info)/$(oc project -q)/system:1.0-SNAPSHOT
docker push $(oc registry info)/$(oc project -q)/inventory:1.0-SNAPSHOT

On Windows, run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following docker push commands with the results from the previous commands:

docker push [oc registry info]/[oc project -q]/system:1.0-SNAPSHOT
docker push [oc registry info]/[oc project -q]/inventory:1.0-SNAPSHOT

After you push the images, run the following command to list the images that you pushed to the internal OCR:

oc get imagestream

Verify that the system and inventory images are listed among them, for example:

NAME        IMAGE REPOSITORY                                                                TAGS           UPDATED
inventory   default-route-openshift-image-registry.apps-crc.testing/my-project/inventory   1.0-SNAPSHOT   3 seconds ago
system      default-route-openshift-image-registry.apps-crc.testing/my-project/system      1.0-SNAPSHOT   17 seconds ago

Deploying the microservices

Now that your container images are built, deploy them by using a Kubernetes object configuration file.

Kubernetes objects can be configured in a YAML file that contains a description of all your deployments, services, or any other objects that you want to deploy. All objects can also be deleted from the cluster by using the same YAML file that you used to deploy them. If you’re interested in learning more about using and configuring Kubernetes clusters, check out the Deploying microservices to Kubernetes guide.

kubernetes.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: image-registry.openshift-image-registry.svc:5000/my-project/system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: image-registry.openshift-image-registry.svc:5000/my-project/inventory:1.0-SNAPSHOT
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 8080
---
apiVersion: v1
kind: Route
metadata:
  name: system-route
spec:
  to:
    kind: Service
    name: system-service
---
apiVersion: v1
kind: Route
metadata:
  name: inventory-route
spec:
  to:
    kind: Service
    name: inventory-service

Create the kubernetes.yaml file in the start directory.

In this file, the image value is the name and tag of the container image to use. The address refers to the OpenShift internal registry by its internal service hostname, image-registry.openshift-image-registry.svc:5000, followed by your project name and the image name, so it points to the same images that you pushed to the OCR in the previous section.
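
If you want to double-check the internal address for one of your images, you can read it from the corresponding image stream. The following sketch queries the system image stream; the repository that it prints should match the image value in the kubernetes.yaml file:

oc get imagestream system -o jsonpath='{.status.dockerImageRepository}'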

Run the following command to deploy the objects that are defined in the kubernetes.yaml file:

oc apply -f kubernetes.yaml

You see an output similar to the following example:

deployment.apps/system-deployment created
deployment.apps/inventory-deployment created
service/system-service created
service/inventory-service created
route.route.openshift.io/system-route created
route.route.openshift.io/inventory-route created

When the apps are deployed, run the following command to check the status of your pods:

oc get pods

If all the pods are healthy and running, you see an output similar to the following example:

NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          15s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          15s
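
If a pod is stuck in a state other than Running, you can inspect it before you continue. The following commands are a starting point; replace the pod name with one from your own oc get pods output:

oc describe pod system-deployment-6bd97d9bf6-4ccds
oc logs system-deployment-6bd97d9bf6-4ccds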

Making requests to the microservices

Routes are used to access the services and the application. A route in OpenShift exposes a service at a hostname such as www.your-web-app.com so external users can access the application.

kubernetes.yaml

apiVersion: v1
kind: Route
metadata:
  name: system-route
spec:
  to:
    kind: Service
    name: system-service
---
apiVersion: v1
kind: Route
metadata:
  name: inventory-route
spec:
  to:
    kind: Service
    name: inventory-service

Both the system and inventory routes are configured in the kubernetes.yaml file, and running the oc apply -f kubernetes.yaml command exposed both services.

Your microservices can now be accessed through the hostnames that you can find by running the following command:

oc get routes

The routes can also be found in the web console. The web console URL is similar to console-openshift-console.apps-crc.testing and was given as part of the output of the crc start command at the start of the guide. When you access the web console, log in with the administrator credentials. After logging in, you can view the routes by navigating to the Networking > Routes page. Hostnames appear in the Location column in the inventory-route-my-project.apps-crc.testing format. Ensure that you’re in your own project, not the default project, which is shown in the Project: field of the console.
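
If you don’t want to type the web console URL by hand, CRC can open it for you and can also print the login credentials again:

crc console
crc console --credentials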

Enter the following URLs into your web browser to access your microservices. Substitute the hostnames that you obtained from the oc get routes command for the system and inventory services:

  • http://[system-hostname]/system/properties/

  • http://[inventory-hostname]/inventory/systems

The first URL returns the system properties of the container JVM in JSON format. The second URL returns an empty list, which is expected because no system properties are stored in the inventory yet.

Point your browser to the http://[inventory-hostname]/inventory/systems/[system-hostname] URL. When you go to this URL, the system properties that are taken from the system service are automatically stored in the inventory. Revisit the http://[inventory-hostname]/inventory/systems URL and you see a new entry.
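
You can make the same requests from the command line with curl if you prefer. The hostnames in the following sketch are placeholders; substitute the values from the oc get routes command. The -i option on the first request also prints the response headers, which include the pod name that the system microservice adds:

curl -i http://[system-hostname]/system/properties/
curl http://[inventory-hostname]/inventory/systems
curl http://[inventory-hostname]/inventory/systems/[system-hostname]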

Testing the microservices

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>io.openliberty.guides</groupId>
    <artifactId>guide-openshift-codeready-containers-inventory</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <!-- OpenLiberty runtime -->
        <http.port>8080</http.port>
        <https.port>8443</https.port>
        <!-- Default test properties -->
        <system.ip>localhost:9080</system.ip>
        <inventory.ip>localhost:8080</inventory.ip>
    </properties>

    <dependencies>

        <!-- Provided dependencies -->
        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>9.1.0</version>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>5.0</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>

        <dependency>
            <groupId>org.eclipse.microprofile.config</groupId>
            <artifactId>microprofile-config-api</artifactId>
            <version>2.0</version>
        </dependency>

        <!-- For tests -->
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-client</artifactId>
            <version>6.0.0.Final</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-json-binding-provider</artifactId>
            <version>6.0.0.Final</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>jakarta.json</artifactId>
            <version>2.0.1</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.6.2</version>
            <scope>test</scope>
        </dependency>

    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
        <plugins>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>3.2.3</version>
                <configuration>
                    <packagingExcludes>pom.xml</packagingExcludes>
                </configuration>
            </plugin>

            <!-- Liberty plugin -->
            <plugin>
                <groupId>io.openliberty.tools</groupId>
                <artifactId>liberty-maven-plugin</artifactId>
                <version>3.7.1</version>
            </plugin>

            <!-- For unit tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
            </plugin>

            <!-- For integration tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.22.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>integration-test</goal>
                            <goal>verify</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

A few tests are included for you to test the basic functions of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further. The default properties that are defined in the pom.xml file are:

Property       Description
system.ip      IP or hostname of the system-service Kubernetes Service
inventory.ip   IP or hostname of the inventory-service Kubernetes Service

Use the following command to run the integration tests against your cluster:

mvn verify \
-Dsystem.ip=system-route-my-project.apps-crc.testing \
-Dinventory.ip=inventory-route-my-project.apps-crc.testing

  • Replace the system.ip parameter with the appropriate hostname to access your system microservice.

  • Replace the inventory.ip parameter with the appropriate hostname to access your inventory microservice.
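
If you prefer not to copy the hostnames by hand, you can substitute them directly from the route definitions. The following sketch assumes a shell that supports command substitution and uses the route names from the kubernetes.yaml file:

mvn verify \
-Dsystem.ip=$(oc get route system-route -o jsonpath='{.spec.host}') \
-Dinventory.ip=$(oc get route inventory-route -o jsonpath='{.spec.host}')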

If the tests pass, you see an output for each service similar to the following examples:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT

Results:

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT

Results:

Tests run: 4, Failures: 0, Errors: 0, Skipped: 0

Tearing down the environment

When you no longer need your deployed microservices, you can delete the Kubernetes deployments, services, and routes by running the following command:

oc delete -f kubernetes.yaml

To delete the pushed images, run the following commands:

oc delete imagestream/inventory
oc delete imagestream/system

Next, you can delete the project by running the following command:

oc delete project my-project

Finally, you can stop and delete the CRC virtual machine by running the following commands:

crc stop
crc delete
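
If you also want to undo the host configuration changes that the crc setup command made, you can optionally run the following command:

crc cleanup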

Great work! You’re done!

You just deployed two microservices running in Open Liberty to OpenShift using CodeReady Containers. You also learned how to use oc to deploy your microservices on a Kubernetes cluster.

Guide Attribution

Deploying microservices to OpenShift using CodeReady Containers by Open Liberty is licensed under CC BY-ND 4.0
