Contents
- What you’ll learn
- Additional prerequisites
- Getting started
- Installing the Operator
- Adding a private Docker credential
- Deploying the system microservice to OpenShift
- Accessing the microservice
- Specifying optional parameters
- Tearing down the environment
- Great work! You’re done!
- Related Links
- Guide Attribution
Deploying a microservice to OpenShift 4 using Open Liberty Operator
Explore how to deploy a microservice to Red Hat OpenShift 4 using Open Liberty Operator.
What you’ll learn
You will learn how to deploy a cloud-native application with a microservice to Red Hat OpenShift 4 by using the Open Liberty Operator.
OpenShift is a Kubernetes-based platform with added functions. It streamlines the DevOps process by providing an intuitive development pipeline. It also provides integration with multiple tools to make the deployment and management of cloud applications easier. You can learn more about Kubernetes by checking out the Deploying microservices to Kubernetes guide.
Kubernetes operators provide an easy way to automate the management and updating of applications by abstracting away some of the details of cloud application management. To learn more about operators, check out this Operators tech topic article.
The application in this guide consists of one microservice, system. The system microservice returns the JVM system properties of its host.
You will deploy the system microservice by using the Open Liberty Operator. The Open Liberty Operator provides a method of packaging, deploying, and managing Open Liberty applications on Kubernetes-based clusters. The Open Liberty Operator watches Open Liberty resources and creates various Kubernetes resources, including Deployments, Services, and Routes, depending on the configurations. The Operator then continuously compares the current state of the resources with the desired state of the application deployment and reconciles them when necessary.
Additional prerequisites
Before you can deploy your microservice, you must gain access to a cluster on OpenShift and have an OpenShift client installed. For client installation instructions, refer to the official OpenShift Online documentation.
There are various OpenShift offerings. You can gain access to an OpenShift cluster that is hosted on IBM Cloud, or check out other offerings from OpenShift.
After you get access to a cluster, make sure you are logged in to the cluster as a cluster administrator by running the following command:
oc version
Look for output similar to the following example:
Client Version: 4.3.13
Server Version: 4.3.13
Kubernetes Version: v1.16.2
Before you install any resources, you need to create a project on your OpenShift cluster. Create a project named guide by running the following command:
oc new-project guide
Ensure that you are working within the project guide by running the following command:
oc projects
Look for an asterisk (*) next to the guide project in the list of projects to confirm that you are in the guide project, as shown in the following example:
You have access to the following projects and can switch between them with 'oc project <projectname>':
default
* guide
Getting started
The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:
git clone https://github.com/openliberty/guide-openliberty-operator-openshift.git
cd guide-openliberty-operator-openshift
The start directory contains the starting project that you will build upon.
The finish directory contains the finished project that you will build.
Before you begin, make sure you have all the necessary prerequisites.
Installing the Operator
When you obtained your OpenShift cluster, you received login information for the OpenShift web console. The web console provides an interface to interact with your OpenShift cluster through your web browser.
To install the Operator, navigate to the web console and select Operators > OperatorHub from the sidebar menu. Search for and install the Open Liberty Operator.
Make sure you install the Operator into the guide namespace.
Run the following command to view all the supported API resources that are available through the Open Liberty Operator:
oc api-resources --api-group=apps.openliberty.io
Look for the following output, which shows the custom resource definitions (CRDs) that can be used by the Open Liberty Operator:
NAME                      SHORTNAMES         APIVERSION               NAMESPACED   KIND
openlibertyapplications   olapp,olapps       apps.openliberty.io/v1   true         OpenLibertyApplication
openlibertydumps          oldump,oldumps     apps.openliberty.io/v1   true         OpenLibertyDump
openlibertytraces         oltrace,oltraces   apps.openliberty.io/v1   true         OpenLibertyTrace
Each CRD defines a kind of object that can be used, which is specified in the previous example by the KIND value. The SHORTNAMES value specifies alternative names that you can substitute in the configuration to refer to an object kind. For example, you can refer to the OpenLibertyApplication object kind by one of its specified short names, such as olapps.
The openlibertyapplications CRD defines a set of configurations for deploying an Open Liberty-based application, including the application image, number of instances, and storage settings. The Open Liberty Operator watches for changes to instances of the OpenLibertyApplication object kind and creates Kubernetes resources that are based on the configuration that is defined in the CRD.
You can also confirm the installation of the operator from the web console. Navigate to the OperatorHub. You can filter the list of categories to see only installed operators.
Adding a private Docker credential
Docker limits container image pull requests for free DockerHub subscriptions. For more information, see Understanding Docker Hub Rate Limiting. If you have a Docker account with a Pro or Team subscription, you can add a private credential to avoid any errors as a result of the limits.
To add a private credential, navigate to the OpenShift web console and select Workloads > Secrets from the sidebar menu. Ensure that the selected project is openshift-config. Search for pull-secret and click the three vertical dots menu, then select Edit Secret > Add credentials. Enter docker.io in the Registry Server Address field, your Docker user ID in the Username field, and your Docker password in the Password field. Click Save to save the credential.
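If you prefer the CLI to the web console, the credential can also be set on the cluster-wide pull secret with the oc set data command. The following is a sketch only: the user name and password are placeholder values, and note that oc set data replaces the existing .dockerconfigjson, so in practice you would first extract the current secret and merge the new entry into it.

```shell
# Sketch: build a docker.io credential and set it on the global pull secret.
# DOCKER_USER and DOCKER_PASS are placeholder values.
DOCKER_USER=myuser
DOCKER_PASS=mypassword

# Docker registries expect a base64-encoded "user:password" auth token
AUTH=$(printf '%s:%s' "$DOCKER_USER" "$DOCKER_PASS" | base64)
printf '{"auths":{"docker.io":{"auth":"%s"}}}' "$AUTH" > docker-auth.json

# Apply to the cluster only if the oc client is available; ignore failures
# when not logged in to a cluster
if command -v oc >/dev/null 2>&1; then
  oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=docker-auth.json || true
fi
```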
Deploying the system microservice to OpenShift
To deploy the system microservice, you must first package the microservice, then create and run an OpenShift build to produce a runnable container image of the packaged microservice.
Packaging the microservice
Ensure that you are in the start directory and run the following command to package the system microservice:
mvn clean package
Building and pushing the image
Create a build template to configure how to build your container image.
Create the build.yaml template file in the start directory.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: "build-template"
  annotations:
    description: "Build template for the system service"
    tags: "build"
objects:
  - apiVersion: v1
    kind: ImageStream
    metadata:
      name: "system-imagestream"
      labels:
        name: "system"
  - apiVersion: v1
    kind: BuildConfig
    metadata:
      name: "system-buildconfig"
      labels:
        name: "system"
    spec:
      source:
        type: Binary
      strategy:
        type: Docker
      output:
        to:
          kind: ImageStreamTag
          name: "system-imagestream:1.0-SNAPSHOT"
The build.yaml template includes two objects. The ImageStream object provides an abstraction from the image in the image registry, which allows you to reference and tag the image. The image registry is the integrated internal OpenShift Container Registry.

The BuildConfig object defines a single build definition and any triggers that kickstart the build. The source spec defines the build input. In this case, the build inputs are your binary (local) files, which are streamed to OpenShift for the build. The uploaded files need to include the packaged WAR application binaries, which is why you needed to run the Maven commands. The template specifies a Docker strategy build, which invokes the docker build command and creates a runnable container image of the microservice from the build input.
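A Docker strategy build expects a Dockerfile at the root of the uploaded build input. The guide's system directory already provides one; as a rough, hypothetical illustration, a Dockerfile for an Open Liberty application looks something like the following sketch (the exact base image tag and file paths in the actual project may differ):

```dockerfile
# Hypothetical sketch of an Open Liberty Dockerfile; the actual file in the
# system directory may use a different base image tag or layout.
FROM icr.io/appcafe/open-liberty:full-java11-openj9-ubi

# Copy the server configuration and the packaged WAR produced by Maven
COPY --chown=1001:0 src/main/liberty/config/server.xml /config/
COPY --chown=1001:0 target/*.war /config/apps/

# Apply the configuration inside the image
RUN configure.sh
```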
Run the following command to create the objects for the system microservice:
oc process -f build.yaml | oc create -f -
Next, run the following command to view the newly created ImageStream objects and the build configurations for the microservice:
oc get all -l name=system
Look for the following similar resources:
NAME                                                 TYPE     FROM     LATEST
buildconfig.build.openshift.io/system-buildconfig    Docker   Binary   0

NAME                                                 IMAGE REPOSITORY                                                                    TAGS   UPDATED
imagestream.image.openshift.io/system-imagestream    default-route-openshift-image-registry.apps-crc.testing/guide/system-imagestream
Ensure that you are in the start directory and trigger the build by running the following command:
oc start-build system-buildconfig --from-dir=system/.
The local system directory is uploaded to OpenShift to be built into the Docker image. Run the following command to list the build and track its status:
oc get builds
Look for the output that is similar to the following example:
NAME                   TYPE     FROM             STATUS    STARTED
system-buildconfig-1   Docker   Binary@f24cb58   Running   45 seconds ago
You might need to wait some time until the build is complete. To check whether the build is complete, run the following command to view the build log until the Push successful message appears:
oc logs build/system-buildconfig-1
Checking the image
During the build process, the image associated with the ImageStream object that you created earlier was pushed to the image registry and tagged. Run the following command to view the newly updated ImageStream object:
oc get imagestreams
Run the following command to get more details on the newly pushed image within the stream:
oc describe imagestream/system-imagestream
The following example shows part of the system-imagestream output:
Name: system-imagestream
Namespace: guide
Created: 2 minutes ago
Labels: name=system
Annotations: <none>
Image Repository: default-route-openshift-image-registry.apps-crc.testing/guide/system-imagestream
Image Lookup: local=false
Unique Images: 1
Tags: 1
...
Now you’re ready to deploy the image.
Deploying the image
You can configure the specifics of the Open Liberty Operator-controlled deployment with a YAML configuration file.
Create the deploy.yaml configuration file in the start directory.
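As a reference, a minimal version of the file that matches the parameters described in this section might look like the following sketch (the version used later in this guide adds health probes and a second logging environment variable on top of it):

```yaml
# Minimal OpenLibertyApplication resource for the system microservice,
# reflecting the parameters described in this section.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: system
  labels:
    name: system
spec:
  # Follows the <project-name>/<image-stream-name>[:tag] format
  applicationImage: guide/system-imagestream:1.0-SNAPSHOT
  pullPolicy: Always
  service:
    port: 9443
  # Creates a route so the microservice is reachable from outside the cluster
  expose: true
  env:
    - name: WLP_LOGGING_MESSAGE_FORMAT
      value: "json"
```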
The deploy.yaml file is configured to deploy one OpenLibertyApplication resource, system, which is controlled by the Open Liberty Operator.

The applicationImage parameter defines what container image is deployed as part of the OpenLibertyApplication CRD. This parameter follows the <project-name>/<image-stream-name>[:tag] format. The parameter can also point to an image hosted on an external registry, such as Docker Hub. The system microservice is configured to use the image created from the earlier build.

One of the benefits of using ImageStream objects is that the operator redeploys the application when it detects that a new image is pushed. The env parameter is used to specify environment variables that are passed to the container at runtime.
Additionally, the microservice includes the service and expose parameters. The service.port parameter specifies which port is exposed by the container, allowing the microservice to be accessed from outside the container. To access the microservice from outside of the cluster, it must be exposed by setting the expose parameter to true. After you expose the microservice, the Operator automatically creates and configures routes for external access to your microservice.
Run the following command to deploy the system microservice with the previously explained configuration:
oc apply -f deploy.yaml
Next, run the following command to view your newly created OpenLibertyApplications resources:
oc get OpenLibertyApplications
You can also replace OpenLibertyApplications with the shortname olapps.
Look for output that is similar to the following example:
NAME     IMAGE                                   EXPOSED   RECONCILED   AGE
system   guide/system-imagestream:1.0-SNAPSHOT   true      True         10s
A RECONCILED state value of True indicates that the operator was able to successfully process the OpenLibertyApplications instances. Run the following command to view details of your microservice:
oc describe olapps/system
This example shows part of the olapps/system output:
Name: system
Namespace: guide
Labels: app.kubernetes.io/part-of=system
name=system
Annotations: <none>
API Version: apps.openliberty.io/v1
Kind: OpenLibertyApplication
...
Accessing the microservice
To access the exposed system microservice, run the following command and make note of the HOST value:
oc get routes
Look for an output that is similar to the following example:
NAME     HOST/PORT                                                      PATH   SERVICES   PORT       TERMINATION   WILDCARD
system   system-guide.2886795274-80-kota02.environments.katacoda.com          system     9443-tcp                 None
Visit the microservice by going to the following URL:
https://[HOST]/system/properties
Make sure to substitute the appropriate [HOST] value. For example, using the output from the previous command, system-guide.2886795274-80-kota02.environments.katacoda.com is the HOST. The following example shows this value substituted for HOST in the URL: https://system-guide.2886795274-80-kota02.environments.katacoda.com/system/properties.
When you’re done trying out the microservice, run the following command to stop the microservice:
oc delete -f deploy.yaml
Specifying optional parameters
You can also use the Open Liberty Operator to implement optional parameters in your application deployment by specifying the associated CRDs in your deploy.yaml file. For example, you can configure the Kubernetes liveness, readiness, and startup probes. Visit the Open Liberty Operator user guide to find all of the supported optional CRDs.

To configure the Kubernetes liveness, readiness, and startup probes by using the Open Liberty Operator, specify the probes in your deploy.yaml file. The startup probe verifies whether the deployed application is fully initialized before the liveness probe takes over. Then, the liveness probe determines whether the application is running and the readiness probe determines whether the application is ready to process requests. For more information about application health checks, see the Checking the health of microservices on Kubernetes guide.
Replace the deploy.yaml configuration file.
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: system
  labels:
    name: system
spec:
  applicationImage: guide/system-imagestream:1.0-SNAPSHOT
  pullPolicy: Always
  service:
    port: 9443
  expose: true
  env:
    - name: WLP_LOGGING_MESSAGE_FORMAT
      value: "json"
    - name: WLP_LOGGING_MESSAGE_SOURCE
      value: "message,trace,accessLog,ffdc,audit"
  probes:
    startup:
      failureThreshold: 12
      httpGet:
        path: /health/started
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 2
      timeoutSeconds: 10
    liveness:
      failureThreshold: 12
      httpGet:
        path: /health/live
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 2
      timeoutSeconds: 10
    readiness:
      failureThreshold: 12
      httpGet:
        path: /health/ready
        port: 9443
        scheme: HTTPS
      initialDelaySeconds: 30
      periodSeconds: 2
      timeoutSeconds: 10
The /health/started, /health/live, and /health/ready health check endpoints are already created for you.
Run the following command to deploy the system microservice with the new configuration:
oc apply -f deploy.yaml
Run the following command to check the status of the pods:
oc describe pods
Look for the following output to confirm that the health checks are successfully applied and working:
Liveness: http-get https://:9443/health/live delay=30s timeout=10s period=2s #success=1 #failure=12
Readiness: http-get https://:9443/health/ready delay=30s timeout=10s period=2s #success=1 #failure=12
Startup: http-get https://:9443/health/started delay=30s timeout=10s period=2s #success=1 #failure=12
You can revisit the microservice at https://[HOST]/system/properties as in the previous section.
Tearing down the environment
When you no longer need your project, switch to another project and delete the project guide by running the following command:
oc delete project guide
This command deletes all the applications and resources.
Great work! You’re done!
You just deployed a microservice running in Open Liberty to OpenShift 4 and configured the Kubernetes liveness, readiness and startup probes by using the Open Liberty Operator.
Related Links
Guide Attribution
Deploying a microservice to OpenShift 4 using Open Liberty Operator by Open Liberty is licensed under CC BY-ND 4.0