Contents
- What you’ll learn
- Additional prerequisites
- Getting started
- Starting Minishift
- Building the system microservice
- Containerizing the system microservice
- Deploying the system microservice
- Deploying the inventory microservice
- Testing the microservices
- Tearing down the environment
- Great work! You’re done!
- Guide Attribution
Deploying microservices to an OKD cluster using Minishift
Explore how to use Minishift to deploy microservices to an Origin Community Distribution of Kubernetes (OKD) cluster.
What you’ll learn
You will learn how to deploy two simple microservices with Open Liberty to an Origin Community Distribution of Kubernetes (OKD) cluster that is running in Minishift.
What is Origin Community Distribution of Kubernetes (OKD)?
OKD, formerly known as OpenShift Origin, is the upstream open source project for all OpenShift products. OKD is a Kubernetes-based platform with added functionality. OKD streamlines the DevOps process by providing an intuitive development pipeline. It also provides integration with multiple tools to make the deployment and management of cloud applications easier.
To learn more about OKD, check out the official OKD page. To learn more about the different platforms that Red Hat OpenShift offers, check out the official OpenShift documentation. If you would like to learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.
Using Maven, you will build the system microservice that collects basic system properties from your system and the inventory microservice that interacts with the system microservice. Then, you will learn how to deploy both to the cluster and establish communication between them.
You will use Minishift, a tool that runs OKD on a local system. With Minishift, developers can quickly deploy an OKD cluster for application development.
Minishift is based on OKD 3.11. To run OKD 4.1 or newer on your local system, you can use the CodeReady Containers tool instead. To learn how to use CodeReady Containers, check out the Deploying microservices to OpenShift using CodeReady Containers guide.
Additional prerequisites
The following tools need to be installed:
- Minishift: With Minishift, you can try OKD by running a VM with a single-node cluster. You can use Minishift with any OS, making it a convenient and flexible tool for testing and development. For installation instructions, refer to the official OKD Minishift documentation.

To verify that Minishift is installed correctly, run the following command:

minishift version

The output is similar to:

minishift v1.34.1+c2ff9cb
- Docker: Docker is containerization software for building the containers that you will eventually deploy onto the OKD cluster. For installation instructions, refer to the official Docker documentation.

To verify that Docker is installed correctly, run the following command:

docker version

The output is similar to:

Client: Docker Engine - Community
 Version: 19.03.5
Getting started
The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:
git clone https://github.com/openliberty/guide-okd.git
cd guide-okd
The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.
Before you begin, make sure you have all the necessary prerequisites.
Starting Minishift
Deploying the cluster
Run the following command to start Minishift and create the OKD cluster:
minishift start
If the cluster started successfully, you see the following output:
Server Information ...
OpenShift server started.

The server is accessible via web console at:
    https://192.168.99.103:8443/console
Logging in to the cluster
To interact with the OpenShift cluster, you need to use the oc command. Minishift already includes the oc binary. To use the oc commands, run the following command to configure your PATH to include the binary:
minishift oc-env
The resulting output differs based on your OS and environment, but you get an output similar to the following:

LINUX:

export PATH="/root/.minishift/cache/oc/v3.11.0/linux:$PATH"
# Run this command to configure your command-line session:
# eval $(minishift oc-env)

Run the appropriate command to configure your environment:

eval $(minishift oc-env)

MAC:

export PATH="/Users/[email protected]/.minishift/cache/oc/v3.11.0/darwin:$PATH"
# Run this command to configure your command-line session:
# eval $(minishift oc-env)

Run the appropriate command to configure your environment:

eval $(minishift oc-env)

WINDOWS:

SET PATH=C:\Users\guides-bot\.minishift\cache\oc\v3.11.0\windows;%PATH%
REM Run this command to configure your command-line session:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

Run the appropriate command to configure your environment:

@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
You can run through the development cycle by using OpenShift’s web console through the URL provided in the output of the minishift start command. You can also run the following command to open the web console:
minishift console
You can log in with the following credentials:
User: developer
Password: [any value]

The web console provides a GUI alternative to the CLI tools that you can explore on your own. This guide continues with the CLI tools.
You can confirm your credentials by running the oc whoami command. You get developer as your output.
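Minishift typically logs you in automatically when the cluster starts. If your session is not logged in as developer, you can log in from the CLI. The server URL in this example is the one from the earlier minishift start output; use the URL that your own output shows:

oc login -u developer -p developer https://192.168.99.103:8443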
Next, create a new OpenShift project by running the following command:
oc new-project [project-name]
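For example, to create a project named my-project, which matches the project name that appears in the sample output later in this guide, run:

oc new-project my-project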
You are now ready to build and deploy a microservice.
Building the system microservice
Dockerfile
FROM openliberty/open-liberty:kernel-java8-openj9-ubi

ARG VERSION=1.0
ARG REVISION=SNAPSHOT

LABEL \
  org.opencontainers.image.authors="Your Name" \
  org.opencontainers.image.vendor="IBM" \
  org.opencontainers.image.url="local" \
  org.opencontainers.image.source="https://github.com/OpenLiberty/guide-okd" \
  org.opencontainers.image.version="$VERSION" \
  org.opencontainers.image.revision="$REVISION" \
  vendor="Open Liberty" \
  name="system" \
  version="$VERSION-$REVISION" \
  summary="The system microservice from the OKD guide" \
  description="This image contains the system microservice running with the Open Liberty runtime."

COPY src/main/liberty/config /config/
COPY target/*.war /config/apps

RUN configure.sh
A simple microservice named system will be packaged, containerized, and deployed onto the OKD cluster. The system microservice collects the JVM properties of the host machine.

Navigate to the start directory. The source code of the system and inventory microservices is located in the system and inventory directories. Focus on the system microservice first; you will learn about the inventory microservice later.

In the start directory, run the following command to package the system microservice:
mvn -pl system package
The mvn package command compiles, verifies, and builds the project. The resulting compiled code is packaged into a war web archive that can be found under the system/target directory. The archive contains the application that is needed to run the microservice on an Open Liberty server, and it is now ready to be injected into a Docker container for deployment.
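As an optional sanity check before you build the image, you can confirm that the archive exists:

ls system/target/*.war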
Containerizing the system microservice
Reusing the Docker daemon
To simplify the local deployment process, you can reuse the built-in Minishift Docker daemon. Reusing the Minishift Docker daemon allows you to use the internal Docker registry, so you don’t have to build a Docker registry on your machine. To reuse the Docker daemon, run the following command to point your command-line session to Minishift’s daemon:
minishift docker-env
The result of the command is a list of bash environment variable exports that configure your environment to reuse the Docker daemon inside the single Minishift VM instance. The commands differ based on your OS and environment, but you get an output similar to the following example:
MAC and LINUX (the exact paths differ by environment):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.103:2376"
export DOCKER_CERT_PATH="/Users/[email protected]/.minishift/certs"
# Run this command to configure your command-line session:
# eval $(minishift docker-env)

Run the eval command to configure your environment:

eval $(minishift docker-env)

WINDOWS:

SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://9.26.69.218:2376
SET DOCKER_CERT_PATH=C:\Users\maihameed\.minishift\certs
REM Run this command to configure your command-line session:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

Run the given @FOR command to configure your environment:

@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i
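To confirm that your session now points at the Minishift daemon, you can list the running containers. Seeing the cluster's own OpenShift containers, rather than your local ones, indicates that the reuse worked. This is an optional check, not a required step:

docker ps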
Building the Docker image
Run the following command to download or update to the latest Open Liberty Docker image:
docker pull openliberty/open-liberty:kernel-java8-openj9-ubi
Now that the environment is set up, ensure that you are in the start directory and run the following command to build the system Docker image:
docker build -t system system/
The command builds an image named system from the Dockerfile provided in the system directory.
To verify that the images are built, run the following command to list all local Docker images:
docker images
Your system image should appear in the list of all Docker images:

REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
system       latest   e8a8393e9364   2 minutes ago   399MB
Accessing the internal registry
To run the microservice on the OKD cluster, you need to push the microservice image into a container image registry. You will use the OpenShift integrated container image registry called OpenShift Container Registry (OCR). First, you must authenticate your Docker client to your OCR. Start by running the login command:
MAC and LINUX: Run the following command:

echo $(oc whoami -t) | docker login -u developer --password-stdin $(oc registry info)

WINDOWS: Because the Windows command prompt doesn’t support the command substitution that is displayed for Mac and Linux, run the following commands:

oc whoami
oc whoami -t
oc registry info

Replace the square brackets in the following docker login command with the results from the previous commands:

docker login -u [oc whoami] -p [oc whoami -t] [oc registry info]
Now you must tag and push your system microservice to the internal registry so that it is accessible for deployment.

MAC and LINUX: Run the following command to tag your microservice:

docker tag system $(oc registry info)/$(oc project -q)/system

WINDOWS: Run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following docker tag command with the results from the previous commands:

docker tag system [oc registry info]/[oc project -q]/system
Your newly tagged image should appear in the list of all Docker images:
REPOSITORY                          TAG      IMAGE ID       CREATED         SIZE
system                              latest   e8a8393e9364   2 minutes ago   399MB
172.30.1.1:5000/my-project/system   latest   e8a8393e9364   2 minutes ago   399MB
Now push your newly tagged image to the internal registry.

MAC and LINUX: Run the following command:

docker push $(oc registry info)/$(oc project -q)/system

WINDOWS: Run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following docker push command with the results from the previous commands:

docker push [oc registry info]/[oc project -q]/system
The microservice is now ready for deployment.
Deploying the system microservice
Now that the system Docker image is built, deploy it by using a resource configuration file. Because OKD is built on top of Kubernetes, it supports the same concepts and deployment strategies. The OpenShift oc CLI tool supports most of the same commands as the Kubernetes kubectl tool. To learn more about Kubernetes and resource configuration files, check out the Deploying microservices to Kubernetes guide.

The provided deploy.yaml configuration file outlines a deployment resource that creates and deploys a container named system-container. This container runs the Docker-formatted image provided in the image field. The image field should point to your newly pushed image.
Run the following command to view the image stream:
oc get imagestream
You should find your newly pushed image:
NAME     DOCKER REPO                         TAGS     UPDATED
system   172.30.1.1:5000/my-project/system   latest   5 minutes ago
The OpenShift image stream displays all the Docker-formatted container images that are pushed to the internal registry. You can configure builds and deployments to trigger when an image is updated.
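To inspect an image stream in more detail, for example to see its tags and the images they point to, you can describe it. This is an optional check:

oc describe imagestream system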
Update the deploy.yaml file in the start directory.

The system image field specifies the name and tag of the container image that you want to use for the system container. Update the value of the system image field to specify the image location found in the DOCKER REPO column from the output of the following command:

oc get imagestream

After you update the value of the system image field, run the following command to apply the configuration file and create your OpenShift resource:
oc apply -f deploy.yaml
You get an output similar to the following example:
deployment.apps/system-deployment created
Run the following command to view your pods:
oc get pods
Ensure that your system-deployment pod is Running:

NAME                                 READY   STATUS    RESTARTS   AGE
system-deployment-768f95cf8f-fnjjj   1/1     Running   0          5m
Run the following command to get more details on your pod:
oc describe pod system-deployment
The pod description includes an events log, which is useful for debugging any issues that might arise. The log is formatted similar to the following example:

Events:
  Type    Reason     Age   From                 Message
  ----    ------     ----  ----                 -------
  Normal  Scheduled  1d    default-scheduler    Successfully assigned my-project/system-deployment-768f95cf8f-fnjjj to localhost
  Normal  Pulling    1d    kubelet, localhost   pulling image "172.30.1.1:5000/my-project/system"
  Normal  Pulled     1d    kubelet, localhost   Successfully pulled image "172.30.1.1:5000/my-project/system"
  Normal  Created    1d    kubelet, localhost   Created container
  Normal  Started    1d    kubelet, localhost   Started container
The container is deployed successfully, but it’s isolated and cannot be accessed for requests. A service needs to be created to expose your deployment so that you can make requests to your container. You also must expose the service by using a route so that external users can access the microservice through a hostname.

Update the deploy.yaml file in the start directory to include the service and route resources.
To update your resources, run the following command:
oc apply -f deploy.yaml
Notice that the cluster only picks up changes, and doesn’t tear down and rebuild the deployment if it hasn’t changed:

deployment.apps/system-deployment unchanged
service/system-service created
route/system-route created
You can view all of your routes by running the following command:
oc get routes
You get an output similar to the following example:
NAME           HOST/PORT                                       PATH   SERVICES         PORT    TERMINATION   WILDCARD
system-route   system-route-my-project.192.168.99.103.nip.io          system-service   <all>                 None
Access your microservice through the hostname provided in the output, either by going to the http://[hostname]/system/properties URL or by running the following command. In the command, replace [hostname] with the hostname provided by the oc get routes command:

curl http://[hostname]/system/properties
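On Mac and Linux, you can also let the shell fill in the hostname for you by querying the route directly, in the same way the test command later in this guide does:

curl http://$(oc get route system-route -o jsonpath='{.spec.host}')/system/properties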
Deploying the inventory microservice
deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: [image-link]
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9080
---
apiVersion: v1
kind: Route
metadata:
  name: system-route
spec:
  to:
    kind: Service
    name: system-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: [image-link]
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 9080
---
apiVersion: v1
kind: Route
metadata:
  name: inventory-route
spec:
  to:
    kind: Service
    name: inventory-service
Now that the system microservice is running, you will package and deploy the inventory microservice, which adds the properties from the system microservice to the inventory. This process demonstrates how to establish communication between pods inside a cluster.
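Inside the cluster, pods reach each other through service names, so the inventory pod calls the system microservice at http://system-service:9080. As an optional illustration, assuming that curl is available in the container image, you can verify this in-cluster name resolution yourself by running curl from inside the system pod:

oc exec $(oc get pods -l app=system -o jsonpath='{.items[0].metadata.name}') -- curl -s http://system-service:9080/system/properties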
Building the microservice
In the start directory, run the following command to package the inventory microservice:
mvn -pl inventory package
Containerizing the microservice
Run the following command to use the inventory Dockerfile to create an image:
docker build -t inventory inventory/
Next, tag and push the image to the internal registry.

MAC and LINUX: Run the following command to tag your microservice:

docker tag inventory $(oc registry info)/$(oc project -q)/inventory

Now push your newly tagged image to the internal registry by running the following command:

docker push $(oc registry info)/$(oc project -q)/inventory

WINDOWS: Run the following commands:

oc registry info
oc project -q

Replace the square brackets in the following command with the results from the previous commands to tag your microservice:

docker tag inventory [oc registry info]/[oc project -q]/inventory

Run the following command to push your microservice, ensuring to replace the square brackets:

docker push [oc registry info]/[oc project -q]/inventory
The microservice is now ready for deployment.
Deploying the microservice
You can use the same deploy.yaml configuration file to deploy multiple microservices. Update the deploy.yaml file in the start directory to add the deployment, service, and route resources for your inventory microservice.

Make sure to update the inventory image field with the appropriate image link found in the DOCKER REPO column from the output of the following command:

oc get imagestream
Now run the following command to allow the cluster to pick up the new changes:
oc apply -f deploy.yaml
Run the following command to get the hostname of the newly exposed inventory service:
oc get route inventory-route
You get an output similar to the following example:
NAME              HOST/PORT                                    PATH   SERVICES            PORT    TERMINATION   WILDCARD
inventory-route   inventory-route-myproject.127.0.0.1.nip.io          inventory-service   <all>                 None
Go to the http://[hostname]/inventory/systems URL or run the following curl command to view the current inventory. In the curl command, replace [hostname] with your appropriate hostname:

curl http://[hostname]/inventory/systems
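As before, on Mac and Linux you can substitute the hostname automatically:

curl http://$(oc get route inventory-route -o jsonpath='{.spec.host}')/inventory/systems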
You see a JSON response like the following example. Your JSON response might not be formatted. The sample output was formatted for readability:
{ "systems": [], "total": 0 }
Since this is a fresh deployment, there are no saved systems in the inventory. Go to the http://[hostname]/inventory/systems/system-service URL or run the following command to allow the inventory microservice to access the system microservice and save the system result in the inventory:
curl http://[hostname]/inventory/systems/system-service
You receive your JVM system properties as a response.
Go to the http://[hostname]/inventory/systems URL or run the following command to recheck the inventory:
curl http://[hostname]/inventory/systems
You see the following response:
{ "systems": [ { "hostname": "system-service", "properties": { "os.name": "Linux", "user.name": "unknown" } } ], "total": 1 }
Notice that the total incremented by 1 and that the response includes a few key fields that were retrieved from the system response.
Testing the microservices
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>io.openliberty.guides</groupId>
    <artifactId>inventory</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <!-- OpenLiberty runtime -->
        <http.port>9080</http.port>
        <https.port>9443</https.port>
        <!-- Default test properties -->
        <system.kube.service>system-service</system.kube.service>
        <system.ip>localhost:9090</system.ip>
        <inventory.ip>localhost:9080</inventory.ip>
    </properties>

    <dependencies>

        <dependency>
            <groupId>jakarta.platform</groupId>
            <artifactId>jakarta.jakartaee-api</artifactId>
            <version>8.0.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.eclipse.microprofile</groupId>
            <artifactId>microprofile</artifactId>
            <version>3.3</version>
            <type>pom</type>
            <scope>provided</scope>
        </dependency>

        <!-- For tests -->
        <dependency>
            <groupId>org.apache.cxf</groupId>
            <artifactId>cxf-rt-rs-client</artifactId>
            <version>3.3.6</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.cxf</groupId>
            <artifactId>cxf-rt-rs-extension-providers</artifactId>
            <version>3.3.6</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.glassfish</groupId>
            <artifactId>javax.json</artifactId>
            <version>1.1.4</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.6.2</version>
            <scope>test</scope>
        </dependency>

    </dependencies>

    <build>
        <finalName>${project.artifactId}</finalName>
        <plugins>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-war-plugin</artifactId>
                <version>3.2.3</version>
                <configuration>
                    <packagingExcludes>pom.xml</packagingExcludes>
                </configuration>
            </plugin>

            <!-- Liberty plugin -->
            <plugin>
                <groupId>io.openliberty.tools</groupId>
                <artifactId>liberty-maven-plugin</artifactId>
                <version>3.2</version>
            </plugin>

            <!-- For unit tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.22.2</version>
                <configuration>
                    <systemPropertyVariables>
                        <system.kube.service>${system.kube.service}</system.kube.service>
                        <system.ip>${system.ip}</system.ip>
                        <inventory.ip>${inventory.ip}</inventory.ip>
                    </systemPropertyVariables>
                </configuration>
            </plugin>

            <!-- For integration tests -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <version>2.22.2</version>
                <configuration>
                    <systemPropertyVariables>
                        <system.kube.service>${system.kube.service}</system.kube.service>
                        <system.ip>${system.ip}</system.ip>
                        <inventory.ip>${inventory.ip}</inventory.ip>
                    </systemPropertyVariables>
                </configuration>
                <executions>
                    <execution>
                        <id>integration-test</id>
                        <goals>
                            <goal>integration-test</goal>
                        </goals>
                        <configuration>
                            <trimStackTrace>false</trimStackTrace>
                        </configuration>
                    </execution>
                    <execution>
                        <id>verify</id>
                        <goals>
                            <goal>verify</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

        </plugins>
    </build>
</project>
A few tests are included for you to test the basic functions of the microservices.
If a test failure occurs, then you might have introduced a bug into the code.
To run the tests, wait for all pods to be in the ready state before you proceed further.
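You can watch the pod status from the command line until both deployments report Running and 1/1 ready. Press Ctrl+C to stop watching:

oc get pods -w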
The default parameters that are defined in the pom.xml file are:

Parameter | Description
---|---
system.ip | IP or hostname of the system microservice
inventory.ip | IP or hostname of the inventory microservice
Use the following command to run the integration tests against your running cluster.

MAC and LINUX:

mvn verify -Ddockerfile.skip=true \
-Dsystem.ip=$(oc get route system-route -o=jsonpath='{.spec.host}') \
-Dinventory.ip=$(oc get route inventory-route -o=jsonpath='{.spec.host}')

WINDOWS: Run the following command, noting the values of the system and inventory route hostnames:

oc get routes

Substitute [system-route-hostname] and [inventory-route-hostname] with the appropriate values and run the following command:

mvn verify -Ddockerfile.skip=true -Dsystem.ip=[system-route-hostname] -Dinventory.ip=[inventory-route-hostname]
- The dockerfile.skip parameter is set to true to skip building a new container image.
- The system.ip parameter is replaced with the appropriate hostname to access your system microservice.
- The inventory.ip parameter is replaced with the appropriate hostname to access your inventory microservice.
If the tests pass, you see an output for each service similar to the following example:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.system.SystemEndpointIT
Results:
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.inventory.InventoryEndpointIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.inventory.InventoryEndpointIT
Results:
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
Tearing down the environment
When you no longer need your deployed microservices, you can use the same configuration file to delete them. Run the following command to delete your deployments, services, and routes:
oc delete -f deploy.yaml
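If you created a dedicated project for this guide, you can also delete the whole project. The my-project name here is the example name used earlier; substitute your own project name:

oc delete project my-project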
To completely delete your Minishift VM, cluster, and all associated files, refer to the official Uninstalling Minishift documentation.
To revert to your default Docker settings, close your command-line session.
Great work! You’re done!
You just deployed two microservices running in Open Liberty to an OKD cluster by using the oc tool.
Guide Attribution
Deploying microservices to an OKD cluster using Minishift by Open Liberty is licensed under CC BY-ND 4.0