Deploying microservices to Amazon Web Services

Duration: 45 minutes

Explore how to deploy microservices to Amazon Elastic Container Service for Kubernetes (EKS) on Amazon Web Services (AWS).

What you’ll learn

You will learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on Amazon Elastic Container Service for Kubernetes (EKS).

Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. If you would like to learn more about Kubernetes, check out the Deploying microservices to Kubernetes guide.

There are different cloud-based solutions for running your workloads in a Kubernetes cluster. A cloud-based infrastructure enables you to focus on developing your microservices without worrying about details related to the servers you deploy them to. Using a cloud helps you to easily scale and serve your microservices in a high-availability setup.

Amazon Web Services (AWS) offers a managed Kubernetes service called Amazon Elastic Container Service for Kubernetes (EKS). EKS simplifies the process of running Kubernetes on AWS without the need to install or maintain your own Kubernetes control plane. It provides a hosted Kubernetes cluster that you can deploy your microservices to. You will use EKS with Amazon Elastic Container Registry (ECR), a private registry that stores and distributes your container images. Note that EKS is not free, so a small cost is associated with running this guide. See the official Amazon EKS pricing documentation for more details.

The two microservices you will deploy are called system and inventory. The system microservice returns the JVM system properties of the running container. It also returns the pod’s name in the HTTP header, making replicas easy to distinguish from each other. The inventory microservice adds the properties from the system microservice to the inventory. This demonstrates how communication can be established between pods inside a cluster.


Before you begin, the following tools need to be installed:

  • Docker: You need a containerization software for building containers. Kubernetes supports various container types, but you will use Docker in this guide. For installation instructions, refer to the official Docker documentation.

  • kubectl: You need the Kubernetes command-line tool kubectl to interact with your Kubernetes cluster. See the official Install and Set Up kubectl documentation for information about downloading and setting up kubectl on your platform.

  • IAM Authenticator: To allow IAM authentication for your Amazon EKS cluster, you must install the AWS IAM Authenticator for Kubernetes. Follow the Installing aws-iam-authenticator instructions to install the AWS IAM Authenticator on your platform.

  • eksctl: In this guide, you will use the eksctl CLI tool for provisioning your EKS cluster. Navigate to the eksctl releases page and download the latest stable release. Extract the archive and add the directory with the extracted files to your path.

  • AWS CLI: You will need to use the AWS Command Line Interface (CLI). To install the AWS CLI, Python must be installed. See the Downloading Python documentation to download the latest version of Python.

After Python is installed, install the AWS CLI by following the instructions in the official Installing the AWS CLI documentation.

To verify that the AWS CLI is installed correctly, run the following command:

aws --version

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone
cd guide-cloud-aws

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Creating a Kubernetes cluster on EKS

Before you can deploy your microservices, you must create a Kubernetes cluster.

Configuring the AWS CLI

After the AWS CLI is installed, configure it by running the aws configure command. Before you configure the AWS CLI, you need to create an AWS Identity and Access Management (IAM) user. Navigate to the Identity and Access Management users dashboard and create a user through the UI. While creating the user, you must give the user programmatic access when you select the AWS access type. You are also prompted to add the user to a group. A group lets you specify permissions for multiple users. If you do not have an existing group, create a new one. Be sure to take note of the AWS Access Key ID and AWS Secret Access Key.

aws configure

You will be prompted for several pieces of information, including an AWS Access Key ID and an AWS Secret Access Key. These keys are associated with the AWS Identity and Access Management (IAM) user that you created.

Next, you will be prompted to enter a region. This region will be the region of the servers where your requests are sent. Select the region that is closest to you. For a full list of regions, see the AWS Regions and Endpoints.

Finally, enter json when you are prompted to enter the output format.

After you are done filling out this information, the settings are stored in the default profile. Anytime that you run an AWS CLI command without specifying a profile, the default profile is used.
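To verify what the aws configure command stored, you can look at the two plain-text files that it writes in your home directory. A sketch of their typical layout; the key values shown here are placeholders for your own credentials:

```
# ~/.aws/credentials
[default]
aws_access_key_id = [your_access_key_id]
aws_secret_access_key = [your_secret_access_key]

# ~/.aws/config
[default]
region = us-east-2
output = json
```

Named profiles appear as additional sections in these files, and you can select one with the --profile option on any AWS CLI command.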

Provisioning a cluster

The eksctl CLI tool greatly simplifies the process of creating clusters on EKS. To create your cluster, use the eksctl create cluster command:

eksctl create cluster --name=guide-cluster --nodes=1 --node-type=t2.small

Running this command creates a cluster that is called guide-cluster that uses a single t2.small Amazon Elastic Compute Cloud (EC2) instance as the worker node. The t2.small EC2 instance is not included in the AWS free tier. See the official Amazon EC2 pricing documentation for more details. When the cluster is created, you see an output similar to the following:

[✔]  EKS cluster "guide-cluster" in "us-east-2" region is ready

After your cluster is ready, EKS connects kubectl to the cluster. Verify that you’re connected to the cluster by checking the cluster’s nodes:

kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
[node-name]     Ready     <none>    7m        v1.11.5

Deploying microservices to Amazon Elastic Container Service for Kubernetes (EKS)

In this section, you will learn how to deploy two microservices in Open Liberty containers to a Kubernetes cluster on EKS. You will build and containerize the system and inventory microservices, push them to a container registry, and then deploy them to your Kubernetes cluster.

Building and containerizing the microservices

The first step of deploying to Kubernetes is to build your microservices and containerize them.

The starting Java project, which you can find in the start directory, is a multi-module Maven project. It is made up of the system and inventory microservices. Each microservice resides in its own directory, start/system and start/inventory. Both of these directories contain a Dockerfile, which is necessary for building the Docker images. If you’re unfamiliar with Dockerfiles, check out the Using Docker containers to develop microservices guide.

If you’re familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file to build a Docker image. However, the projects are set up so that this process is automated as part of a single Maven build, by using the dockerfile-maven plug-in. The plug-in automatically picks up the Dockerfile that is located in the same directory as its POM file and builds a Docker image from it.


On the Docker Desktop General settings page, ensure that the Expose daemon on tcp://localhost:2375 without TLS option is enabled. This configuration is required by the dockerfile-maven part of the build.

Navigate to the start directory and run the following command:

mvn package

The package goal automatically starts the dockerfile-maven:build goal. It runs during the package phase. This goal builds a Docker image from the Dockerfile that is located in the same directory as the POM file.

During the build, you see various Docker messages that describe what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:

docker images

Verify that the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images are listed among them, for example:

REPOSITORY                    TAG
system                        1.0-SNAPSHOT
inventory                     1.0-SNAPSHOT
open-liberty                  latest

If you don’t see the system:1.0-SNAPSHOT and inventory:1.0-SNAPSHOT images, then check the Maven build log for any potential errors.

Pushing the images to a container registry

Pushing the images to a registry allows the cluster to create pods by using your container images. The registry that you use is called Amazon Elastic Container Registry (ECR).

First, you must authenticate your Docker client to your ECR registry. Start by running the get-login command:

aws ecr get-login --no-include-email

If you see Unknown options: --no-include-email, update your AWS CLI to the latest version. The get-login command returns a docker login command similar to the following:

docker login -u AWS -p [password_string] https://[aws_account_id]

Run the docker login command that is returned to finish authenticating your Docker client. The [aws_account_id] is a unique 12-digit ID that is assigned to every AWS account. You will notice this ID in the output from various commands because AWS uses it to differentiate your resources from other accounts.
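If you lose track of your registry address, you can reconstruct it: an ECR registry host follows a fixed pattern built from the account ID and region. A sketch using a hypothetical account ID; in practice, retrieve your real ID with aws sts get-caller-identity:

```shell
# Hypothetical values; substitute your own account ID and region.
# The real account ID can be retrieved with:
#   aws sts get-caller-identity --query Account --output text
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-2

# ECR registry hosts follow the pattern <account-id>.dkr.ecr.<region>.amazonaws.com
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
echo "${ECR_REGISTRY}"
```

Repository URIs within the registry are this host followed by the repository name, for example ${ECR_REGISTRY}/awsguide/system.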

Next, make a repository to store the system and inventory images:

aws ecr create-repository --repository-name awsguide/system
aws ecr create-repository --repository-name awsguide/inventory

You will see an output similar to the following:

{
    "repository": {
        "registryId": "[aws_account_id]",
        "repositoryName": "awsguide/system",
        "repositoryArn": "arn:aws:ecr:us-east-2:[aws_account_id]:repository/awsguide/system",
        "createdAt": 1553111916.0,
        "repositoryUri": "[aws_account_id].dkr.ecr.us-east-2.amazonaws.com/awsguide/system"
    }
}
Take note of the repository URI for both the system and inventory repositories, as you need them when you tag and push your images.

Next, you need to tag your container images with the relevant data about your registry:

docker tag system:1.0-SNAPSHOT [system-repository-uri]:1.0-SNAPSHOT
docker tag inventory:1.0-SNAPSHOT [inventory-repository-uri]:1.0-SNAPSHOT

Finally, push your images to the registry:

docker push [system-repository-uri]:1.0-SNAPSHOT
docker push [inventory-repository-uri]:1.0-SNAPSHOT

When you tag and push your images, remember to substitute [system-repository-uri] and [inventory-repository-uri] with the appropriate URI for the system and inventory repositories.
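Because the tag-and-push sequence is identical for both images, you can script it. A sketch with a hypothetical registry host; the loop echoes each command as a dry run, so remove the echo to execute the commands for real:

```shell
# Hypothetical registry host; substitute your own account ID and region.
REGISTRY="123456789012.dkr.ecr.us-east-2.amazonaws.com"
TAG="1.0-SNAPSHOT"

# Dry run: echo prints each command instead of executing it.
for svc in system inventory; do
  echo docker tag "${svc}:${TAG}" "${REGISTRY}/awsguide/${svc}:${TAG}"
  echo docker push "${REGISTRY}/awsguide/${svc}:${TAG}"
done
```
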

Deploying the microservices

Now that your container images are built, deploy them using a Kubernetes resource definition.

A Kubernetes resource definition is a YAML file that contains a description of all your deployments, services, or any other resources that you want to deploy. All resources can also be deleted from the cluster by using the same YAML file that you used to deploy them. The kubernetes.yaml resource definition file is provided for you. If you are interested in learning more about Kubernetes resource definitions, check out the Deploying microservices to Kubernetes guide.

Update the kubernetes.yaml file.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: [system-repository-uri]:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: [inventory-repository-uri]:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  type: NodePort
  selector:
    app: system
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
    nodePort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
spec:
  type: NodePort
  selector:
    app: inventory
  ports:
  - protocol: TCP
    port: 9080
    targetPort: 9080
    nodePort: 32000

The image is the name and tag of the container image that you want to use for the container. Update the system image and the inventory image fields to point to your system and inventory repository URIs.
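If you prefer not to edit the file by hand, a sed expression can substitute the URIs for you. A sketch with hypothetical repository URIs, demonstrated against a two-line sample so that the effect is visible:

```shell
# Hypothetical repository URIs; use the repositoryUri values that create-repository returned.
SYSTEM_URI="123456789012.dkr.ecr.us-east-2.amazonaws.com/awsguide/system"
INVENTORY_URI="123456789012.dkr.ecr.us-east-2.amazonaws.com/awsguide/inventory"

# The substitution, shown against a sample of the two image lines:
printf 'image: [system-repository-uri]:1.0-SNAPSHOT\nimage: [inventory-repository-uri]:1.0-SNAPSHOT\n' |
sed -e "s|\[system-repository-uri\]|${SYSTEM_URI}|" \
    -e "s|\[inventory-repository-uri\]|${INVENTORY_URI}|"

# To edit the real file in place (run from the start directory), use the same
# expressions with sed -i.bak, which keeps kubernetes.yaml.bak as a backup:
#   sed -i.bak -e "s|\[system-repository-uri\]|${SYSTEM_URI}|" \
#              -e "s|\[inventory-repository-uri\]|${INVENTORY_URI}|" kubernetes.yaml
```

The -i.bak form works with both GNU and BSD sed, so the same command runs on Linux and macOS.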

Run the following command to deploy the resources as defined in kubernetes.yaml:

kubectl apply -f kubernetes.yaml

When the apps are deployed, run the following command to check the status of your pods:

kubectl get pods

If all the pods are healthy and running, you see an output similar to the following:

NAME                                    READY     STATUS    RESTARTS   AGE
system-deployment-6bd97d9bf6-4ccds      1/1       Running   0          15s
inventory-deployment-645767664f-nbtd9   1/1       Running   0          15s

Making requests to the microservices

Take note of the EXTERNAL-IP in the output of the following command. It is the hostname you will later substitute into [hostname]:

kubectl get nodes -o wide

Before you can make a request to [hostname]:31000 or [hostname]:32000, you must modify the security group to allow incoming traffic through ports 31000 and 32000. To get the group-id of the security group, use the aws ec2 describe-security-groups command:

aws ec2 describe-security-groups --filters Name=group-name,Values="*eksctl-guide-cluster-nodegroup*"  --query "SecurityGroups[*].{Name:GroupName,ID:GroupId}"

Then, add the following rules to the security group to allow incoming traffic through ports 31000 and 32000. Don’t forget to substitute [security-group-id] with the ID from the output of the previous command. The 0.0.0.0/0 CIDR range allows traffic from any IP address.

aws ec2 authorize-security-group-ingress --protocol tcp --port 31000 --group-id [security-group-id] --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --protocol tcp --port 32000 --group-id [security-group-id] --cidr 0.0.0.0/0

After you are finished adding the inbound rules to the security group, you might need to wait a few minutes before you try to access the system and inventory microservices.

Then, curl or visit the following URLs to access your microservices, substituting the appropriate hostname:

  • http://[hostname]:31000/system/properties

  • http://[hostname]:32000/inventory/systems/system-service

The first URL returns system properties and the name of the pod in an HTTP header called X-Pod-Name. To view the header, you can use the -I option in curl when you make a request to http://[hostname]:31000/system/properties. The second URL adds properties from system-service to the inventory.

Testing microservices that are running on AWS EKS


The default test properties are defined in the parent pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>net.wasdev.wlp.maven.parent</groupId>
        <artifactId>liberty-maven-app-parent</artifactId>
        <version>RELEASE</version>
    </parent>

    <groupId>io.openliberty.guides</groupId>
    <artifactId>kube-demo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <!-- Plugins -->
        <version.maven-war-plugin>2.6</version.maven-war-plugin>
        <version.dockerfile-maven-plugin>1.4.10</version.dockerfile-maven-plugin>
        <version.exec-maven-plugin>1.6.0</version.exec-maven-plugin>
        <version.maven-surefire-plugin>3.0.0-M1</version.maven-surefire-plugin>
        <version.maven-failsafe-plugin>3.0.0-M1</version.maven-failsafe-plugin>
        <!-- OpenLiberty runtime -->
        <version.openliberty-runtime>RELEASE</version.openliberty-runtime>
        <http.port>9080</http.port>
        <https.port>9443</https.port>
        <!-- Default test properties -->
        <cluster.ip>localhost</cluster.ip>
        <system.kube.service>system-service</system.kube.service>
        <system.node.port>31000</system.node.port>
        <inventory.node.port>32000</inventory.node.port>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>io.openliberty.features</groupId>
                <artifactId>features-bom</artifactId>
                <version>RELEASE</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <dependency>
                <groupId>org.eclipse.microprofile.rest.client</groupId>
                <artifactId>microprofile-rest-client-api</artifactId>
                <version>1.0.1</version>
                <scope>provided</scope>
            </dependency>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.12</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.glassfish</groupId>
                <artifactId>javax.json</artifactId>
                <version>1.0.4</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.cxf</groupId>
                <artifactId>cxf-rt-rs-extension-providers</artifactId>
                <version>3.2.6</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.cxf</groupId>
                <artifactId>cxf-rt-rs-client</artifactId>
                <version>3.2.6</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-lang3</artifactId>
                <version>3.0</version>
                <scope>compile</scope>
            </dependency>
            <!-- Support for JDK 9 and above -->
            <dependency>
                <groupId>javax.xml.bind</groupId>
                <artifactId>jaxb-api</artifactId>
                <version>2.3.1</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>com.sun.xml.bind</groupId>
                <artifactId>jaxb-core</artifactId>
                <version></version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>com.sun.xml.bind</groupId>
                <artifactId>jaxb-impl</artifactId>
                <version>2.3.2</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>javax.activation</groupId>
                <artifactId>activation</artifactId>
                <version>1.1.1</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <profiles>
        <profile>
            <id>windowsExtension</id>
            <activation>
                <os><family>Windows</family></os>
            </activation>
            <properties>
                <kubectl.extension>.cmd</kubectl.extension>
            </properties>
        </profile>
        <profile>
            <id>nonWindowsExtension</id>
            <activation>
                <os><family>!Windows</family></os>
            </activation>
            <properties>
                <kubectl.extension></kubectl.extension>
            </properties>
        </profile>
    </profiles>

    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-war-plugin</artifactId>
                    <version>${version.maven-war-plugin}</version>
                    <configuration>
                        <failOnMissingWebXml>false</failOnMissingWebXml>
                        <packagingExcludes>pom.xml</packagingExcludes>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>net.wasdev.wlp.maven.plugins</groupId>
                    <artifactId>liberty-maven-plugin</artifactId>
                    <configuration>
                        <assemblyArtifact>
                            <groupId>io.openliberty</groupId>
                            <artifactId>openliberty-runtime</artifactId>
                            <version>RELEASE</version>
                            <type>zip</type>
                        </assemblyArtifact>
                    </configuration>
                </plugin>
                <plugin>
                    <groupId>com.spotify</groupId>
                    <artifactId>dockerfile-maven-plugin</artifactId>
                    <version>${version.dockerfile-maven-plugin}</version>
                    <executions>
                        <execution>
                            <id>default</id>
                            <goals>
                                <goal>build</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <repository>${project.artifactId}</repository>
                        <tag>${project.version}</tag>
                    </configuration>
                </plugin>
                <!-- Plugin to run unit tests -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>${version.maven-surefire-plugin}</version>
                    <executions>
                        <execution>
                            <phase>test</phase>
                            <id>default-test</id>
                            <configuration>
                                <excludes>
                                    <exclude>**/it/**</exclude>
                                </excludes>
                                <reportsDirectory>
                                    ${project.build.directory}/test-reports/unit
                                </reportsDirectory>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
                <!-- Plugin to run functional tests -->
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-failsafe-plugin</artifactId>
                    <version>${version.maven-failsafe-plugin}</version>
                    <executions>
                        <execution>
                            <phase>integration-test</phase>
                            <id>integration-test</id>
                            <goals>
                                <goal>integration-test</goal>
                            </goals>
                            <configuration>
                                <includes>
                                    <include>**/it/**</include>
                                </includes>
                                <systemPropertyVariables>
                                    <cluster.ip>${cluster.ip}</cluster.ip>
                                    <system.ingress.path>
                                        ${system.ingress.path}
                                    </system.ingress.path>
                                    <system.node.port>
                                        ${system.node.port}
                                    </system.node.port>
                                    <system.kube.service>
                                        ${system.kube.service}
                                    </system.kube.service>
                                    <inventory.ingress.path>
                                        ${inventory.ingress.path}
                                    </inventory.ingress.path>
                                    <inventory.node.port>
                                        ${inventory.node.port}
                                    </inventory.node.port>
                                </systemPropertyVariables>
                            </configuration>
                        </execution>
                        <execution>
                            <id>verify-results</id>
                            <goals>
                                <goal>verify</goal>
                            </goals>
                        </execution>
                    </executions>
                    <configuration>
                        <summaryFile>
                            ${project.build.directory}/test-reports/it/failsafe-summary.xml
                        </summaryFile>
                        <reportsDirectory>
                            ${project.build.directory}/test-reports/it
                        </reportsDirectory>
                    </configuration>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>

    <modules>
        <module>system</module>
        <module>inventory</module>
    </modules>
</project>

A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before you proceed further. The default properties that are defined in the pom.xml file are:



  • cluster.ip: IP or hostname for your cluster.

  • system.kube.service: Name of the Kubernetes Service wrapping the system pods, system-service by default.

  • system.node.port: The NodePort of the Kubernetes Service system-service, 31000 by default.

  • inventory.node.port: The NodePort of the Kubernetes Service inventory-service, 32000 by default.
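Instead of repeatedly polling kubectl get pods, you can block until the pods are ready with kubectl wait. A sketch; the 120-second timeout is an arbitrary choice, not a value from this guide:

```
kubectl wait --for=condition=Ready pod --all --timeout=120s
```

The command exits successfully once every pod reports Ready, or fails if the timeout expires first.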

Use the following command to run the integration tests against your cluster. Substitute [hostname] with the appropriate value:

mvn verify -Ddockerfile.skip=true -Dcluster.ip=[hostname]

The dockerfile.skip parameter is set to true to skip building a new container image.

If the tests pass, you see an output for each service similar to the following:

 T E S T S
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in


Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
 T E S T S
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in


Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

Deploying a new version of the system microservice

Optionally, you might want to make changes to your microservice and learn how to redeploy the updated version of your microservice. In this section, you will bump the version of the system microservice to 2.0-SNAPSHOT and redeploy the new version of the microservice.

The tag for the container image depends on the version that is specified in the pom.xml file. Use the following Maven command to bump the version of the microservice to 2.0-SNAPSHOT:

mvn versions:set -DnewVersion=2.0-SNAPSHOT

Use Maven to repackage your microservice and build the new version of the container image:

mvn package

Since you built a new image, it must be pushed to the awsguide/system repository of your container registry again.

Tag your container image with the relevant data about your registry:

docker tag system:2.0-SNAPSHOT [system-repository-uri]:2.0-SNAPSHOT

Push your image to the registry:

docker push [system-repository-uri]:2.0-SNAPSHOT

Update the system-deployment deployment to use the new container image that you just pushed to the registry:

kubectl set image deployment/system-deployment system-container=[system-repository-uri]:2.0-SNAPSHOT
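After you update the image, you can watch the rollout finish before inspecting the pods. A sketch using the standard rollout commands:

```
kubectl rollout status deployment/system-deployment
```

If the new version misbehaves, kubectl rollout undo deployment/system-deployment reverts the deployment to the previous image.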

Use the following command to find the name of the pod that is running the system microservice:

kubectl get pods
NAME                                   READY     STATUS    RESTARTS   AGE
inventory-deployment-6fd959cc4-rf2m2   1/1       Running   0          7m
system-deployment-677b9f5d9c-nqzcf     1/1       Running   0          7m

Observe that in this case the system microservice is running in the pod called system-deployment-677b9f5d9c-nqzcf. Substitute the name of your pod into the following command to see more details about the pod:

kubectl describe pod [pod-name]

View the events at the bottom of the command’s output. Notice that the pod is using the new container image system:2.0-SNAPSHOT.

  Type    Reason     Age   From                  Message
  ----    ------     ----  ----                  -------
  Normal  Scheduled  1m    default-scheduler     Successfully assigned default/system-deployment-dd44895f6-wmlkm to [node-name]
  Normal  Pulling    1m    kubelet, [node-name]  pulling image "[system-repository-uri]:2.0-SNAPSHOT"
  Normal  Pulled     1m    kubelet, [node-name]  Successfully pulled image "[system-repository-uri]:2.0-SNAPSHOT"
  Normal  Created    1m    kubelet, [node-name]  Created container
  Normal  Started    1m    kubelet, [node-name]  Started container

Tearing down the environment

When you no longer need your deployed microservices, you can delete all Kubernetes resources by running the kubectl delete command:

kubectl delete -f kubernetes.yaml

Delete the ECR repositories used to store the system and inventory images:

aws ecr delete-repository --repository-name awsguide/system --force
aws ecr delete-repository --repository-name awsguide/inventory --force

Remove your EKS cluster:

eksctl delete cluster --name guide-cluster

Great work! You’re done!

You just deployed two microservices running in Open Liberty to AWS EKS. You also learned how to use kubectl to deploy your microservices on a Kubernetes cluster.

Guide Attribution

Deploying microservices to Amazon Web Services by Open Liberty is licensed under CC BY-ND 4.0
