Developing fault-tolerant microservices with Istio Retry and MicroProfile Fallback

Duration: 30 minutes


Explore how to manage the impact of failures by using MicroProfile and Istio Fault Tolerance to add retry and fallback behaviors to microservices.

What you’ll learn

You will learn how to combine MicroProfile Retry and Fallback policies with Istio Retry to make your microservices more resilient to common failures, such as network problems.

Microservices that are created using Eclipse MicroProfile can be freely deployed in a service mesh to reduce the complexity associated with managing microservices. Istio is a service mesh, meaning that it’s a platform for managing how microservices interact with each other and the outside world. Istio consists of a control plane and sidecars that are injected into application pods. The sidecars contain the Envoy proxy. You can think of Envoy as a proxy that intercepts and controls all the HTTP and TCP traffic to and from your container. If you would like to learn more about Istio, check out the Managing microservice traffic using Istio guide.

MicroProfile and Istio both provide simple and flexible solutions for building fault-tolerant microservices. Fault tolerance provides different strategies for building robust behavior to cope with unexpected failures. The fault tolerance policies that MicroProfile offers include Retry, Timeout, Circuit Breaker, Bulkhead, and Fallback. Some overlap exists between MicroProfile and Istio Fault Tolerance, such as the Retry policy. However, Istio does not offer any fallback capabilities. To view the available fault tolerance policies in MicroProfile and Istio, refer to the comparison between MicroProfile and Istio fault handling.

Use retry policies to fail quickly and recover from brief intermittent issues. An application might experience these transient failures when a microservice is undeployed, a database is overloaded by queries, the network connection becomes unstable, or the site host has a brief downtime. In these cases, rather than failing quickly on these transient failures, a retry policy provides another chance for the request to succeed. Simply retrying the request might be all you need to do to make it succeed.

Fallback offers an alternative execution path when an execution does not complete successfully. You will use the @Fallback annotation from the MicroProfile Fault Tolerance specification to define criteria for when to provide an alternative solution for a failed execution.

You will develop microservices that demonstrate MicroProfile Fault Tolerance with Istio fault handling. Both MicroProfile and Istio can be used when you want your microservices to have a service mesh architecture with Istio, and use MicroProfile to provide the extra fault tolerance policies that do not exist within Istio.

The application that you will be working with is an inventory service, which collects, stores, and returns the system properties. It uses the system service to retrieve the system properties for a particular host. You will add fault tolerance to the inventory service so that it reacts accordingly when the system service is unavailable.

Additional prerequisites

Before you begin, you need containerization software for building containers. Kubernetes supports various container runtimes; you will use Docker in this guide. For Docker installation instructions, refer to the official Docker documentation.

Use Docker Desktop, where a local Kubernetes environment is pre-installed and enabled. If you do not see the Kubernetes tab, then upgrade to the latest version of Docker Desktop.

Complete the setup for your operating system:

After you complete the Docker setup instructions for your operating system, ensure that Kubernetes (not Swarm) is selected as the orchestrator in Docker Preferences.


You will use Minikube as a single-node Kubernetes cluster that runs locally in a virtual machine. Make sure you have kubectl installed. If you need to install kubectl, see the kubectl installation instructions. For Minikube installation instructions, see the Minikube documentation.

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-microprofile-istio-retry-fallback.git
cd guide-microprofile-istio-retry-fallback

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Before you begin, make sure you have all the necessary prerequisites.

Starting and preparing your cluster for deployment

Start your Kubernetes cluster.

Start your Docker Desktop environment.

Ensure that you have enough memory allocated to your Docker Desktop environment. 8GB is recommended, but 4GB is adequate if your machine has limited RAM.

Ensure that Kubernetes is running on Docker Desktop and that the context is set to docker-desktop.

Run the following command from a command-line session:

minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.19.0

The --memory flag allocates 8GB of memory to your Minikube cluster. If your machine has limited RAM, 4GB is adequate.

Next, validate that you have a healthy Kubernetes environment by running the following command from the active command-line session.

kubectl get nodes

This command should return a Ready status for the master node.

If you are using Docker Desktop, you do not need to complete any other steps.

If you are using Minikube, run the following command to configure the Docker CLI to use Minikube’s Docker daemon. After you run this command, you will be able to interact with Minikube’s Docker daemon and build new images directly to it from your host machine:

eval $(minikube docker-env)

Deploying Istio

Install Istio by following the instructions in the official Istio Getting started documentation.

Run the following command to verify that the istioctl path was set successfully:

istioctl version

The output will be similar to the following example:

no running Istio pods in "istio-system"
1.7.3

Run the following command to configure the Istio profile on Kubernetes:

istioctl install --set profile=demo

The following output appears when the installation is complete:

✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete

Verify that Istio was successfully deployed by running the following command:

kubectl get deployments -n istio-system

All the values in the AVAILABLE column will have a value of 1 after the deployment is complete.

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
istio-egressgateway      1/1     1            1           2m48s
istio-ingressgateway     1/1     1            1           2m48s
istiod                   1/1     1            1           2m48s

Ensure that the Istio deployments are all available before you continue. The deployments might take a few minutes to become available. If the deployments aren’t available after a few minutes, then increase the amount of memory available to your Kubernetes cluster. On Docker Desktop, you can increase the memory from your Docker preferences. On Minikube, you can increase the memory by using the --memory flag.

Finally, create the istio-injection label and set its value to enabled:

kubectl label namespace default istio-injection=enabled

Adding this label enables automatic Istio sidecar injection. Automatic injection means that sidecars are automatically injected into your pods when you deploy your application.
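Automatic injection applies to every pod in the labeled namespace. If you ever need to exclude a particular workload, Istio recognizes a per-pod annotation that opts it out. The deployment below is a hypothetical sketch for illustration only, not part of this guide’s application; only the sidecar.istio.io/inject annotation is the standard Istio mechanism:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # hypothetical workload, for illustration only
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
      annotations:
        # Opt this pod out of automatic sidecar injection
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: example-container
        image: example:1.0
```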

Enabling MicroProfile Fault Tolerance

Navigate to the guide-microprofile-istio-retry-fallback/start directory to begin.

The MicroProfile Fault Tolerance API is included in the MicroProfile dependency that is specified in your pom.xml file. Look for the dependency with the microprofile artifact ID. This dependency provides a library that allows you to use the fault tolerance policies in your microservices.

The InventoryResource.java file makes a request to the system service through the MicroProfile Rest Client API. If you want to learn more about MicroProfile Rest Client, you can follow the Consuming RESTful services with template interfaces guide.

pom.xml

 1<?xml version="1.0" encoding="UTF-8"?>
 2<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
 3
 4    <modelVersion>4.0.0</modelVersion>
 5
 6    <groupId>io.openliberty.guides</groupId>
 7    <artifactId>guide-microprofile-istio-retry-fallback-inventory</artifactId>
 8    <version>1.0-SNAPSHOT</version>
 9    <packaging>war</packaging>
10
11    <properties>
12        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
13        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
14        <maven.compiler.source>11</maven.compiler.source>
15        <maven.compiler.target>11</maven.compiler.target>
16        <!-- Liberty configuration -->
17        <liberty.var.default.http.port>9081</liberty.var.default.http.port>
18        <liberty.var.default.https.port>9444</liberty.var.default.https.port>
19        <liberty.var.system.http.port>9080</liberty.var.system.http.port>
20        <liberty.var.system.https.port>9443</liberty.var.system.https.port>
21    </properties>
22
23    <dependencies>
24        <!-- Provided dependencies -->
25        <dependency>
26            <groupId>jakarta.platform</groupId>
27            <artifactId>jakarta.jakartaee-api</artifactId>
28            <version>10.0.0</version>
29            <scope>provided</scope>
30        </dependency>
31        <!-- tag::microprofile[] -->
32        <dependency>
33            <groupId>org.eclipse.microprofile</groupId>
34            <artifactId>microprofile</artifactId>
35            <version>6.0</version>
36            <type>pom</type>
37            <scope>provided</scope>
38        </dependency>
39        <!-- end::microprofile[] -->
40    </dependencies>
41
42    <build>
43        <finalName>${project.artifactId}</finalName>
44        <plugins>
45            <plugin>
46                <groupId>org.apache.maven.plugins</groupId>
47                <artifactId>maven-war-plugin</artifactId>
48                <version>3.3.2</version>
49            </plugin>
50            <!-- Enable liberty-maven plugin -->
51            <plugin>
52                <groupId>io.openliberty.tools</groupId>
53                <artifactId>liberty-maven-plugin</artifactId>
54                <version>3.8.2</version>
55            </plugin>
56            <!-- Plugin to run unit tests -->
57            <plugin>
58                <groupId>org.apache.maven.plugins</groupId>
59                <artifactId>maven-surefire-plugin</artifactId>
60                <version>3.0.0</version>
61            </plugin>
62            <!-- Plugin to run functional tests -->
63            <plugin>
64                <groupId>org.apache.maven.plugins</groupId>
65                <artifactId>maven-failsafe-plugin</artifactId>
66                <version>3.0.0</version>
67            </plugin>
68        </plugins>
69    </build>
70</project>

InventoryResource.java

  1// tag::copyright[]
  2/*******************************************************************************
  3 * Copyright (c) 2019, 2022 IBM Corporation and others.
  4 * All rights reserved. This program and the accompanying materials
  5 * are made available under the terms of the Eclipse Public License 2.0
  6 * which accompanies this distribution, and is available at
  7 * http://www.eclipse.org/legal/epl-2.0/
  8 *
  9 * SPDX-License-Identifier: EPL-2.0
 10 *******************************************************************************/
 11// end::copyright[]
 12package io.openliberty.guides.inventory;
 13
 14import java.net.MalformedURLException;
 15import java.net.URL;
 16import java.util.Properties;
 17
 18import jakarta.enterprise.context.RequestScoped;
 19import jakarta.inject.Inject;
 20import jakarta.ws.rs.GET;
 21import jakarta.ws.rs.POST;
 22import jakarta.ws.rs.Path;
 23import jakarta.ws.rs.PathParam;
 24import jakarta.ws.rs.ProcessingException;
 25import jakarta.ws.rs.WebApplicationException;
 26import jakarta.ws.rs.Produces;
 27import jakarta.ws.rs.core.MediaType;
 28import jakarta.ws.rs.core.Response;
 29
 30import org.eclipse.microprofile.faulttolerance.Fallback;
 31import org.eclipse.microprofile.faulttolerance.Retry;
 32import org.eclipse.microprofile.rest.client.RestClientBuilder;
 33import org.eclipse.microprofile.config.inject.ConfigProperty;
 34
 35import io.openliberty.guides.inventory.client.SystemClient;
 36import io.openliberty.guides.inventory.client.UnknownUrlException;
 37import io.openliberty.guides.inventory.client.UnknownUrlExceptionMapper;
 38import io.openliberty.guides.inventory.model.InventoryList;
 39
 40@RequestScoped
 41@Path("/systems")
 42public class InventoryResource {
 43
 44  @Inject
 45  @ConfigProperty(name = "system.http.port")
 46  String SYS_HTTP_PORT;
 47
 48  @Inject
 49  InventoryManager manager;
 50
 51  @GET
 52  @Path("/{hostname}")
 53  @Produces(MediaType.APPLICATION_JSON)
 54  // tag::fallback[]
 55  @Fallback(fallbackMethod = "getPropertiesFallback")
 56  // end::fallback[]
 57  // tag::mpRetry[]
 58  @Retry(maxRetries = 3, retryOn = WebApplicationException.class)
 59  // end::mpRetry[]
 60  // tag::getPropertiesForHost[]
 61  public Response getPropertiesForHost(@PathParam("hostname") String hostname)
 62        // tag::webApplicationException[]
 63        throws WebApplicationException, ProcessingException, UnknownUrlException {
 64        // end::webApplicationException[]
 65  // end::getPropertiesForHost[]
 66
 67    String customURLString = "http://" + hostname + ":" + SYS_HTTP_PORT + "/system";
 68    URL customURL = null;
 69    Properties props = null;
 70    try {
 71        customURL = new URL(customURLString);
 72        SystemClient systemClient = RestClientBuilder.newBuilder()
 73                .baseUrl(customURL)
 74                .register(UnknownUrlExceptionMapper.class)
 75                .build(SystemClient.class);
 76        // tag::getProperties[]
 77        props = systemClient.getProperties();
 78        // end::getProperties[]
 79    } catch (MalformedURLException e) {
 80      System.err.println("The given URL is not formatted correctly: "
 81                         + customURLString);
 82    }
 83
 84    if (props == null) {
 85      return Response.status(Response.Status.NOT_FOUND)
 86                     .entity("{ \"error\" : \"Unknown hostname" + hostname
 87                             + " or the resource may not be running on the"
 88                             + " host machine\" }")
 89                     .build();
 90    }
 91
 92    manager.add(hostname, props);
 93    return Response.ok(props).build();
 94  }
 95
 96  // tag::fallbackMethod[]
 97  @Produces(MediaType.APPLICATION_JSON)
 98  public Response getPropertiesFallback(@PathParam("hostname") String hostname) {
 99      Properties props = new Properties();
100      props.put("error", "Unknown hostname or the system service may not be running.");
101        return Response.ok(props)
102                  .header("X-From-Fallback", "yes")
103                  .build();
104  }
105  // end::fallbackMethod[]
106
107  @GET
108  @Produces(MediaType.APPLICATION_JSON)
109  public InventoryList listContents() {
110    return manager.list();
111  }
112
113  @POST
114  @Path("/reset")
115  public void reset() {
116    manager.reset();
117  }
118}

Adding the MicroProfile @Retry annotation

To simulate that your system service is temporarily down due to brief intermittent issues, you will pause the pod that is associated with your system service, then try to send requests to the service. When the system pod is paused, requests to the service return a 503 status code, and the systemClient.getProperties() in InventoryResource.java throws a WebApplicationException.

To retry the requests to your system service after a WebApplicationException has occurred, add the @Retry annotation.

Update the InventoryResource.java file.
inventory/src/main/java/io/openliberty/guides/inventory/InventoryResource.java

To retry the service request a maximum of 3 times, and only when a WebApplicationException occurs, add the @Retry annotation before the getPropertiesForHost() method.

InventoryResource.java

  1// tag::copyright[]
  2/*******************************************************************************
  3 * Copyright (c) 2019, 2022 IBM Corporation and others.
  4 * All rights reserved. This program and the accompanying materials
  5 * are made available under the terms of the Eclipse Public License 2.0
  6 * which accompanies this distribution, and is available at
  7 * http://www.eclipse.org/legal/epl-2.0/
  8 *
  9 * SPDX-License-Identifier: EPL-2.0
 10 *******************************************************************************/
 11// end::copyright[]
 12package io.openliberty.guides.inventory;
 13
 14import java.net.MalformedURLException;
 15import java.net.URL;
 16import java.util.Properties;
 17
 18import jakarta.enterprise.context.RequestScoped;
 19import jakarta.inject.Inject;
 20import jakarta.ws.rs.GET;
 21import jakarta.ws.rs.POST;
 22import jakarta.ws.rs.Path;
 23import jakarta.ws.rs.PathParam;
 24import jakarta.ws.rs.ProcessingException;
 25import jakarta.ws.rs.WebApplicationException;
 26import jakarta.ws.rs.Produces;
 27import jakarta.ws.rs.core.MediaType;
 28import jakarta.ws.rs.core.Response;
 29
 30import org.eclipse.microprofile.faulttolerance.Fallback;
 31import org.eclipse.microprofile.faulttolerance.Retry;
 32import org.eclipse.microprofile.rest.client.RestClientBuilder;
 33import org.eclipse.microprofile.config.inject.ConfigProperty;
 34
 35import io.openliberty.guides.inventory.client.SystemClient;
 36import io.openliberty.guides.inventory.client.UnknownUrlException;
 37import io.openliberty.guides.inventory.client.UnknownUrlExceptionMapper;
 38import io.openliberty.guides.inventory.model.InventoryList;
 39
 40@RequestScoped
 41@Path("/systems")
 42public class InventoryResource {
 43
 44  @Inject
 45  @ConfigProperty(name = "system.http.port")
 46  String SYS_HTTP_PORT;
 47
 48  @Inject
 49  InventoryManager manager;
 50
 51  @GET
 52  @Path("/{hostname}")
 53  @Produces(MediaType.APPLICATION_JSON)
 54  // tag::fallback[]
 55  @Fallback(fallbackMethod = "getPropertiesFallback")
 56  // end::fallback[]
 57  // tag::mpRetry[]
 58  @Retry(maxRetries = 3, retryOn = WebApplicationException.class)
 59  // end::mpRetry[]
 60  // tag::getPropertiesForHost[]
 61  public Response getPropertiesForHost(@PathParam("hostname") String hostname)
 62        // tag::webApplicationException[]
 63        throws WebApplicationException, ProcessingException, UnknownUrlException {
 64        // end::webApplicationException[]
 65  // end::getPropertiesForHost[]
 66
 67    String customURLString = "http://" + hostname + ":" + SYS_HTTP_PORT + "/system";
 68    URL customURL = null;
 69    Properties props = null;
 70    try {
 71        customURL = new URL(customURLString);
 72        SystemClient systemClient = RestClientBuilder.newBuilder()
 73                .baseUrl(customURL)
 74                .register(UnknownUrlExceptionMapper.class)
 75                .build(SystemClient.class);
 76        // tag::getProperties[]
 77        props = systemClient.getProperties();
 78        // end::getProperties[]
 79    } catch (MalformedURLException e) {
 80      System.err.println("The given URL is not formatted correctly: "
 81                         + customURLString);
 82    }
 83
 84    if (props == null) {
 85      return Response.status(Response.Status.NOT_FOUND)
 86                     .entity("{ \"error\" : \"Unknown hostname" + hostname
 87                             + " or the resource may not be running on the"
 88                             + " host machine\" }")
 89                     .build();
 90    }
 91
 92    manager.add(hostname, props);
 93    return Response.ok(props).build();
 94  }
 95
 96  // tag::fallbackMethod[]
 97  @Produces(MediaType.APPLICATION_JSON)
 98  public Response getPropertiesFallback(@PathParam("hostname") String hostname) {
 99      Properties props = new Properties();
100      props.put("error", "Unknown hostname or the system service may not be running.");
101        return Response.ok(props)
102                  .header("X-From-Fallback", "yes")
103                  .build();
104  }
105  // end::fallbackMethod[]
106
107  @GET
108  @Produces(MediaType.APPLICATION_JSON)
109  public InventoryList listContents() {
110    return manager.list();
111  }
112
113  @POST
114  @Path("/reset")
115  public void reset() {
116    manager.reset();
117  }
118}

A request to a service might fail for many different reasons. The default Retry policy initiates a retry for every java.lang.Exception. However, you can base a Retry policy on a specific exception by using the retryOn parameter. You can identify more than one exception as an array of values. For example, @Retry(retryOn = {RuntimeException.class, TimeoutException.class}).

You can set limits on the number of retry attempts to avoid overloading a busy service with retry requests. The @Retry annotation has the maxRetries parameter to limit the number of retry attempts. The default value of maxRetries is 3. The integer value must be greater than or equal to -1, where a value of -1 indicates retrying indefinitely.

Building and running the application

Navigate to the guide-microprofile-istio-retry-fallback/start directory and run the following command to build your application and integrate the Retry policy into your microservices:

mvn package

Next, run the docker build commands to build container images for your application:

docker build -t system:1.0-SNAPSHOT system/.
docker build -t inventory:1.0-SNAPSHOT inventory/.

The -t flag in the docker build command allows the Docker image to be labeled (tagged) in the name[:tag] format. The tag for an image describes the specific image version. If the optional [:tag] tag is not specified, the latest tag is created by default.

To verify that the images for your system and inventory microservices are built, run the docker images command to list all local Docker images.

docker images

Your two images system and inventory should appear in the list of all Docker images:

REPOSITORY         TAG
inventory          1.0-SNAPSHOT
system             1.0-SNAPSHOT

To deploy your microservices to the Kubernetes cluster, use the following command:

kubectl apply -f services.yaml

You will see an output similar to the following:

gateway.networking.istio.io/sys-app-gateway created
service/system-service created
service/inventory-service created
deployment.apps/system-deployment created
deployment.apps/inventory-deployment created

The traffic.yaml file contains two virtual services. A virtual service defines how requests are routed to your applications.

Deploy the resources defined in the traffic.yaml file:

kubectl apply -f traffic.yaml

Run the following command to check the status of your pods:

kubectl get pods

If all the pods are healthy and running, you will see an output similar to the following:

NAME                                    READY     STATUS    RESTARTS   AGE
inventory-deployment-645767664f-nbtd9   2/2       Running   0          30s
system-deployment-6bd97d9bf6-4ccds      2/2       Running   0          30s

Check that all of the deployments are available. You need to wait until all of your deployments are ready and available before making requests to your microservices.

kubectl get deployments
NAME                     READY     UP-TO-DATE   AVAILABLE   AGE
inventory-deployment     1/1       1            1           1m
system-deployment        1/1       1            1           1m

You will make a request to the system service from the inventory service to access the JVM system properties of your running container. The Istio gateway is the entry point for HTTP requests to the cluster. As defined in the services.yaml file, the gateway expects the Host header of requests to the system service and inventory service to be system.example.com and inventory.example.com, respectively. However, requests to system.example.com and inventory.example.com won’t be routed to the appropriate IP address. To make sure that the gateway routes your requests correctly, set the Host header explicitly. You can set the Host header with the -H option of the curl command.

services.yaml

  1# tag::gateway[]
  2apiVersion: networking.istio.io/v1alpha3
  3kind: Gateway
  4metadata:
  5  name: sys-app-gateway
  6spec:
  7  selector:
  8    istio: ingressgateway
  9  servers:
 10  - port:
 11      number: 80
 12      name: http
 13      protocol: HTTP
 14    hosts:
 15    - "system.example.com"
 16    - "inventory.example.com"
 17# end::gateway[]
 18---
 19apiVersion: v1
 20kind: Service
 21metadata:
 22  name: system-service
 23  labels:
 24    app: system
 25spec:
 26  ports:
 27  - port: 9080
 28    name: http
 29  selector:
 30    app: system
 31---
 32apiVersion: v1
 33kind: Service
 34metadata:
 35  name: inventory-service
 36  labels:
 37    app: inventory
 38spec:
 39  ports:
 40  - port: 9081
 41    name: http
 42  selector:
 43    app: inventory
 44---
 45apiVersion: apps/v1
 46kind: Deployment
 47metadata:
 48  name: system-deployment
 49  labels:
 50    app: system
 51spec:
 52  selector:
 53    matchLabels:
 54      app: system
 55  template:
 56    metadata:
 57      labels:
 58        app: system
 59    spec:
 60      containers:
 61      - name: system-container
 62        image: system:1.0-SNAPSHOT
 63        ports:
 64        - containerPort: 9080
 65---
 66apiVersion: apps/v1
 67kind: Deployment
 68metadata:
 69  name: inventory-deployment
 70  labels:
 71    app: inventory
 72spec:
 73  selector:
 74    matchLabels:
 75      app: inventory
 76  template:
 77    metadata:
 78      labels:
 79        app: inventory
 80    spec:
 81      containers:
 82      - name: inventory-container
 83        # tag::invImage[]
 84        image: inventory:1.0-SNAPSHOT
 85        # end::invImage[]
 86        ports:
 87        - containerPort: 9081
 88        # tag::invConfig[]
 89        envFrom:
 90        - configMapRef:
 91            # tag::configName2[]
 92            name: inventory-config
 93            # end::configName2[]
 94        # end::invConfig[]
 95# tag::configHide[]
 96---
 97# tag::configMap[]
 98apiVersion: v1
 99kind: ConfigMap
100metadata:
101  # tag::configName[]
102  name: inventory-config
103  # end::configName[]
104data:
105  # tag::nonFallback[]
106  MP_Fault_Tolerance_NonFallback_Enabled: "false"
107  # end::nonFallback[]
108# end::configMap[]
109# end::configHide[]

traffic.yaml

 1apiVersion: networking.istio.io/v1alpha3
 2kind: VirtualService
 3metadata:
 4  name: system-virtual-service
 5spec:
 6  hosts:
 7  - "system.example.com"
 8  gateways:
 9  - sys-app-gateway
10  http:
11  - route:
12    - destination:
13        port:
14          number: 9080
15        host: system-service
16---
17apiVersion: networking.istio.io/v1alpha3
18kind: VirtualService
19metadata:
20  name: inventory-virtual-service
21spec:
22  hosts:
23  - "inventory.example.com"
24  gateways:
25  - sys-app-gateway
26  http:
27  # tag::route[]
28  - route:
29    - destination:
30        port:
31          number: 9081
32        host: inventory-service
33    # tag::istioRetry[]
34    retries:
35      # tag::attempts[]
36      attempts: 4
37      # end::attempts[]
38      # tag::retryOn[]
39      retryOn: 5xx
40      # end::retryOn[]
41    # end::istioRetry[]
42  # end::route[]

Make a request to the service by using curl. If you are running Kubernetes through Docker Desktop, run:

curl -H Host:inventory.example.com http://localhost/inventory/systems/system-service -I

If you are using Minikube, run the following command instead:

curl -H Host:inventory.example.com http://`minikube ip`:31380/inventory/systems/system-service -I

If the curl command is unavailable, then use Postman. Postman enables you to make requests using a graphical interface. To make a request with Postman, enter http://localhost/inventory/systems/system-service into the URL bar. Next, switch to the Headers tab and add a header with a key of Host and a value of inventory.example.com. Finally, click the Send button to make the request.

You will see the following output:

HTTP/1.1 200 OK
x-powered-by: Servlet/4.0
content-type: application/json
date: Mon, 19 Aug 2019 19:49:47 GMT
content-language: en-US
x-envoy-upstream-service-time: 4242
server: istio-envoy
transfer-encoding: chunked

Because the system service is available, the request to the service is successful and returns a 200 response code.

To see the number of times that the system service was called, check the logs of the system pod by using the kubectl logs command. Replace [system-pod-name] with the pod name associated with your system service, which you previously saw when running the kubectl get pods command.

kubectl logs [system-pod-name] -c istio-proxy | grep -c system-service:9080

On Windows, use the find command instead of grep:

kubectl logs [system-pod-name] -c istio-proxy | find /C "system-service:9080"

You will see that the kubectl logs command returns a value of 1, meaning that 1 request is made to the system service:

1

Now you will make the system service unavailable and observe that MicroProfile’s Retry policy will take effect.

Pause the system service pod to simulate that the service is unavailable. Remember to replace [system-pod-name] with the pod name that is associated with your system service.

kubectl exec -it [system-pod-name] -- /opt/ol/wlp/bin/server pause

You will see the following output:

Pausing the defaultServer server.
Pausing the defaultServer server completed.

Make a request to the service by using curl. If you are running Kubernetes through Docker Desktop, run:

curl -H Host:inventory.example.com http://localhost/inventory/systems/system-service -I

If you are using Minikube, run the following command instead:

curl -H Host:inventory.example.com http://`minikube ip`:31380/inventory/systems/system-service -I

If the curl command is unavailable, then use Postman.

You will see the following output:

HTTP/1.1 503 Service Unavailable
x-powered-by: Servlet/4.0
content-length: 91
content-type: text/plain
date: Thu, 15 Aug 2019 13:21:57 GMT
server: istio-envoy
x-envoy-upstream-service-time: 2929
content-language: en-US

Because the system service is unavailable, the request returns a 503 response code. However, the request was retried several times, as specified by the MicroProfile @Retry annotation.

See the number of times that the service was retried:

kubectl logs [system-pod-name] -c istio-proxy | grep -c system-service:9080

On Windows, use the find command instead of grep:

kubectl logs [system-pod-name] -c istio-proxy | find /C "system-service:9080"

You will see the following output:

37

The command returns 37 because a total of 37 requests were made to the system service. By default, Istio retries a request 2 times to resolve any issue that results in a 503 response code. Including the initial requests and the retries, the requests made by Istio are multiplied by the requests made by MicroProfile. Hence, the 3 requests at the system service, the 3 requests at the inventory service, and the 4 MicroProfile requests are multiplied together, giving a total of 36 requests. Including the successful request that you made before you paused the system service, there was a total of 37 requests.
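The arithmetic can be sketched as a quick calculation (the variable names are illustrative, not from the guide):

```shell
mp_attempts=4        # MicroProfile @Retry: 1 initial request + maxRetries=3
istio_inventory=3    # Istio default at the inventory hop: 1 request + 2 retries
istio_system=3       # Istio default at the system hop: 1 request + 2 retries
total=$((mp_attempts * istio_inventory * istio_system + 1))  # +1 for the earlier successful request
echo $total          # 37
```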

Enabling Istio Fault Tolerance

Previously, you implemented the Retry policy to retry requests to your system service by using MicroProfile Fault Tolerance. This Retry policy can also be implemented with Istio Fault Tolerance.

Update the traffic.yaml file.
traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "system.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: inventory-virtual-service
spec:
  hosts:
  - "inventory.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9081
        host: inventory-service
    retries:
      attempts: 4
      retryOn: 5xx

Add the retries field under the route specification in the traffic.yaml file. This tells Istio to retry requests a maximum of 4 times when the request returns any 5xx response code.

The attempts field is required in the configuration of the Istio Retry policy. This field specifies the maximum number of retries that will be attempted for a given request. To retry a request on specific conditions, use the retryOn field. Because your paused system service responds with a 503 response code, you set retryOn to be 5xx. Other retry conditions can also be specified in retryOn. Optionally, the perTryTimeout field can be added to Istio’s Retry policy to specify the amount of time that is allocated to each retry attempt.
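For example, the optional perTryTimeout field fits into the same retries block. This fragment is illustrative only; the 2s value is an assumption and is not part of this guide's configuration:

```yaml
retries:
  attempts: 4
  perTryTimeout: 2s   # illustrative per-attempt timeout, not used in this guide
  retryOn: 5xx
```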

After you configure the number of retries that Istio performs, deploy your microservices again:

kubectl replace --force -f services.yaml
kubectl replace --force -f traffic.yaml

Wait until all of your deployments are ready and available. The [system-pod-name] is regenerated and is different from the one that you used previously. Run the kubectl get pods command to get the new [system-pod-name]. Pause the system service pod to simulate that the service is unavailable.

kubectl exec -it [system-pod-name] -- /opt/ol/wlp/bin/server pause

Make a request to the service by using curl:

curl -H Host:inventory.example.com http://localhost/inventory/systems/system-service -I

If your cluster runs on Minikube, use the Minikube IP address and the ingress gateway port instead:

curl -H Host:inventory.example.com http://`minikube ip`:31380/inventory/systems/system-service -I

Because the system service is unavailable, the request still returns a 503 response code. This time, however, Istio retried the request several more times before failing.

See the number of times that the service was retried:

kubectl logs [system-pod-name] -c istio-proxy | grep -c system-service:9080

On Windows, use the find command instead of grep:

kubectl logs [system-pod-name] -c istio-proxy | find /C "system-service:9080"

You will see the following output:

60

The above command returns a value of 60, indicating that a total of 60 requests were made to the system service. The 3 default Istio attempts on the system service, the 5 attempts on the inventory service that you configured in the traffic.yaml file (1 initial request plus 4 retries), and the 4 MicroProfile attempts multiply together: 3 x 5 x 4 = 60.

Next, you will disable some MicroProfile Fault Tolerance capabilities, so that your system service retries with only Istio’s Retry policy.

Turning off MicroProfile Fault Tolerance

When both MicroProfile and Istio Fault Tolerance capabilities are enabled, their effects compound, which might be unexpected. If both MicroProfile and Istio set their own Retry policies on a service, the maximum number of requests is not the number that either policy specifies on its own. Instead, the retry counts set by MicroProfile and Istio are multiplied together.

If you want to use Istio as your service mesh and only its fault tolerance capabilities, you can turn off MicroProfile Fault Tolerance by adding a property. This configuration avoids any overlap in behaviours.

MicroProfile Fault Tolerance offers a config property MP_Fault_Tolerance_NonFallback_Enabled that disables all MicroProfile Fault Tolerance capabilities except fallback. If MP_Fault_Tolerance_NonFallback_Enabled is set to false, only the @Fallback behaviour is enabled. The other behaviours specified by the MicroProfile Fault Tolerance annotations, including @Retry, won’t take effect.
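Because MP_Fault_Tolerance_NonFallback_Enabled is read through MicroProfile Config, a ConfigMap environment variable is only one way to set it. As a sketch of an alternative, the same property could be set in a microprofile-config.properties file that is bundled with the application (this guide itself uses the ConfigMap approach):

```properties
# Disables all MicroProfile Fault Tolerance capabilities except @Fallback
MP_Fault_Tolerance_NonFallback_Enabled=false
```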

You will define the MP_Fault_Tolerance_NonFallback_Enabled config property in a ConfigMap. ConfigMaps store configuration settings about a Kubernetes pod. This configuration is loaded into the pod as an environment variable that is used by the pod’s containers. The environment variables are defined in the pod’s specification by using the envFrom field. To learn more about ConfigMaps, check out the Configuring microservices running in Kubernetes guide.

Use the MP_Fault_Tolerance_NonFallback_Enabled config property to disable the retries performed by MicroProfile, so that only Istio performs retries.

Update the services.yaml file.
services.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sys-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "system.example.com"
    - "inventory.example.com"
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
  labels:
    app: system
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: system
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
  labels:
    app: inventory
spec:
  ports:
  - port: 9081
    name: http
  selector:
    app: inventory
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory-deployment
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
      - name: inventory-container
        image: inventory:1.0-SNAPSHOT
        ports:
        - containerPort: 9081
        envFrom:
        - configMapRef:
            name: inventory-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: inventory-config
data:
  MP_Fault_Tolerance_NonFallback_Enabled: "false"

Add a ConfigMap into the services.yaml file, and set the MP_Fault_Tolerance_NonFallback_Enabled config property to false. Add the envFrom field to inject the ConfigMap with the MP_Fault_Tolerance_NonFallback_Enabled property into your pods.

The configMapRef field references the ConfigMap by its name, inventory-config. Each key in the ConfigMap, in this case MP_Fault_Tolerance_NonFallback_Enabled, becomes an environment variable in the inventory container.

Deploy your microservices again to turn off all MicroProfile Fault Tolerance capabilities, except fallback:

kubectl replace --force -f services.yaml

Wait until all of your deployments are ready and available. Run the kubectl get pods command to get the new [system-pod-name]. Pause the system service pod to simulate that the service is unavailable:

kubectl exec -it [system-pod-name] -- /opt/ol/wlp/bin/server pause

Make a request to the service by using curl:

curl -H Host:inventory.example.com http://localhost/inventory/systems/system-service -I

If your cluster runs on Minikube, use the Minikube IP address and the ingress gateway port instead:

curl -H Host:inventory.example.com http://`minikube ip`:31380/inventory/systems/system-service -I

Because the system service is unavailable, the request still returns a 503 response code. This time, however, the request was retried several times with Istio, without any retries from MicroProfile.

See the number of times that the service was retried:

kubectl logs [system-pod-name] -c istio-proxy | grep -c system-service:9080

On Windows, use the find command instead of grep:

kubectl logs [system-pod-name] -c istio-proxy | find /C "system-service:9080"

You will see the following output:

15

The above command returns 15, indicating that a total of 15 requests were made to the system service. Because MicroProfile's Retry policy is disabled, only Istio's retries are performed: the 5 attempts on the inventory service multiplied by the 3 default attempts on the system service give 15 requests.


Using MicroProfile Fallback


Since retrying the requests to the system service still does not succeed, you need a "fall back" plan. You will create a fallback method as an alternative solution for when retry requests to the system service have failed.

Although you disabled MicroProfile @Retry and other MicroProfile Fault Tolerance policies using the MP_Fault_Tolerance_NonFallback_Enabled config property, the fallback policy is still available. As mentioned before, Istio does not offer any fallback capabilities, so the MicroProfile Fallback capability can be used to complement it.

The @Fallback annotation dictates a method to call when the original method encounters a failed execution. If your microservices have a Retry policy specified, then the fallback occurs after all of the retries have failed.
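Conceptually, the interceptor behaviour that @Fallback adds is similar to this plain-Java sketch. It is a simplification for illustration only; the real interceptor logic is supplied by the MicroProfile Fault Tolerance implementation:

```java
import java.util.function.Supplier;

public class FallbackSketch {

    // Simplified view of what the @Fallback interceptor does: invoke the
    // original method and, if it throws, invoke the designated fallback method.
    static <T> T withFallback(Supplier<T> original, Supplier<T> fallback) {
        try {
            return original.get();
        } catch (RuntimeException e) {
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        // The original call fails, so the fallback supplies the response.
        String result = withFallback(
            () -> { throw new RuntimeException("503 Service Unavailable"); },
            () -> "fallback response");
        System.out.println(result); // fallback response
    }
}
```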

Update the InventoryResource.java file.
inventory/src/main/java/io/openliberty/guides/inventory/InventoryResource.java

/*******************************************************************************
 * Copyright (c) 2019, 2022 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License 2.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-2.0/
 *
 * SPDX-License-Identifier: EPL-2.0
 *******************************************************************************/
package io.openliberty.guides.inventory;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.Properties;

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.ProcessingException;
import jakarta.ws.rs.WebApplicationException;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;

import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.rest.client.RestClientBuilder;
import org.eclipse.microprofile.config.inject.ConfigProperty;

import io.openliberty.guides.inventory.client.SystemClient;
import io.openliberty.guides.inventory.client.UnknownUrlException;
import io.openliberty.guides.inventory.client.UnknownUrlExceptionMapper;
import io.openliberty.guides.inventory.model.InventoryList;

@RequestScoped
@Path("/systems")
public class InventoryResource {

  @Inject
  @ConfigProperty(name = "system.http.port")
  String SYS_HTTP_PORT;

  @Inject
  InventoryManager manager;

  @GET
  @Path("/{hostname}")
  @Produces(MediaType.APPLICATION_JSON)
  // Call getPropertiesFallback() when getPropertiesForHost() fails
  @Fallback(fallbackMethod = "getPropertiesFallback")
  @Retry(maxRetries = 3, retryOn = WebApplicationException.class)
  public Response getPropertiesForHost(@PathParam("hostname") String hostname)
        throws WebApplicationException, ProcessingException, UnknownUrlException {

    String customURLString = "http://" + hostname + ":" + SYS_HTTP_PORT + "/system";
    URL customURL = null;
    Properties props = null;
    try {
      customURL = new URL(customURLString);
      SystemClient systemClient = RestClientBuilder.newBuilder()
              .baseUrl(customURL)
              .register(UnknownUrlExceptionMapper.class)
              .build(SystemClient.class);
      props = systemClient.getProperties();
    } catch (MalformedURLException e) {
      System.err.println("The given URL is not formatted correctly: "
                         + customURLString);
    }

    if (props == null) {
      return Response.status(Response.Status.NOT_FOUND)
                     .entity("{ \"error\" : \"Unknown hostname " + hostname
                             + " or the resource may not be running on the"
                             + " host machine\" }")
                     .build();
    }

    manager.add(hostname, props);
    return Response.ok(props).build();
  }

  @Produces(MediaType.APPLICATION_JSON)
  public Response getPropertiesFallback(@PathParam("hostname") String hostname) {
    Properties props = new Properties();
    props.put("error", "Unknown hostname or the system service may not be running.");
    return Response.ok(props)
                   .header("X-From-Fallback", "yes")
                   .build();
  }

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public InventoryList listContents() {
    return manager.list();
  }

  @POST
  @Path("/reset")
  public void reset() {
    manager.reset();
  }
}

Create the getPropertiesFallback() method. Add the @Fallback annotation before the getPropertiesForHost() method, to call the getPropertiesFallback() method when a failure occurs.

The getPropertiesFallback() method, which is the designated fallback method for the original getPropertiesForHost() method, returns a JSON response that warns you that the system service might not be running.

Rebuild your application to add fallback behaviour to your microservices:

mvn package

Next, run the docker build commands to rebuild the container images for your application:

docker build -t inventory:1.0-SNAPSHOT inventory/.
docker build -t system:1.0-SNAPSHOT system/.

Deploy your microservices again to add the fallback behaviour:

kubectl replace --force -f services.yaml
kubectl replace --force -f traffic.yaml

Wait until all of your deployments are ready and available. Run the kubectl get pods command to get the new [system-pod-name]. Pause the system service pod to simulate that the service is unavailable:

kubectl exec -it [system-pod-name] -- /opt/ol/wlp/bin/server pause

Make a request to the service by using curl:

curl -H Host:inventory.example.com http://localhost/inventory/systems/system-service -I

If your cluster runs on Minikube, use the Minikube IP address and the ingress gateway port instead:

curl -H Host:inventory.example.com http://`minikube ip`:31380/inventory/systems/system-service -I

You will see the following output:

HTTP/1.1 200 OK
x-powered-by: Servlet/4.0
x-from-fallback: yes
content-type: application/json
date: Mon, 19 Aug 2019 19:49:47 GMT
content-language: en-US
x-envoy-upstream-service-time: 4242
server: istio-envoy
transfer-encoding: chunked

You can see that the request is now successful and returns a 200 response code. The response includes a header called x-from-fallback, which indicates that the fallback method was called because the system service is unavailable.

See the number of times that the service is retried before the fallback method is called:

kubectl logs [system-pod-name] -c istio-proxy | grep -c system-service:9080

On Windows, use the find command instead of grep:

kubectl logs [system-pod-name] -c istio-proxy | find /C "system-service:9080"

You will see the following output:

3

The above command returns 3, indicating that a total of 3 requests were made to the system service. The Istio retries that you enabled on the inventory service are not performed, because the fallback method returns a successful response before Istio sees a failure from the inventory service. However, the 3 default Istio attempts on the system service still occur. Because all of these requests failed, the getPropertiesFallback() fallback method is called.

Tearing down your environment

When you are done checking out the MicroProfile and Istio Fault Tolerance features, you might want to tear down all the deployed resources as a cleanup step.

Delete your resources from the cluster:

kubectl delete -f services.yaml
kubectl delete -f traffic.yaml

Delete the istio-injection label from the default namespace. The hyphen immediately after the label name indicates that the label should be deleted.

kubectl label namespace default istio-injection-

Navigate to the directory where you extracted Istio and delete the Istio resources from the cluster:

kubectl delete -f install/kubernetes/istio-demo.yaml

Delete all Istio resources from the cluster:

istioctl x uninstall --purge

Perform the following steps to return your environment to a clean state.

  1. Point the Docker daemon back to your local machine:

    eval $(minikube docker-env -u)
  2. Stop and delete your Minikube cluster:

    minikube stop
    minikube delete

Great work! You’re done!

You learned how to build resilient microservices by using Istio Retry and MicroProfile Fallback. You also observed how MicroProfile Fault Tolerance integrates with and complements Istio Fault Tolerance.

Guide Attribution

Developing fault-tolerant microservices with Istio Retry and MicroProfile Fallback by Open Liberty is licensed under CC BY-ND 4.0
