Managing microservice traffic using Istio

Duration: 30 minutes

Git clone this repo to get going right away:

git clone https://github.com/OpenLiberty/guide-istio-intro.git

Explore how to manage microservice traffic using Istio.

What you’ll learn

You will learn how to deploy an application to a Kubernetes cluster and enable Istio on it. You will also learn how to configure Istio to shift traffic to implement blue-green deployments for microservices.

What is Istio?

Istio is a service mesh, meaning that it’s a platform for managing how microservices interact with each other and the outside world. Istio consists of a control plane and sidecars that are injected into application pods. The sidecars contain the Envoy proxy. You can think of Envoy as a proxy that intercepts and controls all the HTTP and TCP traffic to and from your container.

Istio runs on top of Kubernetes, which is the focus of this guide, but you can also use Istio with other environments, such as Docker Compose. Istio has many features, such as traffic shifting, request routing, access control, and distributed tracing, but this guide focuses on traffic shifting.

Why Istio?

Istio provides a collection of features that allows you to manage several aspects of your services. One example is Istio’s routing features. You can route HTTP requests based on several factors such as HTTP headers or cookies. Another use case for Istio is telemetry, which you can use to enable distributed tracing. Distributed tracing allows you to visualize how HTTP requests travel between different services in your cluster by using a tool such as Jaeger. Additionally, as part of its collection of security features, Istio allows you to enable mutual TLS between pods in your cluster. Enabling TLS between pods secures communication between microservices internally.
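
For example, request routing in Istio is configured with a VirtualService resource that can match on HTTP headers. The following sketch is illustrative only and is not part of this guide; the reviews service, its subsets, and the end-user header are hypothetical:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-by-header
spec:
  hosts:
  - reviews
  http:
  # Requests that carry the header "end-user: jason" are routed to the v2 subset.
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  # All other requests fall through to the v1 subset.
  - route:
    - destination:
        host: reviews
        subset: v1

This guide uses the same VirtualService resource, but with weights instead of header matches, to shift traffic between deployments.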

Blue-green deployments are a method of deploying your applications such that you have two nearly identical environments where one acts as a sort of staging environment and the other is a production environment. This allows you to switch traffic from staging to production once a new version of your application has been verified to work. You’ll use Istio to implement blue-green deployments. The traffic shifting feature allows you to allocate a percentage of traffic to certain versions of services. You can use this feature to shift 100 percent of live traffic to blue deployments and 100 percent of test traffic to green deployments. Then, you can shift the traffic to point to the opposite deployments as necessary to perform blue-green deployments.
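
To make the idea concrete, here is a minimal sketch of a weighted route in a VirtualService. The host and subset names are placeholders; the actual files that you’ll use appear later in this guide:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: example-virtual-service   # placeholder name
spec:
  hosts:
  - example-service               # placeholder service host
  http:
  - route:
    - destination:
        host: example-service
        subset: blue
      weight: 100                 # all of this traffic goes to the blue deployment
    - destination:
        host: example-service
        subset: green
      weight: 0                   # none of this traffic goes to the green deployment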

The microservice you’ll deploy is called system. It responds with the current system’s JVM properties and returns the app version in a response header. The version comes from the pom.xml file, so it changes whenever you update the version there. This number lets you see which version of the microservice is running in your production or test environments.

What are blue-green deployments?

Blue-green deployments are a way of deploying your applications such that you have two environments where your application runs. In this scenario, you will have a production environment and a test environment. At any point in time, the blue deployment can accept production traffic and the green deployment can accept test traffic, or vice versa. When you want to deploy a new version of your application, you deploy to the color that is acting as your test environment. After the new version is verified on the test environment, the traffic is shifted over, so your live traffic is now handled by what used to be the test site.

Prerequisites

Before you begin, the following tools must be installed:

First, you need a containerization software for building containers. Kubernetes supports various container runtimes. You will use Docker in this guide. For Docker installation instructions, refer to the official Docker documentation.

WINDOWS | MAC

Use Docker Desktop, where a local Kubernetes environment is preinstalled and enabled. If you do not see the Kubernetes tab, then upgrade to the latest version of Docker Desktop.

Complete the setup for your operating system:

  • Set up Docker for Windows. In the Docker Preferences, ensure that the Expose daemon on tcp://localhost:2375 without TLS option is enabled in the General tab. This setting is required when the dockerfile-maven plug-in is used in the POM file.

  • Set up Docker for Mac.

After you complete the Docker setup instructions for your operating system, ensure that Kubernetes (not Swarm) is selected as the orchestrator in Docker Preferences.

LINUX

You will use Minikube as a single-node Kubernetes cluster that runs locally in a virtual machine. For Minikube installation instructions, see the Minikube documentation. Be sure to read the Requirements section, as different operating systems require different prerequisites to run Minikube.

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/OpenLiberty/guide-istio-intro.git
cd guide-istio-intro

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Starting and preparing your cluster for deployment

Start your Kubernetes cluster.

WINDOWS | MAC

Start your Docker Desktop environment.

Check your settings to ensure that an adequate amount of memory is allocated to your Docker Desktop environment. 8GB is recommended, but 4GB should be sufficient if you don’t have that much RAM.

LINUX

Run the following command from a command line:

minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.13.0

The --memory flag allocates 8GB of memory to your Minikube cluster. If you don’t have that much RAM, 4GB should be adequate.

Next, validate that you have a healthy Kubernetes environment by running the following command from the command line.

kubectl get nodes

This command should return a Ready status for the master node.

WINDOWS | MAC

No further steps are required.

LINUX

Run the following command to configure the Docker CLI to use Minikube’s Docker daemon. After you run this command, you will be able to interact with Minikube’s Docker daemon and build new images directly to it from your host machine:

eval $(minikube docker-env)

Deploying Istio

First, go to the Istio release page and download the latest stable release. Extract the archive and navigate to the directory with the extracted files.

Next, deploy the Istio custom resource definitions. Custom resource definitions allow Istio to define custom Kubernetes resources that you can use in your resource definition files.

LINUX | MAC

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

WINDOWS

FOR %i in (install\kubernetes\helm\istio-init\files\crd*yaml) DO kubectl apply -f %i

Next, deploy Istio to your cluster by running the kubectl apply command, which creates or updates the Kubernetes resources that are defined in a YAML file.

kubectl apply -f install/kubernetes/istio-demo.yaml

Verify that Istio was successfully deployed. All the values in the AVAILABLE column will have a value of 1 after the deployment is complete.

kubectl get deployments -n istio-system

Ensure that the Istio deployments are all available before you continue. The deployments might take a few minutes to become available. If the deployments aren’t available after a few minutes, then increase the amount of memory available to your Kubernetes cluster. On Docker Desktop, you can increase the memory from your Docker preferences. On Minikube, you can increase the memory using the --memory flag.

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
grafana                  1         1         1            1           44s
istio-citadel            1         1         1            1           44s
istio-egressgateway      1         1         1            1           44s
istio-galley             1         1         1            1           44s
istio-ingressgateway     1         1         1            1           44s
istio-pilot              1         1         1            1           44s
istio-policy             1         1         1            1           44s
istio-sidecar-injector   1         1         1            1           44s
istio-telemetry          1         1         1            1           44s
istio-tracing            1         1         1            1           43s
prometheus               1         1         1            1           44s
servicegraph             1         1         1            1           44s

Finally, create the istio-injection label and set its value to enabled.

kubectl label namespace default istio-injection=enabled

Adding this label enables automatic Istio sidecar injection. Automatic injection means that sidecars will automatically be injected into your pods when you deploy your application. You don’t need to perform any additional steps for the sidecars to be injected.
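
As a declarative alternative to the kubectl label command, you could set the same label in a namespace manifest and apply it with kubectl apply. This is only a sketch of an equivalent approach, not a step in this guide:

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # turns on automatic sidecar injection for pods in this namespace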

Deploying version 1 of the system microservice

Navigate to the start directory and run the following command. It might take a few minutes to complete. The command builds the application and packages it into a Docker image by using a Maven plug-in called dockerfile-maven-plugin.

mvn clean package

The command builds a Docker image for the system microservice. You can verify that this image was created by running the following command.

docker images

You’ll see an image called system:1.0-SNAPSHOT listed in a table similar to the following output.

REPOSITORY                 TAG            IMAGE ID       CREATED          SIZE
system                     1.0-SNAPSHOT   d316c2c2c6ba   9 seconds ago    501MB
istio/galley               1.0.1          7ac6c7be3d3e   5 days ago       65.8MB
istio/citadel              1.0.1          abcc721c2454   5 days ago       51.7MB
istio/mixer                1.0.1          0d97b4000ed5   5 days ago       64.5MB
istio/sidecar_injector     1.0.1          a122adc160b7   5 days ago       45.3MB
istio/proxyv2              1.0.1          f1bf7b920fe1   5 days ago       352MB
istio/pilot                1.0.1          46d3b4e95fc3   5 days ago       290MB
open-liberty               latest         ed1ca62c4bd5   7 days ago       501MB
prom/prometheus            v2.3.1         b82ef1f3aa07   2 months ago     119MB

Deploy the system microservice to the Kubernetes cluster by running the following command.

kubectl apply -f system.yaml

You can see that your resources are created:

gateway.networking.istio.io/sys-app-gateway created
service/system-service created
deployment.apps/system-deployment-blue created
deployment.apps/system-deployment-green created
destinationrule.networking.istio.io/system-destination-rule created

system.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sys-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
    - "test.example.com"
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
  labels:
    app: system
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
      version: blue
  template:
    metadata:
      labels:
        app: system
        version: blue
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment-green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
      version: green
  template:
    metadata:
      labels:
        app: system
        version: green
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: system-destination-rule
spec:
  host: system-service
  subsets:
  - name: blue
    labels:
      version: blue
  - name: green
    labels:
      version: green

View the system.yaml file. It contains two deployments, a service, a gateway, and a destination rule. One of the deployments is labeled blue, and the second deployment is labeled green. The service points to both of these deployments. The Istio gateway is the entry point for HTTP requests to the cluster. A destination rule applies policies after routing occurs; in this case, it defines the service subsets that individual routes can target.

start/traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 100
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-test-virtual-service
spec:
  hosts:
  - "test.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 0
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 100

View the traffic.yaml file. It contains two virtual services. A virtual service defines how requests are routed to your applications. In the virtual services, you configure weights, which control the amount of traffic that goes to each deployment. In this case, each weight is either 100 or 0, which determines which deployment receives the live traffic for that host.

Deploy the resources defined in the traffic.yaml file.

kubectl apply -f traffic.yaml

You can see that the virtual services have been created.

virtualservice.networking.istio.io/system-virtual-service created
virtualservice.networking.istio.io/system-test-virtual-service created

You can check that all of the deployments are available by running the following command.

kubectl get deployments

The command produces a list of deployments for your microservices that is similar to the following output.

NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
system-deployment-blue    1         1         1            1           1m
system-deployment-green   1         1         1            1           1m

After all the deployments are available, you will make a request to version 1 of the deployed application. As defined in the system.yaml file, the gateway expects the host to be example.com. However, requests to example.com won’t resolve to your cluster because you don’t control that domain. To ensure that the gateway routes your requests appropriately, set the Host header to example.com. For instance, you can set the Host header with the -H option of the curl command.

WINDOWS | MAC

Make a request to the service by running the following curl command.

curl -H "Host:example.com" -I http://localhost/system/properties

If the curl command is unavailable, then use Postman. Postman enables you to make requests using a graphical interface. To make a request with Postman, enter http://localhost/system/properties into the URL bar. Next, switch to the Headers tab and add a header with key of Host and value of example.com. Finally, click the blue Send button to make the request.

LINUX

Make a request to the service by using curl.

curl -H "Host:example.com" -I http://`minikube ip`:31380/system/properties

You’ll see a header called x-app-version along with the corresponding version.

x-app-version: 1.0-SNAPSHOT

Deploying version 2 of the system microservice

The system microservice is set up to respond with the version that is set in the pom.xml file. The tag for the Docker image is also dependent on the version specified in the pom.xml file. Use the Maven command to bump the version of the microservice to 2.0-SNAPSHOT.

mvn versions:set -DnewVersion=2.0-SNAPSHOT

Build the new version of the Docker image.

mvn clean package

Deploy the new image to the green deployment.

kubectl set image deployment/system-deployment-green system-container=system:2.0-SNAPSHOT
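
The kubectl set image command has the same effect as editing the image field of the green deployment in system.yaml and reapplying the file. The relevant part of that deployment after the update would look like this sketch:

    spec:
      containers:
      - name: system-container
        image: system:2.0-SNAPSHOT   # updated from 1.0-SNAPSHOT
        ports:
        - containerPort: 9080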

You will work with two environments. One of the environments is a test site located at test.example.com. The other environment is your production environment located at example.com. To start with, the production environment is tied to the blue deployment and the test environment is tied to the green deployment.

Test the updated microservice by making requests to the test site. The x-app-version header now has a value of 2.0-SNAPSHOT on the test site and is still 1.0-SNAPSHOT on the live site.

WINDOWS | MAC

Make a request to the service by running the following curl command.

curl -H "Host:test.example.com" -I http://localhost/system/properties

If the curl command is unavailable, then use Postman.

LINUX

Make a request to the service by using curl.

curl -H "Host:test.example.com" -I http://`minikube ip`:31380/system/properties

You’ll see the new version in the x-app-version response header.

x-app-version: 2.0-SNAPSHOT

Update the traffic.yaml file.

finish/traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 0
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-test-virtual-service
spec:
  hosts:
  - "test.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 100
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 0

After you see that the microservice is working on the test site, modify the weights in the traffic.yaml file to shift 100 percent of the example.com traffic to the green deployment, and 100 percent of the test.example.com traffic to the blue deployment.

Deploy the updated traffic.yaml file.

kubectl apply -f traffic.yaml

Ensure that the live traffic is now being routed to version 2 of the microservice.

WINDOWS | MAC

Make a request to the service by running the following curl command.

curl -H "Host:example.com" -I http://localhost/system/properties

If the curl command is unavailable, then use Postman.

LINUX

Make a request to the service by using curl.

curl -H "Host:example.com" -I http://`minikube ip`:31380/system/properties

You’ll see the new version in the x-app-version response header.

x-app-version: 2.0-SNAPSHOT

Testing microservices that are running on Kubernetes

Next, you will create a test to verify that the correct version of your microservice is running.

Create the SystemEndpointTest class.

src/test/java/it/io/openliberty/guides/system/SystemEndpointTest.java

package it.io.openliberty.guides.system;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.After;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

import javax.ws.rs.client.WebTarget;
import org.apache.cxf.jaxrs.provider.jsrjsonp.JsrJsonpProvider;

public class SystemEndpointTest {

    private static String clusterUrl;

    private Client client;
    private Response response;

    @BeforeClass
    public static void oneTimeSetup() {
        // Allows for overriding the "Host" http header
        System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

        String clusterIp = System.getProperty("cluster.ip");
        String nodePort = System.getProperty("port");

        clusterUrl = "http://" + clusterIp + ":" + nodePort + "/system/properties/";
    }

    @Before
    public void setup() {
        response = null;
        client = ClientBuilder.newBuilder()
                    .hostnameVerifier(new HostnameVerifier() {
                        public boolean verify(String hostname, SSLSession session) {
                            return true;
                        }
                    })
                    .build();
    }

    @After
    public void teardown() {
        client.close();
    }

    @Test
    public void testAppVersionMatchesPom() {
        response = this.getResponse(clusterUrl);

        String expectedVersion = System.getProperty("app.name");
        String actualVersion = response.getHeaderString("X-App-Version");

        assertEquals(expectedVersion, actualVersion);
    }

    @Test
    public void testPodNameNotNull() {
        response = this.getResponse(clusterUrl);
        this.assertResponse(clusterUrl, response);
        String greeting = response.getHeaderString("X-Pod-Name");

        String message = "Container name should not be null but it was. " +
            "The service is probably not running inside a container";

        assertNotNull(
            message,
            greeting);
    }

    @Test
    public void testGetProperties() {
        Client client = ClientBuilder.newClient();
        client.register(JsrJsonpProvider.class);

        WebTarget target = client.target(clusterUrl);
        Response response = target
            .request()
            .header("Host", System.getProperty("host-header"))
            .get();

        assertEquals("Incorrect response code from " + clusterUrl,
            200,
            response.getStatus());

        response.close();
    }

    private Response getResponse(String url) {
        return client
            .target(url)
            .request()
            .header("Host", System.getProperty("host-header"))
            .get();
    }

    private void assertResponse(String url, Response response) {
        assertEquals("Incorrect response code from " + url,
            200,
            response.getStatus());
    }

}

The testAppVersionMatchesPom test case verifies that the correct version number is returned in the response headers.

WINDOWS | MAC

Run the command to start the tests:

mvn verify -Ddockerfile.skip=true

LINUX

Run the command to start the tests:

mvn verify -Ddockerfile.skip=true -Dcluster.ip=`minikube ip` -Dport=31380

The cluster.ip and port parameters refer to the IP address and port for the Istio gateway.

The dockerfile.skip=true flag skips rebuilding the Docker images.

If the tests pass, then you should see output similar to the following example:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.503 s - in it.io.openliberty.guides.system.SystemEndpointTest

Results:

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

Tearing down your environment

You might want to tear down all the deployed resources as a cleanup step.

Delete your resources from the cluster.

kubectl delete -f system.yaml
kubectl delete -f traffic.yaml

Delete the istio-injection label from the default namespace. The hyphen immediately after the label name indicates that the label should be deleted.

kubectl label namespace default istio-injection-

Navigate to the directory where you extracted Istio and delete the Istio resources from the cluster.

kubectl delete -f install/kubernetes/istio-demo.yaml

LINUX | MAC

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl delete -f $i; done

WINDOWS

FOR %i in (install\kubernetes\helm\istio-init\files\crd*yaml) DO kubectl delete -f %i

WINDOWS | MAC

Nothing more needs to be done for Docker Desktop.

LINUX

Perform the following steps to return your environment to a clean state.

  1. Point the Docker daemon back to your local machine:

    eval $(minikube docker-env -u)

  2. Stop your Minikube cluster:

    minikube stop

  3. Delete your cluster:

    minikube delete

Great work! You’re done!

You have deployed a microservice that runs on Open Liberty to a Kubernetes cluster and used Istio to implement a blue-green deployment scheme.

Guide Attribution

Managing microservice traffic using Istio by Open Liberty is licensed under CC BY-ND 4.0
