Managing microservice traffic using Istio

Duration: 30 minutes

Explore how to manage microservice traffic using Istio.

What you’ll learn

You will learn how to deploy an application to a Kubernetes cluster and enable Istio on it. You will also learn how to configure Istio to shift traffic to implement blue-green deployments for microservices.

What is Istio?

Istio is a service mesh, meaning that it’s a platform for managing how microservices interact with each other and the outside world. Istio consists of a control plane and sidecars that are injected into application pods. The sidecars contain the Envoy proxy, which intercepts and controls all the HTTP and TCP traffic to and from your container.

While this guide focuses on running Istio on top of Kubernetes, you can also use Istio with other environments, such as Docker Compose. Istio has many features, such as traffic shifting, request routing, access control, and distributed tracing, but this guide focuses on traffic shifting.

Why Istio?

Istio provides a collection of features that allows you to manage several aspects of your services. One example is Istio’s routing features. You can route HTTP requests based on several factors such as HTTP headers or cookies. Another use case for Istio is telemetry, which you can use to enable distributed tracing. Distributed tracing allows you to visualize how HTTP requests travel between different services in your cluster by using a tool such as Jaeger. Additionally, as part of its collection of security features, Istio allows you to enable mutual TLS between pods in your cluster. Enabling TLS between pods secures communication between microservices internally.

Blue-green deployments are a method of deploying your applications such that you have two nearly identical environments where one acts as a sort of staging environment and the other is a production environment. This allows you to switch traffic from staging to production once a new version of your application has been verified to work. You’ll use Istio to implement blue-green deployments. The traffic shifting feature allows you to allocate a percentage of traffic to certain versions of services. You can use this feature to shift 100 percent of live traffic to blue deployments and 100 percent of test traffic to green deployments. Then, you can shift the traffic to point to the opposite deployments as necessary to perform blue-green deployments.

The microservice you’ll deploy is called system. It responds with the current system’s JVM properties and returns the app version in the response header. The version is incremented automatically when you update the version in the pom.xml file. This number allows you to see which version of the microservice is running in your production or test environments.

What are blue-green deployments?

Blue-green deployments are a way of deploying your applications such that you have two environments where your application runs. In this scenario, you will have a production environment and a test environment. At any point in time, the blue deployment can accept production traffic and the green deployment can accept test traffic, or vice-versa. When you want to deploy a new version of your application, you will deploy to the color that is acting as your test environment. After the new version is verified on the test environment, the traffic will be shifted over. Thus, your live traffic is now being handled by what used to be the test site.

Additional prerequisites

Before you begin, you need containerization software for building containers. Kubernetes supports various container runtimes. You will use Docker in this guide. For Docker installation instructions, refer to the official Docker documentation.

If you use Docker Desktop, a local Kubernetes environment is pre-installed and enabled. If you do not see the Kubernetes tab, then upgrade to the latest version of Docker Desktop.

After you complete the Docker setup instructions for your operating system, ensure that Kubernetes (not Swarm) is selected as the orchestrator in Docker Preferences.


Alternatively, you can use Minikube as a single-node Kubernetes cluster that runs locally in a virtual machine. For Minikube installation instructions, see the Minikube documentation. Be sure to read the Requirements section, because different operating systems require different prerequisites to run Minikube.

Getting started

The fastest way to work through this guide is to clone the Git repository and use the projects that are provided inside:

git clone https://github.com/openliberty/guide-istio-intro.git
cd guide-istio-intro

The start directory contains the starting project that you will build upon.

The finish directory contains the finished project that you will build.

Starting and preparing your cluster for deployment

Start your Kubernetes cluster.

If you use Docker Desktop, start your Docker Desktop environment. Check your settings to ensure that an adequate amount of memory is allocated to your Docker Desktop environment. 8GB is recommended, but 4GB is adequate if your machine doesn’t have enough RAM.

If you use Minikube, run the following command from a command line:

minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.13.0

The memory flag allocates 8GB of memory to your Minikube cluster. If you don’t have enough RAM, then 4GB should be adequate.

Next, validate that you have a healthy Kubernetes environment by running the following command from the command line.

kubectl get nodes

This command should return a Ready status for the master node.

If you use Docker Desktop, you do not need to complete any other steps.

If you use Minikube, run the following command to configure the Docker CLI to use Minikube’s Docker daemon. After you run this command, you can interact with Minikube’s Docker daemon and build new images directly to it from your host machine:

eval $(minikube docker-env)
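A quick way to confirm which daemon the Docker CLI now targets is to check the DOCKER_HOST variable that the eval command sets. A minimal sketch:

```shell
# After the eval, DOCKER_HOST points at the Minikube VM's Docker daemon;
# before it (or after 'minikube docker-env -u'), the variable is unset and
# the local daemon is used.
echo "Docker CLI target: ${DOCKER_HOST:-local daemon}"
```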

Deploying Istio

First, go to the Istio release page and download the latest stable release. Extract the archive and navigate to the directory with the extracted files.

Next, deploy the Istio custom resource definitions. Custom resource definitions allow Istio to define custom Kubernetes resources that you can use in your resource definition files.

If you use Windows, run the following command from the Command Prompt:

FOR %i in (install\kubernetes\helm\istio-init\files\crd*yaml) DO kubectl apply -f %i

If you use Mac or Linux, run the following command:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done

Next, deploy the Istio resources to your cluster by running the kubectl apply command, which creates or updates the Kubernetes resources that are defined in a YAML file.

kubectl apply -f install/kubernetes/istio-demo.yaml

Verify that Istio was successfully deployed by running the following command:

kubectl get deployments -n istio-system

All the values in the AVAILABLE column will have a value of 1 after the deployment is complete.

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
grafana                  1/1     1            1           2m48s
istio-citadel            1/1     1            1           2m48s
istio-egressgateway      1/1     1            1           2m48s
istio-galley             1/1     1            1           2m48s
istio-ingressgateway     1/1     1            1           2m48s
istio-pilot              1/1     1            1           2m48s
istio-policy             1/1     1            1           2m48s
istio-sidecar-injector   1/1     1            1           2m48s
istio-telemetry          1/1     1            1           2m48s
istio-tracing            1/1     1            1           2m48s
kiali                    1/1     1            1           2m48s
prometheus               1/1     1            1           2m48s

Ensure that the Istio deployments are all available before you continue. The deployments might take a few minutes to become available. If the deployments aren’t available after a few minutes, then increase the amount of memory available to your Kubernetes cluster. On Docker Desktop, you can increase the memory from your Docker preferences. On Minikube, you can increase the memory using the --memory flag.
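Instead of re-running kubectl get deployments manually, you can block until every Istio deployment reports Available. This sketch uses kubectl wait, which exits with a nonzero status if the condition isn’t met within the timeout:

```shell
# Wait up to 5 minutes for all deployments in the istio-system namespace
# to become available.
NS=istio-system
kubectl wait --for=condition=available deployment --all -n "$NS" --timeout=300s
```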

Finally, add the istio-injection label to the default namespace and set its value to enabled.

kubectl label namespace default istio-injection=enabled

Adding this label enables automatic Istio sidecar injection. Automatic injection means that sidecars will automatically be injected into your pods when you deploy your application.
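You can verify the label right away and, once the application is deployed, confirm the injection itself: each pod should list two containers, your application container plus the istio-proxy sidecar. A sketch:

```shell
# Confirm that the default namespace carries the istio-injection=enabled label.
NAMESPACE=default
kubectl get namespace "$NAMESPACE" --show-labels

# After deploying the application, list each pod with its containers; injected
# pods show the app container alongside istio-proxy.
kubectl get pods -n "$NAMESPACE" \
    -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'
```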

Deploying version 1 of the system microservice

Navigate to the start directory and run the following command to build the application locally.

mvn clean package

Next, run the docker build command to build the container image for your application:

docker build -t system:1.0-SNAPSHOT .

The command builds a Docker image for the system microservice. The -t flag in the docker build command allows the Docker image to be labeled (tagged) in the name[:tag] format. The tag for an image describes the specific image version. If the optional [:tag] tag is not specified, the latest tag is created by default. You can verify that this image was created by running the following command.

docker images

You’ll see an image called system:1.0-SNAPSHOT listed in a table similar to the following output.

REPOSITORY                 TAG            IMAGE ID       CREATED          SIZE
system                     1.0-SNAPSHOT   d316c2c2c6ba   9 seconds ago    501MB
istio/galley               1.0.1          7ac6c7be3d3e   5 days ago       65.8MB
istio/citadel              1.0.1          abcc721c2454   5 days ago       51.7MB
istio/mixer                1.0.1          0d97b4000ed5   5 days ago       64.5MB
istio/sidecar_injector     1.0.1          a122adc160b7   5 days ago       45.3MB
istio/proxyv2              1.0.1          f1bf7b920fe1   5 days ago       352MB
istio/pilot                1.0.1          46d3b4e95fc3   5 days ago       290MB
open-liberty               latest         ed1ca62c4bd5   7 days ago       501MB
prom/prometheus            v2.3.1         b82ef1f3aa07   2 months ago     119MB
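
Because a tag is only a name for an image ID, the same image can carry several tags without copying any layers. A small sketch (the system:stable tag is just an illustrative name, not part of this guide):

```shell
# Give the existing image a second, hypothetical tag; both tags share one
# image ID, so 'docker images system' lists two rows with identical IDs.
IMAGE=system:1.0-SNAPSHOT
docker tag "$IMAGE" system:stable
docker images system
```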

To deploy the system microservice to the Kubernetes cluster, run the following command.

kubectl apply -f system.yaml

You can see that your resources are created:

gateway.networking.istio.io/sys-app-gateway created
service/system-service created
deployment.apps/system-deployment-blue created
deployment.apps/system-deployment-green created
destinationrule.networking.istio.io/system-destination-rule created

system.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sys-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
    - "test.example.com"
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
  labels:
    app: system
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
      version: blue
  template:
    metadata:
      labels:
        app: system
        version: blue
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment-green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
      version: green
  template:
    metadata:
      labels:
        app: system
        version: green
    spec:
      containers:
      - name: system-container
        image: system:1.0-SNAPSHOT
        ports:
        - containerPort: 9080
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: system-destination-rule
spec:
  host: system-service
  subsets:
  - name: blue
    labels:
      version: blue
  - name: green
    labels:
      version: green

View the system.yaml file. It contains two deployments, a service, a gateway, and a destination rule. One of the deployments is labeled blue and the second deployment is labeled green. The service points to both of these deployments. The Istio gateway is the entry point for HTTP requests to the cluster. A destination rule applies policies after routing; in this case, it defines the service subsets that traffic can be specifically routed to.
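You can list the created resources by kind to see how the pieces fit together; for example, the destination rule should report the blue and green subsets. A sketch:

```shell
# List the Istio and Kubernetes resources that system.yaml created.
kubectl get gateway,service,deployment,destinationrule

# Print the subset names that the destination rule defines; per system.yaml,
# these should be "blue green".
RULE=system-destination-rule
kubectl get destinationrule "$RULE" -o jsonpath='{.spec.subsets[*].name}'
```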

start/traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 100
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-test-virtual-service
spec:
  hosts:
  - "test.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 0
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 100

View the traffic.yaml file. It contains two virtual services. A virtual service defines how requests are routed to your applications. In the virtual services, you configure the weight, which controls the amount of traffic that goes to each deployment. In this case, the weights are either 100 or 0, which corresponds to which deployment is live.
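The weights don’t have to be all-or-nothing. For a canary-style rollout, you could send, say, 10 percent of live traffic to the green subset. This sketch patches the virtual service in place instead of editing the file; the JSON paths assume the route order shown in traffic.yaml:

```shell
# Build a JSON patch that rewrites the two weights on the live virtual
# service (route 0 is the blue subset and route 1 is green in traffic.yaml).
PATCH='[{"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 90},
        {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 10}]'
kubectl patch virtualservice system-virtual-service --type=json -p "$PATCH"
```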

Deploy the resources defined in the traffic.yaml file.

kubectl apply -f traffic.yaml

You can see that the virtual services have been created.

virtualservice.networking.istio.io/system-virtual-service created
virtualservice.networking.istio.io/system-test-virtual-service created

You can check that all of the deployments are available by running the following command.

kubectl get deployments

The command produces a list of deployments for your microservices that is similar to the following output.

NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
system-deployment-blue    1         1         1            1           1m
system-deployment-green   1         1         1            1           1m

After all the deployments are available, you will make a request to version 1 of the deployed application. As defined in the system.yaml file, the gateway expects the host to be example.com. However, requests to example.com aren’t routed to the appropriate IP address by default. To ensure that the gateway routes your requests appropriately, set the Host header to example.com. For instance, you can set the Host header with the -H option of the curl command.

Make a request to the service by running the following curl command.

If you use Docker Desktop:

curl -H "Host:example.com" -I http://localhost/system/properties

If you use Minikube:

curl -H "Host:example.com" -I http://`minikube ip`:31380/system/properties

If the curl command is unavailable, then use Postman. Postman enables you to make requests using a graphical interface. To make a request with Postman, enter the request URL (for example, http://localhost/system/properties on Docker Desktop) into the URL bar. Next, switch to the Headers tab and add a header with a key of Host and a value of example.com. Finally, click the blue Send button to make the request.

You’ll see a header called x-app-version along with the corresponding version.

x-app-version: 1.0-SNAPSHOT

Deploying version 2 of the system microservice

The system microservice is set up to respond with the version that is set in the pom.xml file. The tag for the Docker image is also dependent on the version specified in the pom.xml file. Use the Maven command to bump the version of the microservice to 2.0-SNAPSHOT.

mvn versions:set -DnewVersion=2.0-SNAPSHOT

Use Maven to repackage your microservice:

mvn clean package

Next, build the new version of the container image as 2.0-SNAPSHOT:

docker build -t system:2.0-SNAPSHOT .

Deploy the new image to the green deployment.

kubectl set image deployment/system-deployment-green system-container=system:2.0-SNAPSHOT
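You can watch the green deployment finish rolling out the new image, and then confirm which image each deployment runs, with a sketch like the following:

```shell
# Block until the green deployment's pods are running the new image.
DEPLOY=system-deployment-green
kubectl rollout status deployment/"$DEPLOY"

# Print each deployment with the image it currently runs.
kubectl get deployments \
    -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```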

You will work with two environments. One of the environments is a test site located at test.example.com. The other environment is your production environment located at example.com. To start with, the production environment is tied to the blue deployment and the test environment is tied to the green deployment.

Test the updated microservice by making requests to the test site. The x-app-version header now has a value of 2.0-SNAPSHOT on the test site and is still 1.0-SNAPSHOT on the live site.

Make a request to the service by running the following curl command.

If you use Docker Desktop:

curl -H "Host:test.example.com" -I http://localhost/system/properties

If you use Minikube:

curl -H "Host:test.example.com" -I http://`minikube ip`:31380/system/properties

If the curl command is unavailable, then use Postman.

You’ll see the new version in the x-app-version response header.

x-app-version: 2.0-SNAPSHOT

Update the traffic.yaml file.

finish/traffic.yaml

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-virtual-service
spec:
  hosts:
  - "example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 0
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: system-test-virtual-service
spec:
  hosts:
  - "test.example.com"
  gateways:
  - sys-app-gateway
  http:
  - route:
    - destination:
        port:
          number: 9080
        host: system-service
        subset: blue
      weight: 100
    - destination:
        port:
          number: 9080
        host: system-service
        subset: green
      weight: 0

After you see that the microservice is working on the test site, modify the weights in the traffic.yaml file to shift 100 percent of the example.com traffic to the green deployment, and 100 percent of the test.example.com traffic to the blue deployment.

Deploy the updated traffic.yaml file.

kubectl apply -f traffic.yaml

Ensure that the live traffic is now being routed to version 2 of the microservice.

Make a request to the service by running the following curl command.

If you use Docker Desktop:

curl -H "Host:example.com" -I http://localhost/system/properties

If you use Minikube:

curl -H "Host:example.com" -I http://`minikube ip`:31380/system/properties

If the curl command is unavailable, then use Postman.

You’ll see the new version in the x-app-version response header.

x-app-version: 2.0-SNAPSHOT

Testing microservices that are running on Kubernetes

Next, you will create a test to verify that the correct version of your microservice is running.

Create the SystemEndpointTest class.

src/test/java/it/io/openliberty/guides/system/SystemEndpointTest.java

// tag::copyright[]
/*******************************************************************************
 * Copyright (c) 2019 IBM Corporation and others.
 * All rights reserved. This program and the accompanying materials
 * are made available under the terms of the Eclipse Public License v1.0
 * which accompanies this distribution, and is available at
 * http://www.eclipse.org/legal/epl-v10.html
 *
 * Contributors:
 *     IBM Corporation - Initial implementation
 *******************************************************************************/
// end::copyright[]
package it.io.openliberty.guides.system;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.After;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

import javax.ws.rs.client.WebTarget;
import org.apache.cxf.jaxrs.provider.jsrjsonp.JsrJsonpProvider;

public class SystemEndpointTest {

    private static String clusterUrl;

    private Client client;
    private Response response;

    @BeforeClass
    public static void oneTimeSetup() {
        // Allows for overriding the "Host" http header
        System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

        String clusterIp = System.getProperty("cluster.ip");
        String nodePort = System.getProperty("port");

        clusterUrl = "http://" + clusterIp + ":" + nodePort + "/system/properties/";
    }

    @Before
    public void setup() {
        response = null;
        client = ClientBuilder.newBuilder()
                    .hostnameVerifier(new HostnameVerifier() {
                        public boolean verify(String hostname, SSLSession session) {
                            return true;
                        }
                    })
                    .build();
    }

    @After
    public void teardown() {
        client.close();
    }

    // tag::testAppVersion[]
    @Test
    public void testAppVersionMatchesPom() {
        response = this.getResponse(clusterUrl);

        String expectedVersion = System.getProperty("app.name");
        String actualVersion = response.getHeaderString("X-App-Version");

        assertEquals(expectedVersion, actualVersion);
    }
    // end::testAppVersion[]

    @Test
    public void testPodNameNotNull() {
        response = this.getResponse(clusterUrl);
        this.assertResponse(clusterUrl, response);
        String greeting = response.getHeaderString("X-Pod-Name");

        String message = "Container name should not be null but it was. " +
            "The service is probably not running inside a container";

        assertNotNull(
            message,
            greeting);
    }

    @Test
    public void testGetProperties() {
        Client client = ClientBuilder.newClient();
        client.register(JsrJsonpProvider.class);

        WebTarget target = client.target(clusterUrl);
        Response response = target
            .request()
            .header("Host", System.getProperty("host-header"))
            .get();

        assertEquals("Incorrect response code from " + clusterUrl,
            200,
            response.getStatus());

        response.close();
    }

    private Response getResponse(String url) {
        return client
            .target(url)
            .request()
            .header("Host", System.getProperty("host-header"))
            .get();
    }

    private void assertResponse(String url, Response response) {
        assertEquals("Incorrect response code from " + url,
            200,
            response.getStatus());
    }

}

The testAppVersionMatchesPom test case verifies that the correct version number is returned in the response headers.

Run the command to start the tests.

If you use Docker Desktop:

mvn verify

If you use Minikube:

mvn verify -Dcluster.ip=`minikube ip` -Dport=31380

The cluster.ip and port parameters refer to the IP address and port for the Istio gateway.
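Port 31380 is the NodePort that the demo profile’s istio-ingressgateway service assigns to plain HTTP traffic. If you want to confirm the value to pass as -Dport, you can read it off the service (a sketch):

```shell
# Look up the NodePort behind the gateway's http2 port in the istio-system
# namespace; with the demo profile, this prints 31380.
SVC=istio-ingressgateway
kubectl get service "$SVC" -n istio-system \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
```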

If the tests pass, then you should see output similar to the following example:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.system.SystemEndpointTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.503 s - in it.io.openliberty.guides.system.SystemEndpointTest

Results:

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0

Tearing down your environment

You might want to tear down all the deployed resources as a cleanup step.

Delete your resources from the cluster:

kubectl delete -f system.yaml
kubectl delete -f traffic.yaml

Delete the istio-injection label from the default namespace. The hyphen immediately after the label name indicates that the label should be deleted.

kubectl label namespace default istio-injection-

Navigate to the directory where you extracted Istio and delete the Istio resources from the cluster:

kubectl delete -f install/kubernetes/istio-demo.yaml

Next, delete the Istio custom resource definitions.

If you use Windows, run the following command from the Command Prompt:

FOR %i in (install\kubernetes\helm\istio-init\files\crd*yaml) DO kubectl delete -f %i

If you use Mac or Linux, run the following command:

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl delete -f $i; done

If you use Docker Desktop, nothing more needs to be done.

If you use Minikube, perform the following steps to return your environment to a clean state.

  1. Point the Docker daemon back to your local machine:

    eval $(minikube docker-env -u)
  2. Stop and delete your Minikube cluster:

    minikube stop
    minikube delete

Great work! You’re done!

You have deployed a microservice that runs on Open Liberty to a Kubernetes cluster and used Istio to implement a blue-green deployment scheme.

Guide Attribution

Managing microservice traffic using Istio by Open Liberty is licensed under CC BY-ND 4.0
