Building an Istio 1.6 Service Mesh for Bee Travels, a Microservices Based Application Deployed on Kubernetes
In this code pattern, we will deploy a microservices based application to IBM Kubernetes Service and create a service mesh with Istio 1.6.
When you have completed this code pattern, you will understand how to:
- Deploy a microservices application on Kubernetes
- Configure an Istio service mesh, including:
  - Install and configure the IBM Managed Istio add-on
  - Route traffic to specific microservice versions
  - Shift traffic between multiple microservice versions (A/B testing)
  - Access distributed trace spans through Jaeger
  - Analyze service traffic and latency through Grafana
  - Visualize the service mesh through Kiali
  - View access logs
- Generate load tests with Artillery
- Step 1. Complete the IBM Cloud set-up for Kubernetes and Istio
- Step 2. Clone the repositories
- Step 3. Deploy the application to Kubernetes
  - Deploy version 1 (data stored in JSON flat files)
  - Deploy version 2 (data stored in an in-cluster database)
  - Deploy version 3 (data stored in a database in the cloud)
- Step 4. Access the application (ingress gateway)
- Step 5. Configure the Istio service mesh
  - Route traffic to specific microservice versions
  - Shift traffic between multiple microservice versions
  - Access distributed trace spans through Jaeger
  - Analyze service traffic and latency through Grafana
  - Visualize the service mesh through Kiali
  - View access logs
1. Sign up for an IBM Cloud account if you do not have one. You must have a Pay-As-You-Go or Subscription account to deploy this code pattern. See here to upgrade your account.
2. Provision a new Kubernetes cluster. Follow the steps to create a standard classic cluster.

    NOTE: This may take up to 30 minutes.
3. When your cluster has been created, navigate to the Add-ons panel on the left side of your cluster console. Click Install for the Managed Istio add-on.
4. After Istio has finished installing, install the `istioctl` CLI.

5. Customize your Istio installation by following steps 1 through 4 to enable monitoring and increase trace sampling to 100.
1. Clone the `bee-travels-istio` repository locally. In a terminal window, run:

    ```
    $ git clone https://github.com/IBM/bee-travels-istio.git
    ```
2. If you haven't already, log in to IBM Cloud using the command line.

    ```
    $ ibmcloud login
    ```
3. Set the cluster that you created as the context for this session.

    ```
    $ ibmcloud ks cluster config -c <cluster_name_or_ID>
    ```
4. Verify that `kubectl` commands run properly and that the Kubernetes context is set to your cluster.

    ```
    $ kubectl config current-context
    ```

    Example output:

    ```
    <cluster_name>/<cluster_id>
    ```
5. Enable automatic Istio sidecar injection.

    ```
    $ kubectl label namespace default istio-injection=enabled
    ```
1. Navigate to the `bee-travels-istio` root directory and deploy the application with version 1 services:

    ```
    $ ./deploy-k8s-v1.sh
    ```
The following outlines specific steps to connect to an in-cluster MongoDB database, but the Bee Travels application also supports PostgreSQL, CouchDB, and Cloudant.
1. Deploy the application with version 2 services.

    ```
    $ ./deploy-k8s-v2.sh
    ```
2. We have created a NodePort for MongoDB, which exposes the service outside the cluster at `<NodeIP>:<NodePort>`. (A sketch of a NodePort service appears at the end of this list.)

3. Take note of the node port (the second port number) of the `mongo` service.

    ```
    $ kubectl get svc mongo
    ```
4. Take note of the `EXTERNAL-IP` of any of the nodes in the cluster. This is the `<NodeIP>` we will use to connect to the MongoDB service as described in step 2.

    ```
    $ kubectl get node -o wide
    ```
5. Run the following script to populate the database that was created.

    ```
    $ ./generate.sh
    ```
6. Answer the prompts as seen below. For the Database Connection URL, replace `<NODE-IP>` with the IP address from step 4 and `<NODE-PORT>` with the node port from step 3. Use existing credentials when prompted for hotel and car rental data.

    ```
    Welcome to the Bee Travels Data Generating Script
    Please answer the following options to configure your data:
    Destination Data (Y/N): y
    Generate Destination Data (Y/N): n
    Database (mongodb/postgres/couchdb/cloudant): mongodb
    Database Connection URL: mongodb://admin:admin@<NODE-IP>:<NODE-PORT>
    Use SSL/TLS (Y/N): n
    ```
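For background on steps 2 and 3, a NodePort service publishes an in-cluster port on a high-numbered port of every worker node. A minimal sketch of what the `mongo` service could look like is shown below; the port numbers are assumptions, and `kubectl get svc mongo` shows the real values.

```yaml
# Hypothetical sketch of a NodePort service for MongoDB -- port values are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  type: NodePort
  selector:
    app: mongo
  ports:
    - port: 27017        # service port inside the cluster
      targetPort: 27017  # container port on the MongoDB pod
      nodePort: 31058    # the "second port number" reachable at <NodeIP>:<NodePort>
```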
The following outlines specific steps to connect to a MongoDB database in the cloud, but the Bee Travels application also supports PostgreSQL, CouchDB, and Cloudant.
1. Provision a Mongo as a service deployment on IBM Cloud.

2. When your Mongo as a service deployment has been created, navigate to the Manage panel on the left side of your deployment console and click the Settings tab. Set a new password for your service connection.
3. Navigate to the Overview tab and take note of the Public mongo endpoint.

4. Download the TLS certificate for your deployment. It will be used to connect to the database securely.
5. Run the following script to populate the database that was created.

    ```
    $ ./generate.sh
    ```
6. Answer the prompts as seen below. For the Database Connection URL, input the endpoint from step 3 and replace `$USERNAME` with `admin` and `$PASSWORD` with the password set in step 2. For the Certificate File Path, input the path to the TLS certificate downloaded in step 4. Use existing credentials when prompted for hotel and car rental data.

    NOTE: This code pattern will not use the flight service.

    ```
    Welcome to the Bee Travels Data Generating Script
    Please answer the following options to configure your data:
    Destination Data (Y/N): y
    Generate Destination Data (Y/N): n
    Database (mongodb/postgres/couchdb/cloudant): mongodb
    Database Connection URL:
    Use SSL/TLS (Y/N): y
    Certificate File Path:
    ```
7. Open `k8s/carrental-v3-deploy.yaml` in an editor. Replace `<YOUR URL HERE>` in line 41 with the URL from step 3, and replace `$USERNAME` with `admin` and `$PASSWORD` with the password set in step 2. At the end of your URL, add `&tls=true`. Repeat this step for `k8s/destination-v3-deploy.yaml` and `k8s/hotel-v3-deploy.yaml`.
8. Open the TLS certificate from step 4 and copy its contents, excluding the first and last lines in dashes. Encode the contents by running the following command and take note of the output:

    ```
    $ echo <TLS_cert> | base64
    ```
9. Open `k8s/mongo-secret.yaml` in an editor. Set the `dbsecret` value in line 7 to the encoded value from step 8. (A sketch of this secret appears at the end of this list.)
10. Deploy the application with version 3 services.

    ```
    $ ./deploy-k8s-v3.sh
    ```
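For reference, a minimal sketch of the shape a file like `k8s/mongo-secret.yaml` could have is shown below; the metadata values are assumptions, and only the `dbsecret` key is referenced by this code pattern.

```yaml
# Hypothetical sketch of k8s/mongo-secret.yaml -- names other than dbsecret are assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  dbsecret: <BASE64_ENCODED_TLS_CERT>  # output of `echo <TLS_cert> | base64`
```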
At this point, all 3 versions of the destination, hotel, and car rental services should be deployed, along with the UI and currency exchange services.
Confirm that the pods and services are up and running.

```
$ kubectl get po
$ kubectl get svc
```
1. Create an ingress gateway so the application is accessible from outside the cluster. (A sketch of a possible gateway definition appears after this list.)

    ```
    $ kubectl apply -f istio/gateway.yaml
    ```
2. Access the Bee Travels application in the browser by navigating to the IP address shown under `EXTERNAL-IP` for the ingress gateway.

    ```
    $ kubectl get svc -n istio-system istio-ingressgateway
    ```
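For reference, the sketch below shows the kind of configuration a file like `istio/gateway.yaml` typically contains: a `Gateway` that opens a port on Istio's ingress gateway and a `VirtualService` that routes incoming traffic to the UI service. The names, hosts, and port numbers are assumptions, not the repository's actual values.

```yaml
# Hypothetical sketch of an ingress gateway definition -- names, hosts, and ports are assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bee-gateway
spec:
  selector:
    istio: ingressgateway   # bind to Istio's default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bee-ui
spec:
  hosts:
    - "*"
  gateways:
    - bee-gateway           # only route traffic that enters through the gateway above
  http:
    - route:
        - destination:
            host: bee-ui    # the UI Kubernetes service
            port:
              number: 9000  # assumed UI port
```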
Before we begin the configurations, we will set up Artillery, an external load generator tool. We will be using Artillery to generate traffic to the Bee Travels application. Please make sure that it is installed by following the links in the Prerequisites section.
1. Open the `artillery_load/artillery.yaml` configuration in an editor and replace `<EXTERNAL-IP>` in line 2 with the `EXTERNAL-IP` of the ingress gateway. Save the file.

    NOTE: Make sure that there is no `/` at the end of the address.
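As a rough illustration, an Artillery configuration of this shape sends a steady stream of requests to the gateway address; the phase values and URL below are assumptions, and the repository's `artillery_load/artillery.yaml` defines the actual scenario.

```yaml
# Hypothetical Artillery configuration -- durations, rates, and URLs are assumptions.
config:
  target: "http://<EXTERNAL-IP>"  # ingress gateway address, no trailing slash
  phases:
    - duration: 60                # run the load phase for 60 seconds
      arrivalRate: 5              # start 5 new virtual users per second
scenarios:
  - flow:
      - get:
          url: "/"                # request the Bee Travels UI
```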
1. Before we can set traffic rules, destination rules must be defined for Istio to identify the service versions available in the application. These different versions are referred to as subsets. (A sketch of a destination rule and a matching virtual service appears at the end of this list.)

    ```
    $ kubectl apply -f istio/destinationrules.yaml
    ```
2. Confirm that the destination rules have been created. Notice the subset field for each service.

    ```
    $ kubectl get dr -o yaml
    ```
3. First, we will visit the application without setting any virtual service rules. By default, the Envoy proxies route traffic in a round-robin manner to all eligible destinations. We will use a custom service graph in the Bee Travels UI to visualize which pods are receiving traffic. Navigate to the Bee Travels application in the browser and visit the `service-graph` endpoint using the `EXTERNAL-IP` of the ingress gateway (i.e., http://EXTERNAL-IP/service-graph).
4. Confirm that all 3 versions of the hotel, car rental, and destination services are receiving traffic. Refresh to see that traffic is being routed to different pods each time.
5. We will first try routing all traffic to the `v1` services by applying a set of virtual service rules. Virtual services route traffic according to the defined configuration.

    ```
    $ kubectl apply -f istio/virtualservice-all-v1.yaml
    ```
6. Confirm that the `v1` virtual service rules have been applied. Notice that the subset value for each service's destination is set to `v1`.

    ```
    $ kubectl get vs -o yaml
    ```
7. Use Bee Travels' service graph to confirm that all traffic is being sent to the version 1 services.
8. We will now route all traffic to the `v2` services by applying a new set of virtual service rules.

    ```
    $ kubectl apply -f istio/virtualservice-all-v2.yaml
    ```
9. Confirm that the `v2` virtual service rules have been applied. Notice that the subset value for each service's destination is set to `v2`.

    ```
    $ kubectl get vs -o yaml
    ```
10. Use Bee Travels' service graph to confirm that all traffic is being sent to the version 2 services.
11. Feel free to write and apply your own set of `v3` virtual service rules, or try applying `istio/virtualservice-all-v3.yaml`.
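For orientation, destination rules declare the subsets (versions) of a service, and virtual services decide which subset receives traffic. The sketch below covers a single service; the host and label values are assumptions, while the repository's `istio/destinationrules.yaml` and `istio/virtualservice-all-v1.yaml` define the rules for every Bee Travels service.

```yaml
# Hypothetical sketch for one service -- host names and labels are assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hotel
spec:
  host: hotel          # Kubernetes service name
  subsets:
    - name: v1
      labels:
        version: v1    # pods labeled version=v1
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
---
# Send every request for the hotel service to the v1 subset.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hotel
spec:
  hosts:
    - hotel
  http:
    - route:
        - destination:
            host: hotel
            subset: v1
```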
1. We will be using the Kiali dashboard to visualize traffic shifting. Set up your Kiali credentials with a secret.
2. Access the Kiali dashboard by running the following command. The dashboard should launch in a browser window automatically, but if it does not, navigate to `localhost:55619/kiali`. You will need to log in with the credentials created in step 1.

    ```
    $ istioctl dashboard kiali
    ```
3. Navigate to the Graph panel on the left side of the dashboard and select `default` from the Namespace drop-down to show the Bee Travels service graph. Then, click the Display drop-down and select Request Percentage and Traffic Animation to customize the graph display.
4. We will first define virtual service rules to shift traffic evenly between the `v1` and `v3` services. Traffic shifting is also referred to as weight-based routing and is helpful for A/B testing. (A sketch of a weighted virtual service appears at the end of this list.)

    ```
    $ kubectl apply -f istio/virtualservice-weights.yaml
    ```
5. Confirm that the virtual service rules have been applied. Notice that the weight value for the `v1` and `v3` destinations of each service is set to `50`.

    ```
    $ kubectl get vs -o yaml
    ```
6. Generate traffic to the application using the Artillery script. Before running it, ensure that you have updated the `artillery_load/artillery.yaml` file with the correct IP address for your cluster, as described above.

    ```
    $ artillery run artillery_load/artillery.yaml
    ```
7. Navigate to the Kiali dashboard. Notice how traffic is split approximately 50-50 between the `v1` and `v3` services.
8. We will now shift traffic to the `v1`, `v2`, and `v3` services at 10%, 30%, and 60%, respectively.

    ```
    $ kubectl apply -f istio/virtualservice-weights.yaml
    ```
9. Confirm that the virtual service rules have been applied. Notice that the weight values for the `v1`, `v2`, and `v3` destinations of each service are set to `10`, `30`, and `60`.

    ```
    $ kubectl get vs -o yaml
    ```
10. Generate traffic to the application using the Artillery script.

    ```
    $ artillery run artillery_load/artillery.yaml
    ```
11. Navigate to the Kiali dashboard. Notice how traffic is split approximately 10-30-60 between the `v1`, `v2`, and `v3` services.
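For reference, weight-based routing is expressed as multiple weighted destinations within a single route. The sketch below shows the 50/50 split from step 4 for one service; the host and subset names are assumptions, and the repository's `istio/virtualservice-weights.yaml` defines the real rules.

```yaml
# Hypothetical sketch of a weighted virtual service -- host and subset names are assumptions.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hotel
spec:
  hosts:
    - hotel
  http:
    - route:
        - destination:
            host: hotel
            subset: v1
          weight: 50   # half of the traffic goes to v1
        - destination:
            host: hotel
            subset: v3
          weight: 50   # the other half goes to v3
```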
Jaeger is a platform for viewing distributed traces, which show the flow of information through the mesh and can help isolate errors.
1. Access the Jaeger dashboard by running the following command. The dashboard should launch in a browser window automatically, but if it does not, navigate to `localhost:55545`.

    ```
    $ istioctl dashboard jaeger
    ```
2. In a different browser tab, navigate to the Bee Travels application, enter a destination, and make a hotel request.
3. In Jaeger, select `hotel.default` from the Service drop-down and click Find Traces.
4. Click on the first result item to analyze the trace for the request you just made in the browser. The trace provides information about how long the entire request took to process (`duration`), which specific endpoint was called (`http.url`), which pod/service version processed the request (`node_id`), and more.
Grafana is a monitoring platform that provides details about the service mesh through a variety of dashboards:
- Mesh Dashboard provides an overview of all services in the mesh.
- Service Dashboard provides a detailed breakdown of metrics for a service.
- Workload Dashboard provides a detailed breakdown of metrics for a workload.
- Performance Dashboard monitors the resource usage of the mesh.
- Control Plane Dashboard monitors the health and performance of the control plane.
For this code pattern, we will focus on the Mesh Dashboard for traffic and latency.
1. Access the Grafana dashboard by running the following command. The dashboard should launch in a browser window automatically, but if it does not, navigate to `localhost:50340`.

    ```
    $ istioctl dashboard grafana
    ```
2. Click the Home drop-down and select the Istio Mesh Dashboard. You should see a list of all of the services in your cluster.
3. Generate traffic to the application using the Artillery script.

    ```
    $ artillery run artillery_load/artillery.yaml
    ```
4. The Mesh Dashboard provides information about the number of requests the services receive and their latency. The `Requests` column depicts how many requests are coming in per second. `P50 Latency` is the median request latency, `P90 Latency` is the time within which the fastest 90% of requests complete, and `P99 Latency` is the time within which the fastest 99% of requests complete. Comparing the latency between the different versions, we can see that across all the services, `v1` is the fastest, followed by `v2`, and `v3` is the slowest.
Kiali is a visual representation of the service mesh and its configurations. It includes a topology graph and provides an interface to view the different components of the mesh.
We've used Kiali to visualize the traffic in the mesh; now we will explore some other components of the dashboard.
1. Access the Kiali dashboard by running the following command. The dashboard should launch in a browser window automatically, but if it does not, navigate to `localhost:55619/kiali`.

    ```
    $ istioctl dashboard kiali
    ```
2. Navigate to the Applications panel on the left side of the dashboard and click on `bee-ui`. The console displays how deployments and services are connected and shows how the selected application communicates with the other applications in the namespace. The different tabs provide data and metrics about traffic. The Workloads and Services panels allow you to choose a resource and find similar information about deployments and services.
3. Navigate to the Istio Config panel on the left side of the dashboard. It provides the status of the mesh configuration in the `Configuration` column for all of the resources in the namespace. You can click on a specific configuration to view more information.
Envoy proxies can provide access log information about the requests that a pod makes.
1. Enable access logging by following steps 1 through 4 and adding `istio-global-proxy-accessLogFile: "/dev/stdout"` to the `data` section of the configmap. (A sketch of the resulting configmap appears at the end of this list.)
2. Restart all of the pods to finish enabling access logs.

    ```
    $ kubectl delete po --all
    ```
3. Display a list of all of the pods and take note of the `bee-ui` pod name. Use the pod name to stream the access logs from the `bee-ui` pod's Envoy proxy.

    ```
    $ kubectl get po
    $ kubectl logs -f <bee-ui-pod-name> istio-proxy
    ```
4. Navigate to the application in the browser and enter a destination request. The access log shows that the UI service made an outbound GET request to `v3` of the car rental service based on the input and filters.
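For reference, the sketch below shows what the customized configmap's `data` section could look like once the access log entry is added. Only the `istio-global-proxy-accessLogFile` entry comes from this code pattern; the configmap name and namespace shown are assumptions, so edit whichever configmap the add-on customization steps reference.

```yaml
# Hypothetical sketch -- the configmap name and namespace are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: managed-istio-custom
  namespace: ibm-operators
data:
  istio-global-proxy-accessLogFile: "/dev/stdout"  # write Envoy access logs to stdout
```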
- Can I check if I have any issues with my Istio configuration?
  - The `istioctl analyze` command can detect possible issues within your cluster. You can also run the command against one or more configuration files to analyze the effect of applying them to your cluster. For example: `istioctl analyze istio/ex-virtualservice.yaml`
  - Kiali also has an Istio Config panel on the left side of the dashboard that will show any warnings or errors in your mesh configuration.
- I'm running a load test with Artillery but only the `bee-ui` service is receiving traffic on Grafana.
  - Check that the `target` value in `artillery_load/artillery.yaml` does not have an extra `/` at the end of the address (ex: "http://169.62.94.60").
- I'm getting a lot of 500 Internal Server Errors when generating traffic with Artillery or in the browser.
  - Your deployments' resource limits may be being reached. Try running `watch -n1 kubectl top po`. This command updates and displays the resource metrics for each pod every second. While it is running, generate traffic to see if any of the pods are exceeding the resource limits set in their respective deployment files. (A sketch of a deployment resources section appears below.)
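As a reference point for that last check, resource limits in a Kubernetes deployment look roughly like the sketch below; the values shown are illustrative assumptions, and the actual limits live in the repository's `k8s/*-deploy.yaml` files.

```yaml
# Hypothetical resources section of a container spec -- the values are assumptions.
resources:
  requests:
    cpu: 100m        # CPU reserved for the container
    memory: 128Mi    # memory reserved for the container
  limits:
    cpu: 500m        # container is throttled above this CPU usage
    memory: 256Mi    # container is OOM-killed above this memory usage
```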
This code pattern is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.