**README.md** (+1 -1)
- **Workload Identity**: [See these instructions](./docs/workload-identity.md).
- **Cloud SQL**: [See these instructions](./extras/cloudsql) to replace the in-cluster databases with hosted Google Cloud SQL.
- **Multi Cluster with Cloud SQL**: [See these instructions](./extras/cloudsql-multicluster) to replicate the app across two regions using GKE, Multi Cluster Ingress, and Google Cloud SQL.
- **Istio**: Apply `istio-manifests/` to your cluster to access the frontend through the IngressGateway.
- **Anthos Service Mesh**: ASM requires Workload Identity to be enabled in your GKE cluster. [See the workload identity instructions](./docs/workload-identity.md) to configure and deploy the app. Then, apply `istio-manifests/` to your cluster to configure frontend ingress.
- **Java Monolith (VM)**: We provide a version of this app where the three Java microservices are coupled together into one monolithic service, which you can deploy inside a VM (e.g., Google Compute Engine). See the [ledgermonolith](./src/ledgermonolith) directory.
**extras/asm-multicluster/README.md**

This demo shows how to install Bank of Anthos across 2 clusters, using [Anthos Service Mesh endpoint discovery](https://cloud.google.com/service-mesh/docs/managed-control-plane#configure_endpoint_discovery_only_for_multi-cluster_installations) for cross-cluster routing.
For a "replicated" multicluster setup with no cross-cluster traffic, see the [Cloud SQL + Multi Cluster](/extras/cloudsql-multicluster) demo.
## Architecture

## Prerequisites
1. A Google Cloud project.
2. The following tools installed in your local environment:
   - [gcloud](https://cloud.google.com/sdk/docs/install), up to date with `gcloud components update`
   - [kubectl](https://cloud.google.com/sdk/gcloud/reference/components/install) - you can install this via gcloud: `gcloud components install kubectl`
3. **Make sure you've `cd`-ed into this directory, then run the cluster setup script**. This script creates 2 GKE clusters, `cluster-1` and `cluster-2`, installs Anthos Service Mesh, and sets up cross-cluster endpoint discovery. It takes about 10 minutes to run.
```
cd extras/asm-multicluster/
./cluster-setup.sh
```
4. **Verify that your local kubectx is set up** for `cluster-1` and `cluster-2`, and that you can access both clusters.
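One way to sanity-check step 4 from a script is to confirm both expected contexts appear in your kubeconfig. This is a sketch under assumptions: the helper name is hypothetical, and it assumes the contexts are literally named `cluster-1` and `cluster-2`.

```shell
# Sketch: confirm the expected contexts are configured before deploying.
# Feed it the output of `kubectl config get-contexts -o name` (or `kubectx`).
require_contexts() {
  ctxs=$(cat)
  for want in "$@"; do
    if ! printf '%s\n' "$ctxs" | grep -qx "$want"; then
      echo "missing context: $want"
      return 1
    fi
  done
  echo "all contexts present"
}
```

Usage: `kubectl config get-contexts -o name | require_contexts cluster-1 cluster-2`.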
5. **Deploy the Bank of Anthos app across both clusters**. This deploys the frontend and Python backends to `cluster-1`, and the Java backends to `cluster-2`. Note that ASM endpoint discovery only works if all the Kubernetes Services are deployed to both clusters, so that's what we're doing here.
**Note**: run these commands from this directory (`extras/asm-multicluster`).
6. **Verify that the pods start up successfully.** You should see 2 containers per pod (`2/2`): one running the Bank of Anthos service container, the other running the ASM sidecar proxy (Envoy).
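The readiness check above can be scripted. The helper below is a sketch (the function name is hypothetical); it assumes the default `kubectl get pods` column layout, where the second column is READY:

```shell
# Sketch: print the names of pods whose READY column is not 2/2.
# Feed it plain `kubectl get pods --no-headers` output, e.g.:
#   kubectl get pods --no-headers | not_fully_ready
not_fully_ready() {
  awk '$2 != "2/2" {print $1}'
}
```

If the function prints nothing, every pod has both its service container and its Envoy sidecar ready.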
Navigate to the `EXTERNAL_IP` in a browser; you should see the Bank of Anthos login page.
8. **Open the Google Cloud Console, and navigate to Anthos > Service Mesh**. You may see an Anthos window pop up; click "Enable."

9. View the Bank of Anthos services in the Anthos Service Mesh dashboard. In the Table view, you should see metrics populated for services in both `cluster-1` and `cluster-2`.

In the topology view, you should see traffic flowing from `cluster-1` services (frontend) to `cluster-2` services (e.g., transactionhistory).
**extras/cloudsql-multicluster/README.md** (+8 -6)
# Multi Cluster Bank of Anthos with Cloud SQL
This doc contains instructions for deploying the Cloud SQL version of Bank of Anthos in a multi-region high availability / global configuration.
The use case for this setup is to demo running a global, scaled app, where even if one cluster goes down, users will be routed to the next available cluster. These instructions also show how to use [Multi Cluster Ingress](https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress) to route users to the closest GKE cluster, demonstrating a low-latency use case.
This guide has two parts:
1. Deploy Bank of Anthos on 2 GKE clusters with **Multi Cluster Ingress** for
> Note: It can take more than **10 minutes** for both clusters to get created.
5. **Configure kubectx for the clusters.**
```
kubectx cluster2
../cloudsql/create_cloudsql_instance.sh
```
> Note: Setting up the Cloud SQL instance can sometimes take more than 10 minutes.
8. **Create Cloud SQL admin secrets** in your GKE clusters. This gives your in-cluster Cloud SQL clients a username and password to access Cloud SQL. (Note that admin/admin credentials are for demo use only and should never be used in a production environment.)
9. **Deploy the DB population jobs.** These are one-off bash scripts, run as Kubernetes Jobs, that initialize the Accounts and Ledger databases with data. You only need to run these Jobs once, so we deploy them only on `cluster1`.
```
kubectx cluster2
kubectl delete svc frontend -n ${NAMESPACE}
```
13. **Run the Multi Cluster Ingress setup script.** This registers both GKE clusters to Anthos with ***"memberships"*** and sets cluster 1 as the ***"config cluster"*** to administer the Multi Cluster Ingress resources.
```
./register_clusters.sh
```
14. **Create Multi Cluster Ingress resources for global routing.** This YAML file contains two resources: a headless Multi Cluster Kubernetes Service ("MCS") mapped to the `frontend` Pods, and a Multi Cluster Ingress resource, `frontend-global-ingress`, with `frontend-mcs` as the MCS backend. Note that we're only deploying this to cluster 1, which we've designated as the Multi Cluster Ingress "config cluster."
15. **Verify that the Multi Cluster Ingress resource was created.** Look for the `Status` field to be populated with two Network Endpoint Groups (NEGs) corresponding to the regions where your 2 GKE clusters are running.
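One way to script this verification is to pull the assigned VIP out of the resource description. This is a sketch under assumptions: the helper name is hypothetical, and it assumes the `kubectl describe` output contains a `VIP:` line in the `Status` block, as the note below describes.

```shell
# Sketch: extract the assigned VIP from `kubectl describe mci ...` output on stdin.
# Prints nothing while the VIP is still unassigned.
get_mci_vip() {
  awk -F': *' '$1 ~ /^[[:space:]]*VIP$/ {print $2; exit}'
}
```

Usage: `kubectl describe mci frontend-global-ingress -n ${NAMESPACE} | get_mci_vip`.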
> **Note:** It may take up to 90 seconds before a `VIP` is assigned to the