This migration assumes that you have an external MySQL deployment that is accessible from both Docker Swarm and your Kubernetes cluster. If you are not using MySQL for your development environment, Broadcom recommends that you migrate to MySQL.
This guide covers migrating certificates and analytics data from Docker Swarm to Kubernetes.
Read Migrate Analytics before proceeding with the installation.
- SSH into your Docker Swarm Portal Node
$ curl https://raw.githubusercontent.com/CAAPIM/apim-charts/stable/utils/portal-migration/swarm/docker-swarm-migrate.sh > docker-swarm-migrate.sh
$ chmod +x docker-swarm-migrate.sh
$ ./docker-swarm-migrate.sh -p </path/to/portal/certs/folder/> -a certs
***this produces certificates.tar.gz***
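If you want to sanity-check the archive before copying it off the Swarm node, listing its contents is a quick optional check (not part of the migration script):
$ tar -tzf certificates.tar.gz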
- On a machine that has kubectl access to the Kubernetes cluster you intend to deploy the Portal on:
$ mkdir migration
$ cd migration
$ scp <username>@<swarm-ip>:/path/to/certificates.tar.gz .
$ curl https://raw.githubusercontent.com/CAAPIM/apim-charts/stable/utils/portal-migration/kubernetes/migrate-certificates.sh > migrate-certificates.sh
$ chmod +x migrate-certificates.sh
$ ./migrate-certificates.sh -n <kubernetes-namespace>
***your certificates should now be in your Kubernetes cluster***
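As an optional check, you can list the secrets the script created; the exact secret names depend on your setup, so this is only illustrative:
$ kubectl get secrets -n <kubernetes-namespace>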
- Configure Portal Prerequisites
- Copy this file to your machine as <my-values.yaml>. It contains a production version of values.yaml that is applied to your Portal when you install the Chart. NOTE: tls.job.enabled must be set to false.
- Update the following values in this file (a sketch of the resulting YAML follows this list):
global.legacyHostnames: true
global.legacyDatabaseNames: true
global.databaseHost: <host>
global.databaseUsername: <username>
global.databasePassword: <password>
global.databasePort: <port> (default 3306)
tls.job.enabled: false
portal.domain: <domain> (default example.com)
portal.enrollNotificationEmail: <enrollNotificationEmail> (default [email protected])
ingress.tenantIds: <list of existing tenants> (default tenant1)
portal.otk.port: 9443 (default 443)
- Uncomment and fill in the namespace in the ingress-nginx tcp section (if you are using your own Ingress Controller, make sure that port 9443 is open with SSL passthrough enabled):
# tcp:
#   9443: "<namespace>/dispatcher:9443"
- Update any other values that you'd like to set (e.g., SMTP settings).
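For orientation, here is a minimal sketch of how those flat keys typically nest inside <my-values.yaml>; the values shown are placeholders, your file will contain many more settings, and the exact nesting should be verified against the Chart's values.yaml:
```
global:
  legacyHostnames: true
  legacyDatabaseNames: true
  databaseHost: <host>
  databaseUsername: <username>
  databasePassword: <password>
  databasePort: 3306
tls:
  job:
    enabled: false
portal:
  domain: example.com
  enrollNotificationEmail: <enrollNotificationEmail>
  otk:
    port: 9443
ingress:
  tenantIds:
    - tenant1
```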
This guide covers migrating certificates from the old Helm 2 Chart to the new Helm 3 Chart. Persistent volume naming conventions have not changed, so the transition is relatively simple.
- Load Certificates
$ curl https://raw.githubusercontent.com/CAAPIM/apim-charts/stable/utils/portal-migration/kubernetes/migrate-certificates.sh > migrate-certificates.sh
$ chmod +x migrate-certificates.sh
$ ./migrate-certificates.sh -n <kubernetes-namespace> -p /path/to/portal-helm-charts/files -k <certpass>
- Prepare your values.yaml file
- Copy this file to your machine as <my-values.yaml>. It contains a production version of values.yaml that is applied to your Portal when you install the Chart. NOTE: tls.job.enabled must be set to false.
- Update the following values in this file:
global.databaseHost: <host>
global.databaseUsername: <username>
global.databasePassword: <password>
global.databasePort: <port> (default 3306)
tls.job.enabled: false
portal.domain: <domain> (default example.com)
portal.enrollNotificationEmail: <enrollNotificationEmail> (default [email protected])
ingress.tenantIds: <list of existing tenants> (default tenant1)
- If you already have an ingress controller, then make sure that ingress.create is set to false.
- Update any other values that you'd like to set (e.g., SMTP settings).
- Migrate analytics - see Migrate Analytics (Helm 2.x)
- If you need to skip migrating analytics, update the following in <my-values.yaml>
druid.minio.replicaCount: 1
- Remove the old Portal Helm Chart (using Helm 2). Analytics volumes will be retained automatically.
$ helm delete --purge <portal-release-name>
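Before installing the new Chart, it's worth confirming that the analytics volume claims survived the purge; the claim names referenced elsewhere in this guide (e.g. minio-vol-claim-minio-0) may differ in your release:
$ kubectl get pvc -n <namespace>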
- If you have exported your analytics and need to run minio in distributed mode:
$ kubectl delete pvc minio-vol-claim-minio-0 -n <namespace>
WARNING: make sure $PWD/analytics is not empty before deleting this volume claim!
- If you are migrating from a Helm 2.x (non-HA) deployment to a Helm 3.x HA deployment (i.e., running Kafka, ZooKeeper, and other services in distributed mode), do the following:
$ kubectl delete pvc kafka-vol-claim-kafka-0 -n <namespace>
$ kubectl delete pvc zookeeper-vol-claim-zookeeper-0 -n <namespace>
$ kubectl delete pvc historical-vol-claim-historical-0 -n <namespace>
- Install the new Chart (using Helm 3)
$ helm repo add portal https://caapim.github.io/apim-charts/
$ helm repo update
$ helm install <release-name> portal/portal --set-file "portal.registryCredentials=/path/to/docker-secret.yaml" -f <your-values-production.yaml> -n <namespace>
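To watch the installation come up, the usual helm/kubectl status checks apply (generic commands, not part of the Chart):
$ helm status <release-name> -n <namespace>
$ kubectl get pods -n <namespace> --watch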
- Update Portal DNS records to point at the Kubernetes Portal
The API Portal makes use of Minio, which acts as an S3-compatible filestore and provides a gateway to different cloud storage solutions. This migration extracts the deep-storage data from Minio. Analytics are written to deep storage at hourly intervals. NOTE: the current hour is not backed up.
From your Docker Swarm node, run the following, then copy the resulting archive to a machine that has kubectl access to the Kubernetes cluster you intend to deploy the Portal on (an scp example follows the command below).
- Retrieve Analytics Data from Minio
$ ./docker-swarm-migrate.sh -a analytics
***this produces analytics.tar.gz***
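Then copy the archive to the machine that has kubectl access, mirroring the certificate step earlier (username and path are placeholders):
$ scp <username>@<swarm-ip>:/path/to/analytics.tar.gz .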
- Shut down the Portal Docker Swarm stack
$ docker stack rm portal
- Proceed with Portal Installation
$ helm repo add portal https://caapim.github.io/apim-charts/
$ helm repo update
$ helm install <release-name> portal/portal --set-file "portal.registryCredentials=/path/to/docker-secret.yaml" -f <your-values-production.yaml> -n <namespace>
- Update Portal DNS records to point at the Kubernetes Portal (the output of the install/upgrade will display the Portal hostnames you'll need to add)
Note: the port-forward command may require you to open port 9000 on your Kubernetes cluster.
- Get your machine's local IP address
export HOST_IP=<ip-address-of-your-machine>
##### On a Macbook
```$ ipconfig getifaddr en0```
##### On Linux
```$ hostname -I | awk '{print $1}'```
- Port-forward to your Kubernetes minio instance (on a separate shell)
$ kubectl port-forward svc/minio --address 0.0.0.0 9000 -n <namespace>
- Use Minio mc to copy your data to a local directory
$ docker run -e BUCKET_NAME=api-metrics -e ACCESS_KEY=minio -e SECRET_KEY=minio123 -e HOST_IP=$HOST_IP -v $PWD/analytics/api-metrics:/opt/api-metrics -it --entrypoint=/bin/sh minio/mc
$ mc alias set portal http://$HOST_IP:9000 $ACCESS_KEY $SECRET_KEY
$ mc mirror portal/api-metrics /opt/api-metrics
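Back on the host (the container bind-mounts /opt/api-metrics to $PWD/analytics/api-metrics), you can confirm that the export is non-empty, since the Helm 2.x steps above warn that $PWD/analytics must not be empty:
$ du -sh $PWD/analytics/api-metrics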
- Go back to step 4 of Migrate from Helm 2
This is the standard supported option. The production-values.yaml you started from deploys 4 Minio replicas, as per Minio guidelines. The migration makes use of the minio/mc Docker container, which is the easiest way to move the data across.
- Get your minio credentials from Kubernetes
export ACCESS_KEY=$(kubectl get secret minio-secret -n <namespace> -o 'go-template={{index .data "MINIO_ACCESS_KEY" | base64decode }}')
export SECRET_KEY=$(kubectl get secret minio-secret -n <namespace> -o 'go-template={{index .data "MINIO_SECRET_KEY" | base64decode }}')
export BUCKET_NAME=api-metrics
***If using a Cloud Storage Provider, see Using Cloud Storage and update the bucket name to reflect what you have created.***
export HOST_IP=<ip-address-of-your-machine>
##### On a Macbook
```$ ipconfig getifaddr en0```
##### On Linux
```$ hostname -I | awk '{print $1}'```
- Port-forward to your Kubernetes minio instance (on a separate shell)
$ kubectl port-forward svc/minio --address 0.0.0.0 9000 -n <namespace>
- Prepare analytics and use Minio mc to sync with your Kubernetes Portal
$ tar -xvf analytics.tar.gz
***Docker Swarm Only***
$ docker run -e BUCKET_NAME=$BUCKET_NAME -e ACCESS_KEY=$ACCESS_KEY -e SECRET_KEY=$SECRET_KEY -e HOST_IP=$HOST_IP -v $PWD/analytics/api-metrics:/opt/api-metrics -it --entrypoint=/bin/sh minio/mc
$ mc alias set portal http://$HOST_IP:9000 $ACCESS_KEY $SECRET_KEY
$ mc mirror /opt/api-metrics portal/$BUCKET_NAME
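Still inside the mc container, an optional listing confirms that the objects landed in the bucket:
$ mc ls --recursive portal/$BUCKET_NAME | head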
We have exposed this Minio gateway functionality in Kubernetes. If you would like to use Amazon S3, Google GCS, or Azure Blob Storage, then do the following tasks (a sketch excerpt follows this list):
- Go to your chosen cloud storage provider.
- Create a storage bucket.
- In your <my-values.yaml> file:
- druid.minio.cloudStorage: true
- druid.minio.bucketName: <bucket-name>
- druid.minio.<gateway>.enabled: true (e.g. gcsgateway for Google GCS)
- relevant credentials - see below
- Azure Blob Storage - this uses the access/secret key of Minio; set these to your Azure Blob Storage credentials.
- Google GCS - this uses an authentication file (JSON) and project name; for the JSON file, use --set-file "druid.minio.gcsgateway.gcsKeyJson=/path/to/auth.json" or paste it into your .yaml file in the correct format.
- Amazon S3 - this uses the access/secret key of a user that has S3 permissions.
- For more details see ==> https://docs.min.io/docs/
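As an illustration, a hedged <my-values.yaml> excerpt for the Google GCS case, built from the gcsgateway key shown above; the nesting and bucket name are assumptions to verify against the Chart's values.yaml:
```
druid:
  minio:
    cloudStorage: true
    bucketName: <bucket-name>
    gcsgateway:
      enabled: true
      # the key file can instead be supplied on the command line:
      #   --set-file "druid.minio.gcsgateway.gcsKeyJson=/path/to/auth.json"
```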
- Update Druid Metadata:
$ curl https://raw.githubusercontent.com/CAAPIM/apim-charts/stable/utils/portal-migration/druid-meta-update/druid-meta-update.sh > druid-meta-update.sh
$ chmod +x druid-meta-update.sh
$ ./druid-meta-update.sh -u <database-username> -p <database-password> -h <database-host> -d <database-name> -p <database-port> -b <bucket-name>
The Middle Manager and Coordinator services need to be restarted.
$ kubectl rollout restart statefulset coordinator -n <namespace>
$ kubectl rollout restart statefulset middlemanager -n <namespace>
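You can wait for both restarts to complete before resetting the supervisors:
$ kubectl rollout status statefulset coordinator -n <namespace>
$ kubectl rollout status statefulset middlemanager -n <namespace>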
Connect to the Coordinator pod and reset the supervisors using the following curl commands.
$ kubectl exec -it <coordinator> -n <namespace> -- sh
Run the commands below to reset the supervisors:
$ curl -X POST http://localhost:8081/druid/indexer/v1/supervisor/apim_metrics/reset
$ curl -X POST http://localhost:8081/druid/indexer/v1/supervisor/apim_metrics_hour/reset
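To confirm the supervisors are running again, a GET on the same Druid endpoint lists them:
$ curl http://localhost:8081/druid/indexer/v1/supervisor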
If you have RBAC enabled in your Kubernetes cluster, service accounts can be set in <my-values.yaml> (a sketch excerpt follows this list):
serviceAccount.create: <true|false>
serviceAccount.name: <serviceAccountName>
rbac.create: <true|false>
druid.serviceAccount.create: <true|false>
druid.serviceAccount.name: <serviceAccountName>
rabbitmq.serviceAccount.create: <true|false>
rabbitmq.serviceAccount.name: <serviceAccountName>
rabbitmq.rbac.create: <true|false>
ingress-nginx.podSecurityPolicy.enabled: <true|false>
ingress-nginx.serviceAccount.create: <true|false>
ingress-nginx.serviceAccount.name: <serviceAccountName>
ingress-nginx.rbac.create: <true|false>
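For example, a minimal excerpt enabling dedicated service accounts; the names are placeholders and the nesting is an assumption to verify against the Chart's values.yaml (the rabbitmq.* and ingress-nginx.* keys nest the same way):
```
serviceAccount:
  create: true
  name: <serviceAccountName>
rbac:
  create: true
druid:
  serviceAccount:
    create: true
    name: <serviceAccountName>
```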
- Restart Portal Deployer.
- Launch Policy Manager and connect to your enrolled API Gateway
- Tasks ==> Global Settings ==> Manage Cluster-Wide Properties
- Toggle portal.deployer.enabled ==> set it to false, then back to true
- Confirm that you are able to deploy APIs to your Gateway as expected.
This section will be updated as we encounter problems related to installing or migrating the Portal to this form factor.
- These guides do not include migrating from other databases to MySQL.
- Please raise a support ticket with Broadcom if you encounter problems; raising a bug/feature request against this repository in parallel should result in a faster turnaround.
mc: <ERROR> Unable to initialize new alias from the provided credentials. Get "http://<ipaddress>:9000/probe-bucket-sign-ira11pnec5j1/?location=": dial tcp <ipaddress>:9000: connect: no route to host.
If you encounter the above error while importing analytics data (mc alias set portal):
- Verify that the port forward is still running.
- Verify that the minio user and password are valid.
- Verify that port 9000 is allowed by the firewall rules (firewalld); see the sketch below.
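If firewalld is the culprit on the machine running the port-forward, opening the port looks like this (standard firewalld commands, run as root):
$ firewall-cmd --permanent --add-port=9000/tcp
$ firewall-cmd --reload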