# Demo
Let's run Kafka Connect ArangoDB in a local Apache Kafka + ArangoDB Kubernetes cluster via Minikube so that we can get a feel for how it all works together.

## Setup
### Minikube
Let's set up the cluster. We're going to use Minikube for this so [make sure you have it installed](https://minikube.sigs.k8s.io/docs/start/) along with [`kubectl` 1.14 or higher](https://kubernetes.io/docs/tasks/tools/install-kubectl/).

Start the cluster with some extra resources:
```bash
minikube start --cpus 2 --memory 10g
```

### Docker
Now that we have a cluster, we'll need a Docker image that contains the Kafka Connect ArangoDB plugin. We don't publish a Docker image to public Docker registries since you will usually install multiple Kafka Connect plugins on one image. Additionally, the base image may vary depending on your preferences and use case.

Navigate to `demo/docker/` and run the following commands in a separate terminal to download the plugin and build the image for Minikube:
```bash
curl -o kafka-connect-arangodb-1.0.6.jar "https://search.maven.org/remotecontent?filepath=io/github/jaredpetersen/kafka-connect-arangodb/1.0.6/kafka-connect-arangodb-1.0.6.jar"
eval $(minikube docker-env)
docker build -t jaredpetersen/kafka-connect-arangodb:1.0.6 .
```

Close out this terminal when you're done -- we want to go back to our normal Docker environment.
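
For context, `minikube docker-env` prints shell `export` statements that point your Docker CLI at the Docker daemon running inside Minikube, and `eval` applies them to the current shell -- which is why the image built above lands inside the cluster without a push to a registry. A minimal sketch of the mechanism (the value below is illustrative, not your cluster's real one):

```shell
# minikube docker-env emits export lines like this one (illustrative value):
docker_env='export DOCKER_HOST="tcp://192.168.49.2:2376"'

# eval applies the exports to the current shell, so subsequent docker
# commands talk to Minikube's daemon instead of the local one
eval "$docker_env"
echo "$DOCKER_HOST"
```

Since the exports only affect the current shell, closing the terminal restores your normal Docker environment.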

Alternatively, build from source with `mvn package` at the root of this repository to get the JAR file.

### Kubernetes Manifests
Let's start running everything. We're going to use [kube-arangodb](https://github.com/arangodb/kube-arangodb) to help us manage our ArangoDB cluster and plain manifests for ZooKeeper, Apache Kafka, and Kafka Connect.

Apply the manifests (you may have to give this a couple of tries due to race conditions):
```bash
kubectl apply -k kubernetes
```

Check in on the pods and wait for everything to come up:
```bash
kubectl -n kca-demo get pods
```

Be patient -- this can take a few minutes.

## Usage
### Database
Log in to the database. Minikube will open it in your browser when you run:
```bash
minikube -n kca-demo service arangodb-ea
```

The username is `root` and the password is empty.

Create a new database named `demo`. Switch to this new database and create a document collection named `airports` and an edge collection named `flights`.


### Create Kafka Topics
Create an interactive ephemeral query pod:
```bash
kubectl -n kca-demo run kafka-create-topics --generator=run-pod/v1 --image confluentinc/cp-kafka:5.4.0 -it --rm --command /bin/bash
```

Create topics:
```bash
kafka-topics --create --zookeeper zookeeper-0.zookeeper:2181 --replication-factor 1 --partitions 1 --topic stream.airports
kafka-topics --create --zookeeper zookeeper-0.zookeeper:2181 --replication-factor 1 --partitions 1 --topic stream.flights
```

### Configure Kafka Connect ArangoDB
Send a request to the Kafka Connect REST API to configure it to use Kafka Connect ArangoDB:
```bash
curl --request POST \
  --url "$(minikube -n kca-demo service kafka-connect --url)/connectors" \
  --header 'content-type: application/json' \
  --data '{
    "name": "demo-arangodb-connector",
    "config": {
      "connector.class": "io.github.jaredpetersen.kafkaconnectarangodb.sink.ArangoDbSinkConnector",
      "tasks.max": "1",
      "topics": "stream.airports,stream.flights",
      "arangodb.host": "arangodb-ea",
      "arangodb.port": 8529,
      "arangodb.user": "root",
      "arangodb.password": "",
      "arangodb.database.name": "demo"
    }
  }'
```
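
If you'd rather keep the connector configuration under version control, the same request can be made from a JSON file instead of an inline `--data` string. A sketch (the `curl` line is commented out since it needs the live cluster):

```shell
# Write the same connector configuration to a file
cat > connector.json <<'EOF'
{
  "name": "demo-arangodb-connector",
  "config": {
    "connector.class": "io.github.jaredpetersen.kafkaconnectarangodb.sink.ArangoDbSinkConnector",
    "tasks.max": "1",
    "topics": "stream.airports,stream.flights",
    "arangodb.host": "arangodb-ea",
    "arangodb.port": 8529,
    "arangodb.user": "root",
    "arangodb.password": "",
    "arangodb.database.name": "demo"
  }
}
EOF

# Post it to the Kafka Connect REST API (requires the running cluster):
# curl --request POST \
#   --url "$(minikube -n kca-demo service kafka-connect --url)/connectors" \
#   --header 'content-type: application/json' \
#   --data @connector.json
```

Once submitted, you can confirm the connector registered by fetching `"$(minikube -n kca-demo service kafka-connect --url)/connectors/demo-arangodb-connector/status"` from the Kafka Connect REST API.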

### Write Records
Create an interactive ephemeral query pod:
```bash
kubectl -n kca-demo run kafka-write-records --generator=run-pod/v1 --image confluentinc/cp-kafka:5.4.0 -it --rm --command /bin/bash
```

Write records to the `airports` topic (the leading `>` is the console producer's prompt -- don't type it):
```bash
kafka-console-producer --broker-list kafka-broker-0.kafka-broker:9092 --topic stream.airports --property "parse.key=true" --property "key.separator=|"
>{"id":"PDX"}|{"airport":"Portland International Airport","city":"Portland","state":"OR","country":"USA","lat":45.58872222,"long":-122.5975}
>{"id":"BOI"}|{"airport":"Boise Airport","city":"Boise","state":"ID","country":"USA","lat":43.56444444,"long":-116.2227778}
>{"id":"HNL"}|{"airport":"Daniel K. Inouye International Airport","city":"Honolulu","state":"HI","country":"USA","lat":21.31869111,"long":-157.9224072}
>{"id":"KOA"}|{"airport":"Ellison Onizuka Kona International Airport at Keāhole","city":"Kailua-Kona","state":"HI","country":"USA","lat":19.73876583,"long":-156.0456314}
```

Write records to the `flights` topic:
```bash
kafka-console-producer --broker-list kafka-broker-0.kafka-broker:9092 --topic stream.flights --property "parse.key=true" --property "key.separator=|"
>{"id":1}|{"_from":"airports/PDX","_to":"airports/BOI","depTime":"2008-01-01T21:26:00.000Z","arrTime":"2008-01-01T22:26:00.000Z","uniqueCarrier":"WN","flightNumber":2377,"tailNumber":"N663SW","distance":344}
>{"id":2}|{"_from":"airports/HNL","_to":"airports/PDX","depTime":"2008-01-13T00:16:00.000Z","arrTime":"2008-01-13T05:03:00.000Z","uniqueCarrier":"HA","flightNumber":26,"tailNumber":"N587HA","distance":2603}
>{"id":3}|{"_from":"airports/KOA","_to":"airports/HNL","depTime":"2008-01-15T16:08:00.000Z","arrTime":"2008-01-15T16:50:00.000Z","uniqueCarrier":"YV","flightNumber":1010,"tailNumber":"N693BR","distance":163}
>{"id":4}|{"_from":"airports/BOI","_to":"airports/PDX","depTime":"2008-01-16T02:03:00.000Z","arrTime":"2008-01-16T03:09:00.000Z","uniqueCarrier":"WN","flightNumber":1488,"tailNumber":"N242WN","distance":344}
```
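
Because `parse.key=true` and `key.separator=|` are set, the console producer splits each line at the first `|` into a record key and a record value. A quick local illustration of that split, using one of the airport lines from above:

```shell
# One of the producer lines from above (without the leading > prompt)
line='{"id":"PDX"}|{"airport":"Portland International Airport","city":"Portland","state":"OR","country":"USA","lat":45.58872222,"long":-122.5975}'

# Everything before the first | becomes the record key...
key="${line%%|*}"
# ...and everything after it becomes the record value
value="${line#*|}"

echo "key:   $key"
echo "value: $value"
```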

### Validate
Open up the `demo` database again and go to the collections we created earlier. You should now see that they have the data we just wrote into Kafka.


## Teardown
Remove all manifests:
```bash
kubectl delete -k kubernetes
```

Delete the Minikube cluster:
```bash
minikube delete
```