Commit c81332e

* updated doc

1 parent 0724a26 commit c81332e

File tree

1 file changed: +31 -16 lines changed

docs/_docs/extensions-and-integrations/change-data-capture-extensions.adoc

Lines changed: 31 additions & 16 deletions
@@ -294,7 +294,7 @@ This application should be started near the destination cluster.
 
 IMPORTANT: `kafka-to-ignite.sh` implements the fail-fast approach. It just fails in case of any error. The restart procedure should be configured with the OS tools.
 
-Count of instances of the application does not corellate to the count of destination server nodes.
+The number of application instances does not correlate with the number of destination server nodes.
 It should be just enough to process the source cluster load.
 Each application instance will process a configured subset of topic partitions to spread the load.
 A `KafkaConsumer` will be created for each partition to ensure fair reads.
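The IMPORTANT note in the hunk above leaves the restart procedure to the OS. As a minimal illustration (not part of this commit), a supervising shell loop could look like the sketch below; the configuration path is a placeholder, and in production a service manager such as systemd is the more usual choice:
```
#!/usr/bin/env bash
# Minimal restart-on-failure sketch for the fail-fast kafka-to-ignite.sh.
# /path/to/kafka-to-ignite.xml is a placeholder, not taken from this commit.
while true; do
  "$IGNITE_HOME/bin/kafka-to-ignite.sh" /path/to/kafka-to-ignite.xml
  echo "kafka-to-ignite.sh exited with code $?; restarting in 5 seconds" >&2
  sleep 5
done
```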
@@ -315,6 +315,17 @@ Now, you have additional binary `$IGNITE_HOME/bin/kafka-to-ignite.sh` and `$IGNI
 
 NOTE: Please, enable `ignite-cdc-ext` to be able to run `kafka-to-ignite.sh`.
 
+==== Kafka Installation
+
+To install Kafka, download the binary from the link:https://kafka.apache.org/downloads[Apache Kafka downloads page] and extract the archive to your desired location. Next, configure the `server.properties` file to suit your needs; you can then start ZooKeeper and the Kafka server using the provided scripts.
+
+To bootstrap the Kafka server, use:
+
+```
+./zookeeper-server-start.sh ../config/zookeeper.properties
+./kafka-server-start.sh ../config/server.properties
+```
+
 ==== Configuration
 
 Application configuration should be done using POJO classes or a Spring XML file, like a regular Ignite node configuration.
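Once the broker from the hunk above is running, a quick way to confirm it responds (assuming Kafka's default `localhost:9092` listener) is to list the topics:
```
./kafka-topics.sh --list --bootstrap-server localhost:9092
```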
@@ -585,7 +596,7 @@ Examples for reference:
 
 To start an Ignite cluster node, use the `--ignite` or `-i` command with `cdc-start-up.sh`. You also need to specify the properties holder directory.
 
-There are currently 2 examples for 2 clusters, that you can run sumalteniously. You can find them under `$IGNITE_HOME/examples/config/cdc-start-up/cluster-1` and `$IGNITE_HOME/examples/config/cdc-start-up/cluster-2` as `ignite-cdc.properties`. These files contains all independent settings that you can tinker for your needs.
+There are currently 2 examples for 2 clusters that you can run simultaneously. You can find them under `$IGNITE_HOME/examples/config/cdc-start-up/cluster-1` and `$IGNITE_HOME/examples/config/cdc-start-up/cluster-2` as `ignite-cdc.properties`. These files contain all the independent settings that you can tweak for your needs.
 
 NOTE: All properties files are preconfigured to work out of the box.
 
@@ -596,29 +607,29 @@ To start a single node for each cluster type the following commands in different
 ./cdc-start-up.sh --ignite cluster-2
 ```
 
-* Start CDC clients with specified properties:
+* Start a CDC consumer with the specified properties:
 
-** To start any CDC client node, use `--cdc-client` or `-c` command with `cdc-start-up.sh`. In addition, you have to specify CDC client mode and properties holder directory for the source cluster (as in the previous example).
+** To start any CDC consumer, use the `--cdc-consumer` or `-c` command with `cdc-start-up.sh`. In addition, you have to specify the CDC consumer mode and the properties holder directory for the source cluster (as in the previous example).
 
-** There are 5 options you can specify CDC client mode from. Take a look at `--help` command output to learn about them.
+** There are 3 options to choose the CDC consumer mode from. Take a look at the `--help` command output to learn about them.
 
-NOTE: Start both clusters (as in previous example with Ignite nodes) before starting CDC client.
+NOTE: Start both clusters (as in the previous example with Ignite nodes) before starting the CDC consumer.
 
-Here is an example on how to start Active-Passive inter-cluster communication with 2 separate nodes and one thin CDC client for Ignite-to-Ignite replication from cluster 1 to cluster 2 (Run the commands independently):
+Here is an example of how to start Active-Passive inter-cluster communication with 2 separate nodes and one CDC consumer with a thin client for Ignite-to-Ignite replication from cluster 1 to cluster 2 (run the commands independently):
 ```
 ./cdc-start-up.sh --ignite cluster-1
 ./cdc-start-up.sh --ignite cluster-2
-./cdc-start-up.sh --cdc-client --ignite-to-ignite-thin cluster-1
+./cdc-start-up.sh --cdc-consumer --ignite-to-ignite-thin cluster-1
 ```
 
-NOTE: Make sure clusters fully started up before starting CDC client.
+NOTE: Make sure the clusters have fully started up before starting the CDC consumer.
 
-Here is an example on how to start Active-Active inter-cluster communication with 2 separate nodes and 2 CDC clients (thick) for Ignite-to-Ignite replication (Run the commands independently):
+Here is an example of how to start Active-Active inter-cluster communication with 2 separate nodes and 2 CDC consumers (thick) for Ignite-to-Ignite replication (run the commands independently):
 ```
 ./cdc-start-up.sh --ignite cluster-1
 ./cdc-start-up.sh --ignite cluster-2
-./cdc-start-up.sh --cdc-client --ignite-to-ignite cluster-1
-./cdc-start-up.sh --cdc-client --ignite-to-ignite cluster-2
+./cdc-start-up.sh --cdc-consumer --ignite-to-ignite cluster-1
+./cdc-start-up.sh --cdc-consumer --ignite-to-ignite cluster-2
 ```
 
 NOTE: To start CDC with Kafka you need to create the topics beforehand.
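The consumer modes referenced in the hunk above are documented by the script's own help output, which you can print at any time:
```
./cdc-start-up.sh --help
```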
@@ -639,14 +650,18 @@ We use the following topics naming for our examples:
 ./kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic metadata_from_dc2 --bootstrap-server localhost:9092
 ```
 
-Here is an example on how to start Active-Passive inter-cluster communication with 2 separate nodes and 2 CDC clients for replication with Kafka from cluster 1 to cluster 2 (Run the commands independently):
+To start up the replication with Kafka topics, you need 1 CDC consumer to replicate data from the source cluster to Kafka topics, and 1 Kafka-to-Ignite applier to retrieve data from the topics and apply it to the destination cluster.
+
+Here is an example of how to start Active-Passive inter-cluster communication with 2 separate nodes, 1 CDC consumer and 1 Kafka-to-Ignite applier (thick) for replication with Kafka from cluster 1 to cluster 2 (run the commands independently):
 ```
 ./cdc-start-up.sh --ignite cluster-1
 ./cdc-start-up.sh --ignite cluster-2
-./cdc-start-up.sh --cdc-client --ignite-to-kafka cluster-1
-./cdc-start-up.sh --cdc-client --kafka-to-ignite-thin cluster-2
+./cdc-start-up.sh --cdc-consumer --ignite-to-kafka cluster-1
+./cdc-start-up.sh --kafka-to-ignite thick cluster-2
 ```
 
+NOTE: The Kafka-to-Ignite applier starts alongside the destination cluster and uses its configuration to connect to it.
+
 * You can check CDC replication with `--check-cdc`. Use it in parallel with Active-Passive/Active-Active replication. To start a CDC check for the proposed entry:
 ```
 ./cdc-start-up.sh --check-cdc --key 11006 --value 1 --version 1 --cluster 1
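A created topic, such as `metadata_from_dc2` from the hunk above, can be inspected before starting replication (assuming the same `localhost:9092` broker):
```
./kafka-topics.sh --describe --topic metadata_from_dc2 --bootstrap-server localhost:9092
```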
@@ -664,4 +679,4 @@ NOTE: Try to play with version value to see how the conflict resolver works. We
 ./cdc-start-up.sh --check-cdc --key 11006 --value 3 --version 3 --cluster 1
 ./cdc-start-up.sh --check-cdc --key 11006 --value 2 --version 2 --cluster 2
 ```
-This sequence simulates the case when the first cluster receives outdated value from the second. In our case the data will not be replicated in the last command and the check would fail after 20 tries.
+This sequence simulates the case when the first cluster receives an outdated value from the second. In this case the data will not be replicated by the last command, and the check will time out after 1 minute.
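For contrast with the stale-version case above, an illustrative follow-up (not part of this commit, reusing the `--check-cdc` flags shown in the hunks) would send a higher version so the conflict resolver accepts the update and the check passes:
```
# Version 4 is newer than the previously written version 3, so replication succeeds.
./cdc-start-up.sh --check-cdc --key 11006 --value 4 --version 4 --cluster 2
```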
