docs/_docs/extensions-and-integrations/change-data-capture-extensions.adoc

This application should be started near the destination cluster.

IMPORTANT: `kafka-to-ignite.sh` implements the fail-fast approach: it simply fails on any error. The restart procedure should be configured with OS tools.
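
If no service manager is available, a minimal supervision loop can serve as a stopgap; the sketch below assumes a hypothetical `kafka-to-ignite.xml` Spring configuration file and restarts the applier on any non-zero exit (an OS-level supervisor such as systemd is preferable in production):

```
# Minimal restart loop: rerun the applier whenever it fails fast.
# "kafka-to-ignite.xml" is a hypothetical configuration file name.
until "$IGNITE_HOME"/bin/kafka-to-ignite.sh kafka-to-ignite.xml; do
    echo "kafka-to-ignite.sh exited with code $?; restarting in 5s..." >&2
    sleep 5
done
```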

The number of application instances does not correlate with the number of destination server nodes; it should be just enough to process the source cluster load. Each application instance processes a configured subset of topic partitions to spread the load, and a `KafkaConsumer` is created for each partition to ensure fair reads.

Now, you have additional binaries, including `$IGNITE_HOME/bin/kafka-to-ignite.sh`.

NOTE: Enable the `ignite-cdc-ext` module to be able to run `kafka-to-ignite.sh`.
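
The extension ships separately from the core distribution; one way to enable it, assuming you have downloaded or built the `ignite-cdc-ext` jars (the source path below is a placeholder), is to put them on the Ignite classpath:

```
# Assumption: the ignite-cdc-ext jars were obtained separately (download or build).
# The Ignite start-up scripts pick up every jar under $IGNITE_HOME/libs.
cp /path/to/ignite-cdc-ext/*.jar "$IGNITE_HOME"/libs/
```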
==== Kafka Installation

To install Kafka, download the binary from the link:https://kafka.apache.org/downloads[Apache Kafka downloads page] and extract the archive to your desired location. Next, configure the `server.properties` file to suit your needs, then start ZooKeeper and the Kafka server using the provided scripts.
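
For instance, a typical installation could look like this (the Kafka version below is only an example; pick a current release from the downloads page):

```
# Download and extract a Kafka binary release.
wget https://downloads.apache.org/kafka/3.7.0/kafka_2.13-3.7.0.tgz
tar -xzf kafka_2.13-3.7.0.tgz
cd kafka_2.13-3.7.0

# Adjust config/server.properties as needed, then start ZooKeeper and the broker
# with the scripts shipped in the distribution.
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &
```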

Application configuration should be done using POJO classes or a Spring XML file, just like a regular Ignite node configuration.
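
For example, assuming `kafka-to-ignite.sh` accepts a path to a Spring XML file in the same way `ignite.sh` does (the file name below is hypothetical):

```
# "kafka-to-ignite.xml" is a hypothetical Spring XML file defining the applier beans.
"$IGNITE_HOME"/bin/kafka-to-ignite.sh /path/to/kafka-to-ignite.xml
```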

Examples for reference:

To start an Ignite cluster node, use the `--ignite` or `-i` command with `cdc-start-up.sh`. You also need to specify the properties holder directory.

There are currently 2 examples for 2 clusters that you can run simultaneously. You can find them under `$IGNITE_HOME/examples/config/cdc-start-up/cluster-1` and `$IGNITE_HOME/examples/config/cdc-start-up/cluster-2` as `ignite-cdc.properties`. These files contain all the independent settings that you can tweak for your needs.
NOTE: All properties files are preconfigured to work out of the box.

To start a single node for each cluster, type the following commands in different terminals:

```
./cdc-start-up.sh --ignite cluster-1
./cdc-start-up.sh --ignite cluster-2
```

* Start CDC consumers with the specified properties:

** To start a CDC consumer, use the `--cdc-consumer` or `-c` command with `cdc-start-up.sh`. In addition, you have to specify the CDC consumer mode and the properties holder directory for the source cluster (as in the previous example).

** There are 3 CDC consumer modes to choose from. Take a look at the `--help` command output to learn about them.

NOTE: Start both clusters (as in the previous example with Ignite nodes) before starting a CDC consumer.

Here is an example of how to start Active-Passive inter-cluster communication with 2 separate nodes and one CDC consumer with a thin client for Ignite-to-Ignite replication from cluster 1 to cluster 2 (run the commands independently):
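
A plausible sequence, with `ignite-to-ignite-thin` used as a hypothetical mode name (check the `--help` output for the real one):

```
# Terminals 1 and 2: one node per cluster
./cdc-start-up.sh --ignite cluster-1
./cdc-start-up.sh --ignite cluster-2

# Terminal 3: CDC consumer that applies cluster 1 changes to cluster 2
# via a thin client ("ignite-to-ignite-thin" is a hypothetical mode name)
./cdc-start-up.sh --cdc-consumer ignite-to-ignite-thin cluster-1
```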

NOTE: Make sure the clusters have fully started up before starting the CDC consumer.

Here is an example of how to start Active-Active inter-cluster communication with 2 separate nodes and 2 CDC consumers (thick) for Ignite-to-Ignite replication (run the commands independently):
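
A sketch with `ignite-to-ignite` as a hypothetical name for the thick-client mode (one consumer per replication direction):

```
# One node per cluster, then one CDC consumer per direction
./cdc-start-up.sh --ignite cluster-1
./cdc-start-up.sh --ignite cluster-2
./cdc-start-up.sh --cdc-consumer ignite-to-ignite cluster-1
./cdc-start-up.sh --cdc-consumer ignite-to-ignite cluster-2
```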

To start up replication through Kafka topics, you need 1 CDC consumer to replicate data from the source cluster to Kafka topics, and 1 Kafka-to-Ignite applier to retrieve data from the topics and apply it to the destination cluster.

Here is an example of how to start Active-Passive inter-cluster communication with 2 separate nodes, 1 CDC consumer, and 1 Kafka-to-Ignite applier (thick) for replication with Kafka from cluster 1 to cluster 2 (run the commands independently):
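
A sketch of the two pieces involved; both the `ignite-to-kafka` mode name and the `--kafka-to-ignite` command are hypothetical spellings (consult the `--help` output for the real ones):

```
# CDC consumer: source cluster changes -> Kafka topics
# ("ignite-to-kafka" is a hypothetical mode name)
./cdc-start-up.sh --cdc-consumer ignite-to-kafka cluster-1

# Kafka-to-Ignite applier: Kafka topics -> destination cluster
# ("--kafka-to-ignite" is a hypothetical command name)
./cdc-start-up.sh --kafka-to-ignite cluster-2
```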

NOTE: The Kafka-to-Ignite applier starts alongside the destination cluster and uses its configuration to connect to it.

* You can check CDC replication with `--check-cdc`. Use it in parallel with Active-Passive/Active-Active replication. To start a CDC check for the proposed entry:
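
The check presumably runs a short put-and-verify sequence following the same CLI pattern as the other commands; a hypothetical sketch (any arguments beyond `--check-cdc` are assumptions):

```
# Hypothetical arguments; see the --help output for the actual options.
./cdc-start-up.sh --check-cdc cluster-1 cluster-2
```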

This sequence simulates the case when the first cluster receives an outdated value from the second. In this case the data will not be replicated by the last command, and the check will time out after 1 minute.