data-contracts tutorial #80
base: master
Conversation
This includes a major_version compatibility group.
🎉 All Contributor License Agreements have been signed. Ready to merge.
data-contracts/README.md
Outdated
Register the new schema using our gradle plugin:

```shell
./gradlew :data-contracts:schemas:registerSchemasTask
```
I'm a bit confused at this point as to what I should be running vs what's already done. e.g., the new version already got registered when I ran this earlier. isn't this a no-op?
Good point, lemme remove this second registration.
data-contracts/README.md
Outdated
@@ -0,0 +1,439 @@
# Managing Data Contracts
Give this a really specific "How to ..." title. I think it should include "Confluent Cloud" since this is cloud only. "How to manage Data Contracts in Confluent Cloud using Terraform and Gradle" or something
We'll create producer and consumer classes to configure and provide Kafka clients for our application(s). In the `shared` module there are implementations to encapsulate this behavior:

```kotlin
// … (snippet truncated in the quoted diff)
```
thinking of ways to trim down the README and think we don't need the code snippet. say that the clients are dummy implementations, e.g., the client simply logs events, but don't need to include the code in a README
Would making these "collapsible" work a lil better?
my pref would be a link (non-relative for our friend CD) to the source for this particular one since the producer / consumer are kind of background noise far from the point of this tutorial. my high level 2c on snippets by example - definitely do show a snippet and explain the client configs that are relevant to the tutorial topic (`LATEST_COMPATIBILITY_STRICT`, `USE_LATEST_VERSION`, `USE_LATEST_WITH_METADATA`), but inlining code snippets of the class hierarchy and showing a consumer that logs is too much detail and not relevant enough to make the README cut IMO... the snippets kinda detract from the thing to highlight, which is those three configs
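
(For context, as I understand these serializer configs: `USE_LATEST_VERSION` tells the serializer to use the latest registered schema version instead of the schema derived from the record being produced; `LATEST_COMPATIBILITY_STRICT` makes it verify that the record's schema is compatible with that latest version before using it; and `USE_LATEST_WITH_METADATA` selects the latest schema version whose metadata matches the given `key=value` pair, e.g. `major_version=1`.)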
```kotlin
class MembershipProducer : BaseProducer<String, Membership>(mapOf(
    ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringSerializer",
    ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to "io.confluent.kafka.serializers.KafkaAvroSerializer",
    ProducerConfig.CLIENT_ID_CONFIG to "membership-producer-app-v1",
    AbstractKafkaSchemaSerDeConfig.LATEST_COMPATIBILITY_STRICT to true,
    AbstractKafkaSchemaSerDeConfig.USE_LATEST_VERSION to false,
    AbstractKafkaSchemaSerDeConfig.USE_LATEST_WITH_METADATA to "major_version=1"
))
```
i think the code snippets can go away, or be trimmed down to the most important part - last 3 lines
@davetroiano Do you think I should remove the references in the paragraph below to the Serializers?
I could go either way on that... it doesn't seem necessary to cover the tutorial topic but it's close enough to the topic that I think it's alright to leave in there
data-contracts/README.md
Outdated
To exercise these producer and consumer implementations, we created a `main` function in the `ApplicationMain` class to start a consumer instance and a producer instance that periodically sends random events to the `membership-avro` topic.

<details> <!-- collapsed code block, truncated in the quoted diff -->
i'd remove this. just reference the code with a link and say what it does in English, but no need to put code inline.
In the `app-schema-v2` module, we'll find a new implementation of a `MembershipConsumer`:

```kotlin
// … (snippet truncated in the quoted diff)
```
similar feedback to above - suggest removing or trimming the code snippets and console output down. it's a little overwhelming and I think distracts from the flow of running through it.
Addressing PR comment: #80 (comment)
Addresses PR comment: #80 (comment)
Tutorial explaining how to manage schemas with Confluent Schema Registry, using the Confluent Terraform Provider and Gradle. Code examples show how to configure producers and consumers to use specific versions of the schema. The build also includes data migration rules to illustrate our support for "breaking" schema evolution changes.
Also included is an update of the common module to use Java 17.
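
To illustrate the consumer side of the migration-rule story, here is a minimal sketch (not the repository's actual code): it assumes a `BaseConsumer` that mirrors the `BaseProducer` shown in the review above, plus the generated `Membership` Avro class, and omits common client config such as the registry URL.

```kotlin
import org.apache.kafka.clients.consumer.ConsumerConfig
import io.confluent.kafka.serializers.AbstractKafkaSchemaSerDeConfig

// Hypothetical v2 consumer: BaseConsumer is assumed to build a KafkaConsumer
// from this config map, mirroring BaseProducer in the shared module.
class MembershipConsumer : BaseConsumer<String, Membership>(mapOf(
    ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG to "org.apache.kafka.common.serialization.StringDeserializer",
    ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG to "io.confluent.kafka.serializers.KafkaAvroDeserializer",
    ConsumerConfig.GROUP_ID_CONFIG to "membership-consumer-app-v2",
    // Resolve reads against the latest schema tagged major_version=2; the
    // registered migration rules then upgrade v1-shaped records on read.
    AbstractKafkaSchemaSerDeConfig.USE_LATEST_WITH_METADATA to "major_version=2"
))
```

The key line is `USE_LATEST_WITH_METADATA`: pinning the deserializer to the `major_version=2` compatibility group lets the registry's migration rules absorb the breaking change instead of the application code.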