Commit 4ceeb1a

Fix some typos (#152)
1 parent 02f6d82 commit 4ceeb1a

File tree: 20 files changed (+30, -30 lines)


assets/certs/component-certs/README.md

1 addition, 1 deletion

@@ -54,7 +54,7 @@ openssl req -x509 -new -nodes \
 -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
 ```
 
-3. Check the validatity of the CA
+3. Check the validity of the CA
 
 ```
 openssl x509 -in $TUTORIAL_HOME/generated/cacerts.pem -text -noout
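The validity check being fixed here is easier to see end to end with a throwaway CA. A minimal sketch, assuming the same `openssl` flags as the README; the `/tmp` paths are illustrative stand-ins for the tutorial's generated files:

```shell
# Generate a short-lived test CA the same way the README does, then decode it.
mkdir -p /tmp/testca
openssl req -x509 -new -nodes \
  -newkey rsa:2048 -keyout /tmp/testca/rootCAkey.pem \
  -days 30 \
  -out /tmp/testca/cacerts.pem \
  -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
# -text -noout prints the decoded certificate; the Validity section shows the
# Not Before / Not After window that the "check the validity" step inspects.
openssl x509 -in /tmp/testca/cacerts.pem -text -noout | grep -A 2 "Validity"
```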

connector/hdfs3-sink-connector/README.md

1 addition, 1 deletion

@@ -144,7 +144,7 @@ kubectl --namespace confluent delete -f $TUTORIAL_HOME/hdfs-server.yaml
 
 ## Notes
 
-Since the Connect leverage the built in libraries of the hadoop client a name for the user of the pod is needed.
+Since the Connect leverages the built in libraries of the Hadoop client a name for the user of the pod is needed.
 By default Confluent Platform uses ` user ID 1001` which has no name:
 
 
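The note being corrected describes a real constraint: the Hadoop client resolves the OS user of the process, and UID 1001 has no passwd entry in the Confluent Platform images. A hedged sketch of the symptom plus the stock Hadoop-client workaround; `HADOOP_USER_NAME` is upstream Hadoop behavior, but whether this particular example relies on it is an assumption, and the user name below is hypothetical:

```shell
# Inside the pod, the process runs as a UID with no name:
getent passwd 1001 || echo "UID 1001 has no passwd entry"
# Upstream Hadoop clients fall back to HADOOP_USER_NAME when the OS user
# cannot be resolved and Kerberos is not in use (name is hypothetical).
export HADOOP_USER_NAME=connect-user
echo "HDFS operations would run as: $HADOOP_USER_NAME"
```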

connector/replicator-with-monitoring/README.md

1 addition, 1 deletion

@@ -66,7 +66,7 @@ kubectl create secret tls ca-pair-sslcerts \
 In this step you will be creating secrets to be used to authenticate the clusters.
 
 ```
-# Specify the credentials required by the souce and destination cluster. To understand how these
+# Specify the credentials required by the source and destination cluster. To understand how these
 # credentials are configured, see
 # https://github.com/confluentinc/confluent-kubernetes-examples/tree/master/security/secure-authn-encrypt-deploy
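The commented block above sits at the top of a credentials file the tutorial has you write. A sketch of the shape such a file plausibly takes, with placeholder values; the file name, keys, and the secret-creation command are illustrative assumptions, and the linked secure-authn-encrypt-deploy page is the authority on the exact format:

```shell
# Write a plain-credentials file for one cluster (values are placeholders).
cat <<'EOF' > /tmp/replicator-creds.txt
username=replicator
password=replicator-secret
EOF
# It would then be packaged as a Kubernetes secret for the pods to mount, e.g.:
#   kubectl create secret generic replicator-credential \
#     --from-file=plain.txt=/tmp/replicator-creds.txt
cat /tmp/replicator-creds.txt
```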

hybrid/ccloud-JDBC-mysql/README.md

2 additions, 2 deletions

@@ -134,7 +134,7 @@ sasl.mechanism=PLAIN
 EOF
 ```
 
-### Create topic to load dat a from table into (CRD connector)
+### Create topic to load data from table into (CRD connector)
 ```
 kafka-topics --command-config /opt/confluentinc/etc/connect/consumer.properties \
 --bootstrap-server CCLOUD:9092 \
@@ -144,7 +144,7 @@ kafka-topics --command-config /opt/confluentinc/etc/connect/consumer.properties
 --topic quickstart-jdbc-CRD-test
 ```
 
-### Create topic to load dat a from table into (REST API endpoint connector)
+### Create topic to load data from table into (REST API endpoint connector)
 
 ```
 kafka-topics --command-config /opt/confluentinc/etc/connect/consumer.properties \
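Both `kafka-topics` invocations above read the same client properties file; the hunk's `sasl.mechanism=PLAIN` context line is part of it. A sketch of what that file plausibly contains, with placeholder credentials left as placeholders; the real file embeds a Confluent Cloud API key and secret, and the exact property set here is an assumption:

```shell
# Illustrative consumer.properties for a SASL_SSL Confluent Cloud endpoint.
cat <<'EOF' > /tmp/consumer.properties
bootstrap.servers=CCLOUD:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API-KEY>" password="<API-SECRET>";
EOF
grep "sasl.mechanism" /tmp/consumer.properties
```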

hybrid/clusterlink/ccloud-as-destination-cluster/README.md

1 addition, 1 deletion

@@ -65,7 +65,7 @@ kubectl -n operator create secret tls ca-pair-sslcerts \
 
 ## CCLOUD setup
 - Create a dedicated a cluster in the Confluent Cloud. Dedicated cluster is required for cluster linking. Standard cluster should be good for creating topics
-- Create an API Key with `Global Access`. You can create API key in `Cluster Overview -> Data Integartion -> API Keys`
+- Create an API Key with `Global Access`. You can create API key in `Cluster Overview -> Data Integration -> API Keys`
 - Create a file `basic.txt` with API Key and API secret in this format
 ```
 username=<API-KEY>
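The `basic.txt` format shown above can be sketched end to end, placeholders left as placeholders. Note the password line and the secret-creation command are assumptions about how cluster-link credentials are usually wired up, not content shown in this hunk:

```shell
# basic.txt with the API key/secret placeholders from the README.
cat <<'EOF' > /tmp/basic.txt
username=<API-KEY>
password=<API-SECRET>
EOF
# Typically wrapped into a secret the cluster link references, e.g.:
#   kubectl -n operator create secret generic ccloud-credential \
#     --from-file=plain.txt=/tmp/basic.txt
cat /tmp/basic.txt
```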

hybrid/clusterlink/ccloud-as-source-cluster/README.md

1 addition, 1 deletion

@@ -76,7 +76,7 @@ After the Kafka cluster is in running state, create cluster link between source
 #### exec into destination kafka pod
 kubectl -n destination exec kafka-0 -it -- bash
 
-#### produce a message in the Confuent Cloud topic
+#### produce a message in the Confluent Cloud topic
 Go the UI and produce a message into the topic called demo.
 
 #### consume in destination kafka cluster and confirm message delivery in destination cluster

hybrid/multi-region-clusters/README-basic.md

1 addition, 1 deletion

@@ -1,6 +1,6 @@
 # MRC Deployment - Basic
 
-In this deploymnet, you'll deploy a quickstart Confluent Platform cluster across a multiple regions set up.
+In this deployment, you'll deploy a quickstart Confluent Platform cluster across a multiple regions set up.
 
 This assumes that you have set up three Kubernetes clusters, and have Kubernetes contexts set up as:
 - mrc-central

hybrid/multi-region-clusters/README.md

2 additions, 2 deletions

@@ -106,13 +106,13 @@ helm install external-dns -f $TUTORIAL_HOME/external-dns-values.yaml --set names
 
 ## Setup - OpenLDAP
 
-This repo includes a Helm chart for `OpenLdap
+This repo includes a Helm chart for `OpenLDAP
 <https://github.com/osixia/docker-openldap>`. The chart ``values.yaml``
 includes the set of principal definitions that Confluent Platform needs for
 RBAC.
 
 ```
-# Deploy OpenLdap
+# Deploy OpenLDAP
 helm upgrade --install -f $TUTORIAL_HOME/../../assets/openldap/ldaps-rbac.yaml open-ldap $TUTORIAL_HOME/../../assets/openldap -n central --kube-context mrc-central
 
 # Validate that OpenLDAP is running:

hybrid/multi-region-clusters/networking/aks/networking-AKS-README.md

1 addition, 1 deletion

@@ -38,7 +38,7 @@ This is the architecture you'll achieve on Azure Kubernetes Service (AKS):
 be the authoritative nameserver for the west.svc.cluster.local domain, and like wise for all pairs of regions.
 
 - Firewall rules
-  - At minimum: allow TCP traffic on the standard ZooKeeper, Kafka and SchemaRegistry ports between all region clusters' Pod subnetworks.
+  - At minimum: allow TCP traffic on the standard Zookeeper, Kafka and SchemaRegistry ports between all region clusters' Pod subnetworks.
 
 In this section, you'll configure the required networking between three Azure Kubernetes Service (AKS) clusters, where
 each AKS cluster is in a different Azure region.

hybrid/multi-region-clusters/networking/eks/networking-EKS-README.md

3 additions, 3 deletions

@@ -13,9 +13,9 @@ This is the architecture you'll achieve on Elastic Kubernetes Service (EKS):
 - For peering to work, address space between VPCs cannot overlap.
 - Additional details on VPC peering can be found here: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
 - Flat pod networking
-  - With Amazon VPC Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly.
+  - With Amazon VPC Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly.
   - These IP addresses must be unique across your network space.
-  - Each node can support upto a certain number of pods as defined here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
+  - Each node can support up to a certain number of pods as defined here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
   - The equivalent number of IP addresses per node are then reserved up front for that node.
   - With Amazon VPC CNI, pods do not require a separate address space.
   - Additional details on Amazon VPC CNI can be found here: https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html

@@ -447,4 +447,4 @@ service "busybox" deleted
 statefulset.apps "busybox" deleted
 service "busybox" deleted
 ```
-With this, the network setup is complete.
+With this, the network setup is complete.
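The eni-max-pods.txt file linked in the first hunk follows a simple formula: with the AWS VPC CNI, pod capacity per node is bounded by ENIs * (IPv4 addresses per ENI - 1) + 2. A sketch using m5.large's published limits (3 ENIs, 10 addresses each, taken from AWS's instance-type tables; treat the numbers as assumptions to verify):

```shell
# max pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# (one IP per ENI is the ENI's own primary address; +2 covers host-networked pods)
enis=3          # m5.large: 3 ENIs (assumption from AWS instance-type limits)
ips_per_eni=10  # m5.large: 10 IPv4 addresses per ENI
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "max pods: $max_pods"   # matches the m5.large entry in eni-max-pods.txt: 29
```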
