Commit 4ceeb1a

Fix some typos (#152)
1 parent 02f6d82 commit 4ceeb1a

20 files changed: +30 -30 lines changed


assets/certs/component-certs/README.md (+1 -1)

@@ -54,7 +54,7 @@ openssl req -x509 -new -nodes \
 -subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
 ```

-3. Check the validatity of the CA
+3. Check the validity of the CA

 ```
 openssl x509 -in $TUTORIAL_HOME/generated/cacerts.pem -text -noout

connector/hdfs3-sink-connector/README.md (+1 -1)

@@ -144,7 +144,7 @@ kubectl --namespace confluent delete -f $TUTORIAL_HOME/hdfs-server.yaml

 ## Notes

-Since the Connect leverage the built in libraries of the hadoop client a name for the user of the pod is needed.
+Since the Connect leverages the built in libraries of the Hadoop client a name for the user of the pod is needed.
 By default Confluent Platform uses ` user ID 1001` which has no name:


connector/replicator-with-monitoring/README.md (+1 -1)

@@ -66,7 +66,7 @@ kubectl create secret tls ca-pair-sslcerts \
 In this step you will be creating secrets to be used to authenticate the clusters.

 ```
-# Specify the credentials required by the souce and destination cluster. To understand how these
+# Specify the credentials required by the source and destination cluster. To understand how these
 # credentials are configured, see
 # https://github.com/confluentinc/confluent-kubernetes-examples/tree/master/security/secure-authn-encrypt-deploy

hybrid/ccloud-JDBC-mysql/README.md (+2 -2)

@@ -134,7 +134,7 @@ sasl.mechanism=PLAIN
 EOF
 ```

-### Create topic to load dat a from table into (CRD connector)
+### Create topic to load data from table into (CRD connector)
 ```
 kafka-topics --command-config /opt/confluentinc/etc/connect/consumer.properties \
 --bootstrap-server CCLOUD:9092 \

@@ -144,7 +144,7 @@ kafka-topics --command-config /opt/confluentinc/etc/connect/consumer.properties
 --topic quickstart-jdbc-CRD-test
 ```

-### Create topic to load dat a from table into (REST API endpoint connector)
+### Create topic to load data from table into (REST API endpoint connector)

 ```
 kafka-topics --command-config /opt/confluentinc/etc/connect/consumer.properties \

hybrid/clusterlink/ccloud-as-destination-cluster/README.md (+1 -1)

@@ -65,7 +65,7 @@ kubectl -n operator create secret tls ca-pair-sslcerts \

 ## CCLOUD setup
 - Create a dedicated a cluster in the Confluent Cloud. Dedicated cluster is required for cluster linking. Standard cluster should be good for creating topics
-- Create an API Key with `Global Access`. You can create API key in `Cluster Overview -> Data Integartion -> API Keys`
+- Create an API Key with `Global Access`. You can create API key in `Cluster Overview -> Data Integration -> API Keys`
 - Create a file `basic.txt` with API Key and API secret in this format
 ```
 username=<API-KEY>

hybrid/clusterlink/ccloud-as-source-cluster/README.md (+1 -1)

@@ -76,7 +76,7 @@ After the Kafka cluster is in running state, create cluster link between source
 #### exec into destination kafka pod
 kubectl -n destination exec kafka-0 -it -- bash

-#### produce a message in the Confuent Cloud topic
+#### produce a message in the Confluent Cloud topic
 Go the UI and produce a message into the topic called demo.

 #### consume in destination kafka cluster and confirm message delivery in destination cluster

hybrid/multi-region-clusters/README-basic.md (+1 -1)

@@ -1,6 +1,6 @@
 # MRC Deployment - Basic

-In this deploymnet, you'll deploy a quickstart Confluent Platform cluster across a multiple regions set up.
+In this deployment, you'll deploy a quickstart Confluent Platform cluster across a multiple regions set up.

 This assumes that you have set up three Kubernetes clusters, and have Kubernetes contexts set up as:
 - mrc-central

hybrid/multi-region-clusters/README.md (+2 -2)

@@ -106,13 +106,13 @@ helm install external-dns -f $TUTORIAL_HOME/external-dns-values.yaml --set names

 ## Setup - OpenLDAP

-This repo includes a Helm chart for `OpenLdap
+This repo includes a Helm chart for `OpenLDAP
 <https://github.com/osixia/docker-openldap>`. The chart ``values.yaml``
 includes the set of principal definitions that Confluent Platform needs for
 RBAC.

 ```
-# Deploy OpenLdap
+# Deploy OpenLDAP
 helm upgrade --install -f $TUTORIAL_HOME/../../assets/openldap/ldaps-rbac.yaml open-ldap $TUTORIAL_HOME/../../assets/openldap -n central --kube-context mrc-central

 # Validate that OpenLDAP is running:

hybrid/multi-region-clusters/networking/aks/networking-AKS-README.md (+1 -1)

@@ -38,7 +38,7 @@ This is the architecture you'll achieve on Azure Kubernetes Service (AKS):
 be the authoritative nameserver for the west.svc.cluster.local domain, and like wise for all pairs of regions.

 - Firewall rules
-  - At minimum: allow TCP traffic on the standard ZooKeeper, Kafka and SchemaRegistry ports between all region clusters' Pod subnetworks.
+  - At minimum: allow TCP traffic on the standard Zookeeper, Kafka and SchemaRegistry ports between all region clusters' Pod subnetworks.

 In this section, you'll configure the required networking between three Azure Kubernetes Service (AKS) clusters, where
 each AKS cluster is in a different Azure region.

hybrid/multi-region-clusters/networking/eks/networking-EKS-README.md (+3 -3)

@@ -13,9 +13,9 @@ This is the architecture you'll achieve on Elastic Kubernetes Service (EKS):
 - For peering to work, address space between VPCs cannot overlap.
 - Additional details on VPC peering can be found here: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
 - Flat pod networking
-  - With Amazon VPC Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. 
+  - With Amazon VPC Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly.
 - These IP addresses must be unique across your network space.
-  - Each node can support upto a certain number of pods as defined here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
+  - Each node can support up to a certain number of pods as defined here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
 - The equivalent number of IP addresses per node are then reserved up front for that node.
 - With Amazon VPC CNI, pods do not require a separate address space.
 - Additional details on Amazon VPC CNI can be found here: https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html

@@ -447,4 +447,4 @@ service "busybox" deleted
 statefulset.apps "busybox" deleted
 service "busybox" deleted
 ```
-With this, the network setup is complete.
+With this, the network setup is complete.

hybrid/multi-region-clusters/networking/gke/networking-GKE-README.md (+1 -1)

@@ -1,6 +1,6 @@
 # Configure Networking: Google Kubernetes Engine

-This is the architecture you'll acheive on Google Kubernetes Engine (GKE):
+This is the architecture you'll achieve on Google Kubernetes Engine (GKE):

 - Namespace naming
 - 1 uniquely named namespace in each region cluster

hybrid/replicator/README.md (+1 -1)

@@ -62,7 +62,7 @@ kubectl create secret tls ca-pair-sslcerts \
 Deploy the source and destination cluster.

 ```
-# Specify the credentials required by the souce and destination cluster. To understand how these
+# Specify the credentials required by the source and destination cluster. To understand how these
 # credentials are configured, see
 # https://github.com/confluentinc/confluent-kubernetes-examples/tree/master/security/secure-authn-encrypt-deploy

networking/external-access-load-balancer-deploy/README.rst (+1 -1)

@@ -136,7 +136,7 @@ The access endpoint of each Confluent Platform component will be:

 <Component CR name>.$DOMAIN

-For example, in a brower, you will access Control Center at:
+For example, in a browser, you will access Control Center at:

 ::

security/configure-with-vault/README.md (+2 -2)

@@ -22,7 +22,7 @@ Note: Hashicorp Vault is a third party software product that is not supported or

 Using the Helm Chart, install the latest version of the Vault server running in development mode to a namespace `hashicorp`.

-Running a Vault server in development is automatically initialized and unsealed. This is ideal in a learning environment, but not recomended for a production environment.
+Running a Vault server in development is automatically initialized and unsealed. This is ideal in a learning environment, but not recommended for a production environment.

 ```
 $ kubectl create ns hashicorp

@@ -336,7 +336,7 @@ This repo includes a Helm chart for [OpenLdap](https://github.com/osixia/docker-
 The chart `values.yaml` includes the set of principal definitions that Confluent Platform
 needs for RBAC.

-Deploy OpenLdap:
+Deploy OpenLDAP:

 ```
 helm upgrade --install -f $TUTORIAL_HOME/../../assets/openldap/ldaps-rbac.yaml test-ldap $TUTORIAL_HOME/../../assets/openldap -n confluent

security/internal_external-tls_mtls_confluent-rbac/README.md (+2 -2)

@@ -129,7 +129,7 @@ In this scenario, you'll be allowing Kafka clients to connect with Kafka through
 For that purpose, you'll provide a server certificate that secures the external domain used for Kafka access.

 ```
-# If you dont have one, create a root certificate authority for the external component certs
+# If you don't have one, create a root certificate authority for the external component certs
 openssl genrsa -out $TUTORIAL_HOME/externalRootCAkey.pem 2048

 openssl req -x509 -new -nodes \

@@ -248,7 +248,7 @@ kubectl apply -f $TUTORIAL_HOME/controlcenter-testadmin-rolebindings.yaml --name
 # Configure External Access through Ingress Controller

 The Ingress Controller will support TLS encryption. For this, you'll need to provide a server certificate
-to use for encypting traffic.
+to use for encrypting traffic.

 ```
 # Generate a server certificate from the external root certificate authority

security/userprovided-tls_mtls-sasl_confluent-rbac/README.md (+1 -1)

@@ -41,7 +41,7 @@ kubectl get pods --namespace confluent
 This repo includes a Helm chart for [OpenLdap](https://github.com/osixia/docker-openldap). The chart `values.yaml`
 includes the set of principal definitions that Confluent Platform needs for RBAC.

-Deploy OpenLdap:
+Deploy OpenLDAP:

 ```
 helm upgrade --install -f $TUTORIAL_HOME/../../assets/openldap/ldaps-rbac.yaml test-ldap $TUTORIAL_HOME/../../assets/openldap --namespace confluent

security/userprovided-tls_mtls_kafka-acls/README.md (+3 -3)

@@ -46,7 +46,7 @@ These TLS certificates include the following principal names for each component
 - Kafka: `kafka`
 - Schema Registry: `sr`
 - Kafka Connect: `connect`
-- Kafka Rest Pxory: `krp`
+- Kafka Rest Proxy: `krp`
 - ksqlDB: `ksql`
 - Control Center: `controlcenter`

@@ -254,7 +254,7 @@ Create ACLs:

 # For Connect

-### The Connect topic predix is: <namespace>.<connect-cluster-name>-
+### The Connect topic prefix is: <namespace>.<connect-cluster-name>-
 /bin/kafka-acls --bootstrap-server kafka.confluent.svc.cluster.local:9071 \
 --command-config /opt/confluentinc/kafka.properties \
 --add \

@@ -496,7 +496,7 @@ org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to ac

 Check the following:
 1) What principal is being used by the component - this comes from the CN of the certificate used by the component
-2) Are the appopriate ACL created for the components principal
+2) Are the appropriate ACL created for the components principal

 To see why Kafka failed the access request, look at the kafka broker logs.
 You might see messages that indicate authorization failures:

security/using-cert-manager/README.md (+2 -2)

@@ -47,7 +47,7 @@ kubectl get pods --namespace confluent

 To comprehensively understand how to install cert manager using Helm, see these docs: https://cert-manager.io/docs/installation/kubernetes/

-For the purpose of this scenario worklow, use this step to install Cert-manager:
+For the purpose of this scenario workflow, use this step to install Cert-manager:

 ```
 kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.yaml

@@ -59,7 +59,7 @@ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/relea
 When you provide TLS certificates, CFK takes the provided files and configures Confluent components accordingly.

 For each component, the following TLS certificate information should be provided:
-- The certificate authorities for the component to trust, including the authorites used to issue server certificates for any Confluent component cluster. These are required so that peer-to-peer communication (e.g. between Kafka Brokers) and communication between components (e.g. from Connect workers to Kafka) will work.
+- The certificate authorities for the component to trust, including the authorities used to issue server certificates for any Confluent component cluster. These are required so that peer-to-peer communication (e.g. between Kafka Brokers) and communication between components (e.g. from Connect workers to Kafka) will work.
 - The component’s server certificate (public key)
 - The component’s server private key

storage/README.md (+2 -2)

@@ -7,10 +7,10 @@ Before continuing with the scenario, ensure that you have set up the

 ## Set the current tutorial directory

-Set the tutorial directory under the directory you downloaded this Github repo:
+Set the tutorial directory under the directory you downloaded this GitHub repo:

 ```
-export TUTORIAL_HOME=<Github repo directory>/storage
+export TUTORIAL_HOME=<GitHub repo directory>/storage
 ```

 ## Deploy Confluent for Kubernetes

troubleshooting/README.md (+1 -1)

@@ -97,7 +97,7 @@ the auto-generated certificate is expired and requires the Operator to be notifi



-### ConfluentRoleBindings associaed with older Kafka cluster
+### ConfluentRoleBindings associated with older Kafka cluster

 If CP RBAC enabled Kafka Cluster is deleted and redeployed with same cluster name. Then, all the existing `ConfluentRolebindings` are still associated with previous Kafka Cluster. To make existing `ConfluentRolebindings` sync with new Kafka cluster, you need to manually delete the `KafkaRestClass` and recreate it again.
