Commit 5ecd741

Add since versions to docs for features (#232)
Co-authored-by: Houston Putman <[email protected]>
1 parent a71fe82 commit 5ecd741

File tree: 3 files changed, +34 -18 lines changed


docs/solr-cloud/managed-updates.md

+1
@@ -1,4 +1,5 @@
 # Managed SolrCloud Rolling Updates
+_Since v0.2.7_

 Solr Clouds are complex distributed systems, and thus require a more delicate and informed approach to rolling updates.

docs/solr-cloud/solr-cloud-crd.md

+10-1
@@ -15,7 +15,9 @@ If neither is provided, ephemeral storage will be used by default.
 These options can be found in `SolrCloud.spec.dataStorage`

 - **`persistent`**
-  - **`reclaimPolicy`** - Either `Retain`, the default, or `Delete`.
+  - **`reclaimPolicy`** -
+    _Since v0.2.7_ -
+    Either `Retain`, the default, or `Delete`.
     This describes the lifecycle of PVCs that are deleted after the SolrCloud is deleted, or the SolrCloud is scaled down and the pods that the PVCs map to no longer exist.
     `Retain` is used by default, as that is the default Kubernetes policy, to leave PVCs in case pods, or StatefulSets are deleted accidentally.

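As a hedged illustration of where the `reclaimPolicy` option above sits in the CRD spec (the `dataStorage.persistent` path follows the `SolrCloud.spec.dataStorage` layout named above; the resource name is illustrative):

```yaml
apiVersion: solr.apache.org/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  dataStorage:
    persistent:
      # Delete PVCs when the SolrCloud is deleted or scaled down;
      # omit this field to keep the default Retain behavior.
      reclaimPolicy: Delete
```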
@@ -37,6 +39,7 @@ These options can be found in `SolrCloud.spec.dataStorage`
 Only use this option when you require restoring the same backup to multiple SolrClouds.

 ## Update Strategy
+_Since v0.2.7_

 The SolrCloud CRD provides users the ability to define how Pod updates should be managed, through `SolrCloud.Spec.updateStrategy`.
 This provides the following options:
@@ -59,6 +62,7 @@ Under `SolrCloud.Spec.updateStrategy`:
 - **`maxShardReplicasUnavailable`** - The `maxShardReplicasUnavailable` is calculated independently for each shard, as the percentage of the number of replicas for that shard.

 ## Addressability
+_Since v0.2.6_

 The SolrCloud CRD provides users the ability to define how it is addressed, through the following options:

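As a hedged sketch of the update strategy described above, assuming a `method` field and a `managed` sub-section hold these options (those two names are not confirmed by the hunk above):

```yaml
spec:
  updateStrategy:
    method: Managed   # assumed value; lets the operator coordinate pod updates
    managed:
      # Calculated independently for each shard, as described above
      maxShardReplicasUnavailable: 1
```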
@@ -112,6 +116,7 @@ Under `spec.zookeeperRef`:
 - **`chroot`** - The chroot to use for the cluster

 #### ACLs
+_Since v0.2.7_

 The Solr Operator allows for users to specify ZK ACL references in their Solr Cloud CRDs.
 The user must specify the name of a secret that resides in the same namespace as the cloud, that contains an ACL username value and an ACL password value.
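A minimal sketch of such a secret reference, assuming the ACL block lives under `spec.zookeeperRef.connectionInfo` with `secret`, `usernameKey`, and `passwordKey` fields (names and values are illustrative, not taken from the hunk above):

```yaml
spec:
  zookeeperRef:
    connectionInfo:
      internalConnectionString: "zk-hs.default:2181"   # illustrative connection string
      chroot: "/solr"
      acl:
        secret: zk-acl-credentials   # secret in the same namespace as the cloud
        usernameKey: username        # key holding the ACL username value
        passwordKey: password        # key holding the ACL password value
```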
@@ -138,6 +143,7 @@ each solrCloud that has this option specified.
 The startup parameter `zookeeper-operator` must be provided on startup of the solr-operator for this parameter to be available.

 ## Override Built-in Solr Configuration Files
+_Since v0.2.7_

 The Solr operator deploys well-configured SolrCloud instances with minimal input required from human operators.
 As such, the operator installs various configuration files automatically, including `solr.xml` for node-level settings and `log4j2.xml` for logging.
@@ -222,6 +228,7 @@ If the custom `solr.xml` changes in the user-provided ConfigMap, then the operat
 To summarize, if you need to customize `solr.xml`, provide your own version in a ConfigMap and changes made to the XML in the ConfigMap are automatically applied to your Solr pods.

 ### Custom Log Configuration
+_Since v0.3.0_

 By default, the Solr Docker image configures Solr to load its log configuration from `/var/solr/log4j2.xml`.
 If you need to fine-tune the log configuration, then you can provide a custom `log4j2.xml` in a ConfigMap using the same basic process as described in the previous section for customizing `solr.xml`. If supplied, the operator overrides the log config using the `LOG4J_PROPS` env var.
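A hedged sketch of the ConfigMap approach, assuming the operator looks for a `log4j2.xml` key (the ConfigMap name, and the `customSolrKubeOptions.configMapOptions.providedConfigMap` reference below, are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-solr-log-config
data:
  log4j2.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
      <!-- your customized log4j2 configuration -->
    </Configuration>
```

The SolrCloud would then reference this ConfigMap, for example via `spec.customSolrKubeOptions.configMapOptions.providedConfigMap: custom-solr-log-config` (field path assumed).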
@@ -254,6 +261,7 @@ data:
 ```

 ## Enable TLS Between Solr Pods
+_Since v0.3.0_

 A common approach to securing traffic to your Solr cluster is to perform **TLS termination** at the Ingress and leave all traffic between Solr pods un-encrypted.
 However, depending on how you expose Solr on your network, you may also want to encrypt traffic between Solr pods.
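A hedged sketch of what a pod-to-pod TLS block might look like, assuming a `solrTLS` section that points at a PKCS12 keystore secret (the field and secret names are assumptions, not taken from the hunk above):

```yaml
spec:
  solrTLS:
    pkcs12Secret:
      name: selfsigned-cert-keystore   # illustrative secret holding a keystore.p12
      key: keystore.p12
    keyStorePasswordSecret:
      name: selfsigned-cert-keystore
      key: password-key
```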
@@ -547,6 +555,7 @@ The example settings above will result in your Solr pods getting names like: `<n
 which you can request TLS certificates from LetsEncrypt assuming you own the `k8s.solr.cloud` domain.

 ## Authentication and Authorization
+_Since v0.3.0_

 All well-configured Solr clusters should enforce users to authenticate, even for read-only operations. Even if you want
 to allow anonymous query requests from unknown users, you should make this explicit using Solr's rule-based authorization

docs/solr-prometheus-exporter/README.md

+23-17
@@ -27,6 +27,7 @@ This name can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.name`
 This info can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo`, with keys `internalConnectionString` and `chroot`

 #### ACLs
+_Since v0.2.7_

 The Prometheus Exporter can be set up to use ZK ACLs when connecting to Zookeeper.

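A hedged sketch of pointing the exporter at ZK ACL credentials, assuming the `acl` block sits under `zkConnectionInfo` with the same shape as on the SolrCloud side (not shown in the hunk above):

```yaml
spec:
  solrReference:
    cloud:
      zkConnectionInfo:
        internalConnectionString: "zk-hs.default:2181"   # illustrative
        chroot: "/solr"
        acl:
          secret: zk-acl-credentials
          usernameKey: username
          passwordKey: password
```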
@@ -53,11 +54,13 @@ In order to use this functionality, use the following spec field:


 ### Solr TLS
+_Since v0.3.0_

 If you're relying on a self-signed certificate (or any certificate that requires importing the CA into the Java trust store) for Solr pods, then the Prometheus Exporter will not be able to make requests for metrics.
 You'll need to duplicate your TLS config from your SolrCloud CRD definition to your Prometheus exporter CRD definition as shown in the example below:

-```
+```yaml
+spec:
   solrReference:
     cloud:
       name: "dev"
@@ -74,10 +77,11 @@ You'll need to duplicate your TLS config from your SolrCloud CRD definition to y
 **This only applies to the SolrJ client the exporter uses to make requests to your TLS-enabled Solr pods and does not enable HTTPS for the exporter service.**

 ### Prometheus Exporter with Basic Auth
+_Since v0.3.0_

 If you enable basic auth for your SolrCloud cluster, then you need to point the Prometheus exporter at the basic auth secret containing the credentials for making API requests to `/admin/metrics` and `/admin/ping` for all collections.

-```
+```yaml
 spec:
   solrReference:
     basicAuthSecret: user-provided-secret
@@ -86,7 +90,7 @@ If you chose option #1 to have the operator bootstrap `security.json` for you, t
 `<CLOUD>-solrcloud-basic-auth`. If you chose option #2, then pass the same name that you used for your SolrCloud CRD instance.

 This user account will need access to the following endpoints in Solr:
-```
+```json
 {
   "name": "k8s-metrics",
   "role": "k8s",
@@ -112,7 +116,7 @@ The Prometheus Stack provides all the services you need for monitoring Kubernete
 ### Install Prometheus Stack

 Begin by installing the Prometheus Stack in the `monitoring` namespace with Helm release name `mon`:
-```
+```bash
 MONITOR_NS=monitoring
 PROM_OPER_REL=mon

134138
_Refer to the Prometheus stack documentation for detailed instructions._
135139

136140
Verify you have Prometheus / Grafana pods running in the `monitoring` namespace:
137-
```
141+
```bash
138142
kubectl get pods -n monitoring
139143
```
140144

141145
### Deploy Prometheus Exporter for Solr Metrics
142146

143147
Next, deploy a Solr Prometheus exporter for the SolrCloud you want to capture metrics from in the namespace where you're running SolrCloud, not in the `monitoring` namespace.
144148
For instance, the following example creates a Prometheus exporter named `dev-prom-exporter` for a SolrCloud named `dev` deployed in the `dev` namespace:
145-
```
149+
```yaml
146150
apiVersion: solr.apache.org/v1beta1
147151
kind: SolrPrometheusExporter
148152
metadata:
@@ -161,7 +165,7 @@ spec:
161165
```
162166

163167
Look at the logs for your exporter pod to ensure it is running properly (notice we're using a label filter vs. addressing the pod by name):
164-
```
168+
```bash
165169
kubectl logs -l solr-prometheus-exporter=dev-prom-exporter
166170
```
167171
You should see some log messages that look similar to:
@@ -171,12 +175,14 @@ INFO - <timestamp>; org.apache.solr.prometheus.collector.SchedulerMetricsCollec
171175
```
172176

173177
You can also see the metrics that are exported by the pod by opening a port-forward to the exporter pod and hitting port 8080 with cURL:
174-
```
178+
```bash
175179
kubectl port-forward $(kubectl get pod -l solr-prometheus-exporter=dev-prom-exporter --no-headers -o custom-columns=":metadata.name") 8080
176180
177181
curl http://localhost:8080/metrics
178182
```
183+
179184
#### Customize Prometheus Exporter Config
185+
_Since v0.3.0_
180186

181187
Each Solr pod exposes metrics as JSON from the `/solr/admin/metrics` endpoint. To see this in action, open a port-forward to a Solr pod and send a request to `http://localhost:8983/solr/admin/metrics`.
182188

@@ -187,13 +193,13 @@ By default, the Solr operator configures the exporter to use the config from `/o
 If you need to customize the metrics exposed to Prometheus, you'll need to provide a custom config XML via a ConfigMap and then configure the exporter CRD to point to it.

 For instance, let's imagine you need to expose a new metric to Prometheus. Start by pulling the default config from the exporter pod using:
-```
+```bash
 EXPORTER_POD_ID=$(kubectl get pod -l solr-prometheus-exporter=dev-prom-exporter --no-headers -o custom-columns=":metadata.name")

 kubectl cp $EXPORTER_POD_ID:/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml ./solr-exporter-config.xml
 ```
 Create a ConfigMap with your customized XML config under the `solr-prometheus-exporter.xml` key.
-```
+```yaml
 apiVersion: v1
 data:
   solr-prometheus-exporter.xml: |
@@ -208,7 +214,7 @@ metadata:
 _Note: Using `kubectl create configmap --from-file` scrambles the XML formatting, so we recommend defining the configmap YAML as shown above to keep the XML formatted properly._

 Point to the custom ConfigMap in your Prometheus exporter definition using:
-```
+```yaml
 spec:
   customKubeOptions:
     configMapOptions:
@@ -224,24 +230,24 @@ The Solr operator creates a K8s `ClusterIP` service for load-balancing across ex
 For our example `dev-prom-exporter`, the service name is: `dev-prom-exporter-solr-metrics`

 Take a quick look at the labels on the service as you'll need them to define a service monitor in the next step.
-```
+```bash
 kubectl get svc dev-prom-exporter-solr-metrics --show-labels
 ```

 Also notice the ports that are exposed for this service:
-```
+```bash
 kubectl get svc dev-prom-exporter-solr-metrics --output jsonpath="{@.spec.ports}"
 ```
 You should see output similar to:
-```
+```json
 [{"name":"solr-metrics","port":80,"protocol":"TCP","targetPort":8080}]
 ```

 ### Create a Service Monitor
 The Prometheus operator (deployed with the Prometheus stack) uses service monitors to find which services to scrape metrics from. Thus, we need to define a service monitor for our exporter service `dev-prom-exporter-solr-metrics`.
 If you're not using the Prometheus operator, then you do not need a service monitor as Prometheus will scrape metrics using the `prometheus.io/*` pod annotations on the exporter service; see [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/).

-```
+```yaml
 apiVersion: monitoring.coreos.com/v1
 kind: ServiceMonitor
 metadata:
@@ -267,7 +273,7 @@ There are a few important aspects of this service monitor definition:
 * The `endpoints` section identifies the port to scrape metrics from and the scrape interval; recall our service exposes the port as `solr-metrics`

 Save the service monitor YAML to a file, such as `dev-prom-service-monitor.yaml` and apply to the `monitoring` namespace:
-```
+```bash
 kubectl apply -f dev-prom-service-monitor.yaml -n monitoring
 ```

@@ -276,7 +282,7 @@ Prometheus is now configured to scrape metrics from the exporter service.
 ### Load Solr Dashboard in Grafana

 You can expose Grafana via a LoadBalancer (or Ingress) but for now, we'll just open a port-forward to port 3000 to access Grafana:
-```
+```bash
 GRAFANA_POD_ID=$(kubectl get pod -l app.kubernetes.io/name=grafana --no-headers -o custom-columns=":metadata.name" -n monitoring)
 kubectl port-forward -n monitoring $GRAFANA_POD_ID 3000
 ```
