docs/solr-cloud/solr-cloud-crd.md (+10 -1)
@@ -15,7 +15,9 @@ If neither is provided, ephemeral storage will be used by default.
 These options can be found in `SolrCloud.spec.dataStorage`

 - **`persistent`**
-  - **`reclaimPolicy`** - Either `Retain`, the default, or `Delete`.
+  - **`reclaimPolicy`** -
+    _Since v0.2.7_ -
+    Either `Retain`, the default, or `Delete`.
     This describes the lifecycle of PVCs that are deleted after the SolrCloud is deleted, or the SolrCloud is scaled down and the pods that the PVCs map to no longer exist.
     `Retain` is used by default, as that is the default Kubernetes policy, to leave PVCs in case pods, or StatefulSets are deleted accidentally.

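For context (not part of the diff above), here is a minimal sketch of how the `reclaimPolicy` option might be set on a SolrCloud. The YAML nesting follows the `SolrCloud.spec.dataStorage` / `persistent` / `reclaimPolicy` path referenced above; the resource name and the `Delete` choice are illustrative only.

```yaml
# Illustrative sketch only -- field layout assumed from the
# SolrCloud.spec.dataStorage -> persistent -> reclaimPolicy path described above.
apiVersion: solr.apache.org/v1beta1
kind: SolrCloud
metadata:
  name: example
spec:
  dataStorage:
    persistent:
      reclaimPolicy: Delete   # PVCs are removed with the pods; Retain is the default
```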
@@ -37,6 +39,7 @@ These options can be found in `SolrCloud.spec.dataStorage`
 Only use this option when you require restoring the same backup to multiple SolrClouds.

 ## Update Strategy
+_Since v0.2.7_

 The SolrCloud CRD provides users the ability to define how Pod updates should be managed, through `SolrCloud.Spec.updateStrategy`.
 This provides the following options:
@@ -59,6 +62,7 @@ Under `SolrCloud.Spec.updateStrategy`:
 - **`maxShardReplicasUnavailable`** - The `maxShardReplicasUnavailable` is calculated independently for each shard, as the percentage of the number of replicas for that shard.

 ## Addressability
+_Since v0.2.6_

 The SolrCloud CRD provides users the ability to define how it is addressed, through the following options:

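To make the update-strategy options above concrete, here is a hedged sketch. Only `updateStrategy` and `maxShardReplicasUnavailable` come from the text above; the `managed` grouping and the value shown are assumptions for illustration.

```yaml
# Hedged sketch -- the "managed" sub-object is an assumption; the doc only states
# that maxShardReplicasUnavailable lives under SolrCloud.Spec.updateStrategy.
spec:
  updateStrategy:
    managed:
      maxShardReplicasUnavailable: 1   # evaluated independently per shard, as described above
```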
@@ -112,6 +116,7 @@ Under `spec.zookeeperRef`:
 - **`chroot`** - The chroot to use for the cluster

 #### ACLs
+_Since v0.2.7_

 The Solr Operator allows users to specify ZK ACL references in their SolrCloud CRDs.
 The user must specify the name of a secret, residing in the same namespace as the cloud, that contains an ACL username value and an ACL password value.
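A hedged sketch of what such an ACL reference might look like. The `connectionInfo`, `acl`, `secret`, `usernameKey`, and `passwordKey` field names are assumptions for illustration; the diff above only states that a secret holding a username value and a password value must be referenced.

```yaml
# Assumed field names for illustration only.
spec:
  zookeeperRef:
    connectionInfo:
      acl:
        secret: zk-acl-secret    # Secret in the same namespace as the SolrCloud
        usernameKey: username    # key in the Secret holding the ACL username
        passwordKey: password    # key in the Secret holding the ACL password
```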
@@ -138,6 +143,7 @@ each solrCloud that has this option specified.
 The startup parameter `zookeeper-operator` must be provided on startup of the solr-operator for this parameter to be available.

 ## Override Built-in Solr Configuration Files
+_Since v0.2.7_

 The Solr operator deploys well-configured SolrCloud instances with minimal input required from human operators.
 As such, the operator installs various configuration files automatically, including `solr.xml` for node-level settings and `log4j2.xml` for logging.
@@ -222,6 +228,7 @@ If the custom `solr.xml` changes in the user-provided ConfigMap, then the operat
 To summarize, if you need to customize `solr.xml`, provide your own version in a ConfigMap and changes made to the XML in the ConfigMap are automatically applied to your Solr pods.

 ### Custom Log Configuration
+_Since v0.3.0_

 By default, the Solr Docker image configures Solr to load its log configuration from `/var/solr/log4j2.xml`.
 If you need to fine-tune the log configuration, then you can provide a custom `log4j2.xml` in a ConfigMap using the same basic process as described in the previous section for customizing `solr.xml`. If supplied, the operator overrides the log config using the `LOG4J_PROPS` env var.
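As a rough illustration of the ConfigMap approach described above: the ConfigMap name and the trimmed log4j2 configuration are assumptions, and only the "provide a custom `log4j2.xml` in a ConfigMap" mechanism comes from the text.

```yaml
# Illustrative sketch: a ConfigMap carrying a custom log4j2.xml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-solr-log-config   # hypothetical name
data:
  log4j2.xml: |
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration>
      <Appenders>
        <Console name="stdout" target="SYSTEM_OUT">
          <PatternLayout pattern="%d{ISO8601} [%t] %-5p %c %m%n"/>
        </Console>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="stdout"/>
        </Root>
      </Loggers>
    </Configuration>
```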
@@ -254,6 +261,7 @@ data:
 ```

 ## Enable TLS Between Solr Pods
+_Since v0.3.0_

 A common approach to securing traffic to your Solr cluster is to perform **TLS termination** at the Ingress and leave all traffic between Solr pods un-encrypted.
 However, depending on how you expose Solr on your network, you may also want to encrypt traffic between Solr pods.
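For orientation only, a heavily hedged sketch of what a pod-to-pod TLS configuration might look like; the `solrTLS` block and every field name in it are assumptions not shown in this diff.

```yaml
# Assumed structure for illustration; not taken from the diff above.
spec:
  solrTLS:
    pkcs12Secret:
      name: my-solr-keystore        # hypothetical Secret holding a PKCS12 keystore
      key: keystore.p12
    keyStorePasswordSecret:
      name: my-solr-keystore
      key: password-key
```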
@@ -547,6 +555,7 @@ The example settings above will result in your Solr pods getting names like: `<n
 which you can request TLS certificates from LetsEncrypt assuming you own the `k8s.solr.cloud` domain.

 ## Authentication and Authorization
+_Since v0.3.0_

 All well-configured Solr clusters should require users to authenticate, even for read-only operations. Even if you want
 to allow anonymous query requests from unknown users, you should make this explicit using Solr's rule-based authorization
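A hedged sketch of how basic authentication might be enabled on the SolrCloud side; the `solrSecurity` block and its field names are assumptions for illustration, since the diff above only introduces the section heading.

```yaml
# Assumed field names for illustration only.
spec:
  solrSecurity:
    authenticationType: Basic
    basicAuthSecret: my-basic-auth-secret   # hypothetical user-provided Secret; omit if the operator bootstraps security.json for you
```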
docs/solr-prometheus-exporter/README.md (+23 -17)
@@ -27,6 +27,7 @@ This name can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.name`
 This info can be provided at: `SolrPrometheusExporter.spec.solrRef.cloud.zkConnectionInfo`, with keys `internalConnectionString` and `chroot`

 #### ACLs
+_Since v0.2.7_

 The Prometheus Exporter can be set up to use ZK ACLs when connecting to Zookeeper.

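A hedged sketch of pointing the exporter at ZooKeeper using the connection keys named above; the `acl` block mirrors the SolrCloud ACL reference, and its field names plus the ZooKeeper address are assumptions for illustration.

```yaml
# internalConnectionString and chroot come from the text above; everything else is assumed.
spec:
  solrReference:
    cloud:
      zkConnectionInfo:
        internalConnectionString: "zk-0.zk-hs:2181,zk-1.zk-hs:2181"   # hypothetical ZK address
        chroot: "/solr"
        acl:
          secret: zk-acl-secret
          usernameKey: username
          passwordKey: password
```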
@@ -53,11 +54,13 @@ In order to use this functionality, use the following spec field:


 ### Solr TLS
+_Since v0.3.0_

 If you're relying on a self-signed certificate (or any certificate that requires importing the CA into the Java trust store) for Solr pods, then the Prometheus Exporter will not be able to make requests for metrics.
 You'll need to duplicate your TLS config from your SolrCloud CRD definition to your Prometheus exporter CRD definition as shown in the example below:

-```
+```yaml
+spec:
   solrReference:
     cloud:
       name: "dev"
@@ -74,10 +77,11 @@ You'll need to duplicate your TLS config from your SolrCloud CRD definition to y
 **This only applies to the SolrJ client the exporter uses to make requests to your TLS-enabled Solr pods and does not enable HTTPS for the exporter service.**

 ### Prometheus Exporter with Basic Auth
+_Since v0.3.0_

 If you enable basic auth for your SolrCloud cluster, then you need to point the Prometheus exporter at the basic auth secret containing the credentials for making API requests to `/admin/metrics` and `/admin/ping` for all collections.

-```
+```yaml
 spec:
   solrReference:
     basicAuthSecret: user-provided-secret
@@ -86,7 +90,7 @@ If you chose option #1 to have the operator bootstrap `security.json` for you, t
 `<CLOUD>-solrcloud-basic-auth`. If you chose option #2, then pass the same name that you used for your SolrCloud CRD instance.

 This user account will need access to the following endpoints in Solr:
-```
+```json
 {
   "name": "k8s-metrics",
   "role": "k8s",
@@ -112,7 +116,7 @@ The Prometheus Stack provides all the services you need for monitoring Kubernete
 ### Install Prometheus Stack

 Begin by installing the Prometheus Stack in the `monitoring` namespace with Helm release name `mon`:
 _Refer to the Prometheus stack documentation for detailed instructions._

 Verify you have Prometheus / Grafana pods running in the `monitoring` namespace:
-```
+```bash
 kubectl get pods -n monitoring
 ```

 ### Deploy Prometheus Exporter for Solr Metrics

 Next, deploy a Solr Prometheus exporter for the SolrCloud you want to capture metrics from in the namespace where you're running SolrCloud, not in the `monitoring` namespace.
 For instance, the following example creates a Prometheus exporter named `dev-prom-exporter` for a SolrCloud named `dev` deployed in the `dev` namespace:
-```
+```yaml
 apiVersion: solr.apache.org/v1beta1
 kind: SolrPrometheusExporter
 metadata:
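The hunk above cuts off at `metadata:`. For orientation, here is a hedged sketch of how a complete `dev-prom-exporter` definition might look, using the names given in the prose above; the `numThreads` value is an illustrative assumption.

```yaml
# Sketch only -- names come from the prose above; numThreads is assumed.
apiVersion: solr.apache.org/v1beta1
kind: SolrPrometheusExporter
metadata:
  name: dev-prom-exporter
  namespace: dev
spec:
  solrReference:
    cloud:
      name: "dev"
  numThreads: 4   # assumed tuning knob for scrape parallelism
```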
@@ -161,7 +165,7 @@ spec:
 ```

 Look at the logs for your exporter pod to ensure it is running properly (notice we're using a label filter vs. addressing the pod by name):
 You should see some log messages that look similar to:
@@ -171,12 +175,14 @@ INFO - <timestamp>; org.apache.solr.prometheus.collector.SchedulerMetricsCollec
 ```

 You can also see the metrics that are exported by the pod by opening a port-forward to the exporter pod and hitting port 8080 with cURL:
-```
+```bash
 kubectl port-forward $(kubectl get pod -l solr-prometheus-exporter=dev-prom-exporter --no-headers -o custom-columns=":metadata.name") 8080

 curl http://localhost:8080/metrics
 ```
+
 #### Customize Prometheus Exporter Config
+_Since v0.3.0_

 Each Solr pod exposes metrics as JSON from the `/solr/admin/metrics` endpoint. To see this in action, open a port-forward to a Solr pod and send a request to `http://localhost:8983/solr/admin/metrics`.

@@ -187,13 +193,13 @@ By default, the Solr operator configures the exporter to use the config from `/o
 If you need to customize the metrics exposed to Prometheus, you'll need to provide a custom config XML via a ConfigMap and then configure the exporter CRD to point to it.

 For instance, let's imagine you need to expose a new metric to Prometheus. Start by pulling the default config from the exporter pod using:
-```
+```bash
 EXPORTER_POD_ID=$(kubectl get pod -l solr-prometheus-exporter=dev-prom-exporter --no-headers -o custom-columns=":metadata.name")

 Create a ConfigMap with your customized XML config under the `solr-prometheus-exporter.xml` key.
-```
+```yaml
 apiVersion: v1
 data:
   solr-prometheus-exporter.xml: |
@@ -208,7 +214,7 @@ metadata:
 _Note: Using `kubectl create configmap --from-file` scrambles the XML formatting, so we recommend defining the configmap YAML as shown above to keep the XML formatted properly._

 Point to the custom ConfigMap in your Prometheus exporter definition using:
-```
+```yaml
 spec:
   customKubeOptions:
     configMapOptions:
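The hunk above ends before it shows how the ConfigMap is actually referenced. A hedged guess at the completed snippet follows; the `providedConfigMap` field name and the ConfigMap name are assumptions for illustration.

```yaml
# Assumed completion for illustration only.
spec:
  customKubeOptions:
    configMapOptions:
      providedConfigMap: custom-exporter-xml   # hypothetical ConfigMap holding solr-prometheus-exporter.xml
```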
@@ -224,24 +230,24 @@ The Solr operator creates a K8s `ClusterIP` service for load-balancing across ex
 For our example `dev-prom-exporter`, the service name is: `dev-prom-exporter-solr-metrics`

 Take a quick look at the labels on the service as you'll need them to define a service monitor in the next step.
-```
+```bash
 kubectl get svc dev-prom-exporter-solr-metrics --show-labels
 ```

 Also notice the ports that are exposed for this service:
-```
+```bash
 kubectl get svc dev-prom-exporter-solr-metrics --output jsonpath="{@.spec.ports}"

 The Prometheus operator (deployed with the Prometheus stack) uses service monitors to find which services to scrape metrics from. Thus, we need to define a service monitor for our exporter service `dev-prom-exporter-solr-metrics`.
 If you're not using the Prometheus operator, then you do not need a service monitor as Prometheus will scrape metrics using the `prometheus.io/*` pod annotations on the exporter service; see [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/).

-```
+```yaml
 apiVersion: monitoring.coreos.com/v1
 kind: ServiceMonitor
 metadata:
@@ -267,7 +273,7 @@ There are a few important aspects of this service monitor definition:
 * The `endpoints` section identifies the port to scrape metrics from and the scrape interval; recall our service exposes the port as `solr-metrics`

 Save the service monitor YAML to a file, such as `dev-prom-service-monitor.yaml`, and apply it to the `monitoring` namespace: