Commit 49158ec

Authored by Rafia Sabih and Jan-M
Connection pooler for replica (zalando#1127)
* Enable connection pooler for replica
* Refactor code for connection pooler
  - Move all the relevant code to a separate file
  - Move all the related tests to a separate file
  - Avoid using cluster where not required
  - Simplify the logic in sync and other methods
  - Clean up duplicated or unused code
* Fix labels for the replica pods
* Update deleteConnectionPooler to include role
* Add test cases and other changes
  - Fix unit test and delete secret only when required
  - Make sure we use an empty, fresh cluster for every test case
* Enhance e2e test
* Disable pooler in the complete manifest, as it is the source for e2e tests too and creates unnecessary pooler setups

Co-authored-by: Rafia Sabih <[email protected]>
Co-authored-by: Jan Mussler <[email protected]>
1 parent 3fed565 commit 49158ec

25 files changed (+2185 −1736 lines)

charts/postgres-operator/crds/postgresqls.yaml

Lines changed: 3 additions & 1 deletion
@@ -190,6 +190,8 @@ spec:
         type: string
       enableConnectionPooler:
         type: boolean
+      enableReplicaConnectionPooler:
+        type: boolean
       enableLogicalBackup:
         type: boolean
       enableMasterLoadBalancer:
@@ -603,4 +605,4 @@ spec:
   status:
     type: object
     additionalProperties:
-      type: string
+      type: string

docs/reference/cluster_manifest.md

Lines changed: 13 additions & 6 deletions
@@ -151,10 +151,15 @@ These parameters are grouped directly under the `spec` key in the manifest.
   configured (so you can override the operator configuration). Optional.
 
 * **enableConnectionPooler**
-  Tells the operator to create a connection pooler with a database. If this
-  field is true, a connection pooler deployment will be created even if
+  Tells the operator to create a connection pooler with a database for the master
+  service. If this field is true, a connection pooler deployment will be created even if
   `connectionPooler` section is empty. Optional, not set by default.
 
+* **enableReplicaConnectionPooler**
+  Tells the operator to create a connection pooler with a database for the replica
+  service. If this field is true, a connection pooler deployment for replica
+  will be created even if `connectionPooler` section is empty. Optional, not set by default.
+
 * **enableLogicalBackup**
   Determines if the logical backup of this cluster should be taken and uploaded
   to S3. Default: false. Optional.
@@ -241,10 +246,10 @@ explanation of `ttl` and `loop_wait` parameters.
 
 * **synchronous_mode**
   Patroni `synchronous_mode` parameter value. The default is set to `false`. Optional.
-
+
 * **synchronous_mode_strict**
   Patroni `synchronous_mode_strict` parameter value. Can be used in addition to `synchronous_mode`. The default is set to `false`. Optional.
-
+
 ## Postgres container resources
 
 Those parameters define [CPU and memory requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
@@ -397,8 +402,10 @@ CPU and memory limits for the sidecar container.
 
 Parameters are grouped under the `connectionPooler` top-level key and specify
 configuration for connection pooler. If this section is not empty, a connection
-pooler will be created for a database even if `enableConnectionPooler` is not
-present.
+pooler will be created for master service only even if `enableConnectionPooler`
+is not present. But if this section is present then it defines the configuration
+for both master and replica pooler services (if `enableReplicaConnectionPooler`
+is enabled).
 
 * **numberOfInstances**
   How many instances of connection pooler to create.
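Taken together, the documented behavior corresponds to a manifest along these lines. This is an illustrative sketch, not part of the commit: only `enableConnectionPooler`, `enableReplicaConnectionPooler`, and `numberOfInstances` appear in the hunks above; the metadata and the instance count value are assumptions.

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster   # illustrative cluster name
spec:
  # pooler deployment for the master service
  enableConnectionPooler: true
  # new in this commit: pooler deployment for the replica service
  enableReplicaConnectionPooler: true
  # if present, this section configures both master and replica poolers
  connectionPooler:
    numberOfInstances: 2       # assumed value for illustration
```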

docs/user.md

Lines changed: 7 additions & 1 deletion
@@ -807,11 +807,17 @@ manifest:
 ```yaml
 spec:
   enableConnectionPooler: true
+  enableReplicaConnectionPooler: true
 ```
 
 This will tell the operator to create a connection pooler with default
 configuration, through which one can access the master via a separate service
-`{cluster-name}-pooler`. In most of the cases the
+`{cluster-name}-pooler`. With the first option, connection pooler for master service
+is created and with the second option, connection pooler for replica is created.
+Note that both of these flags are independent of each other and user can set or
+unset any of them as per their requirements without any effect on the other.
+
+In most of the cases the
 [default configuration](reference/operator_parameters.md#connection-pooler-configuration)
 should be good enough. To configure a new connection pooler individually for
 each Postgres cluster, specify:
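Since the updated text stresses that the two flags are independent, a replica-only pooler is a valid configuration. A minimal sketch (editor's illustration, not taken from the commit):

```yaml
spec:
  # only the replica pooler deployment is created;
  # no master pooler exists because enableConnectionPooler is unset
  enableReplicaConnectionPooler: true
```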

e2e/scripts/watch_objects.sh

Lines changed: 3 additions & 1 deletion
@@ -8,7 +8,9 @@ kubectl get statefulset -o jsonpath='{.items..metadata.annotations.zalando-postg
 echo
 echo
 echo 'Pods'
-kubectl get pods -l application=spilo -l name=postgres-operator -l application=db-connection-pooler -o wide --all-namespaces
+kubectl get pods -l application=spilo -o wide --all-namespaces
+echo
+kubectl get pods -l application=db-connection-pooler -o wide --all-namespaces
 echo
 echo 'Statefulsets'
 kubectl get statefulsets --all-namespaces

e2e/tests/k8s_api.py

Lines changed: 20 additions & 23 deletions
@@ -1,19 +1,14 @@
 import json
-import unittest
 import time
-import timeout_decorator
 import subprocess
 import warnings
 
-from datetime import datetime
 from kubernetes import client, config
 from kubernetes.client.rest import ApiException
 
 
 def to_selector(labels):
-    return ",".join(["=".join(l) for l in labels.items()])
+    return ",".join(["=".join(lbl) for lbl in labels.items()])
 
 
 class K8sApi:
@@ -43,8 +38,8 @@ class K8s:
 
     def __init__(self, labels='x=y', namespace='default'):
         self.api = K8sApi()
-        self.labels=labels
-        self.namespace=namespace
+        self.labels = labels
+        self.namespace = namespace
 
     def get_pg_nodes(self, pg_cluster_name, namespace='default'):
         master_pod_node = ''
@@ -81,7 +76,7 @@ def get_operator_pod(self):
             'default', label_selector='name=postgres-operator'
         ).items
 
-        pods = list(filter(lambda x: x.status.phase=='Running', pods))
+        pods = list(filter(lambda x: x.status.phase == 'Running', pods))
 
         if len(pods):
             return pods[0]
@@ -110,7 +105,6 @@ def wait_for_pod_start(self, pod_labels, namespace='default'):
 
             time.sleep(self.RETRY_TIMEOUT_SEC)
 
-
     def get_service_type(self, svc_labels, namespace='default'):
         svc_type = ''
         svcs = self.api.core_v1.list_namespaced_service(namespace, label_selector=svc_labels, limit=1).items
@@ -213,8 +207,8 @@ def wait_for_logical_backup_job_creation(self):
         self.wait_for_logical_backup_job(expected_num_of_jobs=1)
 
     def delete_operator_pod(self, step="Delete operator pod"):
-        # patching the pod template in the deployment restarts the operator pod
-        self.api.apps_v1.patch_namespaced_deployment("postgres-operator","default", {"spec":{"template":{"metadata":{"annotations":{"step":"{}-{}".format(step, datetime.fromtimestamp(time.time()))}}}}})
+        # patching the pod template in the deployment restarts the operator pod
+        self.api.apps_v1.patch_namespaced_deployment("postgres-operator", "default", {"spec": {"template": {"metadata": {"annotations": {"step": "{}-{}".format(step, time.time())}}}}})
         self.wait_for_operator_pod_start()
 
     def update_config(self, config_map_patch, step="Updating operator deployment"):
@@ -237,7 +231,7 @@ def exec_with_kubectl(self, pod, cmd):
 
     def get_patroni_state(self, pod):
         r = self.exec_with_kubectl(pod, "patronictl list -f json")
-        if not r.returncode == 0 or not r.stdout.decode()[0:1]=="[":
+        if not r.returncode == 0 or not r.stdout.decode()[0:1] == "[":
             return []
         return json.loads(r.stdout.decode())
 
@@ -248,7 +242,7 @@ def get_operator_state(self):
         pod = pod.metadata.name
 
         r = self.exec_with_kubectl(pod, "curl localhost:8080/workers/all/status/")
-        if not r.returncode == 0 or not r.stdout.decode()[0:1]=="{":
+        if not r.returncode == 0 or not r.stdout.decode()[0:1] == "{":
             return None
 
         return json.loads(r.stdout.decode())
@@ -277,7 +271,7 @@ def get_effective_pod_image(self, pod_name, namespace='default'):
         '''
         pod = self.api.core_v1.list_namespaced_pod(
             namespace, label_selector="statefulset.kubernetes.io/pod-name=" + pod_name)
-
+
         if len(pod.items) == 0:
             return None
         return pod.items[0].spec.containers[0].image
@@ -305,8 +299,8 @@ class K8sBase:
 
     def __init__(self, labels='x=y', namespace='default'):
         self.api = K8sApi()
-        self.labels=labels
-        self.namespace=namespace
+        self.labels = labels
+        self.namespace = namespace
 
     def get_pg_nodes(self, pg_cluster_labels='cluster-name=acid-minimal-cluster', namespace='default'):
         master_pod_node = ''
@@ -434,10 +428,10 @@ def count_deployments_with_label(self, labels, namespace='default'):
     def count_pdbs_with_label(self, labels, namespace='default'):
         return len(self.api.policy_v1_beta1.list_namespaced_pod_disruption_budget(
             namespace, label_selector=labels).items)
-
+
     def count_running_pods(self, labels='application=spilo,cluster-name=acid-minimal-cluster', namespace='default'):
         pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
-        return len(list(filter(lambda x: x.status.phase=='Running', pods)))
+        return len(list(filter(lambda x: x.status.phase == 'Running', pods)))
 
     def wait_for_pod_failover(self, failover_targets, labels, namespace='default'):
         pod_phase = 'Failing over'
@@ -484,14 +478,14 @@ def exec_with_kubectl(self, pod, cmd):
 
     def get_patroni_state(self, pod):
         r = self.exec_with_kubectl(pod, "patronictl list -f json")
-        if not r.returncode == 0 or not r.stdout.decode()[0:1]=="[":
+        if not r.returncode == 0 or not r.stdout.decode()[0:1] == "[":
             return []
         return json.loads(r.stdout.decode())
 
     def get_patroni_running_members(self, pod):
         result = self.get_patroni_state(pod)
-        return list(filter(lambda x: x["State"]=="running", result))
-
+        return list(filter(lambda x: x["State"] == "running", result))
+
     def get_statefulset_image(self, label_selector="application=spilo,cluster-name=acid-minimal-cluster", namespace='default'):
         ssets = self.api.apps_v1.list_namespaced_stateful_set(namespace, label_selector=label_selector, limit=1)
         if len(ssets.items) == 0:
@@ -505,7 +499,7 @@ def get_effective_pod_image(self, pod_name, namespace='default'):
         '''
         pod = self.api.core_v1.list_namespaced_pod(
             namespace, label_selector="statefulset.kubernetes.io/pod-name=" + pod_name)
-
+
         if len(pod.items) == 0:
             return None
         return pod.items[0].spec.containers[0].image
@@ -514,10 +508,13 @@ def get_effective_pod_image(self, pod_name, namespace='default'):
 """
 Inspiriational classes towards easier writing of end to end tests with one cluster per test case
 """
+
+
 class K8sOperator(K8sBase):
     def __init__(self, labels="name=postgres-operator", namespace="default"):
         super().__init__(labels, namespace)
 
+
 class K8sPostgres(K8sBase):
     def __init__(self, labels="cluster-name=acid-minimal-cluster", namespace="default"):
         super().__init__(labels, namespace)
