Merged
1 change: 1 addition & 0 deletions .gitignore
@@ -95,6 +95,7 @@ coverage.xml

# e2e tests
e2e/manifests
e2e/tls

# Translations
*.mo
@@ -637,7 +637,7 @@ spec:
default: "pooler"
connection_pooler_image:
type: string
default: "registry.opensource.zalan.do/acid/pgbouncer:master-26"
default: "registry.opensource.zalan.do/acid/pgbouncer:master-27"
connection_pooler_max_db_connections:
type: integer
default: 60
2 changes: 1 addition & 1 deletion charts/postgres-operator/values.yaml
@@ -416,7 +416,7 @@ configConnectionPooler:
# db user for pooler to use
connection_pooler_user: "pooler"
# docker image
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-26"
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-27"
# max db connections the pooler should hold
connection_pooler_max_db_connections: 60
# default pooling mode
4 changes: 3 additions & 1 deletion docs/reference/cluster_manifest.md
@@ -543,7 +543,9 @@ for both master and replica pooler services (if `enableReplicaConnectionPooler`

## Custom TLS certificates

Those parameters are grouped under the `tls` top-level key.
These parameters are grouped under the `tls` top-level key. Note that you have
to define `spiloFSGroup` in the Postgres cluster manifest or `spilo_fsgroup` in
the global configuration before adding the `tls` section.
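
For orientation, a minimal sketch of such a manifest (the cluster name and
secret name are illustrative; `103` is the GID from the default Spilo image):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  spiloFSGroup: 103        # must be set before adding the tls section
  tls:
    secretName: "pg-tls"   # secret containing tls.crt and tls.key
```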

* **secretName**
By setting the `secretName` value, the cluster will switch to load the given
29 changes: 23 additions & 6 deletions docs/user.md
@@ -1197,14 +1197,19 @@ don't know the value, use `103` which is the GID from the default Spilo image
OpenShift allocates the users and groups dynamically (based on scc), and their
range is different in every namespace. Due to this dynamic behaviour, it's not
trivial to know at deploy time the uid/gid of the user in the cluster.
Therefore, instead of using a global `spilo_fsgroup` setting, use the
`spiloFSGroup` field per Postgres cluster.
Therefore, instead of using a global `spilo_fsgroup` setting in the operator
configuration, use the `spiloFSGroup` field in the Postgres cluster manifest.

For testing purposes, you can generate a self-signed certificate with openssl:
```sh
openssl req -x509 -nodes -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=acid.zalan.do"
```

Upload the certificate as a Kubernetes secret:
```sh
kubectl create secret tls pg-tls \
--key pg-tls.key \
--cert pg-tls.crt
--key tls.key \
--cert tls.crt
```

When doing client auth, the CA can optionally come from the same secret:
@@ -1231,8 +1236,7 @@ spec:

Optionally, the CA can be provided by a different secret:
```sh
kubectl create secret generic pg-tls-ca \
--from-file=ca.crt=ca.crt
kubectl create secret generic pg-tls-ca --from-file=ca.crt=ca.crt
```

Then configure the postgres resource with the TLS secret:
@@ -1255,3 +1259,16 @@ Alternatively, it is also possible to use

Certificate rotation is handled in the Spilo image, which checks every 5
minutes whether the certificates have changed and reloads Postgres accordingly.

### TLS certificates for connection pooler

By default, the pgBouncer image generates its own TLS certificate, just like
Spilo. When the `tls` section is specified in the manifest, it is used for the
connection pooler pod(s) as well. The security context options are hard-coded
to `runAsUser: 100` and `runAsGroup: 101`. The `fsGroup` will be the same as
for Spilo.

As of now, the operator does not sync the pooler deployment automatically,
which means that changes to the pod template are not picked up. You need to
toggle `enableConnectionPooler` off and on again to set the environment
variables, volumes, secret mounts and securityContext required for TLS support
in the pooler pod.
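
A sketch of what that toggle could look like in the cluster manifest (the
cluster name is illustrative; apply once with the pooler disabled, then again
with it re-enabled):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  # step 1: apply with the pooler disabled ...
  enableConnectionPooler: false
  # step 2: ... then set it back to true and apply again so the operator
  # recreates the pooler pods with the TLS settings from the tls section
```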
2 changes: 2 additions & 0 deletions e2e/Makefile
@@ -29,10 +29,12 @@ default: tools

clean:
	rm -rf manifests
	rm -rf tls

copy: clean
	mkdir manifests
	cp -r ../manifests .
	mkdir tls

docker: scm-source.json
	docker build -t "$(IMAGE):$(TAG)" .
6 changes: 6 additions & 0 deletions e2e/run.sh
@@ -55,13 +55,18 @@ function set_kind_api_server_ip(){
  sed -i "s/server.*$/server: https:\/\/$kind_api_server/g" "${kubeconfig_path}"
}

function generate_certificate(){
  openssl req -x509 -nodes -newkey rsa:2048 -keyout tls/tls.key -out tls/tls.crt -subj "/CN=acid.zalan.do"
}

function run_tests(){
  echo "Running tests... image: ${e2e_test_runner_image}"
  # tests modify files in ./manifests, so we mount a copy of this directory created by the e2e Makefile

  docker run --rm --network=host -e "TERM=xterm-256color" \
    --mount type=bind,source="$(readlink -f ${kubeconfig_path})",target=/root/.kube/config \
    --mount type=bind,source="$(readlink -f manifests)",target=/manifests \
    --mount type=bind,source="$(readlink -f tls)",target=/tls \
    --mount type=bind,source="$(readlink -f tests)",target=/tests \
    --mount type=bind,source="$(readlink -f exec.sh)",target=/exec.sh \
    --mount type=bind,source="$(readlink -f scripts)",target=/scripts \
@@ -82,6 +87,7 @@ function main(){
  [[ ! -f ${kubeconfig_path} ]] && start_kind
  load_operator_image
  set_kind_api_server_ip
  generate_certificate

  shift
  run_tests $@
32 changes: 32 additions & 0 deletions e2e/tests/k8s_api.py
@@ -156,6 +156,26 @@ def get_services():
        while not get_services():
            time.sleep(self.RETRY_TIMEOUT_SEC)

    def count_pods_with_volume_mount(self, mount_name, labels, namespace='default'):
        pod_count = 0
        pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
        for pod in pods:
            for mount in pod.spec.containers[0].volume_mounts:
                if mount.name == mount_name:
                    pod_count += 1

        return pod_count

    def count_pods_with_env_variable(self, env_variable_key, labels, namespace='default'):
        pod_count = 0
        pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
        for pod in pods:
            for env in pod.spec.containers[0].env:
                if env.name == env_variable_key:
                    pod_count += 1

        return pod_count

    def count_pods_with_rolling_update_flag(self, labels, namespace='default'):
        pods = self.api.core_v1.list_namespaced_pod(namespace, label_selector=labels).items
        return len(list(filter(lambda x: "zalando-postgres-operator-rolling-update-required" in x.metadata.annotations, pods)))
@@ -241,6 +261,18 @@ def update_config(self, config_map_patch, step="Updating operator deployment"):
    def patch_pod(self, data, pod_name, namespace="default"):
        self.api.core_v1.patch_namespaced_pod(pod_name, namespace, data)

    def create_tls_secret_with_kubectl(self, secret_name):
        return subprocess.run(
            ["kubectl", "create", "secret", "tls", secret_name, "--key=tls/tls.key", "--cert=tls/tls.crt"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE)

    def create_tls_ca_secret_with_kubectl(self, secret_name):
        return subprocess.run(
            ["kubectl", "create", "secret", "generic", secret_name, "--from-file=ca.crt=tls/ca.crt"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE)

    def create_with_kubectl(self, path):
        return subprocess.run(
            ["kubectl", "apply", "-f", path],
48 changes: 48 additions & 0 deletions e2e/tests/test_e2e.py
@@ -622,6 +622,49 @@ def test_cross_namespace_secrets(self):
        self.eventuallyEqual(lambda: k8s.count_secrets_with_label("cluster-name=acid-minimal-cluster,application=spilo", self.test_namespace),
                             1, "Secret not created for user in namespace")

    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
    def test_custom_ssl_certificate(self):
        '''
        Test if Spilo uses a custom SSL certificate
        '''

        k8s = self.k8s
        cluster_label = 'application=spilo,cluster-name=acid-minimal-cluster'
        tls_secret = "pg-tls"

        # get nodes of master and replica(s) (expected target of new master)
        _, replica_nodes = k8s.get_pg_nodes(cluster_label)
        self.assertNotEqual(replica_nodes, [])

        try:
            # create secret containing the ssl certificate
            result = self.k8s.create_tls_secret_with_kubectl(tls_secret)
            print("stdout: {}, stderr: {}".format(result.stdout, result.stderr))

            # enable TLS in the cluster manifest
            pg_patch_tls = {
                "spec": {
                    "spiloFSGroup": 103,
                    "tls": {
                        "secretName": tls_secret
                    }
                }
            }
            k8s.api.custom_objects_api.patch_namespaced_custom_object(
                "acid.zalan.do", "v1", "default", "postgresqls", "acid-minimal-cluster", pg_patch_tls)

            # wait for the switchover
            k8s.wait_for_pod_failover(replica_nodes, 'spilo-role=master,' + cluster_label)
            k8s.wait_for_pod_start('spilo-role=replica,' + cluster_label)

            self.eventuallyEqual(lambda: k8s.count_pods_with_env_variable("SSL_CERTIFICATE_FILE", cluster_label), 2, "TLS env variable SSL_CERTIFICATE_FILE missing in Spilo pods")
            self.eventuallyEqual(lambda: k8s.count_pods_with_env_variable("SSL_PRIVATE_KEY_FILE", cluster_label), 2, "TLS env variable SSL_PRIVATE_KEY_FILE missing in Spilo pods")
            self.eventuallyEqual(lambda: k8s.count_pods_with_volume_mount(tls_secret, cluster_label), 2, "TLS volume mount missing in Spilo pods")

        except timeout_decorator.TimeoutError:
            print('Operator log: {}'.format(k8s.get_operator_log()))
            raise

    @timeout_decorator.timeout(TEST_TIMEOUT_SEC)
    def test_enable_disable_connection_pooler(self):
        '''
@@ -653,6 +696,11 @@ def test_enable_disable_connection_pooler(self):
        self.eventuallyEqual(lambda: k8s.count_services_with_label(pooler_label), 2, "No pooler service found")
        self.eventuallyEqual(lambda: k8s.count_secrets_with_label(pooler_label), 1, "Pooler secret not created")

        # TLS still enabled so check existing env variables and volume mounts
        self.eventuallyEqual(lambda: k8s.count_pods_with_env_variable("CONNECTION_POOLER_CLIENT_TLS_CRT", pooler_label), 4, "TLS env variable CONNECTION_POOLER_CLIENT_TLS_CRT missing in pooler pods")
        self.eventuallyEqual(lambda: k8s.count_pods_with_env_variable("CONNECTION_POOLER_CLIENT_TLS_KEY", pooler_label), 4, "TLS env variable CONNECTION_POOLER_CLIENT_TLS_KEY missing in pooler pods")
        self.eventuallyEqual(lambda: k8s.count_pods_with_volume_mount("pg-tls", pooler_label), 4, "TLS volume mount missing in pooler pods")

        k8s.api.custom_objects_api.patch_namespaced_custom_object(
            'acid.zalan.do', 'v1', 'default',
            'postgresqls', 'acid-minimal-cluster',
2 changes: 1 addition & 1 deletion manifests/configmap.yaml
@@ -17,7 +17,7 @@ data:
# connection_pooler_default_cpu_request: "500m"
# connection_pooler_default_memory_limit: 100Mi
# connection_pooler_default_memory_request: 100Mi
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-26"
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-27"
# connection_pooler_max_db_connections: 60
# connection_pooler_mode: "transaction"
# connection_pooler_number_of_instances: 2
2 changes: 1 addition & 1 deletion manifests/minimal-fake-pooler-deployment.yaml
@@ -23,7 +23,7 @@ spec:
serviceAccountName: postgres-operator
containers:
- name: postgres-operator
image: registry.opensource.zalan.do/acid/pgbouncer:master-26
image: registry.opensource.zalan.do/acid/pgbouncer:master-27
imagePullPolicy: IfNotPresent
resources:
requests:
2 changes: 1 addition & 1 deletion manifests/operatorconfiguration.crd.yaml
@@ -635,7 +635,7 @@ spec:
default: "pooler"
connection_pooler_image:
type: string
default: "registry.opensource.zalan.do/acid/pgbouncer:master-26"
default: "registry.opensource.zalan.do/acid/pgbouncer:master-27"
connection_pooler_max_db_connections:
type: integer
default: 60
2 changes: 1 addition & 1 deletion manifests/postgresql-operator-default-configuration.yaml
@@ -203,7 +203,7 @@ configuration:
connection_pooler_default_cpu_request: "500m"
connection_pooler_default_memory_limit: 100Mi
connection_pooler_default_memory_request: 100Mi
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-26"
connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-27"
# connection_pooler_max_db_connections: 60
connection_pooler_mode: "transaction"
connection_pooler_number_of_instances: 2