Hardened Security Context for Elasticsearch#6703
Conversation
buildkite test this -f p=gke,s=8.7.0,s=7.17.8

This will only run the e2e tests with the given filters (the GKE provider and stack versions 8.7.0 and 7.17.8).
@thbkrkr Thanks! I think I spotted a problem with this PR. I will use your advice next time 🙇
```json
{
  "Time": "2023-04-19T17:16:36.671150963Z",
  "Action": "output",
  "Package": "github.com/elastic/cloud-on-k8s/v2/test/e2e/kb",
  "Test": "TestKibanaStandalone/ES_HTTP_certificate_authority_should_be_set_and_deployed",
  "Output": "=== RUN TestKibanaStandalone/ES_HTTP_certificate_authority_should_be_set_and_deployed\n"
}
{
  "log.level": "info",
  "@timestamp": "2023-04-19T17:30:05.540Z",
  "log.logger": "e2e-pdzcj",
  "message": "Log stream ended",
  "service.version": "0.0.0-SNAPSHOT+00000000",
  "service.type": "eck",
  "ecs.version": "1.4.0",
  "pod_name": "eck-e2e-pdzcj-cl86s"
}
{
  "log.level": "info",
  "@timestamp": "2023-04-19T17:30:05.540Z",
  "log.logger": "e2e-pdzcj",
  "message": "Streaming pod logs",
  "service.version": "0.0.0-SNAPSHOT+00000000",
  "service.type": "eck",
  "ecs.version": "1.4.0",
  "pod_name": "eck-e2e-pdzcj-cl86s"
}
```
buildkite test this --fixed p=gke --mixed s=8.7.0,s=7.17.8
buildkite test this -f p=gke -m s=8.7.0,s=7.17.8 |
buildkite test this -f p=gke -m s=8.7.0,s=7.17.8 |
buildkite test this -f p=gke -m s=8.7.0,s=7.17.8 |
buildkite test this -f p=gke -m s=8.0.1,s=8.8.0-SNAPSHOT |
Some "noteworthy" things if you want to review this PR:
buildkite test this -f p=ocp,s=8.7.0
Results from the last e2e test sessions:
thbkrkr
left a comment
LGTM, this is great!
It also looks like we're writing code specifically to handle #6186, but why not fix it first? Then we would no longer need to detect whether the data directory is mounted in a volume, as that would normally be impossible, no? We could then call WithContainersSecurityContext after stackmon.WithMonitoring, to reuse the same code to set the SecurityContext for the Beats sidecars.
At first I thought this code was specifically handling #6186. But I've been wondering if, in general, we should not set readOnlyRootFilesystem to true only when we are pretty confident that the data is written to a volume. I have to admit this is still brittle, as there are other ways to mis-configure it.
thbkrkr
left a comment
> But I've been wondering if, in general, we should not set readOnlyRootFilesystem to true only if we are pretty confident that the data is written to a volume, not in the container. I'm thinking about users that will upgrade to 2.8.0: if we set readOnlyRootFilesystem to true too "aggressively" we may end up with issues during upgrades if Elasticsearch is not configured properly.
👍 👍
pebrc
left a comment
Code LGTM. I have not done many tests yet.
```go
Privileged:               pointer.Bool(false),
RunAsNonRoot:             pointer.Bool(true),
ReadOnlyRootFilesystem:   pointer.Bool(true),
AllowPrivilegeEscalation: pointer.Bool(false),
```
Can we use a shared variable or function here, given that we use the exact same context twice? We could also potentially refer to the new securitycontext package here, to have all contexts in one place.
pebrc
left a comment
LGTM I managed to run a few tests on GKE and OCP and things seem to work fine!
I'll open an issue to update https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-security-context.html before the next release, as it is definitely outdated. Edit: I think we can actually work on this as part of #5913
This adjusts the number of volumes expected from Beats sidecars in the Logstash Stack Monitoring unit tests. Why? Because we don't test PRs with the main branch automatically merged in (🐛🐞), we missed that the tests in #6732 had to be updated to take into account the changes made by #6703, which adds a new temp volume to the Beats sidecars.

If Elasticsearch configuration files are copied before symbolic links are created, it is possible to run all the Elasticsearch containers with the following restricted securityContext.

Fixes #6126
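For reference, the restricted context described above, with the values from the diff in this thread, would look like this when expressed on a container spec in a Pod manifest (field names follow the Kubernetes API; this is an illustrative fragment, not a manifest generated by the operator):

```yaml
# Restricted container securityContext applied by the operator
securityContext:
  privileged: false
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
```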
See my comment here if you want to review this PR: #6703 (comment)