This repository has been archived by the owner on Sep 21, 2021. It is now read-only.

Add empty default value for k8s security contexts #1016

Merged
merged 1 commit into zalando:master
Jul 22, 2019

Conversation

KierranM
Contributor


Sets empty default values ({}) for the two Kubernetes security contexts in the Helm chart (podSecurityContext and containerSecurityContext).

Description

Not having these default values causes the node pods to come up without their resource requests/limits set.
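
For reference, a minimal sketch of the values.yaml defaults this change adds; the value names come from the later merge-commit title, and their exact placement and nesting in the chart may differ.

    # values.yaml (sketch; exact location in the chart may differ)
    podSecurityContext: {}
    containerSecurityContext: {}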

Motivation and Context

Fixes #1015

How Has This Been Tested?

We have tested this locally by installing the Helm chart both with and without defaults for these values.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have read the CONTRIBUTING document.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

@codecov-io

Codecov Report

Merging #1016 into master will increase coverage by 0.04%.
The diff coverage is n/a.

@@             Coverage Diff              @@
##             master    #1016      +/-   ##
============================================
+ Coverage     54.95%   54.99%   +0.04%     
- Complexity      673      675       +2     
============================================
  Files            58       58              
  Lines          4562     4562              
  Branches        429      429              
============================================
+ Hits           2507     2509       +2     
+ Misses         1821     1818       -3     
- Partials        234      235       +1

Contributor

@arnaud-deprez left a comment


Thanks for the PR @KierranM!

I'm OK with accepting this, but I don't quite understand #1015 and how this change solves it...

@KierranM
Contributor Author

I don't entirely understand it myself. For whatever reason, when these values are left without the {} default, the hub pod comes up with the environment variables that are required to set the requests/limits on the nodes, but the nodes don't have the requests/limits set.

As soon as you add in the default {}, the nodes start receiving their limits again 🤷‍♀
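
For context only (the thread does not pin down a root cause): a hedged sketch of how an undefined (nil) value and an empty map render differently through Helm's toYaml. The container name and template pattern below are illustrative, not the chart's actual template.

    # Illustrative Helm template fragment, not the Zalenium chart's real one.
    containers:
      - name: zalenium                     # hypothetical container name
        securityContext:
          {{- toYaml .Values.containerSecurityContext | nindent 6 }}
          # value left undefined (nil) -> renders as: securityContext: null
          # value defaulted to {}      -> renders as: securityContext: {}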

Contributor

@diemol left a comment


Thanks for digging into this @KierranM

@diemol diemol merged commit 750ef9a into zalando:master Jul 22, 2019
jlamaille pushed a commit to jlamaille/zalenium that referenced this pull request Sep 12, 2019
diemol pushed a commit that referenced this pull request Sep 15, 2019
* Release 3.141.59t

* Add empty default value for podSecurityContext and containerSecurityContext (#1016)

Co-Authored-By: Pirmin Schuermann <[email protected]>

* Issue-1019 Update helm chart zip files to release version 3.141.59t (#1020)

* fixed index file (#1025)

* Change probes URL to the /status page (#1024)

When building the console page, the Zalenium servlet collects the status of all the nodes; if any node's proxy is unavailable, the probe fails. Using the /status page means the Hub probes no longer rely on the state, health, and availability of the node pods (see the probe sketch after this commit list).

* Fix condition to get container by remote url

* Add symlink for /usr/bin/docker in Dockerfile

* Rewrite condition to exec ./zalenium.sh in entry.sh considering non-sudo

* Update docker swarm documentation

* allow the modification of the hosts file inside a container

* Add documentation for hostAliases

* Fix get pod ip dns load error
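
As referenced in the probes commit above, a hedged sketch of Hub probes pointing at the /status page instead of the console page. The port (4444, the default Selenium hub port) and the surrounding structure are assumptions, not taken from the chart.

    # Hedged probe sketch; port and exact structure are assumptions.
    livenessProbe:
      httpGet:
        path: /status
        port: 4444
    readinessProbe:
      httpGet:
        path: /status
        port: 4444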

Successfully merging this pull request may close these issues.

Kubernetes pods in 3.141.59t don't respect resource requests