@wking (Member) commented Sep 16, 2020

I want this so #!/usr/bin/env python gets a version of Python that has PyYAML installed. We currently only install it for Python 3, but as long as the default Python is Python 2, it will not be available to Python-agnostic scripts (#11752). Auditing the existing users:

$ git --no-pager grep -l python-validation ci-operator
ci-operator/jobs/openshift/release/openshift-release-master-presubmits.yaml

So all the consumers are jobs in that one file. Getting the commands they run:

$ yaml2json <ci-operator/jobs/openshift/release/openshift-release-master-presubmits.yaml | jq -r '.presubmits["openshift/release"][].spec.containers[] | select(.image | endswith("python-validation")) | (.command | join(" ")) + " " + (.args | join(" "))'
hack/validate-ci-operator-config-filename.py --config-dir ./ci-operator/config
hack/validate-prow-job-semantics.py ./
hack/validate-release-controller-config.sh ./
pylint --rcfile=hack/.pylintrc --ignore=lib --persistent=n hack
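
For anyone who would rather read that filter in Python than in jq, the same selection can be sketched as follows; the nested dict is an illustrative stand-in for the parsed YAML, not the real job file:

```python
# Illustrative stand-in for the parsed presubmits YAML; image names and
# the sample jobs are made up, only the shape matches the real file.
presubmits = {
    "presubmits": {
        "openshift/release": [
            {"spec": {"containers": [
                {
                    "image": "registry.example/ci/python-validation",
                    "command": ["hack/validate-prow-job-semantics.py"],
                    "args": ["./"],
                },
                {
                    "image": "registry.example/ci/unrelated",
                    "command": ["true"],
                    "args": [],
                },
            ]}}
        ]
    }
}

def validation_commands(data):
    """Mirror the jq filter: yield 'command args' for python-validation containers."""
    for job in data["presubmits"]["openshift/release"]:
        for container in job["spec"]["containers"]:
            if container["image"].endswith("python-validation"):
                yield " ".join(container["command"] + container["args"])

print(list(validation_commands(presubmits)))
# ['hack/validate-prow-job-semantics.py ./']
```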

Don't have to worry about pylint, because we install that in the Dockerfile. Looking for how the shell script is associated with Python:

$ grep 'python\|\.py' hack/validate-release-controller-config.sh
${base_dir}/hack/generators/release-controllers/generate-release-controllers.py "${base_dir}"

And checking all three Python scripts:

$ head -n1 hack/validate-ci-operator-config-filename.py hack/validate-prow-job-semantics.py hack/generators/release-controllers/generate-release-controllers.py
==> hack/validate-ci-operator-config-filename.py <==
#!/usr/bin/env python3

==> hack/validate-prow-job-semantics.py <==
#!/usr/bin/env python3

==> hack/generators/release-controllers/generate-release-controllers.py <==
#!/usr/bin/env python3

So we are uniformly python3 today, and changing the default to Python 3 should not cause problems.
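
A check like that audit is easy to script; here is a minimal sketch of a shebang-uniformity check, with temp files standing in for the real hack/ scripts:

```python
import os
import tempfile

def shebang(path):
    """Return the first line of a file if it is a shebang, else None."""
    with open(path) as f:
        first = f.readline().rstrip("\n")
    return first if first.startswith("#!") else None

with tempfile.TemporaryDirectory() as d:
    # Illustrative stand-ins for the real hack/ scripts.
    for name in ("a.py", "b.py"):
        with open(os.path.join(d, name), "w") as f:
            f.write("#!/usr/bin/env python3\nprint('hi')\n")
    shebangs = {shebang(os.path.join(d, n)) for n in ("a.py", "b.py")}

print(shebangs == {"#!/usr/bin/env python3"})  # True: uniformly python3
```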

Confirming the default in the current image:

$ podman run --rm registry.svc.ci.openshift.org/ci/python-validation /usr/bin/env python --version
Python 2.7.5
$ podman run --rm registry.svc.ci.openshift.org/ci/python-validation rpm -q --whatprovides /usr/bin/python2
python-2.7.5-69.el7_5.x86_64

Apparently the preferred way to change the default is via alternatives (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/using-python3_configuring-basic-system-settings#configuring-the-unversioned-python_installing-and-using-python), but that fails for me with no useful error message:

$ podman run --rm registry.svc.ci.openshift.org/ci/python-validation sh -c 'env python --version && command -v python3 && alternatives --set python $(command -v python3) || echo "$?"'
Python 2.7.5
/usr/local/bin/python3
2

(The exit status of 2 is presumably because no python link group has been configured with alternatives --install in this image, so --set has nothing to update, but the tool prints nothing to say so.)

So instead I've installed a new virtual environment (https://docs.python.org/3/library/venv.html) and set the PATH and VIRTUAL_ENV variables to produce the core environment you'd get from sourcing ~/.venv/bin/activate:

$ podman run --rm registry.svc.ci.openshift.org/ci/python-validation sh -c 'python3 -m venv ~/.venv && grep "^export\|^PATH=\|^VIRTUAL_ENV=\|^# unset PYTHONHOME" ~/.venv/bin/activate'
VIRTUAL_ENV="/root/.venv"
export VIRTUAL_ENV
PATH="$VIRTUAL_ENV/bin:$PATH"
export PATH
# unset PYTHONHOME if set
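
This works by plain PATH precedence: /usr/bin/env walks PATH left to right, so whatever lives in $VIRTUAL_ENV/bin shadows the system interpreter. A minimal sketch of that lookup (POSIX assumed), with a throwaway shim standing in for ~/.venv/bin/python:

```python
import os
import shutil
import stat
import tempfile

with tempfile.TemporaryDirectory() as venv_bin:
    # A throwaway executable named 'python', standing in for ~/.venv/bin/python.
    shim = os.path.join(venv_bin, "python")
    with open(shim, "w") as f:
        f.write('#!/bin/sh\nexec python3 "$@"\n')
    os.chmod(shim, os.stat(shim).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

    # The same left-to-right search /usr/bin/env performs over PATH:
    search_path = venv_bin + os.pathsep + os.environ.get("PATH", "")
    found = shutil.which("python", path=search_path)
    print(found == shim)  # True: the venv copy shadows any system python
```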

@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: wking
To complete the pull request process, please assign hongkailiu
You can assign the PR to them by writing /assign @hongkailiu in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@wking force-pushed the python-3-validation branch from fa85a51 to aac3b8b on September 16, 2020 at 03:50.

RUN python3 -m venv ~/.venv
ENV VIRTUAL_ENV "$HOME/.venv"
ENV PATH "$VIRTUAL_ENV/bin:$PATH"
A reviewer (Member) commented:

This looks like a hack... I saw your comment about how alternatives does not work for you, but that doc is for RHEL 8, and the FROM centos above IMO means that we build from centos:latest, which is still CentOS 7:

$ podman run centos:latest cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

So I'm thinking that we should just bump to CentOS 8, get rid of the manual Python 3 build above, and use the python3 that comes with CentOS 8 natively?

@wking (Member, Author) replied:

I'm fine with that. Or UBI 8 and install Python, or whatever.

wking added a commit to wking/openshift-release that referenced this pull request Oct 8, 2020
…f names

Quotas for some providers depend on account-scoped resources (e.g.
AWS elastic IPs default to five per account [1]), region-scoped
resources (e.g. AWS VPCs default to five per region [1]), and
potentially other scoping (e.g. AWS NAT gateways default to five per
availability zone [1]).  This commit moves us away from
account-scoped leases and towards region-scoped leases in
account-scoped buckets.  That allows us to say things like "on Azure,
we have more capacity in centralus than we do in our other regions".

The generating script exists because typing out "us-east-1" 50 times
is tedious and hard to review, and today there is apparently no
support in the Boskos config for directly asking for $N copies of a
given name.
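
For illustration, such a generator can be very small; the region names, counts, and name format below are hypothetical, not the real Boskos layout:

```python
def expand_leases(counts):
    """Expand {region: n} into n numbered lease names per region."""
    names = []
    for region, n in sorted(counts.items()):
        names.extend(f"{region}-{i}" for i in range(n))
    return names

names = expand_leases({"us-east-1": 3, "us-west-2": 2})
print(names)
# ['us-east-1-0', 'us-east-1-1', 'us-east-1-2', 'us-west-2-0', 'us-west-2-1']
```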

I'm also adding myself as an owner, so DPTP doesn't have to bother
with lease adjustment.  Other CI-platform maintainers should feel free
to add themselves as well, if they want to approve lease adjustments
to their platforms.

Currently a mostly-no-op reshuffling of the existing Boskos config.

The python-validation image follows the pattern set by the existing
prow-config-semantics job.

This script works with both Python 2 and Python 3, but I'm using
python3 in the shebang because the verification image currently has
'python' pointed at Python 2 but only installs PyYAML for Python 3,
and I've been unable to land a pivot to point it at Python 3 [2].

[1]: https://docs.openshift.com/container-platform/4.5/installing/installing_aws/installing-aws-account.html#installation-aws-limits_installing-aws-account
[2]: openshift#11869
@openshift-merge-robot (Contributor)

@wking: The following test failed, say /retest to rerun all failed tests:

Test name | Commit | Rerun command
ci/prow/secret-generator-config-valid | aac3b8b | /test secret-generator-config-valid

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@alvaroaleman (Contributor)

/uncc

@openshift-ci-robot openshift-ci-robot removed the request for review from alvaroaleman October 30, 2020 20:33
@openshift-bot (Contributor)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 8, 2021
openshift-ci bot (Contributor) commented Feb 26, 2021

@wking: The following test failed, say /retest to rerun all failed tests:

Test name | Commit | Rerun command
ci/prow/ci-secret-bootstrap-config-validation | aac3b8b | /test ci-secret-bootstrap-config-validation


@openshift-bot (Contributor)

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 28, 2021
@openshift-bot (Contributor)

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot (Contributor)

@openshift-bot: Closed this PR.

