
Bug 2033720: Library synchronization for OCP 4.10 #408

Merged
openshift-merge-robot merged 1 commit into openshift:master from dperaza4dustbit:resync_libs_ocp4.10
Jan 20, 2022

Conversation

@dperaza4dustbit
Contributor

Performed ./library-sync.sh and pushing new assets.
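
For readers unfamiliar with the flow, a minimal sketch of what that resync step looks like; only ./library-sync.sh and the branch name are taken from this PR, while the assets/ path and the commit message are assumptions:

# Hypothetical sketch of the resync described above
./library-sync.sh                          # regenerate the synced imagestream/template assets
git add assets/                            # assumed location of the regenerated assets
git commit -m "Library synchronization for OCP 4.10"
git push <your-fork> resync_libs_ocp4.10   # head branch name taken from this PR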

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jan 17, 2022
@yselkowitz
Contributor

/retitle Bug 2033720: Library synchronization for OCP 4.10

@openshift-ci openshift-ci bot changed the title Library synchronization for OCP 4.10 Bug 2033720: Library synchronization for OCP 4.10 Jan 17, 2022
@openshift-ci openshift-ci bot added bugzilla/severity-high Referenced Bugzilla bug's severity is high for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. labels Jan 17, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jan 17, 2022

@dperaza4dustbit: This pull request references Bugzilla bug 2033720, which is valid. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.10.0) matches configured target release for branch (4.10.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @jitendar-singh


In response to this:

Bug 2033720: Library synchronization for OCP 4.10

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@yselkowitz
Contributor

/test unit

@gabemontero
Contributor

gabemontero commented Jan 17, 2022

Hmm.... 2 ruby tests are now failing in image eco @dperaza4dustbit @yselkowitz

See if you can look at the logs @dperaza4dustbit to sort out what happened. Most likely a change in the upstream scl org broke something.

If you get stuck ping me and I'll lend a second set of eyes on them.

@yselkowitz
Contributor

/retest

@yselkowitz
Contributor

/retest

@yselkowitz
Contributor

/retest

@yselkowitz
Contributor

I'm baffled: the only ruby & rails changes were to 1) groupify APIs, 2) migrate the templates from ruby 2.6 to 2.7, and 3) add a ruby 3.0 IST. The JSON files don't show any errors, I don't see any typos in the groupification, and I have no problem building the rails-ex with 2.7 using s2i. What exactly is the issue?
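
For reference, a local check along those lines might look like the following; the builder image name and the exposed port are assumptions, not taken from the CI configuration:

# Hypothetical local s2i sanity check of rails-ex against a ruby 2.7 builder image
s2i build https://github.com/sclorg/rails-ex \
    registry.access.redhat.com/ubi8/ruby-27 rails-ex-test
# run the resulting image and hit it on the assumed default port
podman run --rm -p 8080:8080 rails-ex-test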

@dperaza4dustbit
Contributor Author

@yselkowitz I'm gathering information so I can open an issue here: https://github.com/sclorg/rails-ex

So far this caught our attention:

2022-01-18T02:35:47.654396810Z Bundle complete! 18 Gemfile dependencies, 59 gems now installed.
2022-01-18T02:35:47.654396810Z Gems in the groups 'development' and 'test' were not installed.
2022-01-18T02:35:47.654396810Z Bundled gems are installed into ./bundle
2022-01-18T02:35:47.654396810Z Post-install message from sass:
2022-01-18T02:35:47.654396810Z
2022-01-18T02:35:47.654396810Z Ruby Sass has reached end-of-life and should no longer be used.
2022-01-18T02:35:47.654396810Z
2022-01-18T02

Also this:

2022-01-18T02:36:02.047220049Z LoadError: cannot load such file -- bundler/setup
2022-01-18T02:36:02.047331126Z /opt/app-root/src/config/boot.rb:3:in `require'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/config/boot.rb:3:in `<top (required)>'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/config/application.rb:1:in `require_relative'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/config/application.rb:1:in `<top (required)>'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/Rakefile:4:in `require_relative'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/Rakefile:4:in `<top (required)>'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/bundle/ruby/2.7.0/gems/rake-13.0.3/exe/rake:27:in `<top (required)>'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/cli/exec.rb:63:in `load'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/cli/exec.rb:63:in `kernel_load'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/cli/exec.rb:28:in `run'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/cli.rb:475:in `exec'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/cli.rb:31:in `dispatch'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/cli.rb:25:in `start'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/libexec/bundle:46:in `block in <top (required)>'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/lib/bundler/friendly_errors.rb:128:in `with_friendly_errors'
2022-01-18T02:36:02.047331126Z /usr/share/gems/gems/bundler-2.2.24/libexec/bundle:34:in `<top (required)>'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/bin/bundle:3:in `load'
2022-01-18T02:36:02.047331126Z /opt/app-root/src/bin/bundle:3:in `<main>'
2022-01-18T02:36:02.047358987Z (See full trace by running task with --trace)
2022-01-18T02:36:08.795530068Z error: build error: error building at STEP "RUN /bin/sh -ic 'bundle exec rake test'": error while running runtime: exit status 1

Does any of this ring any bells? ^^^^^
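
A rough sketch of how one might poke at this locally, assuming the application image from the failed build can be pulled (the image name below is a placeholder, not a real reference):

# Hypothetical local reproduction of the failing step from the trace above
podman run --rm -it <built-rails-image> /bin/sh -ic 'bundle exec rake test'
# if it fails the same way, compare where bundler looks for gems vs. where they were installed
podman run --rm -it <built-rails-image> /bin/sh -ic 'bundle config list && gem env path'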

@dperaza4dustbit
Contributor Author

I'm also going to make a manual change to assets to try to narrow down more what change triggers this issue.

@gabemontero
Contributor

I'm also going to make a manual change to assets to try to narrow down more what change triggers this issue.

one more nugget wrt what @dperaza4dustbit is doing with ^^: among other things, @dperaza4dustbit and I saw that the template's BuildConfig was bumped from ruby 2.6 to ruby 2.7, but ruby 3.0 was also added to the imagestream... maybe the rails-ex repo is now such that the template's BC needs to use ruby 3.0?

that conjecture may be discussed in the rails-ex issue he opens based on the investigation he is doing
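
As an aside, a quick way to check which ruby tags the synced rails templates actually reference might look like this; the assets/ path layout is an assumption about this repo, not quoted from it:

# Hypothetical check: which ruby imagestream tags do the synced rails templates point at?
grep -rhoE 'ruby:[0-9]+\.[0-9]+(-ubi8)?' assets/ | sort | uniq -c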

@yselkowitz
Contributor

No, the only recent changes to rails-ex were in the templates, not the sample code.

@gabemontero
Contributor

No, the only recent changes to rails-ex were in the templates, not the sample code.

then the bump from 2.6 to 2.7 affected the processing of the ruby gem stuff such that the openshift build breaks,
or there is an unrelated change in upstream ruby that is affecting things in general for the rails-ex repo

I've started an image eco run in @dperaza4dustbit 's other active PR, #406

if it fails in the same spot as well, then it is the latter guess, and we most likely have to disable that test while it gets sorted out in upstream SCL (unless your ruby know-how figures it out @yselkowitz)

if the former proves to be the case, @dperaza4dustbit reverts the bump of the ruby rails template in this PR so it still uses 2.6, and upstream SCL still has to be contacted, unless your ruby know-how figures it out @yselkowitz

@yselkowitz
Contributor

I believe sclorg/rails-ex#140 will fix the rails templates.

@yselkowitz
Contributor

yselkowitz commented Jan 18, 2022

In the meantime, #406's test failures help establish a baseline for this one:

  • okd-e2e-aws-builds: Build status OutOfMemoryKilled is failing there too (why??)
  • okd-e2e-aws-image-ecosystem: dotnet:3.1-el7 is failing there too (something "wrong" with registry.centos.org??)
  • e2e-aws-upgrade: hopefully unrelated

Therefore the rails template fix should handle the only actual regression here?

@yselkowitz
Contributor

@dperaza4dustbit your repush did not include sclorg/rails-ex#140, which currently would have to be applied manually, e.g. sed -i -e 's|bundle exec|/usr/bin/&|' assets/operator/*/rails/templates/*
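
For illustration, that substitution rewrites an invocation like the one below; the rake db:migrate example is hypothetical and not quoted from the templates:

# Illustrative only: effect of the suggested sed on a template command
echo 'bundle exec rake db:migrate' | sed -e 's|bundle exec|/usr/bin/&|'
# prints: /usr/bin/bundle exec rake db:migrate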

@gabemontero
Contributor

so @dperaza4dustbit I am good with manually applying all of @yselkowitz's fix for ruby/rails here, vs. waiting for it to merge and then for openshift/library to pick it up

sounds like based on @yselkowitz's #408 (comment) you need to do a bit more to make that happen

Next, wrt any okd-e2e-aws-builds failures, unless they are directly related to an imagestream or template, we can skip that optional test. The OOM error code thing is not a blocker for this PR.

the upgrade job flakes a bit; again unrelated to this PR; we'll just need to retest until we get past the flake and a clean run

for okd image eco, let's do some best effort due diligence to sort out any imagestream/template issue (the last failure was an unrelated install problem). If it is an easy fix, we go for it. If it is not, we do not block this PR, skip that optional test, and open a bugzilla or github issue on the owning component/repo for dotnet or whatever it was that was failing.

@dperaza4dustbit
Contributor Author

@dperaza4dustbit your repush did not include sclorg/rails-ex#140, which currently would have to be applied manually, e.g. sed -i -e 's|bundle exec|/usr/bin/&|' assets/operator/*/rails/templates/*

I see @yselkowitz, so for all architectures. Let me make that change.

@dperaza4dustbit
Contributor Author

so @dperaza4dustbit I am good with manually applying all of @yselkowitz's fix for ruby/rails here, vs. waiting for it to merge and then for openshift/library to pick it up

sounds like based on @yselkowitz's #408 (comment) you need to do a bit more to make that happen

Next, wrt any okd-e2e-aws-builds failures, unless they are directly related to an imagestream or template, we can skip that optional test. The OOM error code thing is not a blocker for this PR.

the upgrade job flakes a bit; again unrelated to this PR; we'll just need to retest until we get past the flake and a clean run

for okd image eco, let's do some best effort due diligence to sort out any imagestream/template issue (the last failure was an unrelated install problem). If it is an easy fix, we go for it. If it is not, we do not block this PR, skip that optional test, and open a bugzilla or github issue on the owning component/repo for dotnet or whatever it was that was failing.

Thanks @gabemontero, just applied the changes to all rails templates and updated this PR; will check and open issues on the OKD side.

@gabemontero
Contributor

cluster start failed with ocp image eco ... trying again, just that one

/test e2e-aws-image-ecosystem

@gabemontero
Contributor

In the meantime, #406's test failures help establish a baseline for this one:

* okd-e2e-aws-builds: `Build status OutOfMemoryKilled` is failing there too (why??)

* okd-e2e-aws-image-ecosystem: `dotnet:3.1-el7` is failing there too (something "wrong" with registry.centos.org??)

took a peek at the logs / must-gather for this run, and it looks like a flake pulling from the internal OCP registry; the imagestream tag for dotnet:3.1-el7 imported fine from registry.centos.org

we'll see what happens with a next test run

* e2e-aws-upgrade: hopefully unrelated

Therefore the rails template fix should handle the only actual regression here?

@gabemontero
Contributor

there are still some failures with ruby in the latest e2e-aws-image-ecosystem @dperaza4dustbit at https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_cluster-samples-operator/408/pull-ci-openshift-cluster-samples-operator-master-e2e-aws-image-ecosystem/1483862526675390464

see if you can triage / diagnose like we did together yesterday

also, based on the discussion I see @yselkowitz having upstream, maybe another upstream PR (134) is needed ?

@yselkowitz
Contributor

It turns out my workaround was enough to get the template to first deploy, but a similar issue occurs later in the process when a regeneration is triggered, which involves a (different) bundle exec command. Therefore, either rails-ex gets properly fixed for compatibility with ruby 2.7+ (which may not happen quickly enough), or we revert the rails templates to 2.6-ubi8 (which isn't great given that version is EOL in March). Either way, you can back out my attempted change.

@gabemontero
Contributor

gabemontero commented Jan 19, 2022 via email

@dperaza4dustbit
Contributor Author

Ok, thanks for the investigation @gabemontero and @yselkowitz. Making the changes in assets to revert the bundle exec changes and go back to 2.6-ubi8.
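
A rough sketch of that revert, assuming the templates reference a ruby:2.7-ubi8 tag and carry the earlier /usr/bin/bundle workaround (both assumptions about the asset contents):

# Hypothetical sketch of the revert described above
sed -i -e 's|/usr/bin/bundle exec|bundle exec|' assets/operator/*/rails/templates/*   # back out the workaround
sed -i -e 's|ruby:2.7-ubi8|ruby:2.6-ubi8|' assets/operator/*/rails/templates/*       # point templates back at 2.6-ubi8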

@yselkowitz
Contributor

/retest

@yselkowitz
Contributor

Remaining test failures are pre-existing or flakes
/lgtm
/retest

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Jan 20, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jan 20, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dperaza4dustbit, yselkowitz

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@yselkowitz
Contributor

/test e2e-aws-proxy

@openshift-merge-robot openshift-merge-robot merged commit 1f21b78 into openshift:master Jan 20, 2022
@openshift-ci
Contributor

openshift-ci bot commented Jan 20, 2022

@dperaza4dustbit: Some pull requests linked via external trackers have merged:

The following pull requests linked via external trackers have not merged:

These pull requests must merge or be unlinked from the Bugzilla bug in order for it to move to the next state. Once unlinked, request a bug refresh with /bugzilla refresh.

Bugzilla bug 2033720 has not been moved to the MODIFIED state.


In response to this:

Bug 2033720: Library synchronization for OCP 4.10

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci
Contributor

openshift-ci bot commented Jan 20, 2022

@dperaza4dustbit: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/okd-e2e-aws-image-ecosystem
Commit: f99a7bf
Required: false
Rerun command: /test okd-e2e-aws-image-ecosystem

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
