
Conversation

gklein commented May 19, 2019

No description provided.

russellb (Member)

Can you expand on why the new approach is more desirable? Did the previous code have a problem that this fixes?

gklein (Author) commented May 20, 2019

There was no issue with the previous code.

I'm suggesting this approach so that we can upgrade oc as soon as a new build is released, rather than handling only a single case.

Long term, it may help prevent compatibility issues.
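
For illustration, here is a minimal sketch of the "always install the latest oc" idea being discussed. The mirror URL, install path, and version check are assumptions for the example, not taken from this PR:

```sh
#!/bin/bash
# Sketch: fetch the newest oc client on every run instead of a pinned build.
set -euo pipefail

# Illustrative mirror URL; a real script would take this from the PR's source.
OC_URL="https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz"
INSTALL_DIR="/usr/local/bin"

# Download the tarball and extract only the oc binary into the install dir.
curl -sSL "$OC_URL" | sudo tar -xzf - -C "$INSTALL_DIR" oc

# Confirm which client version was installed.
"$INSTALL_DIR/oc" version --client
```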

hardys commented May 21, 2019

Does oc guarantee backwards compatibility? Just wondering if it would be safer to use the version from kni-installer, e.g. the buildid from https://github.com/openshift-metal3/kni-installer/blob/master/data/data/rhcos.json, or if always installing the latest will work even when we lag master with pinned releases?

hardys closed this May 21, 2019
hardys reopened this May 21, 2019
gklein (Author) commented May 21, 2019

> Does oc guarantee backwards compatibility? Just wondering if it would be safer to use the version from kni-installer, e.g. the buildid from https://github.com/openshift-metal3/kni-installer/blob/master/data/data/rhcos.json, or if always installing the latest will work even when we lag master with pinned releases?

The reason I've suggested this solution is to help prevent issues similar to this one:
#401 (comment)

Sticking to the kni-installer version sounds like a good option to help keep the oc client version aligned with the cluster version.
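
As a rough illustration of that pinning idea, a sketch that reads the build ID from the rhcos.json hardys linked. The `buildid` field name comes from his comment; the jq usage is an assumption about the file's structure:

```sh
#!/bin/bash
# Sketch: pin to the build recorded by kni-installer instead of
# always fetching the latest client.
set -euo pipefail

# rhcos.json location from the comment above; "buildid" field assumed
# from the discussion, not verified against the current file layout.
RHCOS_JSON_URL="https://raw.githubusercontent.com/openshift-metal3/kni-installer/master/data/data/rhcos.json"

# Extract the pinned build ID.
BUILDID=$(curl -sSL "$RHCOS_JSON_URL" | jq -r '.buildid')

echo "kni-installer pins build: ${BUILDID}"
# A real script would then fetch the oc client matching this build
# rather than the latest one.
```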

stbenjam (Member) commented Aug 8, 2019

Is this change still worthwhile? If so, could you rebase?

russellb (Member) commented Aug 9, 2019

reopen if you'd like to pursue this further

russellb closed this Aug 9, 2019