Conversation

@bcrochet (Contributor)

Once these are landed in MCO, we can remove them from dev-scripts
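
A hedged sketch of one way to verify that before deleting the local copies (not taken from this PR): once a cluster built from a release image that contains the MCO change is up, list the MachineConfigs the MCO renders and confirm the ones dev-scripts used to inject are present. The exact names depend on what openshift/machine-config-operator#795 added.

# Hedged sketch: confirm the MCO now ships the configs that dev-scripts
# previously injected, before removing the local assets.
oc get machineconfig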

@hardys commented Jun 17, 2019

I think we can also remove the yq install, ref #584.

@hardys commented Jun 27, 2019

@bcrochet hey, what's the status of this? It will soon be on the critical path: as we push PRs to openshift/install, we can no longer rely on the dev-scripts hacks.

@bcrochet changed the title from "WIP: Remove the assets from dev-scripts" to "Remove the assets from dev-scripts" on Jul 19, 2019
@bcrochet (Contributor, Author)

@hardys openshift/machine-config-operator#795 has merged. This is gtg.

@russellb (Member)

> @hardys openshift/machine-config-operator#795 has merged. This is gtg.

Thanks! We'll need to merge this along with an update to our pinned release image that includes your changes.

We're getting really, really close to where we don't need a custom release image anymore ...

@bcrochet force-pushed the remove-assets branch 2 times, most recently from 040acc9 to 109d464, on July 30, 2019 at 14:35
@bcrochet (Contributor, Author) commented Aug 5, 2019

>> @hardys openshift/machine-config-operator#795 has merged. This is gtg.
>
> Thanks! We'll need to merge this along with an update to our pinned release image that includes your changes.
>
> We're getting really, really close to where we don't need a custom release image anymore ...

@russellb I think this has been done, yes?

@cybertron (Contributor)

Testing this PR locally, I'm having trouble with it timing out on the image registry. I'm guessing that's because of the assets/templates/99_registry.yaml removal. Since I don't think we migrated that to MCO, we may need to leave it.

I'm also inclined to think we may want to leave assets/templates/99_master-core-password.yaml, since that's dev-specific.

With those two changes I am able to stand up a cluster.
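
For background, a hedged sketch of what a registry asset like 99_registry.yaml typically provides (its actual contents aren't shown in this thread): on bare metal the image registry has no default storage backend, so dev environments commonly configure emptyDir storage and set the operator to Managed. The values below are illustrative and are expressed as the equivalent oc patch against the standard image registry operator config.

# Hedged sketch only, not the contents of the dev-scripts asset:
# give the registry emptyDir storage and mark it Managed so it rolls out.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge \
  -p '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'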

@russellb (Member) commented Aug 7, 2019 via email

@cybertron (Contributor)

@bcrochet This still removes the two assets I mentioned above. Do we want to hold this PR for the image registry change, or just leave that asset to be removed separately when the work is complete?

I also think we should still leave the core password here as it is useful for development.

@russellb (Member) commented Aug 7, 2019 via email

@bcrochet (Contributor, Author) commented Aug 8, 2019

> Testing this PR locally, I'm having trouble with it timing out on the image registry. I'm guessing that's because of the assets/templates/99_registry.yaml removal. Since I don't think we migrated that to MCO, we may need to leave it.
>
> I'm also inclined to think we may want to leave assets/templates/99_master-core-password.yaml, since that's dev-specific.
>
> With those two changes I am able to stand up a cluster.

I must have missed these comments. I'll add them back.

@rdoxenham (Contributor)

At a minimum I needed these for a successful deployment...

[kni@provisioner assets]$ find dev-scripts/assets/ -type f | grep -v generated
dev-scripts/assets/files/etc/chrony.conf
dev-scripts/assets/templates/90_metal3_baremetalhost_crd.yaml
dev-scripts/assets/templates/99_ingress-controller.yaml
dev-scripts/assets/templates/99_master-chronyd-custom.yaml.optional
dev-scripts/assets/templates/99_master-core-password.yaml
dev-scripts/assets/templates/99_registry.yaml
dev-scripts/assets/templates/99_worker-chronyd-custom.yaml.optional
dev-scripts/assets/yaml_patch.py

If I wasn't using a customised NTP config then I suspect I could have also dropped chrony, but I didn't test this as I need it for Ceph/Rook.
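
For context, a rough sketch of how leftover assets like these typically reach the cluster, assuming dev-scripts drops the rendered templates into the installer's manifests directory as extra manifests before creating the cluster; OCP_DIR, the glob, and the copy step are illustrative, not taken from this PR or the actual dev-scripts code.

# Hedged sketch, not the actual dev-scripts flow. OCP_DIR is illustrative.
openshift-install --dir "${OCP_DIR}" create manifests
for tmpl in dev-scripts/assets/templates/*.yaml; do
  cp "${tmpl}" "${OCP_DIR}/openshift/$(basename "${tmpl}")"
done
openshift-install --dir "${OCP_DIR}" create cluster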

@stbenjam added the "CI" label (check this PR with CI) on Aug 8, 2019
@bcrochet (Contributor, Author) commented Aug 8, 2019

> At a minimum I needed these for a successful deployment...
>
> [kni@provisioner assets]$ find dev-scripts/assets/ -type f | grep -v generated
> dev-scripts/assets/files/etc/chrony.conf
> dev-scripts/assets/templates/90_metal3_baremetalhost_crd.yaml
> dev-scripts/assets/templates/99_ingress-controller.yaml
> dev-scripts/assets/templates/99_master-chronyd-custom.yaml.optional
> dev-scripts/assets/templates/99_master-core-password.yaml
> dev-scripts/assets/templates/99_registry.yaml
> dev-scripts/assets/templates/99_worker-chronyd-custom.yaml.optional
> dev-scripts/assets/yaml_patch.py
>
> If I wasn't using a customised NTP config then I suspect I could have also dropped chrony, but I didn't test this as I need it for Ceph/Rook.

I'm pretty sure that's what's left.

@metal3ci commented Aug 8, 2019

Build FAILURE, see build http://10.8.144.11:8080/job/dev-tools/1012/

@cybertron (Contributor)

Yep, that's what I see on the latest version of the PR and it passed my local testing. This should be good to go now.

Looks like CI timed out on the worker node. My local run didn't, so I'm guessing that's a heisenbug.

@stbenjam (Member) commented Aug 9, 2019

Yeah, I think that's fine; the cluster came up, so I think we're past the point where this change has any effect. LGTM.

@stbenjam merged commit 7c11c52 into openshift-metal3:master on Aug 9, 2019