switching to clusterIP #15
base: master
Conversation
I was able to test this in minikube (though I had to comment out the grace period in the deployment spec, as minikube doesn't support that). I also had to switch the image to our stock image, as it didn't work with the RH registry even when I pre-cached the image.
#20 is also addressed now.
{{- end }}
# {{- if .Values.deployment.terminationGracePeriodSeconds }}
# terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
# {{- end }}
Interesting. I had also found this setting to be problematic in the operator code, and had to remove the reference to it in the values.
If this isn't useful, then please delete it. Commented-out code creates a lot of clutter.
@jflinchbaugh why was this a problem? I believe I added it in because of some specific concerns that the shutdown would not be done in time (which was something I experienced personally). It is optional so I don't see the value in removing this support.
It is useful on OCP, unsupported on k8s
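For reference, a minimal sketch of what the guard above would look like if re-enabled. The value path `.Values.deployment.terminationGracePeriodSeconds` is taken from the commented-out snippet in this PR; the surrounding pod spec context is assumed:

```yaml
# Hypothetical sketch: the optional guard, un-commented. Because the whole
# block is wrapped in an `if`, the field is omitted entirely when the value
# is unset, so environments that reject it (per the minikube report above)
# can simply leave it out of their values.
spec:
  {{- if .Values.deployment.terminationGracePeriodSeconds }}
  terminationGracePeriodSeconds: {{ .Values.deployment.terminationGracePeriodSeconds }}
  {{- end }}
```

This is the usual Helm pattern for optional fields: the template renders nothing rather than an empty or invalid value.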
- imageName: registry.connect.redhat.com/sonatype/nexus-iq-server
- imageTag: 1.85.0-01-ubi
+ imageName: sonatype/nexus-iq-server
+ imageTag: 1.91.0
If we're going to upgrade this now, how do we feel about combining the version into imageName? The operator requires it to be combined, but does it go against a rival best practice for helm?
We had kept them separate because the build process would try to hunt out the version and update it automatically. It may be possible to combine them, but we'd need to update the Jenkinsfile, and that would likely introduce some more complexity there. Definitely a trade-off.
I probably shouldn't have tried to lump that combination into this PR, so you're right, let's not try to combine name and tag just yet. Ignore my suggestion.
For helm, the repo, imageName, and tag should all be separate entries so they can be overridden.
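A minimal sketch of that separation, using key names taken from the diff snippets in this PR (the release name and chart path in the override example are hypothetical):

```yaml
# values.yaml: image name and tag kept as separate keys so each can be
# overridden independently at install time, without editing the chart.
nexus:
  imageName: sonatype/nexus3
  imageTag: "3.23.0"
```

A consumer can then swap the default image source at install time with helm's standard `--set` flag, e.g. `helm install nxrm ./chart --set nexus.imageName=registry.connect.redhat.com/sonatype/nexus-repository-manager --set nexus.imageTag=3.21.1-01-ubi-23`, which is why keeping the fields separate matters for certification vs. stock-image use cases.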
- imageName: registry.connect.redhat.com/sonatype/nexus-repository-manager
- imageTag: 3.21.1-01-ubi-23
+ imageName: sonatype/nexus3
+ imageTag: 3.23.0
Thanks for the upgrade. As in the other values.yaml, could the version be pushed into imageName?
🙅 My suggestion is invalid at this point; this should not be combined now.
@@ -4,13 +4,15 @@
 iq:
   name: nxiq
-  imageName: registry.connect.redhat.com/sonatype/nexus-iq-server
-  imageTag: 1.85.0-01-ubi
+  imageName: sonatype/nexus-iq-server
We had used the RH version because that was a requirement for getting this chart certified. Why are you changing this? It's in the values file, so if you need to override it, you can, but I think we should stick with RH as the default.
Agreed in the context of an Operator, but we chose to use helm so we also get helm charts of this for generic k8s instances.
@@ -9,8 +9,8 @@ deploymentStrategy: {}
 # type: RollingUpdate

 nexus:
-  imageName: registry.connect.redhat.com/sonatype/nexus-repository-manager
-  imageTag: 3.21.1-01-ubi-23
+  imageName: sonatype/nexus3
Same concern as earlier: I believe this should remain using RH as the default image source.
Same reply: we should use our stock images here in a helm chart, and John can switch to the RHCC for the operator.
The RHCC image needs to remain the default to preserve certification, and I would prefer to see the termination grace period remain. Can you keep this focused on just the ClusterIP changes?
Termination grace period cannot remain, as it simply isn't supported on k8s. I left it commented out so we don't lose sight of it when we go to the operator.
Addressing issue #12. ClusterIP is the preferred method, and it is then easy to make a Route to the service in OCP.
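A sketch of the Service shape this change moves toward. The metadata name, selector label, and port are illustrative assumptions, not copied from the chart (8070 is the IQ Server default HTTP port):

```yaml
# Hypothetical ClusterIP Service for the IQ server. ClusterIP exposes the
# pods only inside the cluster; on OCP, a Route can then target this
# Service by name to publish it externally.
apiVersion: v1
kind: Service
metadata:
  name: nxiq          # assumed name, matching the `name: nxiq` value above
spec:
  type: ClusterIP     # also the default when `type` is omitted
  selector:
    app: nxiq         # assumed pod label
  ports:
    - port: 8070
      targetPort: 8070
```

Because ClusterIP is the default Service type, this avoids the NodePort/LoadBalancer portability issues while still giving OCP a stable in-cluster target for a Route.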