make install fails with metadata.annotations too long .. #1140
Comments
Hi @champak, are you able to provide the steps for a POC that reproduces this issue with the latest version of kubebuilder? I mean, after following the Quick Start steps, what do I need to do to hit this issue?
/assign @camilamacedo86
I suspect this issue is caused by the size of the annotation that `kubectl apply` creates. I checked the total size of the YAML generated by kustomize that gets piped to kubectl, and it is past 650 KB. I think kubectl maintains state for an apply/merge. To support these cases, could `make install` use `kubectl replace` instead?
same here... changed to `kubectl create` and the install worked.
Hi @rockmenjack and @champak, could you help us by letting us know the steps to reproduce it and/or sharing a link to a project where we can check it?
I think the root cause is that if we include a native k8s type like a pod spec in the spec for a kubebuilder operator, the entire native spec gets inlined, which blows up the size. I noted a couple of other issues that relate to this: #962 and #82292. @DirectXMan12's comments seemed to indicate there is not a good way around this at present. Given that, could we consider replacing the `kubectl apply` with a few additional steps that allow `make install`/`make deploy` to work for large CRDs?
Hi @rockmenjack and @champak, you are saying that the issue is faced because of `kubectl apply`. However, note that `apply` will create or update the resources, while `create` will show an error if the resources already exist instead of updating them. That alone does NOT show the root cause of this issue at all. Maybe something was not updated/applied, and because of that the error did not occur. However, I am unable to reproduce this scenario. So, could you please provide the steps, starting from the quick start, to face the same? I mean, if I followed the quick start, what additional step should I do, and when, in order to face this scenario? Without steps to reproduce the issue it is very complicated to identify what is wrong and properly solve it.
@camilamacedo86 I added a trivial example at https://github.com/champak/kbuildercrdtoobig. The spec contains 3 pods (https://github.com/champak/kbuildercrdtoobig/blob/master/example/api/v1alpha1/toobig_types.go), and the CRD gets too big to be installed via the `kustomize build config/crd | kubectl apply -f -` method. Hope that helps illustrate. Thanks for taking a look!
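For reference, a type along these lines is enough to trigger the problem (a minimal sketch modeled on the idea in the linked repo, not its exact code; names are illustrative). Each embedded `corev1.PodSpec` pulls the complete Pod schema, field descriptions included, into the generated CRD:

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ToobigSpec embeds full PodSpecs. controller-gen inlines the entire
// Pod schema (with every field description) once per embedded spec,
// easily pushing the generated CRD YAML past kubectl's annotation limit.
type ToobigSpec struct {
	PodOne   corev1.PodSpec `json:"podOne,omitempty"`
	PodTwo   corev1.PodSpec `json:"podTwo,omitempty"`
	PodThree corev1.PodSpec `json:"podThree,omitempty"`
}

// Toobig is the Schema for the toobigs API.
type Toobig struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ToobigSpec `json:"spec,omitempty"`
}
```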
FWIW, here's what I did to get around the issue. Since we cannot always use `replace`:

```sh
(kustomize build config/crd | kubectl replace -f -) || (kustomize build config/crd | kubectl create -f -)
```

`replace` will fail if the CRD is not already in place, in which case `create` does the job. This is merely a workaround. The root cause, I think, is the inclusion of the entire native spec(s) in the CRD spec. I have not dug into why an indirect reference to the native spec is not used instead. It does seem to be a significant limitation. If anyone knows why this approach was chosen, please post.
Hi @champak, it happens because when we use `kubectl apply`, kubectl stores the complete previous configuration of the object in the `kubectl.kubernetes.io/last-applied-configuration` annotation, and annotations are limited to 262144 characters in total. To know more, see the doc on it: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-to-create-objects

So, yes, the usage of `kubectl create` avoids the problem, since `create` does not write that annotation. Then, I think it could be closed as sorted out. WDYT?
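A quick way to see why `apply` fails while `create` succeeds is to compare the manifest size against the annotation limit. Below is a hypothetical little checker (not part of kubebuilder; the file name and program are invented for illustration), run against the output of `kustomize build config/crd`:

```go
package main

import (
	"fmt"
	"os"
)

// kubectl apply stores the entire object in the
// kubectl.kubernetes.io/last-applied-configuration annotation,
// and metadata.annotations is validated to at most 262144 characters.
const maxAnnotationChars = 262144

func main() {
	// Usage: kustomize build config/crd > crd.yaml && go run main.go crd.yaml
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("manifest: %d bytes, annotation limit: %d chars\n", len(data), maxAnnotationChars)
	if len(data) > maxAnnotationChars {
		fmt.Println("apply would fail: last-applied-configuration cannot hold this manifest")
	}
}
```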
Well, in my case, bringing in PodSpec makes the CRD definition resemble what users usually write to define a Pod, which provides a smooth experience when using the CRD.
Hi @rockmenjack, I do not agree with this approach at all. Just to clarify: could you imagine having an interface and asking a user to put a whole Pod YAML in a field? IMHO the CRD should have specs/attributes that are part of its own domain, in order not to hurt concepts such as encapsulation, the single-responsibility principle, and almost certainly cohesion, which may cause unexpected side effects such as difficulty to read, maintain/improve, and extend, to mention a few. Is the Pod info really part of the Kind's domain, or is the Pod one of the secondary resources that is required for the Kind to be installed/configured? What data from a Pod is actually part of the Kind's domain? So, instead of passing the Pod spec, you could accept, for example, just the relevant data that can be customized, such as the image and the pull policy, and then create the object programmatically with the info provided. See the sketch below.
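A minimal sketch of that idea (the Kind, fields, and helper below are hypothetical, invented for illustration): expose only the customizable fields in the spec, and build the Pod in the controller.

```go
package v1alpha1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MyKindSpec exposes only the domain-relevant knobs
// instead of embedding a full PodSpec.
type MyKindSpec struct {
	Image           string            `json:"image,omitempty"`
	ImagePullPolicy corev1.PullPolicy `json:"imagePullPolicy,omitempty"`
}

// MyKind is a hypothetical Kind used for illustration.
type MyKind struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec MyKindSpec `json:"spec,omitempty"`
}

// newPodForCR builds the Pod programmatically from the customizable
// fields, so the CRD schema stays small while users still control
// the pieces that belong to the Kind's domain.
func newPodForCR(cr *MyKind) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      cr.Name + "-pod",
			Namespace: cr.Namespace,
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:            "main",
				Image:           cr.Spec.Image,
				ImagePullPolicy: cr.Spec.ImagePullPolicy,
			}},
		},
	}
}
```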
In this way, I'd recommend you pursue the DDD concepts when designing your solutions. For further information, check the Domain-Driven Design book. I hope that it helps you.
@camilamacedo86 Could we consider adding a separate make target that allows large CRDs to get installed? I used a combination of `replace` and `create`, but there may be a better way. Given the verbose declarative syntax, we could blow past the limits even without using Pod specs. With regard to being able to add native k8s objects like Pods into CRDs, I think this is a very useful feature of kubebuilder and allows for reuse/composition of CRD specs in interesting and powerful ways. This particular issue, I presume, will eventually be solved by server-side apply, where we will not be limited by the current kubectl implementation. My 2 cents. Given past related threads on this topic, it would be good to get input from @DirectXMan12.
it's pretty common to embed podspec in your CRD, that's definitely a feature we want to support. the workaround is to not use `kubectl apply`, since the limit comes from the client-side last-applied-configuration annotation.
we introduced the `maxDescLen` option in controller-gen, which truncates (or with `maxDescLen=0`, drops) field descriptions in the generated CRD, shrinking it considerably.
@DirectXMan12 Thanks much for weighing in. Would it make sense to ship a kubectl workaround as a make target? I looked around and could not tell in what time frame we can expect server-side apply to be available. If you have a pointer on why we needed to include complete component specs in the CRD spec instead of indirect references, please post. Thanks.
you can't use references in CRD validation. It was an upstream decision. They've gone back and forth on it. I bring it up at meetings every once in a while.
IIRC their logic was that it might cause objects to become unexpectedly valid or invalid, but it's been up for debate for a while now |
Hi @DirectXMan12, sorry, it is still not clear to me what should be done in kubebuilder regarding this scenario. If some change needs to be made, it seems it should be in the K8s API instead of kubebuilder. Am I right? Or should the scaffolded Makefile be changed with this approach as well? If not, would it not make more sense to close this issue and raise a new one in the specific project, since it is not a kubebuilder thing at all?
@DirectXMan12 Regarding the maxDescLen workaround: adding the option to the Makefile works:

```make
CRD_OPTIONS ?= "crd:trivialVersions=true,maxDescLen=0"
```

Is the Makefile var `CRD_OPTIONS` a good place for the workaround? Some controller-gen options go with the `// +` marker syntax, so I was wondering. I think limiting `maxDescLen` is better than the kubectl delete/recreate path, since it allows non-disruptive patching of the operator to keep working.
Hi @champak, IMHO what could be done here is to document the scenario, the reason, and possible solutions to avoid it. However, I'd not change the scaffolded Makefile. WDYT @DirectXMan12? Is there any place, like a FAQ, where it could be added?
@camilamacedo86 Thanks for your input. Can `maxDescLen` be hinted to controller-gen by means other than passing it directly as a CLI option, as the kubebuilder Makefile does? It would have been nice if there were a way to skip descriptions only for included stuff like PodSpecs but keep descriptions for the new attributes we create, but there did not seem to be a way. In any case, glad the workaround exists :)
@champak, I couldn't find any evidence in the code that would make it possible to give a hint to the operator-sdk and/or controller-gen to skip the descriptions. In the end, I just moved from using the operator-sdk to using controller-gen directly, as you can see in this PR: jaegertracing/jaeger-operator#932.
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hello @camilamacedo86, I am facing the same issue, and I fixed it by replacing `kubectl apply` with `kubectl create`, as @champak suggested. I am using the versions below: Kubebuilder 2.3.1. As @DirectXMan12 mentioned, where do I put the tag to change the validation with `maxDescLen`? Neither the CRD Generation nor the CRD Validation docs provide such information about `maxDescLen`. Also, the error log says that `metadata.annotations` is too long. Currently I have fixed my issue with the workaround from @champak's answer, but it does not feel right. I would appreciate it if you could suggest a more stable solution.
@camilamacedo86 I thought that I had fixed my issue, but I was wrong. `kubectl create` works on the first run, but when you want to upgrade your controller with `make deploy`, it causes an error because the resource already exists. So a better workaround is still required. When I check the api-server logs, there is nothing there; I think it is a client-side problem somehow. By the way, I hit this issue during the second tutorial of the Kubebuilder book.
This is a workaround for the following issue: kubernetes-sigs/kubebuilder#1140
Hi @bilalcaliskan, the only workaround that I am aware of so far is the one provided via controller-gen; see how we sorted it out in the samples.
However, it seems that a more ideal solution would be to move from client-side to server-side apply.
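For illustration, server-side apply can already be exercised from a Go client today. Below is a hypothetical sketch (the file name, CRD name, and field manager are made up): because the merge happens on the server and field ownership is tracked in `managedFields`, no oversized `last-applied-configuration` annotation is written by the client.

```go
package main

import (
	"context"
	"os"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One CRD document, e.g. extracted from `kustomize build config/crd`.
	crdYAML, err := os.ReadFile("crd.yaml")
	if err != nil {
		panic(err)
	}

	// Apply patch type = server-side apply: the server merges the full
	// manifest, so the client never stores it in an annotation.
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Patch(
		context.TODO(),
		"toobigs.example.com", // must match metadata.name in crd.yaml
		types.ApplyPatchType,
		crdYAML,
		metav1.PatchOptions{FieldManager: "make-install"},
	)
	if err != nil {
		panic(err)
	}
}
```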
I saw an issue with the `make install` step, which complains:

```
The CustomResourceDefinition "..." is invalid: metadata.annotations: Too long: must have at most 262144 characters
```

Looked like the issue was from:

```sh
kustomize build config/crd | kubectl apply -f -
```

Changing it to

```sh
kustomize build config/crd | kubectl create -f -
```

worked OK. I did not see any corrupt or super-long annotation in the `kustomize build config/crd` output.

Versions:
- Kustomize: 3.0.3 (tried more recent versions too)
- Kubebuilder: 2.0.1
- controller-gen: could not identify the version used; go.mod specifies `sigs.k8s.io/controller-tools v0.2.1 // indirect`

The CRD spec contains fairly standard stuff. Has anyone else seen/addressed this?