
🐛 Fix panic on anonymous nested struct in CRD #415

Conversation

hypnoglow
Contributor

Anonymous nested structs are not allowed according to the error message, but the code was panicking because for such a struct `ctx.info.RawSpec` is nil.

This change adds extra checks: the struct name must be present and the spec must not be nil.

Fixes #340 (which was closed recently by the bot)
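To illustrate the shape of the fix, here is a minimal, self-contained sketch of the guard. The `typeInfo`/`rawSpec` types and the function body are hypothetical stand-ins, not the actual controller-tools code; the point is only that a nil `RawSpec` (as happens for an anonymous nested struct) now produces the intended error instead of a nil-pointer dereference:

```go
package main

import "fmt"

// rawSpec and typeInfo loosely mimic the parser's type information.
// Name and RawSpec are the two things the fix checks before dereferencing
// (hypothetical shapes, not the real controller-tools types).
type rawSpec struct{ Fields []string }

type typeInfo struct {
	Name    string
	RawSpec *rawSpec
}

// structToSchema sketches the added guard: instead of dereferencing a nil
// RawSpec (the panic in schema.go), it returns the intended error.
func structToSchema(info typeInfo) error {
	if info.Name == "" || info.RawSpec == nil {
		return fmt.Errorf("encountered non-top-level struct (possibly embedded), those aren't allowed")
	}
	// ... proceed to build the schema from info.RawSpec ...
	return nil
}

func main() {
	// Anonymous nested struct: no name, nil spec -> error, not a panic.
	fmt.Println(structToSchema(typeInfo{}) != nil) // true
	// Named struct with a spec -> no error.
	fmt.Println(structToSchema(typeInfo{Name: "ZugaNestedStruct", RawSpec: &rawSpec{}})) // <nil>
}
```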


I'm not sure whether it's possible to add a test for this somewhere 🤔. I tested locally:

type ZugaSpec struct {
	NestedStruct ZugaNestedStruct `json:"nestedStruct,omitempty"`

	NestedAnonymousStruct struct {
		Bar string `json:"bar"`
	} `json:"nestedAnonymousStruct"`
}

type ZugaNestedStruct struct {
	Foo string `json:"foo"`
}

Running make manifests on master:

λ make manifests
/Users/hypnoglow/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x16678cc]

goroutine 1 [running]:
sigs.k8s.io/controller-tools/pkg/crd.structToSchema(0xc001263440, 0xc0003e4460, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:322 +0x9c
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc001263440, 0x19aaf60, 0xc0003e4460, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:164 +0x3a6
sigs.k8s.io/controller-tools/pkg/crd.structToSchema(0xc000a860b0, 0xc0003e44a0, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:375 +0x491
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc000a860b0, 0x19aaf60, 0xc0003e44a0, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:164 +0x3a6
sigs.k8s.io/controller-tools/pkg/crd.infoToSchema(...)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:107
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedSchemaFor(0xc00010c0f0, 0xc0003b2b40, 0xc000984040, 0x8)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/parser.go:174 +0x321
sigs.k8s.io/controller-tools/pkg/crd.(*schemaContext).requestSchema(0xc001263290, 0x0, 0x0, 0xc000984040, 0x8)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:99 +0x9e
sigs.k8s.io/controller-tools/pkg/crd.localNamedToSchema(0xc001263290, 0xc0003e4840, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:220 +0x1b1
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc001263290, 0x19aab20, 0xc0003e4840, 0x1f)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:154 +0x2e9
sigs.k8s.io/controller-tools/pkg/crd.structToSchema(0xc000a86aa8, 0xc0003e4900, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:375 +0x491
sigs.k8s.io/controller-tools/pkg/crd.typeToSchema(0xc000a86aa8, 0x19aaf60, 0xc0003e4900, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:164 +0x3a6
sigs.k8s.io/controller-tools/pkg/crd.infoToSchema(...)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/schema.go:107
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedSchemaFor(0xc00010c0f0, 0xc0003b2b40, 0xc0009840f4, 0x4)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/parser.go:174 +0x321
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedFlattenedSchemaFor(0xc00010c0f0, 0xc0003b2b40, 0xc0009840f4, 0x4)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/parser.go:186 +0xd8
sigs.k8s.io/controller-tools/pkg/crd.(*Parser).NeedCRDFor(0xc00010c0f0, 0xc00099a03e, 0x1c, 0xc0009840f4, 0x4, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/spec.go:85 +0x615
sigs.k8s.io/controller-tools/pkg/crd.Generator.Generate(0x1, 0x0, 0x0, 0x0, 0x0, 0x0, 0xc00010c0a0, 0x199bee0, 0x199bee0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/crd/gen.go:108 +0x356
sigs.k8s.io/controller-tools/pkg/genall.(*Runtime).Run(0xc00022b300, 0xc00010fc70)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/pkg/genall/genall.go:171 +0x15e
main.main.func1(0xc000177b80, 0xc00010fc70, 0x5, 0x5, 0x0, 0x0)
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/cmd/controller-gen/main.go:176 +0xa6
github.com/spf13/cobra.(*Command).execute(0xc000177b80, 0xc0000d20d0, 0x5, 0x5, 0xc000177b80, 0xc0000d20d0)
	/Users/hypnoglow/go/pkg/mod/github.com/spf13/[email protected]/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc000177b80, 0xc00000fb60, 0x4, 0x0)
	/Users/hypnoglow/go/pkg/mod/github.com/spf13/[email protected]/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	/Users/hypnoglow/go/pkg/mod/github.com/spf13/[email protected]/command.go:864
main.main()
	/Users/hypnoglow/sources/github.com/kubernetes-sigs/controller-tools/cmd/controller-gen/main.go:200 +0x37e
make: *** [manifests] Error 2

Running make manifests with this PR applied:

λ make manifests
/Users/hypnoglow/go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/Users/hypnoglow/sources/github.com/hypnoglow/nonsense-operator/api/v1alpha1/zuga_types.go:39:24: encountered non-top-level struct (possibly embedded), those aren't allowed
Error: not all generators ran successfully
run `controller-gen crd:trivialVersions=true rbac:roleName=manager-role webhook paths=./... output:crd:artifacts:config=config/crd/bases -w` to see all available markers, or `controller-gen crd:trivialVersions=true rbac:roleName=manager-role webhook paths=./... output:crd:artifacts:config=config/crd/bases -h` for usage
make: *** [manifests] Error 1

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Mar 10, 2020
@hoffoo

hoffoo commented Apr 3, 2020

This is helpful, please review.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 2, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 1, 2020
@hypnoglow
Contributor Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 3, 2020
@vincepri vincepri (Member) left a comment

/approve
/assign @alvaroaleman

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: hypnoglow, vincepri

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 19, 2020
@alvaroaleman alvaroaleman (Member) left a comment

@hypnoglow can you add a testcase for this?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 17, 2020
@vincepri
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 30, 2020
@vincepri
Member

@hypnoglow do you have time to add a test case as @alvaroaleman suggested?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 28, 2021
@vincepri
Member

vincepri commented Mar 1, 2021

@hypnoglow Do you have time to add a test for this issue?

@hypnoglow
Contributor Author

hypnoglow commented Mar 1, 2021

Hey, sorry for not responding for long.

As I mentioned in the first message, I wasn't sure where and how I could add a test for this. It would be really helpful if you could guide me a little.

Maybe something has changed since I opened the PR and there is now an obvious place for the test; if so, I'll look into it in the coming days.

@georgettica

georgettica commented Mar 4, 2021

@hypnoglow I think the test can be added in https://github.com/kubernetes-sigs/controller-tools/tree/master/pkg/crd/testdata,

where you can add a nested object inside cronjob_types.go and show that it's generated.

@vincepri @alvaroaleman do you think this is a good way to go?

I'm posting this because I have something similar; I was going to ask how to go about doing it and found this issue.

Hope you get it merged soon!

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 3, 2021
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Development

Successfully merging this pull request may close these issues.

Nested anonymous structs cause panic when generating CRDs