
Any plans to support binary releases for controller-gen? #500

Closed
jaypipes opened this issue Oct 5, 2020 · 35 comments · Fixed by #1026

@jaypipes

jaypipes commented Oct 5, 2020

First, I want to say thanks for the great work on the controller-tools repo. It's generally awesome. :)

I work on a project (github.com/aws/aws-controllers-k8s) that makes heavy use of controller-gen to augment our own code generation. However, because there are no binary releases of controller-gen, we've seen one issue crop up.

Since the only way to use the controller-gen tool is to "install" it with go get, if a contributor to our project doesn't have controller-tools locally and runs go get to install it, that invariably ends up modifying the go.mod/go.sum files in our source repository. These end up as // indirect entries in go.mod, but it's still annoying to have to tell contributors to undo the changes to their go.mod/go.sum files caused by this.

I was wondering if there are any plans to produce binary artifacts for the controller-gen tool? This would certainly make our lives a bit easier on the downstream consumer side of things.

If not binary artifacts, perhaps publishing Docker images containing pre-built controller-gen binaries?
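A minimal sketch of one way to sidestep the go.mod churn, assuming Go 1.16+ (where `go install pkg@version` runs in module-aware mode without touching the current module); the pinned version and bin/ location below are illustrative:

```
# Install a pinned controller-gen into a project-local bin/ without
# modifying the consuming project's go.mod or go.sum (Go 1.16+).
GOBIN="$(pwd)/bin" go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.4.0
./bin/controller-gen --version
```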

@dhellmann

In metal3 we ended up creating a script to install controller-gen so we could create a temporary directory, run go mod init there, then run go get. https://github.com/metal3-io/baremetal-operator/blob/master/hack/install-controller-gen.sh

We would happily consume binary releases, too, if they existed.
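That script has since moved (a later comment notes the link now 404s), so here is a hedged reconstruction of the approach it describes, not the actual metal3 script: a throwaway module so that go get records its dependency edits in a temporary go.mod instead of the caller's.

```
#!/usr/bin/env bash
# Sketch of the temporary-module install trick; the version pin is illustrative.
set -eu
VERSION="v0.4.0"
TMP_DIR=$(mktemp -d)
trap 'rm -rf "${TMP_DIR}"' EXIT
cd "${TMP_DIR}"
go mod init tmp   # throwaway module; go.mod/go.sum edits land here, not in your repo
# On Go < 1.18, go get builds the binary and installs it into $GOBIN (or $GOPATH/bin).
# On newer Go, use `go install sigs.k8s.io/controller-tools/cmd/controller-gen@${VERSION}` instead.
GO111MODULE=on go get "sigs.k8s.io/controller-tools/cmd/controller-gen@${VERSION}"
```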

@jaypipes
Author

jaypipes commented Oct 5, 2020

> In metal3 we ended up creating a script to install controller-gen so we could create a temporary directory, run go mod init there, then run go get. https://github.com/metal3-io/baremetal-operator/blob/master/hack/install-controller-gen.sh
>
> We would happily consume binary releases, too, if they existed.

Sweet. Consider that script copied ;)

Thanks @dhellmann!

@dhellmann

> In metal3 we ended up creating a script to install controller-gen so we could create a temporary directory, run go mod init there, then run go get. https://github.com/metal3-io/baremetal-operator/blob/master/hack/install-controller-gen.sh
> We would happily consume binary releases, too, if they existed.
>
> Sweet. Consider that script copied ;)
>
> Thanks @dhellmann!

To give credit where due, I'm pretty sure that the approach in the script actually came from the commands kubebuilder put in the Makefile it generated for us. We moved it to a script as part of hacking it to ensure it always installed exactly the version we wanted, without overwriting a version the user may have already had in $GOBIN.

jaypipes added a commit to jaypipes/aws-controllers-k8s that referenced this issue Oct 5, 2020
We need to ensure that all ACK developers are using the same
controller-gen version; otherwise we get into situations where:

1) go.mod/go.sum change unnecessarily
2) The annotations for things like CRDs will be different for generated
   YAML manifests

This patch makes v0.4.0 of the controller-tools repo and controller-gen
binary our target version. It changes the `ensure_controller_gen` Bash
function to ensure that controller-gen at that version is installed and
refuses to proceed with building controllers if there is a different
version.

```
jaypipes@thelio:~/go/src/github.com/aws/aws-controllers-k8s$ controller-gen --version
Version: v0.3.1-0.20200716001835-4a903ddb7005
jaypipes@thelio:~/go/src/github.com/aws/aws-controllers-k8s$ make build-controller SERVICE=dynamodb
make[1]: Entering directory '/home/jaypipes/go/src/github.com/aws/aws-controllers-k8s'
go build -tags codegen -ldflags "-X main.version="v0.0.0" -X main.buildHash=598a3e29bb514d98660d04a088641922ccc020c9 -X main.buildDate=2020-10-05T16:52:56Z" -o bin/ack-generate cmd/ack-generate/main.go
./scripts/build-controller.sh dynamodb
FAIL: Existing version of controller-gen Version: v0.3.1-0.20200716001835-4a903ddb7005, required version is v0.4.0.
FAIL: Please uninstall controller-gen and re-run this script, which will install the required version.
make[1]: *** [Makefile:49: build-controller] Error 1
make[1]: Leaving directory '/home/jaypipes/go/src/github.com/aws/aws-controllers-k8s'
jaypipes@thelio:~/go/src/github.com/aws/aws-controllers-k8s$ sudo rm -f `which controller-gen`
jaypipes@thelio:~/go/src/github.com/aws/aws-controllers-k8s$ make build-controller SERVICE=dynamodb
make[1]: Entering directory '/home/jaypipes/go/src/github.com/aws/aws-controllers-k8s'
go build -tags codegen -ldflags "-X main.version="v0.0.0" -X main.buildHash=598a3e29bb514d98660d04a088641922ccc020c9 -X main.buildDate=2020-10-05T16:53:07Z" -o bin/ack-generate cmd/ack-generate/main.go
./scripts/build-controller.sh dynamodb
go: creating new go.mod: module tmp
go: found sigs.k8s.io/controller-tools/cmd/controller-gen in sigs.k8s.io/controller-tools v0.4.0
****************************************************************************
WARNING: You may need to reload your Bash shell and path. If you see an
         error like this following:

Error: couldn't find github.com/aws/aws-sdk-go in the go.mod require block

simply reload your Bash shell with `exec bash` and then re-run whichever
command you were running.
****************************************************************************
Building Kubernetes API objects for dynamodb
Error: couldn't find github.com/aws/aws-sdk-go in the go.mod require block
make[1]: *** [Makefile:49: build-controller] Error 2
make[1]: Leaving directory '/home/jaypipes/go/src/github.com/aws/aws-controllers-k8s'
jaypipes@thelio:~/go/src/github.com/aws/aws-controllers-k8s$ exec bash
jaypipes@thelio:~/go/src/github.com/aws/aws-controllers-k8s$ make build-controller SERVICE=dynamodb
make[1]: Entering directory '/home/jaypipes/go/src/github.com/aws/aws-controllers-k8s'
go build -tags codegen -ldflags "-X main.version="v0.0.0" -X main.buildHash=598a3e29bb514d98660d04a088641922ccc020c9 -X main.buildDate=2020-10-05T16:53:21Z" -o bin/ack-generate cmd/ack-generate/main.go
./scripts/build-controller.sh dynamodb
Building Kubernetes API objects for dynamodb
Generating deepcopy code for dynamodb
Generating custom resource definitions for dynamodb
Building service controller for dynamodb
Generating RBAC manifests for dynamodb
Running gofmt against generated code for dynamodb
make[1]: Leaving directory '/home/jaypipes/go/src/github.com/aws/aws-controllers-k8s'
```

Issue aws-controllers-k8s#349

Related: kubernetes-sigs/controller-tools#500
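For reference, a minimal sketch of what an `ensure_controller_gen`-style version check like the one described in the commit message above might look like; this is illustrative, not the actual ACK script:

```
# Illustrative version-pinning check, not the actual ACK ensure_controller_gen.
CONTROLLER_GEN_REQUIRED_VERSION="v0.4.0"

ensure_controller_gen() {
    if ! command -v controller-gen >/dev/null 2>&1; then
        # Not installed yet: install the pinned version (Go 1.16+ syntax).
        go install "sigs.k8s.io/controller-tools/cmd/controller-gen@${CONTROLLER_GEN_REQUIRED_VERSION}"
        return $?
    fi
    local installed
    installed=$(controller-gen --version | awk '{print $2}')   # output looks like "Version: v0.4.0"
    if [ "${installed}" != "${CONTROLLER_GEN_REQUIRED_VERSION}" ]; then
        echo "FAIL: Existing controller-gen is ${installed}, required version is ${CONTROLLER_GEN_REQUIRED_VERSION}." >&2
        echo "FAIL: Please uninstall controller-gen and re-run this script, which will install the required version." >&2
        return 1
    fi
}
```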
jaypipes added a commit to jaypipes/aws-controllers-k8s that referenced this issue Oct 6, 2020
@estroz
Contributor

estroz commented Nov 11, 2020

Bumping this. Perhaps tar all three binaries per platform so the release artifact count doesn't blow up, e.g.:

```
$ tar --list -f controller-tools_linux_amd64.tar.gz
controller-gen
helpgen
type-scaffold
```

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 9, 2021
@bhiravabhatla

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 11, 2021
@jgallucci32

Another reason for having this feature is using controller-tools behind a company firewall that rate-limits/throttles downloads from GitHub and other sources. It can sometimes take an hour to get the controller tools onto a box for basic development. We started pre-packaging them into containers so we only have to download once; we're also looking at deploying a Go proxy to cache them. Other methods involve creating a vendor directory in a local git repo and then installing the controller binary from there. These all seem like tedious workarounds for not having an offline installer binary.
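For the Go proxy route mentioned above, a hedged example of pointing module downloads at an internal caching proxy (such as Athens); the proxy host below is a placeholder:

```
# Route module downloads through an internal caching proxy so controller-tools
# only has to be fetched from GitHub once; the host below is a placeholder.
go env -w GOPROXY=https://goproxy.internal.example.com,direct
go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.11.3
```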

@lbscorpio

+1. Binary releases would save time compared to go get.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 16, 2021
@ob

ob commented Jul 17, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 15, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jaypipes
Author

/reopen

@k8s-ci-robot k8s-ci-robot reopened this May 16, 2023
@k8s-ci-robot
Contributor

@jaypipes: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jaypipes
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 16, 2023
@jaypipes
Author

I've been attempting to install controller-gen (and conversion-gen from k8s code-generator) in a Docker container in order to standardize my organization's use of controller-gen (there are eleventy billion different versions of it used throughout the organization and placing a vetted version in a canonical devtest Docker image is the best way to reduce that variance).

Unfortunately, no matter how much I try, I simply cannot get controller-gen to work properly from inside a Docker container. I've tried all ways of "installing" controller-gen (using @dhellmann's tmpdir go mod technique, using GOBIN=/bin go install with various static binary flags). I am able to get controller-gen "installed" but invariably when executing any command other than controller-gen --version, I get the following:

```
$ docker run -it -v $pwd:/test nc-aks-devtest:latest bash
root [ / ]# controller-gen
Error: load packages in root "/": err: go resolves to executable in current directory (./go): stderr:
```

Here's the Dockerfile I'm using:

```
FROM <REDACTED> as builder
WORKDIR /workspace

RUN dnf install -y \
    ca-certificates \
    tar \
    curl \
    git \
    bash \
    wget

RUN mkdir -p bin

ARG K8S_RELEASE=latest
RUN rm -f $(command -v kubectl) && \
    export KUBECTL_VERSION=$(curl -L https://dl.k8s.io/release/${K8S_RELEASE}.txt) && \
    wget -q https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl -O bin/kubectl && \
    chmod +x bin/kubectl

ARG KIND_VERSION=v0.18.0
RUN wget -q -O bin/kind https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-amd64 && \
    chmod +x bin/kind

ARG KUSTOMIZE_VERSION=v5.0.3
RUN export TARBALL="kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz" && \
    export RELEASE_URL="https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/${TARBALL}" && \
    wget -q ${RELEASE_URL} && \
    tar xzf ${TARBALL} -C bin && rm ${TARBALL}

ARG JQ_VERSION=jq-1.6
RUN wget -q "https://github.com/stedolan/jq/releases/download/${JQ_VERSION}/jq-linux64" -O bin/jq && \
    chmod +x bin/jq

ARG YQ_VERSION=v4.33.3
RUN wget -q "https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64" -O bin/yq && \
    chmod +x bin/yq

ARG HELM_VERSION=v3.12.0
RUN export TARBALL="helm-${HELM_VERSION}-linux-amd64.tar.gz" && \
    export RELEASE_URL="https://get.helm.sh/${TARBALL}" && \
    wget -q ${RELEASE_URL} && \
    tar xzf ${TARBALL} -C bin && \
    mv bin/linux-amd64/helm bin/helm && rm -rf bin/linux-amd64 && \
    rm ${TARBALL}

ARG CLUSTERCTL_VERSION=v1.3.5
RUN curl -sLo bin/clusterctl https://github.com/kubernetes-sigs/cluster-api/releases/download/${CLUSTERCTL_VERSION}/clusterctl-linux-amd64 && \
    chmod +x bin/clusterctl

ENV GOPATH /go
ENV PATH /usr/local/go/bin:$GOPATH/bin:$PATH

ARG GO_VERSION=1.19.9
RUN export TARBALL="go${GO_VERSION}.linux-amd64.tar.gz" && \
    wget -q "https://go.dev/dl/${TARBALL}" && \
    tar xzf "${TARBALL}" -C /usr/local && \
    rm "${TARBALL}"

ARG CONTROLLER_GEN_VERSION=0.11.3
RUN go install "sigs.k8s.io/controller-tools/cmd/controller-gen@v${CONTROLLER_GEN_VERSION}" && cp $(command -v controller-gen) bin/controller-gen

ARG CONVERSION_GEN_VERSION=0.23.6
RUN go install "k8s.io/code-generator/cmd/conversion-gen@v${CONVERSION_GEN_VERSION}" && cp $(command -v conversion-gen) bin/conversion-gen

ARG KFILT_VERSION="v0.0.7"
RUN go install github.com/ryane/kfilt@${KFILT_VERSION}

FROM <REDACTED> as final
COPY --from=builder /workspace/bin /usr/local/bin
# NOTE(jaypipes): gettext installs envsubst
RUN dnf install -y gettext
```

I've spent hours trying to get this working and I'm kind of at the end of my rope. I'm wondering whether @dhellmann or anyone else has been able to solve this dilemma. Certainly having binary releases of controller-gen would (I think) make life a whole lot easier, no?

@jaypipes
Author

In looking for a reason why that output shows up when running controller-gen inside a Docker container, I stumbled across golang/go#43724. Having read through that and all its comments, I suspect that because controller-gen has a //go:generate directive that itself calls go, there is something wonky going on? Does this mean that controller-gen can never be installed as a binary, since it depends on the go executable?

@jaypipes
Author

Note that if I don't use a multi-stage Docker build, and instead remove these lines from the Dockerfile:

```
FROM <REDACTED> as final
COPY --from=builder /workspace/bin /usr/local/bin
# NOTE(jaypipes): gettext installs envsubst
RUN dnf install -y gettext
```

I do get a working controller-gen in the resulting Docker image. The only problem is that this causes the resulting image to balloon from 290MB to 1.6GB :(

@dhellmann

(Long-shot disclaimer)

The fact that the command is being run in / and says it finds go in the current directory makes me wonder if the issue is this line

ENV GOPATH /go

Is it possible that directory is being picked up as the go executable by the //go:generate logic?

@dims
Member

dims commented May 17, 2023

@jaypipes did you already try GODEBUG=execerrdot=0 when you run controller-gen?

@jaypipes
Author

> @jaypipes did you already try GODEBUG=execerrdot=0 when you run controller-gen?
>
> * https://github.com/search?type=code&q=GODEBUG%3Dexecerrdot

No, I had no idea about that... I can give it a try.
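For anyone else hitting the same error, a hedged example of what that workaround looks like; it relaxes the Go 1.19 hardening around executables resolved from the current directory, and the generator arguments shown are illustrative:

```
# Workaround, not a fix: disable the Go 1.19 "executable in current directory"
# check for this one invocation; the generator arguments are illustrative.
GODEBUG=execerrdot=0 controller-gen object paths=./...
```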

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Mar 20, 2024
@sbueringer
Member

/reopen

/remove-lifecycle rotten

/assign

@k8s-ci-robot
Contributor

@sbueringer: Reopened this issue.

In response to this:

/reopen

/remove-lifecycle rotten

/assign

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 9, 2024
@insanity54

Hi, just stopping by and sharing an anecdote. I'm new to contributing to Kubernetes projects (currently interested in kubernetes-sigs/external-dns).

I'm spinning up a local dev environment and going through the getting-started guide: https://kubernetes-sigs.github.io/external-dns/v0.14.2/contributing/getting-started/

Step 2 is `make build`, which I run. I get an error about a missing dependency:

which: no controller-gen

So I search the regular places: pamac, brew, GitHub releases. No matches.

So I search the issue trackers. Devs have been struggling with this issue since at least 2020. There's a documentation PR from 2021 which unfortunately never got merged.

Then I find this issue, where a suggested workaround, https://github.com/metal3-io/baremetal-operator/blob/main/hack/install-controller-gen.sh, 404s.

I'm blocked and I'm frustrated.

Granted, there is a solution in the PR at https://github.com/kubernetes-sigs/controller-tools/pull/537/files, but I'll be honest: I don't understand golang's GOPATH/GO111MODULE yet. I ran those commands and I still don't have controller-gen.

I wish there were a controller-gen binary.

@jaypipes
Author

Hi @insanity54, sorry that you are frustrated, I understand how you feel.

Unfortunately, I don't think the project is ever going to release binaries for controller-gen. Here is what I end up doing for most of my projects (since many projects I work on require different versions of controller-gen...):

```
CONTROLLER_GEN_VERSION=v0.15.0
cd $MY_PROJECT_DIR
mkdir -p bin/
GOBIN="$(pwd)/bin" go install "sigs.k8s.io/controller-tools/cmd/controller-gen@${CONTROLLER_GEN_VERSION}"
```

and then I refer to the controller-gen binary by its path:

```
CONTROLLER_GEN=bin/controller-gen
```

and call the binary:

```
$CONTROLLER_GEN paths=./... crd:trivialVersions=true rbac:roleName=controller-perms output:crd:artifacts:config=config/crd/bases
```

@sbueringer
Member

Once I get that far down my TODO list, it will happen :)

(can't promise an ETA though, just too much going on)

@sbueringer
Member

Finally found the time to implement it. Starting with the next CT release we will publish controller-gen binaries. Now it's just up to folks to consume the binaries from the release attachments.

@jaypipes
Author

@sbueringer you rock, mate! :) thank you so much!

@sbueringer
Member

sbueringer commented Aug 13, 2024

I did a quick check and everything looks fine, but if you (or someone else) want to try them out: https://github.com/kubernetes-sigs/controller-tools/releases/tag/v0.16.0-beta.0
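For anyone trying the attached binaries, a hedged sketch of fetching one from the release page; the asset file name below is an assumption, so check the names listed under the release's assets:

```
# Fetch a released controller-gen binary; the asset name is an assumption and
# should be verified against the assets listed on the release page.
VERSION=v0.16.0-beta.0
curl -sSLo controller-gen \
  "https://github.com/kubernetes-sigs/controller-tools/releases/download/${VERSION}/controller-gen-linux-amd64"
chmod +x controller-gen
./controller-gen --version
```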
