packages: add sources for kubernetes-1.31 and ecr-credential-provider-1.31 #117

Merged: 2 commits into bottlerocket-os:develop on Sep 4, 2024

Conversation

@ginglis13 (Contributor) commented Aug 31, 2024

Issue number:

Description of changes:
This updates the boilerplate for the kubernetes-1.31 and ecr-credential-provider-1.31 packages to use the upstream 1.31 release sources.

Testing done:

  • aarch64 IPv4 conformance tests
  • aarch64 IPv6 conformance tests
  • aarch64 NVIDIA conformance tests
  • aarch64 NVIDIA smoke tests
  • x86_64 IPv4 conformance tests
  • x86_64 IPv6 conformance tests
  • x86_64 NVIDIA conformance tests
  • x86_64 NVIDIA smoke tests
 NAME                                          TYPE      STATE      PASSED  FAILED  SKIPPED  BUILD ID  LAST UPDATE
 aarch64-aws-k8s-131-conformance-test          Test      passed     408     0       6199               2024-08-31T01:50:14Z
 aarch64-aws-k8s-131-ipv6-conformance-test     Test      passed     408     0       6199               2024-08-31T01:53:31Z
 aarch64-aws-k8s-131-nvidia-conformance-test   Test      passed     408     0       6199               2024-08-31T02:00:30Z
 aarch64-aws-k8s-131-nvidia-nvidia-smoke-test  Test      passed     1       0       0                  2024-08-31T00:05:56Z
 aarch64-aws-k8s-131-instances                 Resource  completed                                     2024-08-31T00:03:48Z
 aarch64-aws-k8s-131-ipv6-instances            Resource  completed                                     2024-08-31T00:04:06Z
 aarch64-aws-k8s-131-nvidia-instances          Resource  completed                                     2024-08-31T00:03:56Z
 x86-64-aws-k8s-conformance-test               Test      passed     408     0       6199               2024-08-31T01:47:11Z
 x86-64-aws-k8s-ipv6-conformance-test          Test      passed     408     0       6199               2024-08-31T02:02:05Z
 x86-64-aws-k8s-nvidia-conformance-test        Test      passed     408     0       6199               2024-08-31T01:49:40Z
 x86-64-aws-k8s-nvidia-nvidia-smoke-test       Test      passed     1       0       0                  2024-08-31T00:05:50Z
 x86-64-aws-k8s-instances                      Resource  completed                                     2024-08-31T00:03:22Z
 x86-64-aws-k8s-ipv6-instances                 Resource  completed                                     2024-08-31T00:03:36Z
 x86-64-aws-k8s-nvidia-instances               Resource  completed                                     2024-08-31T00:03:44Z
  • EBS driver testing
KUBECONFIG=aarch64-aws-k8s-131.kubeconfig kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver
NAME                                  READY   STATUS    RESTARTS   AGE
ebs-csi-controller-7687fcccf8-2ffqz   6/6     Running   0          21s
ebs-csi-controller-7687fcccf8-gjq69   6/6     Running   0          21s
ebs-csi-node-6qxd5                    3/3     Running   0          21s
ebs-csi-node-g89x6                    3/3     Running   0          21s
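
As an additional check (a sketch, not part of the PR's recorded testing: the StorageClass name, object names, and image below are assumptions), dynamic provisioning can be exercised with a PVC and a pod that mounts it:

# Hypothetical follow-up: create a PVC and a pod that mounts it, then
# confirm the EBS CSI driver provisions the volume. Assumes a StorageClass
# named "ebs-sc" backed by ebs.csi.aws.com exists on the cluster.
KUBECONFIG=aarch64-aws-k8s-131.kubeconfig kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-smoke-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ebs-smoke-pod
spec:
  containers:
    - name: app
      image: public.ecr.aws/amazonlinux/amazonlinux:2023
      command: ["sleep", "3600"]
      volumeMounts:
        - name: ebs-vol
          mountPath: /data
  volumes:
    - name: ebs-vol
      persistentVolumeClaim:
        claimName: ebs-smoke-pvc
EOF
# The PVC should report Bound once the pod is scheduled (EBS StorageClasses
# typically use volumeBindingMode: WaitForFirstConsumer):
KUBECONFIG=aarch64-aws-k8s-131.kubeconfig kubectl get pvc ebs-smoke-pvc
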
  • ecr-credential-provider testing

Configured the credential provider through the Bottlerocket API:

apiclient apply <<EOF
[settings.kubernetes.credential-providers.ecr-credential-provider]
enabled = true
cache-duration = "30m"
image-patterns = [
  "*.dkr.ecr.us-east-2.amazonaws.com",
  "*.dkr.ecr.us-west-2.amazonaws.com"
]
EOF

followed by

apiclient set settings.aws.profile="ecr"
apiclient set settings.aws.config="<base64 encoded credentials for some 'ecr' profile>"
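
For reference, a sketch of how the base64 payload might be produced and the applied settings confirmed (the aws-config path is a hypothetical placeholder; any AWS config file defining an 'ecr' profile works):

# Encode the AWS config file without line wrapping (GNU coreutils base64):
apiclient set settings.aws.config="$(base64 -w0 ./aws-config)"
# Confirm the credential-provider settings took effect:
apiclient get settings.kubernetes.credential-providers
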
With a pod pulling from a private ECR repository, the kubelet events then show successful pulls:

  Normal   Pulled     54s               kubelet            Successfully pulled image "458358962224.dkr.ecr.us-west-2.amazonaws.com/alpine-clone:latest" in 95ms (95ms including waiting). Image size: 4088948 bytes.
  Normal   Pulled     38s               kubelet            Successfully pulled image "458358962224.dkr.ecr.us-west-2.amazonaws.com/alpine-clone:latest" in 127ms (127ms including waiting). Image size: 4088948 bytes.
  Normal   Pulling    8s (x4 over 56s)  kubelet            Pulling image "458358962224.dkr.ecr.us-west-2.amazonaws.com/alpine-clone:latest"
  Normal   Created    8s (x4 over 54s)  kubelet            Created container private-registry-container
  Normal   Started    8s (x4 over 54s)  kubelet            Started container private-registry-container
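
Reconstructed from those events (the exact manifest isn't in the PR; everything here beyond the image and container name is an assumption), the test pod would look roughly like:

# "private-registry-container" matches the pod/container name in the events;
# with no command set, kubectl run's default restartPolicy (Always) would
# also explain the repeated pulls/starts above if the container exits quickly.
kubectl run private-registry-container \
  --image=458358962224.dkr.ecr.us-west-2.amazonaws.com/alpine-clone:latest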

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@@ -1,5 +1,8 @@
[clarify."sigs.k8s.io/yaml"]
expression = "MIT AND BSD-3-Clause"
license-files = [
{ path = "LICENSE", hash = 0xcdf3ae00 },
{ path = "LICENSE", hash = 0x617d80bc },

A reviewer (Contributor) commented on this diff:

The license expression looks wrong, since this license indicates Apache-2.0.

I'd expect this clarify.toml entry to match the one under k8s-1.31. Are they using different versions of the module?

@ginglis13 (Contributor, Author) replied:

Both kubernetes-1.31 and cloud-provider-aws upgrade https://github.com/kubernetes-sigs/yaml from 1.3 to 1.4:

kubernetes-sigs/yaml v1.4 adds goyaml.v2 and goyaml.v3 subdirectories, which is the cause of this license change.
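
To see where the extra licenses come from, the v1.4 tree can be inspected from an upstream checkout (a sketch; the clone location is arbitrary):

# Fetch the v1.4.0 tag of kubernetes-sigs/yaml and list its license files;
# expect the root LICENSE plus license files under the goyaml.v2/ and
# goyaml.v3/ fork subdirectories.
git clone --depth 1 --branch v1.4.0 https://github.com/kubernetes-sigs/yaml
find yaml -iname 'LICENSE*'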

> The license expression looks wrong, since this license indicates Apache-2.0.

Good catch; it looks like they switched from just MIT to MIT and Apache-2.0 between the 1.3 and 1.4 releases.

> I'd expect this clarify.toml entry to match the one under k8s-1.31.

I made these changes mostly to appease build failures; making that change caused failures, which I concluded were due to different usages of the underlying library. Would the correct approach here be to include all of these licenses in %files?

@ginglis13 (Contributor, Author) replied:

As for

> Are they using different versions of the module?

I see in go.mod for both projects that they're using both goyaml.v2 and goyaml.v3.

(the sigs yaml package forks gopkg.in/goyaml v2 and v3 to the goyaml.v2 and goyaml.v3 subdirectories, respectively)

@ginglis13 (Contributor, Author) added:

Kubernetes has sigs.k8s.io/yaml vendored, which IIUC is how bottlerocket-license-tool picks up its forks of gopkg.in/goyaml.

For cloud-provider-aws, go mod vendor results in no goyaml.v3 being vendored:

$ ls vendor/sigs.k8s.io/yaml
code-of-conduct.md  CONTRIBUTING.md  fields.go  goyaml.v2  LICENSE  OWNERS  README.md  RELEASE.md  SECURITY_CONTACTS  yaml.go  yaml_go110.go

So it must not be using it.
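
The same conclusion can be reached from the module graph without vendoring (run from each project's repo root):

# Prints the shortest import chain to the goyaml.v3 fork, if any;
# "(main module does not need package ...)" means the fork is unused
# and 'go mod vendor' will not copy it.
go mod why sigs.k8s.io/yaml/goyaml.v3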

@ginglis13 (Contributor, Author) commented:

^ correct license expression for sigs.k8s.io/yaml

@ginglis13 (Contributor, Author) commented:

^ correct license expression for sigs.k8s.io/yaml to retain BSD-3-Clause

@ginglis13 merged commit 164df0b into bottlerocket-os:develop on Sep 4, 2024
2 checks passed