
how to pull the image from China network? #6335

Closed · dhcn opened this issue Oct 16, 2020 · 44 comments
Labels: kind/support (Categorizes issue or PR as a support question.)

Comments

@dhcn

dhcn commented Oct 16, 2020

k8s.gcr.io/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
How can I pull this image from inside the Chinese network?
Does any mirror exist for it?

/triage support

@dhcn dhcn added the kind/support Categorizes issue or PR as a support question. label Oct 16, 2020
@k8s-ci-robot
Contributor

@dhcn: The label(s) triage/support cannot be applied, because the repository doesn't have them

In response to this:

k8s.gcr.io/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
How can I pull this image from inside the Chinese network?
Does any mirror exist for it?

/triage support

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@aledbf
Member

aledbf commented Oct 16, 2020

Please try asia.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
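That is (note the k8s-artifacts-prod path segment):

docker pull asia.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f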

@dhcn
Author

dhcn commented Oct 16, 2020

Please try asia.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f
Many thanks, but it can't connect:

docker pull asia.gcr.io/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f

Error response from daemon: Get https://asia.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

@aledbf
Member

aledbf commented Oct 16, 2020

Please test
gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f

or

gcr.io/k8s-staging-ingress-nginx/controller@sha256:5db5eeac72170fbe81eac8d214dcc48f6f0992d4d7351c0438939b711039e6de

@aledbf
Member

aledbf commented Oct 16, 2020

From this test, https://viewdns.info/chinesefirewall/?domain=gcr.io, it should be available.

@gitbeyond

[root@bj-k8s-master-170 ~]# docker pull asia.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2
Error response from daemon: Get https://asia.gcr.io/v2/: dial tcp 74.125.204.82:443: connect: connection timed out

I got the IP 172.217.212.82 from https://viewdns.info/, but in China none of the gcr.io hosts work.

It's very painful. Please pay attention to this issue. Thanks.

[root@bj-k8s-master-170 ~]# docker pull k8s.gcr.io/ingress-nginx/controller:v0.40.0
Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 172.217.212.82:443: connect: connection timed out

@aledbf
Member

aledbf commented Oct 26, 2020

@gitbeyond @dhcn can you please try the mirrors provided by Azure?
You can find examples of pulling ingress-nginx here:
https://github.com/Azure/container-service-for-azure-china/blob/master/aks/README.md#22-container-registry-proxy

@gitbeyond

[root@docker-182 ~]# docker pull k8sgcr.azk8s.cn/ingress-nginx/controller:v0.35.0
Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx/1.14.0 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n"


[root@docker-182 ~]# docker pull k8sgcr.azk8s.cn/autoscaling/cluster-autoscaler:v1.18.2
Error response from daemon: error parsing HTTP 403 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>403 Forbidden</title></head>\r\n<body bgcolor=\"white\">\r\n<center><h1>403 Forbidden</h1></center>\r\n<hr><center>nginx/1.14.0 (Ubuntu)</center>\r\n</body>\r\n</html>\r\n"

According to the note below, public IPs cannot access it any more. Thank you for your reply.

Note: currently *.azk8s.cn could only be accessed by Azure China IP, we don't provide public outside access any more. If you have such requirement to whitelist your IP, please contact [email protected], provide your IP address, we will decide whether to whitelist your IP per your reasonable requirement, thanks for understanding.

I think the best solution would be to upload a copy to a mirror hosted in China.

@hyc3z

hyc3z commented Nov 3, 2020

Is it possible to push the image to hub.docker.com? We can reach that site from mainland China. Please reply!

@aledbf
Member

aledbf commented Nov 4, 2020

@hyc3z I am sorry, but no. This is a known issue for other projects in the Kubernetes organization.
Moving to Docker Hub is not a long-term solution due to the rate limits being introduced there.

@lyr5333

lyr5333 commented Dec 15, 2020

Download the image on a local machine, save it to a tar with docker save, copy it to the server, and load it there with docker load; then push it to the cluster's private registry, such as Harbor. If you cannot download it locally, there are privately maintained ingress-nginx images on Docker Hub that you can pull; both the Aliyun and Tsinghua mirrors carry them too.

@SnailDove

SnailDove commented Jan 26, 2021

Pulling from the Aliyun mirror still fails.

I have the same problem. One piece of advice comes from minikube start --help:

--image-repository='': Alternative image repository to pull docker images from. This can be used when you have
limited access to gcr.io. Set it to "auto" to let minikube decide one for you. For Chinese mainland users, you may use
local gcr.io mirrors such as registry.cn-hangzhou.aliyuncs.com/google_containers

So I start minikube with this command:

minikube start --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers' --vm=true

and the terminal shows me:

😄  minikube v1.16.0 on Darwin 11.1
✨  Automatically selected the hyperkit driver. Other choices: vmware, vmwarefusion
✅  Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
👍  Starting control plane node minikube in cluster minikube
🔥  Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.0 on Docker 20.10.0 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

So I happily type the command minikube addons enable ingress. But after several minutes, it shows:

❌  Exiting due to MK_ENABLE: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]

😿  If the above advice does not help, please let us know: 
👉  https://github.com/kubernetes/minikube/issues/new/choose

Then I list the pods with kubectl get pods -A:

NAMESPACE     NAME                                        READY   STATUS              RESTARTS   AGE
kube-system   coredns-54d67798b7-d5vwm                    1/1     Running             0          7m7s
kube-system   etcd-minikube                               1/1     Running             0          7m22s
kube-system   ingress-nginx-admission-create-44759        0/1     ImagePullBackOff    0          6m46s
kube-system   ingress-nginx-admission-patch-sp948         0/1     ImagePullBackOff    0          6m46s
kube-system   ingress-nginx-controller-5f568d55f8-dtrmv   0/1     ContainerCreating   0          6m46s
kube-system   kube-apiserver-minikube                     1/1     Running             0          7m22s
kube-system   kube-controller-manager-minikube            1/1     Running             0          7m22s
kube-system   kube-proxy-chz6n                            1/1     Running             0          7m7s
kube-system   kube-scheduler-minikube                     1/1     Running             0          7m22s
kube-system   storage-provisioner                         1/1     Running             1          7m22s

To get more information, I execute kubectl describe pod ingress-nginx-admission-create-44759 -n=kube-system:

Events:
  Type     Reason       Age                    From               Message
  ----     ------       ----                   ----               -------
  Normal   Scheduled    28m                    default-scheduler  Successfully assigned kube-system/ingress-nginx-admission-create-44759 to minikube
  Warning  FailedMount  28m                    kubelet            MountVolume.SetUp failed for volume "ingress-nginx-admission-token-79ldg" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulling      27m (x4 over 28m)      kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2"
  Warning  Failed       27m (x4 over 28m)      kubelet            Failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed       27m (x4 over 28m)      kubelet            Error: ErrImagePull
  Normal   BackOff      18m (x43 over 28m)     kubelet            Back-off pulling image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2"
  Warning  Failed       3m16s (x109 over 28m)  kubelet            Error: ImagePullBackOff

The key line is: "Failed to pull image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"

The result of kubectl describe pod ingress-nginx-controller-5f568d55f8-dtrmv -n=kube-system:

Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    50m                   default-scheduler  Successfully assigned kube-system/ingress-nginx-controller-5f568d55f8-dtrmv to minikube
  Warning  FailedMount  27m (x4 over 41m)     kubelet            Unable to attach or mount volumes: unmounted volumes=[webhook-cert], unattached volumes=[ingress-nginx-token-gmz69 webhook-cert]: timed out waiting for the condition
  Warning  FailedMount  19m (x23 over 50m)    kubelet            MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
  Warning  FailedMount  5m17s (x14 over 48m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-gmz69]: timed out waiting for the condition

Even after running docker login, it shows the same events. Do you have any suggestions?

@selwynshen

Waiting for any solutions...

@schnell18

schnell18 commented Feb 14, 2021

The problem is caused by these two images failing to pull:

  • registry.cn-hangzhou.aliyuncs.com/google_containers/controller:v0.40.2
  • registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.2.2

These two images are absent from the Aliyun registry. The long repository name is caused by setting minikube's image-repository option. In fact, registry.cn-hangzhou.aliyuncs.com can be shortened to registry.aliyuncs.com. According to the deployment YAML of the official NGINX ingress controller, the second image is hosted on docker.io rather than k8s.gcr.io; this seems to be a mistake made by minikube.

For the first image, pull k8s.gcr.io/ingress-nginx/controller:v0.40.2 on a machine that can reach it and use docker save to save it to a file, then transfer the file to the worker node and use docker load to load the image. Finally, make sure the image keeps the k8s.gcr.io/ingress-nginx repository prefix (retag it with docker tag if needed).

For the second image, change the repository prefix to docker.io/jettech. Then you should be able to pull the image successfully.

This is far from a decent solution, but it should keep you going.
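Putting that first workaround into commands (a sketch; the worker-node address is a placeholder):

# On a machine that can reach k8s.gcr.io (e.g. behind a proxy or abroad):
docker pull k8s.gcr.io/ingress-nginx/controller:v0.40.2
docker save k8s.gcr.io/ingress-nginx/controller:v0.40.2 -o controller-v0.40.2.tar

# Copy the tarball to the worker node:
scp controller-v0.40.2.tar user@worker-node:/tmp/

# On the worker node, load it; the k8s.gcr.io/ingress-nginx prefix is preserved,
# so no retag is needed (use docker tag if you pulled it under a different name):
docker load -i /tmp/controller-v0.40.2.tar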

@SnailDove

SnailDove commented Mar 24, 2021

@schnell18 Thanks for your advice! But the result is still disappointing.
I did as you suggested, as follows:

sudo docker pull registry.aliyuncs.com/google_containers/ingress-nginx/controller:v0.40.2@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f

But so far it doesn't work, even though I have run docker login:

Error response from daemon: pull access denied for registry.aliyuncs.com/google_containers/ingress-nginx/controller, repository does not exist or may require 'docker login': denied: requested access to the resource is denied 

@wswind

wswind commented Mar 24, 2021

@schnell18 Thanks for your advice! But the result is still disappointing. […]

@SnailDove this might help: kubernetes/minikube#10612 (comment)

@degibenz

I have reproduced the same problem from the Yandex Cloud network (VPN and VPC).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 28, 2021
@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 6, 2021
@invzhi

invzhi commented Sep 1, 2021

I found a way to work around this by configuring the addon to use custom registries and images.

Steps below:

  1. Find a Docker image to use instead of k8s.gcr.io/ingress-nginx/controller:xxx. I use bitnami/nginx-ingress-controller on docker.io.
  2. Make sure that image can be pulled: try docker pull docker.io/bitnami/nginx-ingress-controller first.
  3. Enable ingress on minikube with custom registries and images. For example: minikube addons enable ingress --images="IngressController=bitnami/nginx-ingress-controller:latest" --registries="IngressController=docker.io".

@Crazyigor1987

Managed to fix this by changing the image location from "k8s.gcr.io" to "k8sgcr.azk8s.cn", as described here:
https://github.com/Azure/container-service-for-azure-china/blob/master/aks/README.md

@wswind

wswind commented Nov 30, 2021

Managed to fix this by changing the image location from "k8s.gcr.io" to "k8sgcr.azk8s.cn", as described here:
https://github.com/Azure/container-service-for-azure-china/blob/master/aks/README.md

I don't think this will work. The Azure proxy only serves servers created in Azure China; it has not been available to others since last April.

@plentifullee

It's 2022 and we still can't pull a simple nginx-ingress-controller image in China. Sad.

@longwuyuan
Contributor

Are you denied access, or is it slow? Show the command and output of time docker pull k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de

@tao12345666333
Member

tao12345666333 commented Jan 13, 2022

The image repository we are using now is on Google Cloud; because of the GFW, you need a proxy.
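Note that docker pull is performed by the Docker daemon, so the proxy must be configured for the daemon itself, not just in your shell. A minimal sketch via a systemd drop-in, assuming a local proxy at 127.0.0.1:7890 (the address is a placeholder):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:7890"
Environment="HTTPS_PROXY=http://127.0.0.1:7890"
Environment="NO_PROXY=localhost,127.0.0.1"

# Then reload the unit files and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker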

@xwjahahahaha

xwjahahahaha commented Feb 5, 2022

You can work around this with images that others have re-uploaded to Docker Hub; see my blog for details. The YAML I use for the v1.1.1 ingress controller is as follows:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: anjia0532/google-containers.ingress-nginx.controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: anjia0532/google-containers.ingress-nginx.kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.15
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.1
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: anjia0532/google-containers.ingress-nginx.kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
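Save the manifest above to a file and apply it in one step (the filename is a placeholder):

kubectl apply -f ingress-nginx-v1.1.1.yaml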

@rthamrin

Still no solution for the issue.

@redek91

redek91 commented Apr 28, 2022

I also had a problem with this yesterday. It turns out my firewall blocked the "k8s.gcr.io" domain... I lost hours to this.
I finally managed to deploy after adding "k8s.gcr.io" to a domain/URL whitelist. I hope this helps you.

If you are on Windows, an ipconfig /flushdns might help after adding the domain to the whitelist.

PS: I just realized this issue was linked from another issue, which had this problem from outside China. I think only a VPN could help in that case. Sorry.

@rthamrin

I tried using a VPN, but it doesn't work. However, this might help.

@afresh

afresh commented May 6, 2022

I tried using a VPN, but it doesn't work. However, this might help.

Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"app.kubernetes.io/part-of\":\"ingress-nginx\",\"app.kubernetes.io/version\":\"1.2.0\"},\"name\":\"ingress-nginx-admission-create\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"app.kubernetes.io/part-of\":\"ingress-nginx\",\"app.kubernetes.io/version\":\"1.2.0\"},\"name\":\"ingress-nginx-admission-create\"},\"spec\":{\"containers\":[{\"args\":[\"create\",\"--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc\",\"--namespace=$(POD_NAMESPACE)\",\"--secret-name=ingress-nginx-admission\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/liangjw/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"create\",\"securityContext\":{\"allowPrivilegeEscalation\":false}}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"fsGroup\":2000,\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"create"}],"containers":[{"image":"docker.io/liangjw/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70","name":"create"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-create", Namespace: "ingress-nginx"
for: "https://cdn.jsdelivr.net/gh/kade-code/k8s-mirror@master/deploy3.yaml": Job.batch "ingress-nginx-admission-create" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-create", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/part-of":"ingress-nginx", "app.kubernetes.io/version":"1.2.0", "controller-uid":"38623cdd-83ed-4af8-b0c9-ba539d607f9d", "job-name":"ingress-nginx-admission-create"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"create", Image:"docker.io/liangjw/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70", Command:[]string(nil), Args:[]string{"create", "--host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc", "--namespace=$(POD_NAMESPACE)", "--secret-name=ingress-nginx-admission"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc006a0e460)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(0xc00a03e360), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc00106ed20), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00d73a380), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil)}}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"batch/v1\",\"kind\":\"Job\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"app.kubernetes.io/part-of\":\"ingress-nginx\",\"app.kubernetes.io/version\":\"1.2.0\"},\"name\":\"ingress-nginx-admission-patch\",\"namespace\":\"ingress-nginx\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"app.kubernetes.io/component\":\"admission-webhook\",\"app.kubernetes.io/instance\":\"ingress-nginx\",\"app.kubernetes.io/name\":\"ingress-nginx\",\"app.kubernetes.io/part-of\":\"ingress-nginx\",\"app.kubernetes.io/version\":\"1.2.0\"},\"name\":\"ingress-nginx-admission-patch\"},\"spec\":{\"containers\":[{\"args\":[\"patch\",\"--webhook-name=ingress-nginx-admission\",\"--namespace=$(POD_NAMESPACE)\",\"--patch-mutating=false\",\"--secret-name=ingress-nginx-admission\",\"--patch-failure-policy=Fail\"],\"env\":[{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"docker.io/liangjw/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"patch\",\"securityContext\":{\"allowPrivilegeEscalation\":false}}],\"nodeSelector\":{\"kubernetes.io/os\":\"linux\"},\"restartPolicy\":\"OnFailure\",\"securityContext\":{\"fsGroup\":2000,\"runAsNonRoot\":true,\"runAsUser\":2000},\"serviceAccountName\":\"ingress-nginx-admission\"}}}}\n"}},"spec":{"template":{"spec":{"$setElementOrder/containers":[{"name":"patch"}],"containers":[{"image":"docker.io/liangjw/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70","name":"patch"}]}}}}
to:
Resource: "batch/v1, Resource=jobs", GroupVersionKind: "batch/v1, Kind=Job"
Name: "ingress-nginx-admission-patch", Namespace: "ingress-nginx"
for: "https://cdn.jsdelivr.net/gh/kade-code/k8s-mirror@master/deploy3.yaml": Job.batch "ingress-nginx-admission-patch" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-admission-patch", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/component":"admission-webhook", "app.kubernetes.io/instance":"ingress-nginx", "app.kubernetes.io/name":"ingress-nginx", "app.kubernetes.io/part-of":"ingress-nginx", "app.kubernetes.io/version":"1.2.0", "controller-uid":"d3c541b6-377d-473e-9f60-edc9c6649931", "job-name":"ingress-nginx-admission-patch"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"patch", Image:"docker.io/liangjw/kube-webhook-certgen:v1.1.1@sha256:23a03c9c381fba54043d0f6148efeaf4c1ca2ed176e43455178b5c5ebf15ad70", Command:[]string(nil), Args:[]string{"patch", "--webhook-name=ingress-nginx-admission", "--namespace=$(POD_NAMESPACE)", "--patch-mutating=false", "--secret-name=ingress-nginx-admission", "--patch-failure-policy=Fail"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"POD_NAMESPACE", Value:"", ValueFrom:(*core.EnvVarSource)(0xc006ed83e0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(0xc00bf551a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"OnFailure", TerminationGracePeriodSeconds:(*int64)(0xc004b7b410), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"ingress-nginx-admission", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00d559100), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil)}}: field is immutable

@longwuyuan
Contributor

There's no benefit to anyone in keeping this issue open and continuing to add comments to it. The root cause is described in this comment: #6335 (comment).

It is beyond the scope of this project to solve issues related to the connection between a computer and k8s.gcr.io.

I will close for now. If you can connect to k8s.gcr.io but cannot pull the image, please feel free to reopen.

/close

@k8s-ci-robot
Contributor

@longwuyuan: Closing this issue.

In response to this:

There's no benefit to anyone in keeping this issue open and continuing to add comments to it. The root cause is described in this comment: #6335 (comment).

It is beyond the scope of this project to solve issues related to the connection between a computer and k8s.gcr.io.

I will close for now. If you can connect to k8s.gcr.io but cannot pull the image, please feel free to reopen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@afresh

afresh commented May 7, 2022

I tried using a VPN, but it doesn't work. However, this might help.

(same "field is immutable" errors as in my previous comment above)

error when applying patch: field is immutable

This solved my problem.
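
For context: a Job's spec.template is immutable in Kubernetes, so re-applying a manifest that changes a Job's image fails with exactly this error. A minimal sketch of the usual workaround, assuming the admission Job names from the manifest above (deleting them is safe because they are one-shot Jobs):

    $ kubectl delete job -n ingress-nginx ingress-nginx-admission-create ingress-nginx-admission-patch
    $ kubectl apply -f https://cdn.jsdelivr.net/gh/kade-code/k8s-mirror@master/deploy3.yaml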

@longwuyuan
Contributor

longwuyuan commented May 17, 2022 via email

@linyinli

linyinli commented Jul 3, 2022

It's easy to get this wrong; here is what actually works:
For ingress-nginx 1.2.1, change the controller image from
k8s.gcr.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8
to
registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8

and change the webhook image from
k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
to
registry.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
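
A minimal sketch of doing both substitutions in one pass with sed, assuming the 1.2.1 manifest has been saved locally as deploy.yaml (keeping the upstream @sha256 digests only works if the mirror serves digest-identical images; see the digest discussion further down):

    $ sed -i \
        -e 's#k8s.gcr.io/ingress-nginx/controller:#registry.aliyuncs.com/google_containers/nginx-ingress-controller:#' \
        -e 's#k8s.gcr.io/ingress-nginx/kube-webhook-certgen:#registry.aliyuncs.com/google_containers/kube-webhook-certgen:#' \
        deploy.yaml
    $ kubectl apply -f deploy.yaml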

@tian-ren

tian-ren commented Aug 4, 2023

I found a way to work around this via Config the Addon to Use Custom Registries and Images.

Steps below:

  1. Find a Docker image to use instead of k8s.gcr.io/ingress-nginx/controller:xxx. I used bitnami/nginx-ingress-controller on docker.io.
  2. Make sure the image can actually be pulled: try docker pull docker.io/bitnami/nginx-ingress-controller first.
  3. Enable ingress on minikube with the custom registry and image. For example: minikube addons enable ingress --images="IngressController=bitnami/nginx-ingress-controller:latest" --registries="IngressController=docker.io".

This worked for me. In the end, though, I did not use the bitnami images. Instead, I pulled the images with Docker on an EC2 instance in a US region, saved them as two tarballs, then scp'd and loaded them onto my local machine (a sketch of that round-trip follows). That takes considerably more time and energy than pulling directly.
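
A minimal sketch of that save/scp/load round-trip, with the controller image and EC2 host as placeholders (the final minikube image load is only needed when the cluster runtime is separate from the local Docker daemon):

    # on the EC2 instance in a US region
    $ docker pull registry.k8s.io/ingress-nginx/controller:v1.8.1
    $ docker save -o controller.tar registry.k8s.io/ingress-nginx/controller:v1.8.1

    # back on the local machine
    $ scp ec2-user@<ec2-host>:controller.tar .
    $ docker load -i controller.tar
    $ minikube image load registry.k8s.io/ingress-nginx/controller:v1.8.1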

@ongiant

ongiant commented Sep 13, 2023

The image repository we are using now is on Google Cloud; because of the GFW, you need a proxy.

Even though I have a network proxy, I am still unable to successfully pull the image.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

➜ ~ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

kubectl get pods -A

➜ ~ kubectl get pods -A
NAMESPACE       NAME                                        READY   STATUS              RESTARTS      AGE
ingress-nginx   ingress-nginx-admission-create-plbs5        0/1     ErrImagePull        0             36s
ingress-nginx   ingress-nginx-admission-patch-sxrgq         0/1     ContainerCreating   0             36s
ingress-nginx   ingress-nginx-controller-79d66f886c-xwwh9   0/1     ContainerCreating   0             36s
kube-system     coredns-5d78c9869d-lp9rp                    1/1     Running             0             99s
kube-system     etcd-polar                                  1/1     Running             0             112s
kube-system     kube-apiserver-polar                        1/1     Running             0             111s
kube-system     kube-controller-manager-polar               1/1     Running             0             111s
kube-system     kube-proxy-5gncc                            1/1     Running             0             99s
kube-system     kube-scheduler-polar                        1/1     Running             0             111s
kube-system     storage-provisioner                         1/1     Running             1 (69s ago)   111s

kubectl describe pod -n ingress-nginx ingress-nginx-admission-create-plbs5

➜  ~ kubectl describe pod -n ingress-nginx ingress-nginx-admission-create-plbs5                                                                      [40/243]
Name:             ingress-nginx-admission-create-plbs5                                                                                                       
Namespace:        ingress-nginx                                                                                                                              
Priority:         0                                                                                                                                          
Service Account:  ingress-nginx-admission                                                                                                                    
Node:             polar/192.168.49.2                                                                                                                         
Start Time:       Thu, 14 Sep 2023 01:55:27 +0800                                                                                                            
Labels:           app.kubernetes.io/component=admission-webhook                                                                                              
                  app.kubernetes.io/instance=ingress-nginx                                                                                                   
                  app.kubernetes.io/name=ingress-nginx                                                                                                       
                  app.kubernetes.io/part-of=ingress-nginx                                                                                                    
                  app.kubernetes.io/version=1.8.1                                                                                                            
                  batch.kubernetes.io/controller-uid=8dad234a-8341-4cbb-be27-bbcdeb4c9f9a                                                                    
                  batch.kubernetes.io/job-name=ingress-nginx-admission-create                                                                                
                  controller-uid=8dad234a-8341-4cbb-be27-bbcdeb4c9f9a                                                                                        
                  job-name=ingress-nginx-admission-create                                                                                                    
Annotations:      <none>
Status:           Pending                                                                                                                                    
IP:               10.244.0.3                                                                                                                                 
IPs:                                                                                                                                                         
  IP:           10.244.0.3                                                                                                                                   
Controlled By:  Job/ingress-nginx-admission-create                                                                                                           
Containers:
  create:
    Container ID:  
    Image:         registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
      --namespace=$(POD_NAMESPACE)
      --secret-name=ingress-nginx-admission
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0                                                                                                                                        
    Environment:                                                                                                                                             
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)                                                                                                  
    Mounts:                                                                                                                                                  
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-htspx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-htspx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  68s                default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-admission-create-plbs5 to polar
  Warning  Failed     36s                kubelet            Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b": rpc error: code = Unknown desc = Error response from daemon: Get "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/ingress-nginx/kube-webhook-certgen/manifests/sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b": dial tcp 142.251.2.82:443: i/o timeout
  Warning  Failed     36s                kubelet            Error: ErrImagePull
  Normal   BackOff    36s                kubelet            Back-off pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b"
  Warning  Failed     36s                kubelet            Error: ImagePullBackOff
  Normal   Pulling    21s (x2 over 67s)  kubelet            Pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b"

How should I configure my network settings?

@Undertone0809

The image repository we are using now is on Google Cloud; because of the GFW, you need a proxy.

Even though I have a network proxy, I am still unable to successfully pull the image.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
kubectl get pods -A
kubectl describe pod -n ingress-nginx ingress-nginx-admission-create-plbs5
How should I configure my network settings?

@ongiant I got the same problem. How did you solve it in the end? By using a different image, or something else?

@ongiant

ongiant commented Sep 27, 2023

@ongiant I got the same problem. How did you solve this problem in the end? Using a image or else?

You can try this

@ousiax

ousiax commented Sep 28, 2023

NOTE: The registry hostname has been migrated from k8s.gcr.io to registry.k8s.io.

Solution one (recommended): Using HTTP Proxy with systemd.

  • For docker.service, e.g.

    $ cat /etc/systemd/system/docker.service.d/override.conf
    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:7890"
    Environment="HTTPS_PROXY=http://127.0.0.1:7890"
    Environment="NO_PROXY=localhost,127.0.0.1,docker.io,docker.com,docker-cn.com,aliyuncs.com,mcr.microsoft.com,mcrea0.blob.core.windows.net,.azurecr.io,.elastic.co,.cloudfront.net,quay.io,.amazonaws.com,.amazonaws.com.cn,mscr.io"

    You can follow this official documentation to Configure the Docker daemon to use a proxy server.

  • For containerd.service, e.g.

    $ cat /etc/systemd/system/containerd.service.d/override.conf
    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:7890"
    Environment="HTTPS_PROXY=http://127.0.0.1:7890"
    Environment="NO_PROXY=localhost,127.0.0.1,docker.io,docker.com,docker-cn.com,aliyuncs.com,mcr.microsoft.com,mcrea0.blob.core.windows.net,.azurecr.io,.elastic.co,.cloudfront.net,quay.io,.amazonaws.com,.amazonaws.com.cn,mscr.io"

    You can also follow this blog to configure the HTTP proxy for containerd.
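
After writing either override above, reload systemd and restart the corresponding daemon so the new environment takes effect, then verify (standard systemctl invocations):

    $ sudo systemctl daemon-reload
    $ sudo systemctl restart docker      # or: sudo systemctl restart containerd
    $ systemctl show --property=Environment docker   # confirm the proxy variables are picked up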

Solution two: using a registry mirror, like registry.aliyuncs.com/google_containers.

Please pay attention to data integrity and consistency.

  1. Another way to pull these images in China is to replace the registry hostname with a registry mirror, like registry.aliyuncs.com/google_containers, in kustomization.yaml, for example:

    # ```
    # namespace: ingress-nginx
    # bases:
    #   - github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/baremetal
    # ```
    
    . . .
    
    images:
      - name: registry.k8s.io/ingress-nginx/controller
        newName: registry.aliyuncs.com/google_containers/nginx-ingress-controller
      - name: registry.k8s.io/ingress-nginx/kube-webhook-certgen
        newName: registry.aliyuncs.com/google_containers/kube-webhook-certgen

    However, the digests of the mirrored images may differ from those in the original definition.

    For example, the digest of the image nginx-ingress-controller in controller-v1.9.0 is sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60, but the digest of the same version or tag in registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.0 is sha256:bafd871c650d0d7a628b1959dd28f5f114d585a9b5d99c5b39038436b19459f3.

    $ docker images --digests
    REPOSITORY                                                         TAG              DIGEST                                                                    IMAGE ID       CREATED             SIZE
    . . .
    registry.aliyuncs.com/google_containers/nginx-ingress-controller   v1.9.0           sha256:bafd871c650d0d7a628b1959dd28f5f114d585a9b5d99c5b39038436b19459f3   bafd871c650d   About an hour ago   419MB
    registry.k8s.io/ingress-nginx/controller                           v1.9.0           sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60   c15d1a617858   55 minutes ago      419MB
    registry.k8s.io/ingress-nginx/controller                           <none>           sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60   c15d1a617858   About an hour ago   419MB

    This causes a problem when the manifests are applied to the cluster:

    $ kubectl get po -n ingress-nginx 
    NAME                                        READY   STATUS             RESTARTS   AGE
    ingress-nginx-admission-create-zs97h        0/1     Completed          0          102m
    ingress-nginx-admission-patch-k6kmj         0/1     Completed          0          102m
    ingress-nginx-controller-7fd8b554f8-nhqdz   0/1     Pending            0          29s
    ingress-nginx-controller-7fd8b554f8-vkdng   0/1     ImagePullBackOff   0          29s
    $ kubectl describe -n ingress-nginx po ingress-nginx-controller-7fd8b554f8-vkdng 
    
    . . .
    
    Events:
      Type     Reason            Age                From               Message
      ----     ------            ----               ----               -------
      Warning  FailedScheduling  43s                default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod..
      Normal   Scheduled         36s                default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-controller-7fd8b554f8-vkdng to node-0
      Normal   Pulling           21s (x2 over 37s)  kubelet            Pulling image "registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.0@sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60"
      Warning  Failed            21s (x2 over 36s)  kubelet            Failed to pull image "registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.0@sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60": rpc error: code = NotFound desc = failed to pull and unpack image "registry.aliyuncs.com/google_containers/nginx-ingress-controller@sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60": failed to resolve reference "registry.aliyuncs.com/google_containers/nginx-ingress-controller@sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60": registry.aliyuncs.com/google_containers/nginx-ingress-controller@sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60: not found
      Warning  Failed            21s (x2 over 36s)  kubelet            Error: ErrImagePull
      Normal   BackOff           5s (x2 over 36s)   kubelet            Back-off pulling image "registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.9.0@sha256:c15d1a617858d90fb8f8a2dd60b0676f2bb85c54e3ed11511794b86ec30c8c60"
      Warning  Failed            5s (x2 over 36s)   kubelet            Error: ImagePullBackOff

    On the other hand, because the digests are inconsistent, the content of the images in registry.aliyuncs.com/google_containers cannot be verified against upstream and has to be taken on trust.
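
If that trade-off is acceptable, one workaround for the NotFound error above is to drop the pinned @sha256 digests so the mirror's tags resolve on their own. A sketch, assuming the manifest is saved locally as deploy.yaml and GNU sed:

    $ sed -i -E 's/@sha256:[0-9a-f]{64}//g' deploy.yaml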

@ongiant

ongiant commented Sep 28, 2023

NOTE: The registry hostname has been migrated from k8s.gcr.io to registry.k8s.io.

Solution one (recommended): Using HTTP Proxy with systemd.

  • For docker.service, e.g.

    $ cat /etc/systemd/system/docker.service.d/override.conf
    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:7890"
    Environment="HTTPS_PROXY=http://127.0.0.1:7890"
    Environment="NO_PROXY=localhost,127.0.0.1,docker.io,docker.com,docker-cn.com,aliyuncs.com,mcr.microsoft.com,mcrea0.blob.core.windows.net,.azurecr.io,.elastic.co,.cloudfront.net,quay.io,.amazonaws.com,.amazonaws.com.cn,mscr.io"
    

    You can follow this official documentation to Configure the Docker daemon to use a proxy server.

It doesn't work for me. My Docker configuration has been set up as shown below all along, but I still ran into the ErrImagePull problem.

➜  ~ cd /etc/systemd/system/docker.service.d
➜  docker.service.d cat http-proxy.conf
[Service]
Environment="HTTP_PROXY=socks5://localhost:7891"
Environment="HTTPS_PROXY=socks5://localhost:7891"
Environment="NO_PROXY=localhost,127.0.0.1"

@ousiax

ousiax commented Oct 10, 2023

NOTE: The registry hostname has been migrated from k8s.gcr.io to registry.k8s.io.
Solution one (recommended): Using HTTP Proxy with systemd.

  • For docker.service, e.g.

    $ cat /etc/systemd/system/docker.service.d/override.conf
    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:7890"
    Environment="HTTPS_PROXY=http://127.0.0.1:7890"
    Environment="NO_PROXY=localhost,127.0.0.1,docker.io,docker.com,docker-cn.com,aliyuncs.com,mcr.microsoft.com,mcrea0.blob.core.windows.net,.azurecr.io,.elastic.co,.cloudfront.net,quay.io,.amazonaws.com,.amazonaws.com.cn,mscr.io"
    

    You can follow this official documentation to Configure the Docker daemon to use a proxy server.

It doesn't work for me. My Docker configuration has been set up as shown below all along, but I still ran into the ErrImagePull problem.

➜  ~ cd /etc/systemd/system/docker.service.d
➜  docker.service.d cat http-proxy.conf
[Service]
Environment="HTTP_PROXY=socks5://localhost:7891"
Environment="HTTPS_PROXY=socks5://localhost:7891"
Environment="NO_PROXY=localhost,127.0.0.1"

It seems that Docker doesn't support the SOCKS5 protocol well. Did you try using the HTTP protocol?

Or try converting SOCKS5 to HTTP with something like Privoxy?

ref: docker/roadmap#100
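
For reference, a minimal Privoxy bridge sketch (the SOCKS5 port matches the setup quoted above; 8118 is Privoxy's default listen port, and the paths assume the stock Debian/Ubuntu package):

    # /etc/privoxy/config
    listen-address  127.0.0.1:8118
    forward-socks5t /  127.0.0.1:7891 .

    # /etc/systemd/system/docker.service.d/http-proxy.conf -- point Docker at the HTTP side
    [Service]
    Environment="HTTP_PROXY=http://127.0.0.1:8118"
    Environment="HTTPS_PROXY=http://127.0.0.1:8118"
    Environment="NO_PROXY=localhost,127.0.0.1"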

@ongiant

ongiant commented Oct 27, 2023

It seems that Docker doesn't support the SOCKS5 protocol well. Did you try using the HTTP protocol?

Or try converting SOCKS5 to HTTP with something like Privoxy?

ref: docker/roadmap#100

It still doesn't work after using privoxy. Maybe the speed of my network proxy is too slow.

@matschaffer-roblox

matschaffer-roblox commented Sep 12, 2024

#6335 (comment) is a great answer. Thank you so much @linyinli !

registry.aliyuncs.com mirrors a lot of common containers; the trick is knowing the right repository name, since the names often differ, as in this case where ingress-nginx/controller becomes google_containers/nginx-ingress-controller.

The sha for my version of the chart (1.8.0, sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3) is a match though, so it should be an identical image.

I haven't yet found a good way to search registry.aliyuncs.com for possible matches.
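
One way to check whether a mirror tag is digest-identical to upstream before pinning it (a sketch, assuming skopeo and jq are installed):

    $ skopeo inspect docker://registry.k8s.io/ingress-nginx/controller:v1.8.0 | jq -r .Digest
    $ skopeo inspect docker://registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.8.0 | jq -r .Digest
    # identical output means the mirror serves the same image, so the upstream digest pin can be kept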

@MehdiMst00

NOTE: The registry hostname has been migrated from k8s.gcr.io to registry.k8s.io.

Solution one (recommended): Using HTTP Proxy with systemd. […]

Solution two: using a registry mirror, like registry.aliyuncs.com/google_containers. […]

It worked for me! Thanks :)
I set up an Xray client proxy and followed solution one (using an HTTP proxy) for containerd.

This issue was closed.