
odo push fails on psi minikube #4383

Closed
prietyc123 opened this issue Jan 22, 2021 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

prietyc123 (Contributor)

/kind bug

What versions of software are you using?
Operating System:

Output of odo version:

How did you run odo exactly?

# odo create nodejs --project testblah cdefgh
Validation
 ✓  Checking devfile existence [31330ns]
 ✓  Creating a devfile component from registry: DefaultDevfileRegistry [37159ns]
 ✓  Validating devfile component [51621ns]

Please use `odo push` command to create the component with source deployed

# odo push --project testblah -v 4
I0122 10:19:12.601886  929649 context.go:115] absolute devfile path: '/root/openshift/odo/testdev/project/devfile.yaml'
I0122 10:19:12.601934  929649 context.go:69] absolute devfile path: '/root/openshift/odo/testdev/project/devfile.yaml'
I0122 10:19:12.601957  929649 util.go:723] HTTPGetRequest: https://raw.githubusercontent.com/openshift/odo/master/build/VERSION
[...]
I0122 10:19:12.634999  929649 preference.go:219] The path for preference file is /root/.odo/preference.yaml
I0122 10:19:12.637070  929649 utils.go:65] Deployment cdefgh not found

Validation
 •  Validating the devfile  ...
I0122 10:19:12.637166  929649 command.go:206] Build command: install
I0122 10:19:12.637178  929649 command.go:213] Run command: run
 ✓  Validating the devfile [82621ns]

Creating Kubernetes resources for component cdefgh
I0122 10:19:12.637298  929649 utils.go:228] Updating container runtime entrypoint with supervisord
I0122 10:19:12.637313  929649 utils.go:123] Updating container runtime with supervisord volume mounts
I0122 10:19:12.637323  929649 utils.go:133] Updating container runtime env with run command
I0122 10:19:12.637332  929649 utils.go:150] Updating container runtime env with run command's workdir
I0122 10:19:12.637339  929649 utils.go:186] Updating container runtime env with debug command
I0122 10:19:12.637346  929649 utils.go:203] Updating container runtime env with debug command's workdir
I0122 10:19:12.637352  929649 utils.go:212] Updating container runtime env with debug command's debugPort
I0122 10:19:12.637385  929649 preference.go:219] The path for preference file is /root/.odo/preference.yaml
I0122 10:19:12.637415  929649 adapter.go:339] Generating PVC name for odo-projects
I0122 10:19:12.637446  929649 utils.go:113] Checking PVC for volume odo-projects and label component=cdefgh,storage-name=odo-projects
I0122 10:19:12.641192  929649 adapter.go:410] Creating deployment cdefgh
I0122 10:19:12.641261  929649 adapter.go:411] The component name is cdefgh
I0122 10:19:12.666246  929649 adapter.go:454] Successfully created component cdefgh
I0122 10:19:12.688981  929649 adapter.go:462] Successfully created Service for component cdefgh
I0122 10:19:12.689013  929649 utils.go:113] Checking PVC for volume odo-projects and label component=cdefgh,storage-name=odo-projects
I0122 10:19:12.697235  929649 utils.go:38] Creating a PVC for odo-projects
I0122 10:19:12.704558  929649 utils.go:86] Creating a PVC with name odo-projects-cdefgh-zlgp and labels map[component:cdefgh odo-source-pvc:odo-projects storage-name:odo-projects]
I0122 10:19:12.709753  929649 deployments.go:101] Waiting for cdefgh deployment rollout
 •  Waiting for component to start  ...
I0122 10:19:12.711726  929649 deployments.go:134] Deployment Condition: {"type":"Available","status":"False","lastUpdateTime":"2021-01-22T10:19:12Z","lastTransitionTime":"2021-01-22T10:19:12Z","reason":"MinimumReplicasUnavailable","message":"Deployment does not have minimum availability."}
I0122 10:19:12.711746  929649 deployments.go:134] Deployment Condition: {"type":"Progressing","status":"True","lastUpdateTime":"2021-01-22T10:19:12Z","lastTransitionTime":"2021-01-22T10:19:12Z","reason":"ReplicaSetUpdated","message":"ReplicaSet \"cdefgh-6b7974c76d\" is progressing."}
I0122 10:19:12.711757  929649 deployments.go:145] Waiting for deployment "cdefgh" rollout to finish: 0 of 1 updated replicas are available...
I0122 10:19:12.711764  929649 deployments.go:152] Waiting for deployment spec update to be observed...
 ✗  Waiting for component to start [5m]
 ✗  Failed to start component with name cdefgh. Error: Failed to create the component: error while waiting for deployment rollout: timeout while waiting for cdefgh deployment roll out

# free -m
              total        used        free      shared  buff/cache   available
Mem:           7953         748         125           2        7080        6976
Swap:             0           0           0

# dmesg | grep memory
[    0.036242] Early memory node ranges
[    0.071187] PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.071188] PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[    0.071189] PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
[    0.071189] PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
[    0.071190] PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
[    0.071191] PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
[    0.071191] PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
[    0.071192] PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
[    0.071193] PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
[    0.304662] Freeing SMP alternatives memory: 36K
[    0.965896] Freeing initrd memory: 24888K
[    1.079132] Non-volatile memory driver v1.3
[    1.288356] Freeing unused decrypted memory: 2040K
[    1.291112] Freeing unused kernel image (initmem) memory: 2452K
[    1.299761] Freeing unused kernel image (text/rodata gap) memory: 2044K
[    1.302011] Freeing unused kernel image (rodata/data gap) memory: 1276K
[  832.495845] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)

# minikube version
minikube version: v1.11.0

# kubectl cluster-info
Kubernetes master is running at https://<ip>:<port>
KubeDNS is running at https://<ip>:<port>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Actual behavior

odo push times out after 5 minutes waiting for the component to start (full output above):

 ✗  Waiting for component to start [5m]
 ✗  Failed to start component with name cdefgh. Error: Failed to create the component: error while waiting for deployment rollout: timeout while waiting for cdefgh deployment roll out

Expected behavior

odo push should pass.

Any logs, error output, etc?

Verified manually.

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 22, 2021
@prietyc123 prietyc123 assigned prietyc123 and unassigned prietyc123 Jan 22, 2021
prietyc123 (Contributor, Author) commented Jan 22, 2021

I looked into the pod details; the pod is not getting scheduled due to:

Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" 
filter plugin for pod "cdefgh-6b7974c76d-dr7tf": pod has unbound immediate PersistentVolumeClaims
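An unbound-PVC scheduling failure like this can usually be confirmed from the client side. A sketch, with the pod and PVC names taken from the logs above (adjust to whatever your own cluster reports):

```shell
# Show why the pod is Pending; the Events section should repeat
# the FailedScheduling / "unbound immediate PersistentVolumeClaims" warning
kubectl describe pod cdefgh-6b7974c76d-dr7tf

# A PVC stuck in Pending confirms the provisioner never bound it
kubectl get pvc
kubectl describe pvc odo-projects-cdefgh-zlgp

# On minikube the default StorageClass ("standard") is backed by the
# storage-provisioner addon; if it is missing or unhealthy, no claim can bind
kubectl get storageclass
```

If the PVC sits in Pending with no bound volume while the StorageClass looks normal, the provisioner itself is the usual suspect.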

@girishramnani @dharmit Could you please have a look.

jeffmaury (Member)

I had this error once in Minikube. It is caused by errors in the storage-provisioner pod; things went back to normal once I killed that pod.
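For reference, the workaround jeffmaury describes amounts to recycling minikube's provisioner. The pod and addon names below are the minikube defaults (a pod literally named storage-provisioner in kube-system); verify them locally before running:

```shell
# Check the provisioner pod; on minikube it runs as a single pod in kube-system
kubectl -n kube-system get pod storage-provisioner

# Deleting it forces the addon manager to recreate it;
# pending PVCs should then bind
kubectl -n kube-system delete pod storage-provisioner

# Roughly equivalent via the addon manager
minikube addons disable storage-provisioner
minikube addons enable storage-provisioner
```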

prietyc123 (Contributor, Author)

I have upgraded minikube to

# minikube version
minikube version: v1.12.3

and it works fine now.
