`kubectl exec -it <podname> -- df -h` shows size of 60Z #108
AKS Kubernetes version: 1.21.2
Node type: E8ds_v4

What's your disk size? It looks like the file system is corrupted.
I am in AKS. If I drop the deployment and re-establish it, it's supposed to repartition and reformat, correct? That doesn't seem to be happening. Can I drop the partition from a pod and recreate it?
```
df
WARNING!!! The filesystem is mounted. If you continue you WILL
Do you really want to continue? yes
```
Seems like this should be something the provisioner should be handling. I don't seem to be able to do anything about it.
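For anyone wanting to try the manual route asked about above, a minimal sketch of dropping and recreating the filesystem by hand, run from a privileged shell on the node. The device `/dev/sdb1` and mount path `/mnt/localdisk` are taken from this thread; the `run`/`DRY_RUN` wrapper is my own addition so the script only prints the commands by default, since actually running them destroys all data on the partition.

```shell
# Hedged sketch: manually recreate the filesystem the provisioner failed to clean.
# DRY_RUN=1 (the default) only PRINTS each command; set DRY_RUN=0 to execute
# for real from a privileged shell on the node. This DESTROYS data on $DISK.
DISK="${DISK:-/dev/sdb1}"
MOUNTPOINT="${MOUNTPOINT:-/mnt/localdisk}"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "+ $*"        # dry run: show what would be executed
  else
    "$@"               # real run
  fi
}

run umount "$MOUNTPOINT"         # detach the corrupted filesystem first
run wipefs -a "$DISK"            # remove the old filesystem signatures
run mkfs.ext4 -F "$DISK"         # lay down a fresh ext4 filesystem
run mount "$DISK" "$MOUNTPOINT"  # remount so the provisioner can pick it up
```

Whether the provisioner then accepts the manually recreated filesystem is an open question in this thread; this only restores a clean ext4 on the device.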
`su -`
Also, it's a 300G disk but I reserved 299.
I also tried an AKS debug pod. That didn't work either.

```
root@aks-default2-35939363-vmss000000:/# su -
```

It seems I cannot fix the issue with the node. Setting up a new node pool every time this happens is less than desirable.
Could you provide
provisioner_node2.log ls
It looks like the log is continuing to try to clean /dev/sdb1 with the shred script. I guess I just have to wait. The Go program is scanning all the files under /dev, which seems to be slowing it down considerably.
So the cleanup finished. I waited a few minutes after that, then redeployed; same issue with `kubectl exec -it deployment-localdisk-6f95f4f858-twx7s -- df -h`.
So I see some issues. Is it appropriate on the AKS E8ds_v4s to be using the following? It seems to contradict what I am seeing as the mount path on the node.
The mount path for temp appears to be this for a non-provisioned node:

```
`-sdb1  8:17  0  300G  0 part  /host/mnt
```
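A quick way to double-check which device actually backs a given mount path (to compare against the provisioner's configured path) is a small POSIX `df`/`awk` helper; the helper name `backing_device` is my own, and `/mnt` as the temp-disk mount point is an assumption from the `lsblk` fragment above.

```shell
# Hedged helper, read-only and safe on a live node: print the device (or
# pseudo-filesystem) that backs a mount path, using only POSIX df and awk.
backing_device() {
  # df -P forces one-line-per-filesystem output; line 2, field 1 is the source
  df -P "$1" | awk 'NR==2 {print $1}'
}

backing_device /     # on an AKS node shell, try: backing_device /mnt
```

On a healthy E8ds_v4 node the expectation from this thread is that `backing_device /mnt` reports `/dev/sdb1` (seen as `/host/mnt` from a debug pod with the host filesystem mounted at `/host`).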
I have not heard about this issue in quite a while. I am kind of stuck in terms of using tempdb space and NVMe space until I either understand how to resolve this or configure it correctly. If there is a patch coming, do you have any kind of ETA on it?
Followed the instructions on this page. When I got to testing the deployment, I received the following output:

```
kubectl exec -it deployment-localdisk-6f95f4f858-bjkms -- df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
overlay         124G   23G   102G   19%  /
tmpfs            64M     0    64M    0%  /dev
tmpfs            32G     0    32G    0%  /sys/fs/cgroup
/dev/sdb1        60Z   60Z      0  100%  /mnt/localdisk
/dev/sda1       124G   23G   102G   19%  /etc/hosts
shm              64M     0    64M    0%  /dev/shm
tmpfs            32G   12K    32G    1%  /run/secrets/kubernetes.io/serviceaccount
tmpfs            32G     0    32G    0%  /proc/acpi
tmpfs            32G     0    32G    0%  /proc/scsi
tmpfs            32G     0    32G    0%  /sys/firmware
```
Connected to the pod and tried to make a directory at the mount path /mnt/localdisk:

```
mkdir test
mkdir: cannot create directory 'test': Structure needs cleaning
```
The logs show many of:

```
/bin/sh: 1: cannot create /mnt/localdisk/outfile: Structure needs cleaning
/bin/sh: 1: cannot create /mnt/localdisk/outfile: Structure needs cleaning
/bin/sh: 1: cannot create /mnt/localdisk/outfile: Structure needs cleaning
...
```
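"Structure needs cleaning" is the message for the `EUCLEAN` errno, which ext4 returns when it detects on-disk metadata corruption, consistent with the maintainer's diagnosis above. The conventional repair is to unmount the filesystem and run `e2fsck` on it before anything remounts it. A sketch, with the device and mount path assumed from this thread and a dry-run wrapper of my own so it prints rather than runs by default:

```shell
# Hedged repair sketch for "Structure needs cleaning" (EUCLEAN) on ext4.
# DRY_RUN=1 (the default) only PRINTS each command; set DRY_RUN=0 to execute
# for real from a privileged shell on the node.
DISK="${DISK:-/dev/sdb1}"

run() { [ "${DRY_RUN:-1}" = "1" ] && echo "+ $*" || "$@"; }

run umount /mnt/localdisk        # must be unmounted; fsck on a mounted fs causes damage
run e2fsck -f -y "$DISK"         # -f: force full check, -y: auto-answer yes to repairs
run mount "$DISK" /mnt/localdisk # remount once the check completes cleanly
```

If `e2fsck` cannot repair the filesystem, recreating it with `mkfs.ext4` (losing its contents) is the remaining option short of replacing the node.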