flobbr Posted May 20, 2021

Hi there,

My unraid server seems to have some damn problems this morning: I got an email that the cache drive is nearly full (no wonder, as I was downloading some movies for Plex via sab overnight). I ran the mover manually and then everything seemed to go down: nearly all docker containers stopped themselves and my Home Assistant VM shut down. After the mover finished I rebooted the whole server and all docker containers are running fine again. But the HA VM won't start up properly now: it shows as started in the webui, but the VNC view shows this: [screenshot of the VM sitting at a UEFI shell]

I installed the VM from this guide a long time ago and never had any problems: https://community.home-assistant.io/t/ha-os-on-unraid/59959

Could it be that the qcow2 image is corrupted somehow? My latest backup is from January and I changed quite a few things in the meantime 😞

Does anybody know how I can boot the image up again?
ghost82 Posted May 20, 2021

10 minutes ago, flobbr said:
Does anybody know how I can boot the image up again?

Try this to see if it's an issue with boot order or something else. What you are looking at is the UEFI shell. Once inside the UEFI shell, type (one command per line):

fs0:
cd EFI\BOOT
BOOTx64.efi

Report whether it boots or not. If it doesn't boot, check with the ls command that the folders/files are present on the fs0 EFI partition.
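For later readers, a minimal sketch of that check from the UEFI shell, before assuming the image is dead (all commands are standard UEFI shell built-ins; the paths assume the stock HA OS layout):

map fs*

lists every filesystem the firmware can see; on a healthy image, fs0: is the EFI partition. If it maps, switch to it and look for the bootloader:

fs0:
ls EFI\BOOT

On an x86-64 image, BOOTx64.efi should be listed there, and running it should boot the OS. If fs0: doesn't map at all, the firmware cannot see an EFI partition in the first place.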
flobbr Posted May 20, 2021

9 minutes ago, ghost82 said:
Once inside the UEFI shell, type (one command per line):
fs0:
cd EFI\BOOT
BOOTx64.efi

Thanks for your answer! I get the following output on the first command:

'fs0:' is not a valid mapping.
ghost82 Posted May 20, 2021

Just now, flobbr said:
Thanks for your answer! I get the following output on the first command:
'fs0:' is not a valid mapping.

Try this command:

map fs*
flobbr Posted May 20, 2021

1 minute ago, ghost82 said:
Try this command:
map fs*

map: Cannot find mapped device - 'fs*'
ghost82 Posted May 20, 2021

2 minutes ago, flobbr said:
map: Cannot find mapped device - 'fs*'

Then I suspect something serious has happened to your image file. If the VM settings are correct, especially the disk section pointing to the virtual disk, it seems the EFI partition is gone. You can wait for some more comments here, but I wouldn't expect anything good.
flobbr Posted May 20, 2021

1 minute ago, ghost82 said:
Then I suspect something serious has happened to your image file. It seems the EFI partition is gone. You can wait for some more comments here, but I wouldn't expect anything good.

😞 Damn, that doesn't sound good. It's such a weird issue; the mover runs every night and the dockers never stopped for that...
flobbr Posted May 20, 2021

Is there some way to extract data from the qcow2 image? It would be great to have at least the config file...
ghost82 Posted May 20, 2021

7 minutes ago, flobbr said:
Is there some way to extract data from the qcow2 image? It would be great to have at least the config file...

Let's see if you can mount it, from the unraid terminal:

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /path/to/the/image/test.qcow2

Create a mount point:

mkdir /path/to/the/mount/point/

List the partitions of nbd0:

fdisk /dev/nbd0 -l

Mount the partition (in this example it is p1):

mount /dev/nbd0p1 /path/to/the/mount/point/

cd to the mount point and do what you need to do. Once finished:

umount /path/to/the/mount/point/
qemu-nbd --disconnect /dev/nbd0
rmmod nbd
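Since the goal here is recovery, it may be worth connecting the image read-only so nothing can write to an already-damaged file — a hedged variant of the connect and mount steps above, after loading the nbd module as described (qemu-nbd's --read-only flag and mount's ro option; same placeholder paths):

qemu-nbd --read-only --connect=/dev/nbd0 /path/to/the/image/test.qcow2
mount -o ro /dev/nbd0p1 /path/to/the/mount/point/

Everything else in the procedure stays the same, except that the mounted filesystem cannot be modified.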
flobbr Posted May 20, 2021

So.... The output of fdisk /dev/nbd0 -l is:

Disk /dev/nbd0: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

So there are no partitions?? I don't quite get it.
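For reference, qemu-img ships with a consistency check that can tell whether the qcow2 container itself is damaged. A minimal sketch, assuming the image is first disconnected from nbd (the path is a placeholder):

qemu-nbd --disconnect /dev/nbd0
qemu-img check /path/to/the/image/test.qcow2

"No errors were found" means the qcow2 metadata is intact and the damage is inside the guest's partition table instead; leaked clusters and some corruptions can be repaired with qemu-img check -r all, ideally run on a copy of the image.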
ghost82 Posted May 20, 2021

Yes, no partitions... so I don't know what to do with it...
flobbr Posted May 20, 2021

Hmm, weird. Anyway, thank you very much for your help!!
smbunn Posted October 28, 2021

Had exactly this problem, then realized like some newbie that the QCOW image is now shipped compressed by HA when previously it wasn't. Just decompress it and then point the KVM VM at the decompressed image... simple
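For anyone hitting the same thing: recent HA OS disk images are distributed as .qcow2.xz archives, so they need to be decompressed before the VM can use them. A minimal sketch — the filename below is just an example for one release, so substitute whatever you downloaded:

# decompress in place; this replaces the .xz archive with haos_ova-7.1.qcow2
xz -d haos_ova-7.1.qcow2.xz
# then point the VM's primary vdisk at the resulting .qcow2 file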
CSmits Posted December 14, 2021

Hi guys, I encountered this after my cache drive (nvme) crashed and got replaced. Wanted to update you on what might have gone wrong. I tried so many things, but the solution was:

1. In Settings/VM Settings, set "Enable VMs" to "No"
2. Delete the libvirt image at /mnt/user/system/libvirt/libvirt.img
3. Set "Enable VMs" back to "Yes"

After that, I used a brand-new qcow2 image of the OS I was trying to boot (Home Assistant), and it worked... However, trying the older (backed up) qcow2 image that I previously had resulted in a corruption that caused this situation all over again. I suspect that image was corrupt anyway because it was a backup taken while the machine was running.
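A minimal sketch of step 2 from the Unraid terminal (steps 1 and 3 happen in the web UI; the path is Unraid's default libvirt location, and moving the file aside is a gentler alternative to deleting it outright):

# with VMs disabled in the web UI, move the old libvirt image aside
mv /mnt/user/system/libvirt/libvirt.img /mnt/user/system/libvirt/libvirt.img.bak
# re-enabling VMs recreates a fresh, empty libvirt.img

Note that libvirt.img holds the VM definitions (XML), not the virtual disks, so each VM then has to be recreated in the UI and pointed back at its existing vdisk.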
makin Posted December 29, 2021

Hey guys, after moving the HA VM from disk1 to cache I encounter the same issue, but the partitions seem to still be intact:

Disk /dev/nbd0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A4E33CCF-98F3-43B5-B08D-6F37CFF42D17

Device          Start      End  Sectors  Size Type
/dev/nbd0p1      2048    67583    65536   32M EFI System
/dev/nbd0p2     67584   116735    49152   24M Linux filesystem
/dev/nbd0p3    116736   641023   524288  256M Linux filesystem
/dev/nbd0p4    641024   690175    49152   24M Linux filesystem
/dev/nbd0p5    690176  1214463   524288  256M Linux filesystem
/dev/nbd0p6   1214464  1230847    16384    8M Linux filesystem
/dev/nbd0p7   1230848  1427455   196608   96M Linux filesystem
/dev/nbd0p8   1427456 67108830 65681375 31.3G Linux filesystem

Any idea how to save my VM? I do not have a backup...
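Since the partition table here looks healthy, one next step worth trying (following ghost82's nbd procedure above) is to mount the 32M EFI System partition and confirm the bootloader files survived — the mount point below is an arbitrary example:

mkdir -p /tmp/haefi
mount /dev/nbd0p1 /tmp/haefi
ls -R /tmp/haefi/EFI        # expect a BOOT directory containing BOOTx64.efi
umount /tmp/haefi

If the files are all there, the "recreate the VM, keep the vdisk" fix described later in this thread is a likely candidate.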
Barry Staes Posted February 6, 2022

Had exactly this problem when I tried to upgrade the cache SSD, and found no solution. Suspected causes:

- Maybe I forgot to stop the VM before I started Mover (cache > array > new cache).
- A bad cable on the new disk caused 10 UDMA errors.

The commands found here resulted in unexpected errors in my Unraid terminal:

modprobe ndb max_part=8
modprobe: FATAL: Module ndb not found in directory /lib/modules/5.10.28-Unraid

root@bTower:/mnt/user/domains/HassOS# qemu-nbd --connect=/dev/nbd0 /mnt/user/domains/HassOS/hassos_ova-3.1.qcow2
qemu-nbd: Failed to open /dev/nbd0: No such file or directory
qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read

I just tried removing the libvirt.img, and the VM webpage now lists ancient VMs that I maybe had years ago, so something is going wrong there as well...
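Note for later readers: the modprobe failure above is a transposition typo — the module is nbd, not ndb — and with the module never loaded, /dev/nbd0 does not exist, which also explains the subsequent qemu-nbd error. The corrected sequence (paths copied from the post above):

modprobe nbd max_part=8     # correct module name: nbd, not ndb
qemu-nbd --connect=/dev/nbd0 /mnt/user/domains/HassOS/hassos_ova-3.1.qcow2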
Mat W Posted March 11, 2022

Just had the same problem BUT was able to resolve it pretty easily and without any data loss!!!

I suspect the problem was caused by me manually moving the qcow2 image from one cache drive to another (with the VM stopped, of course). After that it wouldn't boot and sat at the UEFI shell just like @flobbr's did. After digging through some search results I stumbled across a post on the HA OS community forums. In that thread, someone resolved the issue by simply deleting the VM but NOT the qcow2 vdisk, then recreating the VM and pointing it at the existing qcow2. BAM! It boots just like it used to, without any data/config loss.

I know this is far too late to help the OP, but for anyone currently having the issue (or in the future), this solution worked for me. Not entirely sure what actually causes the boot failures after moving the image, though.
JeanR Posted March 12, 2022

I had exactly the same problem after migrating my VMs from one cache pool to a new config & pools. Only the home-assistant VM did not like that and couldn't start. Recreating the VM with the same parameters and vdisk worked for me too. Thanks @Mat W!
exus Posted November 18, 2023

Same issue with the homeassistant VM after migration; recreating the VM (same vdisk) was the solution.
Tephea Posted March 3

On 5/20/2021 at 5:15 AM, ghost82 said:

Let's see if you can mount it, from the unraid terminal:

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /path/to/the/image/test.qcow2

Create a mount point:

mkdir /path/to/the/mount/point/

List the partitions of nbd0:

fdisk /dev/nbd0 -l

Mount the partition (in this example it is p1):

mount /dev/nbd0p1 /path/to/the/mount/point/

cd to the mount point and do what you need to do. Once finished:

umount /path/to/the/mount/point/
qemu-nbd --disconnect /dev/nbd0
rmmod nbd

This was extremely helpful. I recently had a slowly dying cache drive containing my Home Assistant VM. My "backups" were contained within the VM rather than pulled anywhere else, which in hindsight was as good as no backup. I was able to navigate into the backups and pull them out by following your instructions.

I tried Mat W's solution above, but with my corrupted cache I can't currently install any VMs and haven't dug for a solution yet.
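For anyone in the same spot: HA OS keeps its native backups on the large data partition, so after connecting the image via qemu-nbd as above they can be copied straight out. A short sketch — the partition number (p8) and the supervisor/backup path are assumptions based on the stock HA OS layout, so verify with fdisk first, and the destination is just an example:

mkdir -p /tmp/hadata
mount /dev/nbd0p8 /tmp/hadata
ls /tmp/hadata/supervisor/backup          # HA backups are .tar archives
cp -a /tmp/hadata/supervisor/backup /mnt/user/backups/ha-backups
umount /tmp/hadata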