Fredrick Posted July 18, 2021

Hi, I'm upgrading my server and wanted to run my VMs from a newly created cache pool. I moved my vdisks to the array, formatted the drive as a cache pool (btrfs) with a single drive, and moved the vdisks back. I then started the VM and checked everything; it was working well. Then I had to reboot, since I'd filled up my log vdisk by running the mover (it moved a Plex directory with loads of files). After rebooting, the new cache_vms is showing as unmountable. I've tried steps 1 and 2 from here, but it didn't work.

root@Tower:/# mount -o usebackuproot,ro /dev/sdn1 /x
mount: /x: wrong fs type, bad option, bad superblock on /dev/sdn1, missing codepage or helper program, or other error.

root@Tower:/mnt/disk8# btrfs restore -v /dev/sdn1 /mnt/disk8/restore
No valid Btrfs found on /dev/sdn1
Could not open root, trying backup super
No valid Btrfs found on /dev/sdn1
Could not open root, trying backup super
ERROR: superblock bytenr 274877906944 is larger than device size 240056360960

Now, is there any chance of getting my images back? One of the VMs runs my home automations and is kinda critical. I've got a Windows backup of the important files, but I was hoping to avoid having to set up a new VM and restore the data.

Diagnostics attached. Thanks a bunch!

tower-diagnostics-20210718-1547.zip
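A note on that last ERROR line: btrfs keeps superblock copies at fixed offsets (64 KiB, 64 MiB, and 256 GiB), and 274877906944 bytes is exactly the 256 GiB copy, which can never exist on a ~240 GB device. So that particular message is expected on a drive this size; the real problem is that the copies that should exist aren't being found. A small sketch of the offset arithmetic, using the device size from the output above:

```shell
# Btrfs superblock mirror offsets: 64 KiB, 64 MiB, 256 GiB.
DEV_SIZE=240056360960   # bytes, from the btrfs restore error above

for off in $((64*1024)) $((64*1024*1024)) $((256*1024*1024*1024)); do
    if [ "$off" -lt "$DEV_SIZE" ]; then
        echo "superblock copy at byte $off can exist on this device"
    else
        echo "superblock copy at byte $off is past the end of the device"
    fi
done
```

So on this SSD only the 64 KiB and 64 MiB copies are candidates, and both came back invalid.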
Fredrick Posted July 18, 2021

I've also tried restoring the backup superblock as outlined here, but got the same results as the OP in that thread.
Fredrick Posted July 18, 2021

I suspect the drive didn't get re-formatted correctly when I made it into cache_vms, and that's why the btrfs commands aren't working. I hope it's okay to try and tag @JorgeB here, because I think he or she is the person for the job. Thanks a lot.

root@Tower:~# fdisk /dev/sdn -l
Disk /dev/sdn: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SV300S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdn1        2048 468862127 468860080 223.6G 83 Linux

root@Tower:~# file -sL /dev/sdn
/dev/sdn: DOS/MBR boot sector; partition 1 : ID=0x83, start-CHS (0x0,0,0), end-CHS (0x0,0,0), startsector 2048, 468860080 sectors, extended partition table (last)

The parted command doesn't seem to be available on Unraid.
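For what it's worth, the partition table itself looks consistent: the sector count fdisk reports for /dev/sdn1, multiplied by the 512-byte sector size, equals exactly the device size btrfs printed in the restore error, so the partition wasn't resized or moved. A quick sanity check using the numbers from the outputs in this thread (substitute your own if they differ):

```shell
SECTORS=468860080             # /dev/sdn1 sector count from fdisk
SECTOR_SIZE=512               # logical sector size from fdisk
BTRFS_DEV_SIZE=240056360960   # device size from the btrfs restore error

PART_BYTES=$((SECTORS * SECTOR_SIZE))
echo "partition size: $PART_BYTES bytes"
if [ "$PART_BYTES" -eq "$BTRFS_DEV_SIZE" ]; then
    echo "matches what btrfs saw: partition layout looks intact"
else
    echo "mismatch: the partition may have been recreated differently"
fi
```

That points at the filesystem contents being overwritten rather than the partitioning being wrong.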
JorgeB Posted July 19, 2021

18 hours ago, Fredrick said:
After rebooting the new cache_vms is showing as unmountable

I can't say what happened, since after the reboot there wasn't a valid btrfs filesystem on that SSD; it looks like it was wiped. If that's what happened, you can try this with the array stopped:

btrfs-select-super -s 1 /dev/sdX1

Replace X with the correct letter.
Fredrick Posted July 19, 2021

2 hours ago, JorgeB said:
Replace X with the correct letter.

Thanks for getting back to me. Unfortunately it still doesn't find a filesystem there:

root@Tower:~# btrfs-select-super -s 1 /dev/sdn1
No valid Btrfs found on /dev/sdn1
Open ctree failed
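One non-destructive way to double-check that the superblock really is gone (rather than the tools failing for some other reason) is to read the 8-byte btrfs magic directly. The primary superblock sits at 64 KiB into the partition, with the magic string "_BHRfS_M" 64 bytes into it, i.e. at byte 65600. A read-only sketch, assuming the device path from this thread:

```shell
DEV=/dev/sdn1   # the partition from this thread; substitute your own

# Read 8 bytes at offset 65600 (64 KiB superblock + 64-byte magic offset).
magic=$(dd if="$DEV" bs=1 skip=65600 count=8 2>/dev/null)
if [ "$magic" = "_BHRfS_M" ]; then
    echo "btrfs magic present in the primary superblock"
else
    echo "no btrfs magic at the primary superblock offset"
fi
```

If the magic isn't there, the superblock area was genuinely overwritten, which would match what btrfs-select-super is reporting.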
JorgeB Posted July 19, 2021

I don't know what happened to that device, but there's no valid filesystem there, so I don't see how to recover it.
Fredrick Posted July 19, 2021

2 hours ago, JorgeB said:
I don't know what happened to that device, but there's no valid filesystem there, so I don't see how to recover it.

That's a shame. I've been working on my backup, and it's nowhere near as good/fresh/complete as it should have been. Is there any reason I shouldn't trust this drive, from your point of view? Thanks again.
JorgeB Posted July 19, 2021

9 minutes ago, Fredrick said:
Is there any reason I shouldn't trust this drive, from your point of view?

Something very weird happened; as mentioned, I can't say what, since the filesystem was already gone at boot time. If it had only been wiped by mistake, the command above should have worked, but some data was lost as well. I remember the Kingston SV300 is the only SSD that has caused sync errors when used in the array after a reboot/power cycle, so it could be a device problem.