jslay Posted February 16, 2022 (edited)

Welp, it's my turn for this one I guess. Looks like I lost the superblock on one of the cache drives in my cache pool (and the other drive is showing signs of dying as well). I have been trying to recover some of the data (unsuccessfully), and am wondering if there are any other paths for me before formatting the pool and losing data. I have been following the @JorgeB guide.

Feb 15 16:13:19 unraid kernel: BTRFS info (device nvme0n1p1): turning on async discard
Feb 15 16:13:19 unraid kernel: BTRFS info (device nvme0n1p1): using free space tree
Feb 15 16:13:19 unraid kernel: BTRFS info (device nvme0n1p1): has skinny extents
Feb 15 16:13:20 unraid kernel: BTRFS info (device nvme0n1p1): enabling ssd optimizations
Feb 15 16:13:20 unraid kernel: BTRFS info (device nvme0n1p1): start tree-log replay
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099264 op 0x1:(WRITE) flags 0x1800 phys_seg 4 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099424 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099488 op 0x1:(WRITE) flags 0x1800 phys_seg 2 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099616 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099680 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099776 op 0x1:(WRITE) flags 0x1800 phys_seg 3 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2099904 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2100000 op 0x1:(WRITE) flags 0x1800 phys_seg 2 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2100096 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Feb 15 16:13:20 unraid kernel: blk_update_request: critical medium error, dev nvme1n1, sector 2100160 op 0x1:(WRITE) flags 0x1800 phys_seg 1 prio class 0
Feb 15 16:13:20 unraid kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
Feb 15 16:13:20 unraid kernel: BTRFS warning (device nvme0n1p1): chunk 507969011712 missing 1 devices, max tolerance is 0 for writable mount
Feb 15 16:13:20 unraid kernel: BTRFS: error (device nvme0n1p1) in write_all_supers:3845: errno=-5 IO failure (errors while submitting device barriers.)
Feb 15 16:13:20 unraid kernel: BTRFS warning (device nvme0n1p1): Skipping commit of aborted transaction.
Feb 15 16:13:20 unraid kernel: BTRFS: error (device nvme0n1p1) in cleanup_transaction:1942: errno=-5 IO failure
Feb 15 16:13:20 unraid kernel: BTRFS: error (device nvme0n1p1) in btrfs_replay_log:2279: errno=-5 IO failure (Failed to recover log tree)
Feb 15 16:13:20 unraid root: mount: /mnt/cache: can't read superblock on /dev/nvme1n1p1.

The only way I can get this to mount is:

mount -o ro,notreelog,nologreplay /dev/nvme0n1p1 /x

But subsequently, trying to copy data out of it results in all sorts of I/O errors and unreadable/incomplete files. btrfs restore is failing as well.
Restoring /mnt/user/Backups/cache_backup/domains/vm1/vdisk1.img
offset is 1114112
offset is 1138688
offset is 16384
offset is 81920
offset is 176128
offset is 20480
offset is 4096
offset is 3416064
offset is 3465216
offset is 3538944
offset is 3854336
offset is 3928064
offset is 163840
offset is 6578176
offset is 12173312
offset is 12193792
offset is 4096
offset is 12288
offset is 1114112
offset is 1228800
offset is 1478656
offset is 1585152
offset is 1703936
offset is 1769472
offset is 1916928
offset is 1982464
offset is 2027520
offset is 2056192
offset is 2076672
offset is 2121728
offset is 2306048
offset is 2351104
offset is 2433024
offset is 2441216
offset is 2482176
offset is 2498560
offset is 2666496
offset is 2707456
offset is 2727936
offset is 2748416
offset is 2863104
offset is 3002368
offset is 3014656
offset is 3104768
offset is 3207168
offset is 3272704
offset is 3297280
offset is 3469312
offset is 3493888
offset is 3563520
offset is 3629056
offset is 3801088
offset is 3895296
offset is 3928064
offset is 3948544
offset is 4005888
offset is 4022272
offset is 4149248
offset is 4202496
offset is 4227072
offset is 4243456
offset is 4321280
offset is 4345856
offset is 4452352
offset is 4472832
offset is 4575232
offset is 4603904
offset is 4636672
offset is 4648960
offset is 4915200
offset is 4988928
offset is 5320704
offset is 5386240
offset is 6975488
offset is 7143424
offset is 8925184
offset is 9699328
offset is 9854976
offset is 9904128
We seem to be looping a lot on /mnt/user/Backups/cache_backup/domains/vm1/vdisk1.img, do you want to keep going on ? (y/N/a):
Restoring /mnt/user/Backups/cache_backup/appdata/server/some_file1.txt
ERROR: exhausted mirrors trying to read (2 > 1)
Error copying data for /mnt/user/Backups/cache_backup/appdata/server/some_file1.txt
Restoring /mnt/user/Backups/cache_backup/appdata/server/some_file2.txt
ERROR: exhausted mirrors trying to read (2 > 1)
Error copying data for /mnt/user/Backups/cache_backup/appdata/server/some_file2.txt
Restoring /mnt/user/Backups/cache_backup/appdata/server/some_file3.txt
ERROR: exhausted mirrors trying to read (2 > 1)
Error copying data for /mnt/user/Backups/cache_backup/appdata/server/some_file3.txt

unraid-diagnostics-20220215-1616.zip

Edited February 16, 2022 by jslay (More logs)
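For reference, the usual escalation path for a pool in this state looks roughly like the following. This is only a sketch: the device names are taken from the logs above, the mount point and destination paths are placeholders, and a dying drive should ideally be imaged before anything else touches it.

```shell
# Try recovering the superblock from one of its backup copies
# on the failing member (this writes a repaired superblock).
btrfs rescue super-recover -v /dev/nvme1n1p1

# Mount read-only from an older tree root, skipping log replay,
# entering through the healthier device.
mkdir -p /x
mount -o ro,degraded,usebackuproot,nologreplay /dev/nvme0n1p1 /x

# If the mount still fails, pull files off the unmounted filesystem.
# -i keeps going past errors instead of stopping at the first bad extent.
mkdir -p /mnt/recovered
btrfs restore -v -i /dev/nvme0n1p1 /mnt/recovered

# A drive throwing medium errors is best imaged first and worked on
# from the copy (ddrescue is in the gddrescue package; -d uses direct
# I/O, -r3 retries bad sectors three times).
ddrescue -d -r3 /dev/nvme1n1 /mnt/disk1/nvme1.img /mnt/disk1/nvme1.map
```

Everything here except super-recover and the ddrescue image writes is read-only against the pool.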
jslay Posted February 16, 2022 (Author)

Well, came back to it after letting it sit, and the cache drives are just gone now. Rebooted, still missing. Nothing under /dev for nvme. Nothing in the BIOS. They ded.
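When NVMe devices vanish like this, it is worth confirming whether the kernel still enumerates them at all before writing the hardware off. A quick checklist (a sketch; `nvme` comes from nvme-cli and `smartctl` from smartmontools, and the device name is a placeholder):

```shell
# Did the kernel create any NVMe device nodes?
ls /dev/nvme* 2>/dev/null

# Are the controllers even visible on the PCIe bus?
lspci | grep -i -E 'nvme|non-volatile'

# Kernel messages about NVMe resets, timeouts, or link drops.
dmesg | grep -i nvme | tail -n 50

# If a controller is present, list namespaces and check its health log.
nvme list
smartctl -a /dev/nvme0
```

If the controllers are missing from lspci after a cold power cycle, it is a hardware (or slot/BIOS) problem rather than anything filesystem-level.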