Gazzo Posted October 18, 2023
Hello,
My Docker containers and VMs disappeared out of nowhere. I rebooted thinking that would help, but shutdown got stuck on spinning down drives. I forced a reboot, and when I started the array my cache drive was unmountable. I tried reformatting it, probably doing more damage than anything. If I can somehow get this working, or just reformat the cache drive, I'd greatly appreciate it. I am running Unraid version 6.12.2. I have also attached the diagnostics to this post. Thanks in advance.
sage-diagnostics-20231018-1834.zip
JorgeB Posted October 19, 2023
Oct 18 14:56:52 Sage kernel: sd 1:0:1:0: [sdc] tag#3135 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=DRIVER_OK cmd_age=0s
Oct 18 14:56:52 Sage kernel: sd 1:0:1:0: [sdc] tag#3135 CDB: opcode=0x28 28 00 00 00 02 00 00 00 08 00
Oct 18 14:56:52 Sage kernel: I/O error, dev sdc, sector 512 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
Oct 18 14:56:52 Sage kernel: sd 1:0:1:0: [sdc] tag#3134 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=DRIVER_OK cmd_age=7s
Oct 18 14:56:52 Sage kernel: sd 1:0:1:0: [sdc] tag#3134 CDB: opcode=0x2a 2a 00 3a 39 3b 1b 00 08 00 00
Oct 18 14:56:52 Sage kernel: I/O error, dev sdc, sector 976829211 op 0x1:(WRITE) flags 0x1800 phys_seg 20 prio class 2
Oct 18 14:56:52 Sage kernel: XFS (dm-4): log recovery write I/O error at daddr 0xb80b len 4096 error -5
Oct 18 14:56:52 Sage kernel: XFS (dm-4): failed to locate log tail
Oct 18 14:56:52 Sage kernel: XFS (dm-4): log mount/recovery failed: error -5
Oct 18 14:56:52 Sage kernel: XFS (dm-4): log mount failed
Oct 18 14:56:52 Sage kernel: sd 1:0:1:0: Power-on or device reset occurred
There are I/O errors on the cache device; try replacing the cables and post new diags after array start.
Gazzo Posted October 19, 2023
Swapped out the cables, and it looks like I'm getting the same error. Attached the screenshot and logs.
sage-diagnostics-20231019-1510.zip
JorgeB Posted October 20, 2023
Much fewer, but there are still errors. It could also be a device problem; connect the SSD to the onboard SATA instead.
Gazzo Posted October 25, 2023
So after successfully connecting it to an onboard SATA port, it still shows up as unmountable.
sage-diagnostics-20231024-2006.zip
JorgeB Posted October 25, 2023
No device errors so far, which is good. Check the filesystem on cache; run it without -n, and if it asks for -L, use it.
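Assuming the cache is XFS, the check from the console usually looks like the sketch below. The array must be started in maintenance mode, and sdX1 is a placeholder; substitute your actual cache device and partition number:

```shell
# Read-only check first: reports problems, changes nothing.
xfs_repair -n /dev/sdX1
# Actual repair (only after reviewing the dry-run output):
xfs_repair /dev/sdX1
# Zeroing the log is destructive to recent metadata; use -L
# only if xfs_repair itself refuses to proceed and asks for it.
xfs_repair -L /dev/sdX1
```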
Gazzo Posted October 25, 2023
So I'm kind of stuck... I'm having trouble determining what the cache drive's filesystem was (or is), since it is set to auto. There don't seem to be any instructions on that page for when the drive is set to auto.
Gazzo Posted October 26, 2023
Received the following at the end of the scan:
.................Sorry, could not find valid secondary superblock
Exiting now.
itimpi Posted October 26, 2023
9 minutes ago, Gazzo said:
"Received the following at the end of the scan: .................Sorry, could not find valid secondary superblock. Exiting now."
What was the exact command you used? You can get this symptom if the command is not quite right.
Gazzo Posted October 26, 2023
xfs_repair /dev/sdb and xfs_repair -n /dev/sdb; both yielded the same message.
JorgeB Posted October 26, 2023
That's not the correct command, see the link above.
Gazzo Posted October 26, 2023
The only other command I'm seeing is "reiserfsck --fix-fixable /dev/sdb1". Should I run that one?
itimpi Posted October 26, 2023
1 hour ago, Gazzo said:
"The only other command I'm seeing is 'reiserfsck --fix-fixable /dev/sdb1'. Should I run that one?"
No — reiserfsck is for ReiserFS, not XFS. But xfs_repair always requires the partition number to be supplied, so the device should have been /dev/sdb1.
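One way to confirm the right device node (and the detected filesystem) before running xfs_repair is lsblk; sdb here is just an example letter:

```shell
# Show partitions and detected filesystems for the cache device
# (sdb is an example -- check which letter your cache drive has):
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT /dev/sdb
# An encrypted pool will show FSTYPE "crypto_LUKS" on the partition,
# with the real filesystem on a /dev/mapper child underneath it.
```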
Gazzo Posted October 26, 2023
After running xfs_repair -n /dev/sdb1 it returns the text below, then continues to scan and ends with "Sorry, could not find valid secondary superblock":
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
I tried without -n and it said it couldn't run because the resource was busy. I did start the array in maintenance mode.
JorgeB Posted October 27, 2023
9 hours ago, Gazzo said:
"I tried without -n and it said it couldn't run because the resource was busy."
If enabled, disable array auto-start, reboot, start in maintenance mode and try again without -n.
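If xfs_repair still reports the resource busy after a clean reboot into maintenance mode, these stock commands can show what is holding the partition open (sdb1 is an example name):

```shell
# Find processes holding the partition open:
fuser -v /dev/sdb1
lsof /dev/sdb1
# If nothing shows up, the partition may be claimed by device-mapper
# (e.g. an encrypted pool) rather than a process; list mappings with:
dmsetup ls
```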
Gazzo Posted October 27, 2023
I checked to make sure that auto-start is off. I rebooted, started in maintenance mode and tried xfs_repair /dev/sdb1 again. Same message.
JorgeB Posted October 27, 2023
That's rather strange; try rebooting in safe mode in case there's a plugin interfering.
Gazzo Posted October 27, 2023
Unfortunately, same results. I rebooted in safe mode, started in maintenance mode and ran the command xfs_repair /dev/sdb1.
Gazzo Posted October 27, 2023
Here it is.
sage-diagnostics-20231027-1201.zip
JorgeB Posted October 27, 2023
I missed before that you are using encryption; it should be:
xfs_repair -v /dev/mapper/sdb1
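For reference, on a LUKS-encrypted pool the filesystem lives on the device-mapper node, not the raw partition, which is why the earlier commands failed. A rough sketch of the sequence (names are examples; in Unraid the mapper name usually matches the partition):

```shell
# With the array started in maintenance mode (so the device is unlocked):
ls /dev/mapper/                 # the mapping (e.g. sdb1) should be listed
cryptsetup status sdb1          # optional: confirm it is an active LUKS mapping
xfs_repair -v /dev/mapper/sdb1  # run the repair against the mapped device
```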
JorgeB Posted October 27, 2023
Alternatively, you can always use the GUI like the link above explains; no need for typing commands.
Gazzo Posted October 27, 2023
So I ran xfs_repair -v /dev/mapper/sdb1 and received a short error. Then I ran xfs_repair -vL /dev/mapper/sdb1 and got some results that I don't understand. Should I post new diagnostics? Not sure if this output would be included in them.
JorgeB Posted October 27, 2023
Instead of a screenshot, copy/paste the full xfs_repair output here.