DCWolfie Posted July 17, 2022

So I recently realized I had my disks set up the wrong way (I had added SSDs to my array). I wanted to move everything off the cache drive back onto the array so I could reorganize everything. I switched from Cache: Prefer to Cache: Yes and invoked the mover. Then nothing happened... I then noticed I couldn't access the cache drive. I restarted the array in maintenance mode and tried to repair the XFS filesystem with xfs_repair. This gave me "couldn't verify primary superblock - not enough secondary superblocks with matching geometry" and it finished with "Sorry, could not find valid secondary superblock". How can I fix this so I can get my cache data off the drive?

I found this in the logs when starting the array. Not sure if it helps:

Jul 17 13:00:04 UnraidHub emhttpd: shcmd (55): mkdir -p /mnt/cache
Jul 17 13:00:04 UnraidHub emhttpd: shcmd (56): mount -t xfs -o noatime,nouuid /dev/nvme0n1p1 /mnt/cache
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): Mounting V5 Filesystem
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): Starting recovery (logdev: internal)
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): Metadata CRC error detected at xfs_agfl_read_verify+0x27/0x56 [xfs], xfs_agfl block 0x3
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): Unmount and run xfs_repair
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): First 128 bytes of corrupted metadata buffer:
Jul 17 13:00:04 UnraidHub kernel: 00000000: 86 27 b9 af e8 f1 6b 8e fc ce db 83 12 58 bc 6d  .'....k......X.m
Jul 17 13:00:04 UnraidHub kernel: 00000010: 37 b8 9c f3 08 df 82 cd 09 02 c2 58 03 d4 d1 46  7..........X...F
Jul 17 13:00:04 UnraidHub kernel: 00000020: 56 e0 07 f5 e8 84 79 da 99 b6 3e 21 51 2c 14 bd  V.....y...>!Q,..
Jul 17 13:00:04 UnraidHub kernel: 00000030: 59 97 b8 37 5f c3 ea b2 41 5b 2a 00 7f fa 40 37  Y..7_...A[*...@7
Jul 17 13:00:04 UnraidHub kernel: 00000040: 82 f8 e6 30 4e 99 35 b0 03 78 ef f1 6b f8 dd 21  ...0N.5..x..k..!
Jul 17 13:00:04 UnraidHub kernel: 00000050: 70 08 f2 9a 27 e9 95 a8 08 b6 a1 b8 5a 6f ba be  p...'.......Zo..
Jul 17 13:00:04 UnraidHub kernel: 00000060: 4b ba 3e 97 83 e6 84 a6 ea e4 1b 04 2e ad d5 52  K.>............R
Jul 17 13:00:04 UnraidHub kernel: 00000070: e5 42 f5 85 63 4d 65 f6 f0 66 bb f2 b5 46 43 31  .B..cMe..f...FC1
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_alloc_read_agfl+0x82/0xc0 [xfs]" at daddr 0x3 len 1 error 74
Jul 17 13:00:04 UnraidHub kernel: 00000000: f6 91 53 00 00 00 00 00 01 00 00 00 00 00 00 00  ..S.............
Jul 17 13:00:04 UnraidHub kernel: XFS (nvme0n1p1): Internal error xfs_efi_item_recover at line 633 of file fs/xfs/xfs_extfree_item.c. Caller xlog_recover_process_intents+0xa6/0x268 [xfs]

Attached diagnostics with the array running: diag-mounted.zip
DCWolfie Posted July 17, 2022

I'd like to add that I can't seem to run SMART self-tests either. However, I can get the results of my last run, which I believe wasn't that long ago. I don't know much about this, but I can't see that there was anything wrong with the drive at that time.

INTEL_SSDPEKKW128G7_BTPY63920H1J128A-20220717-1911.txt
trurl Posted July 17, 2022

Disks 21 and 22 are also SSDs. Are you planning to get those out of the array as well?

6 hours ago, DCWolfie said: "tried to fix the XFS filesystem"

Did you try to do this from the webUI or from the command line?
DCWolfie Posted July 17, 2022

1 minute ago, trurl said: "Disks 21 and 22 are also SSDs. Are you planning to get those out of the array as well?"

Yeah, correct. I want them both out. Disk 21 has seen better days and has disk speeds all over the place, so that's going in the trash.

1 minute ago, trurl said: "Did you try to do this from the webUI or from the command line?"

Both. Doing it from the GUI didn't show me the results, so what I added in this post is from the CLI.
trurl Posted July 17, 2022

17 minutes ago, DCWolfie said: "from the CLI."

What was the exact command you used?
DCWolfie Posted July 17, 2022

Just now, trurl said: "What was the exact command you used?"

xfs_repair -v /dev/nvme0n1
trurl Posted July 17, 2022

You have to specify the partition. Try:

xfs_repair -v /dev/nvme0n1p1
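For anyone else hitting this: the partition-node detail trurl points out can be sketched like this (a sketch only — the device name is the one from this thread, and the suffix rule below is the standard Linux block-device naming convention):

```shell
# Derive the partition node to pass to xfs_repair from a whole-disk
# device name. NVMe (and eMMC) devices insert a "p" before the
# partition number (nvme0n1 -> nvme0n1p1); sdX-style devices append
# the number directly (sdb -> sdb1).
dev=nvme0n1
case "$dev" in
  nvme*|mmcblk*) part="${dev}p1" ;;   # "p" separator before the partition number
  *)             part="${dev}1"  ;;   # number appended directly
esac
echo "/dev/$part"   # prints /dev/nvme0n1p1
```

Pointing xfs_repair at the whole disk means it looks for a superblock at the start of the device rather than at the start of the partition, which would explain a "could not find valid secondary superblock" result from the first attempt.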
DCWolfie Posted July 17, 2022

4 minutes ago, trurl said: "You have to specify the partition. Try xfs_repair -v /dev/nvme0n1p1"

Same result.
trurl Posted July 17, 2022

What do you think was on the disk?
DCWolfie Posted July 17, 2022

1 minute ago, trurl said: "What do you think was on the disk?"

It had my complete appdata and system folders (about 120GB). Well, almost complete. I remember there was around 500MB left on Disk 22 (which held the shares).
DCWolfie Posted July 17, 2022

@trurl could forcing log zeroing (the -L flag) be an option?
trurl Posted July 17, 2022

If it had said you had a log you could zero, then you would have to do that. Apparently it didn't find anything.
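For reference, the usual xfs_repair escalation looks like this (a command sketch only, using the device path from this thread — these are standard xfs_repair flags, but the filesystem must be unmounted, i.e. the Unraid array stopped or in maintenance mode, and -L should be a last resort since zeroing the log discards any metadata updates that were not yet replayed):

```shell
xfs_repair -n /dev/nvme0n1p1   # dry run: report problems, write nothing
xfs_repair -v /dev/nvme0n1p1   # verbose repair
xfs_repair -L /dev/nvme0n1p1   # last resort: zero the metadata log first
```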
DCWolfie Posted July 17, 2022

Just now, trurl said: "If it had said you had a log you could zero, then you would have to do that. Apparently it didn't find anything."

Okay. Any suggestions for what I might do, or am I dead in the water?
trurl Posted July 17, 2022

25 minutes ago, DCWolfie said: "It had my complete appdata and system folders (about 120GB). Well, almost complete. I remember there was around 500MB left on Disk 22 (which held the shares)"

Disk 22 currently has an appdata share, maybe not complete, maybe even nothing in it. Did you have any VMs?

docker.img can be easily recreated:
https://wiki.unraid.net/Manual/Docker_Management#Re-Create_the_Docker_image_file

And your containers can be reinstalled just as they were from the templates on flash:
https://wiki.unraid.net/Manual/Docker_Management#Re-Installing_Docker_Applications

but unless they have their appdata, they won't remember anything.
DCWolfie Posted July 17, 2022

As mentioned, it only has 500MB out of ~120GB, so yeah... as good as empty. No, I did not have any VMs, only Docker. Yeah, that I know. That's not the issue, as I guess you also understand. I'm guessing you don't have any suggestions for what I can do? I can't seem to find much online either.
trurl Posted July 17, 2022

I have appdata backed up to the array with the CA Backup plugin. Looks like you have that plugin installed.
DCWolfie Posted July 17, 2022

Yeah, correct, that was my next step: getting the backup up and running 🥲. But that doesn't help me now, as I never got to that stage.
trurl Posted July 17, 2022

4 hours ago, DCWolfie said: "Yeah, correct, that was my next step: getting the backup up and running 🥲. But that doesn't help me now, as I never got to that stage."

I don't understand. Do you have an appdata backup or not?
DCWolfie Posted July 17, 2022

2 minutes ago, trurl said: "I don't understand. Do you have an appdata backup or not?"

I did NOT get the time to take a backup. So no. I've stated that multiple times 😄.