AngelEyes Posted May 3, 2020 (edited)

Hi,

I rebooted my server recently and noticed that 90% of the media in my 'TV' folder is missing, and the server stats show roughly 10-15 TB more free space than there should be across multiple disks. No one has access to the server but me, and lately it has been turned off most of the time. I am pretty gutted to have lost so much, but I am more interested in making sure it doesn't happen again. My error log keeps filling up, so no doubt something is up, but I didn't manage to resolve things when I posted about that previously.

Is it possible Sonarr just 'decided' to delete a load of media?? Any help would be gratefully received, as I am worried about losing my Film media folder too.

Thanks,
Adam

server-diagnostics-20200503-1422.zip

Edited May 3, 2020 by AngelEyes
Squid Posted May 3, 2020

Run the check described at https://wiki.unraid.net/Check_Disk_Filesystems against disk 3.
AngelEyes (Author) Posted May 3, 2020

Hi, I have done that 3 times now; it doesn't seem to make any difference.
Squid Posted May 3, 2020

1 minute ago, AngelEyes said:
Hi, I have done that 3 times now; it doesn't seem to make any difference.

You probably never took off the "n" flag from the command.
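For anyone else following the thread, the distinction being pointed at looks roughly like this from the Unraid command line (a minimal sketch; it assumes disk 3 maps to /dev/md3 with the array started in maintenance mode, which is the usual Unraid convention but worth confirming on your own system before running anything):

# Check-only pass: the -n flag means "no modify", so problems are reported
# but nothing is actually repaired. This is what runs if -n is left in place.
xfs_repair -n /dev/md3

# Actual repair: the same command with -n removed (array in maintenance mode,
# so the device is not mounted while xfs_repair writes to it).
xfs_repair /dev/md3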
AngelEyes (Author) Posted May 3, 2020

Definitely did that, but I'll give it one more go for fun. Any reason why 90% of my TV library across 12 disks has disappeared? I doubt it is related to the disk 3 issue?

Thank you!
AngelEyes Posted May 3, 2020 Author Posted May 3, 2020 Phase 1 - find and verify superblock... - block cache size set to 1493528 entries Phase 2 - using internal log - zero log... zero_log: head block 9173 tail block 9173 - scan filesystem freespace and inode maps... - found root inode chunk Phase 3 - for each AG... - scan and clear agi unlinked lists... - process known inodes and perform inode discovery... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 bogus .. inode number (0) in directory inode 7516192864, clearing inode number - agno = 8 - agno = 9 - agno = 10 - agno = 11 - process newly discovered inodes... Phase 4 - check for duplicate blocks... - setting up duplicate extent list... - check for inodes claiming duplicate blocks... - agno = 0 - agno = 1 - agno = 2 - agno = 6 - agno = 4 - agno = 11 - agno = 5 - agno = 9 - agno = 10 - agno = 7 - agno = 8 - agno = 3 bogus .. inode number (0) in directory inode 7516192864, clearing inode number Phase 5 - rebuild AG headers and trees... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - reset superblock... Phase 6 - check inode connectivity... - resetting contents of realtime bitmap and summary inodes - traversing filesystem ... - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 entry "ANY!" in dir ino 4294967392 doesn't have a .. entry, will set it in ino 7516192864. - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 setting .. in sf dir inode 7516192864 to 4294967392 Metadata corruption detected at 0x462e2c, inode 0x1c0000060 data fork xfs_repair: warning - iflush_int failed (-117) - traversal finished ... - moving disconnected inodes to lost+found ... Phase 7 - verify and correct link counts... XFS_REPAIR Summary Sun May 3 19:24:08 2020 Phase Start End Duration Phase 1: 05/03 19:24:06 05/03 19:24:06 Phase 2: 05/03 19:24:06 05/03 19:24:06 Phase 3: 05/03 19:24:06 05/03 19:24:06 Phase 4: 05/03 19:24:06 05/03 19:24:06 Phase 5: 05/03 19:24:06 05/03 19:24:06 Phase 6: 05/03 19:24:06 05/03 19:24:06 Phase 7: 05/03 19:24:06 05/03 19:24:06 Total run time: done Here is the completed Fix, I think it posted similar results in the other thread too. Quote
AngelEyes (Author) Posted May 9, 2020

Hi, I have run the fix multiple times now and am still getting a full log on a regular basis. Also, no comments on the problem that is really bothering me: Sonarr seems to have wiped all my TV content across 12 disks. Any help greatly welcomed, thank you!

server-diagnostics-20200509-1451.zip
AngelEyes Posted May 10, 2020 Author Posted May 10, 2020 server-diagnostics-20200510-1027.zip Ok, rebooted and diags attached. Thanks. Quote
JorgeB Posted May 10, 2020

Upgrade to v6.8.3, since it includes a newer xfsprogs, and run xfs_repair again.
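A quick way to confirm the upgrade actually brought in a newer xfsprogs is to ask xfs_repair for its version before re-running the check (a minimal sketch; the version string in the comment is only an illustration, not necessarily what your release will report):

# Print the version of the xfsprogs tools currently installed
xfs_repair -V
# prints something like "xfs_repair version 4.20.0"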
AngelEyes (Author) Posted May 10, 2020

Ok, I updated, rebooted and ran the check without any options added in the box.

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
bogus .. inode number (0) in directory inode 7516192864, clearing inode number
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 4
        - agno = 3
        - agno = 8
        - agno = 1
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 2
bogus .. inode number (0) in directory inode 7516192864, clearing inode number
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
entry "ANY!" in dir ino 4294967392 doesn't have a .. entry, will set it in ino 7516192864.
setting .. in sf dir inode 7516192864 to 4294967392
Metadata corruption detected at 0x462c1c, inode 0x1c0000060 data fork
xfs_repair: warning - iflush_int failed (-117)
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
JorgeB Posted May 10, 2020

It appears it's still failing. Start the array again; if the fs is still corrupt, you'll need to either ask for help on the xfs mailing list, manually upgrade xfsprogs to the latest version (not the one in the link) and try again, wait for a newer Unraid release, or re-format the disk.
JorgeB Posted May 10, 2020

Just now, johnnie.black said:
re-format the disk.

Note that for this you'd need to back up first; you can't format and then rebuild from parity.
AngelEyes (Author) Posted May 10, 2020

Ok, I think I understand. The disk is brand new, so that's slightly annoying, but I have a free empty disk of the same size on the server, not in the array. Can I just replace it, rebuild, and then see if I can figure out what is wrong with the current disk 3?

Thank you.
JorgeB Posted May 10, 2020

The problem is the filesystem, not the disk. Rebuilding from parity won't help, since it will rebuild the same filesystem. This looks like an xfs_repair bug, since it seems unable to fix it.
AngelEyes (Author) Posted May 10, 2020

Ok, thanks for your help. Can I just ask where to find the latest xfsprogs and where the xfs mailing list thread is? Sorry, bit of a noob. Cheers.
JorgeB Posted May 10, 2020

The latest version appears to be 5.6.0; then follow the instructions in the link above. The mailing list is below:

https://xfs.org/index.php/XFS_email_list_and_archives
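Since the thread doesn't spell out the manual upgrade, here is a rough sketch of how a newer xfsprogs can be dropped onto a running Unraid box, which is Slackware-based. The package filename below is hypothetical and depends on where you source the 5.6.0 build, so treat this as an outline rather than the exact procedure JorgeB had in mind:

# Install/upgrade the downloaded xfsprogs package for the current boot
# (filename is a placeholder - substitute the actual build you downloaded)
upgradepkg --install-new xfsprogs-5.6.0-x86_64-1.txz

# Confirm the new version took effect before re-running the repair
xfs_repair -V

# Copying the .txz into the flash drive's "extra" folder (/boot/extra) makes
# Unraid reinstall it automatically on subsequent boots, which is why that
# folder comes up later in the thread as something to clean up afterwards.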
AngelEyes (Author) Posted May 10, 2020

Thank you. I have manually updated it, checked it was the new version and run it again... and got this:

Quote:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
bogus .. inode number (0) in directory inode 7516192864, clearing inode number
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 2
        - agno = 11
        - agno = 6
        - agno = 0
        - agno = 8
        - agno = 7
        - agno = 4
        - agno = 9
        - agno = 5
        - agno = 10
        - agno = 3
bogus .. inode number (0) in directory inode 7516192864, clearing inode number
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
entry "ANY!" in dir ino 4294967392 doesn't have a .. entry, will set it in ino 7516192864.
setting .. in sf dir inode 7516192864 to 4294967392
Metadata corruption detected at 0x46432a, inode 0x1c0000060 data fork
xfs_repair: warning - iflush_int failed (-117)
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

I also rebooted again and added the diagnostics. Thank you.

server-diagnostics-20200510-1841.zip
JorgeB Posted May 11, 2020

If the latest xfs_repair can't fix the filesystem, I don't have any other ideas besides the ones suggested above.

P.S. Don't forget to delete the "extra" folder on the flash drive.
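For anyone following along, the "extra" folder lives on the flash drive, which Unraid mounts at /boot; removing it just stops the manually added package from being reinstalled on the next boot (a minimal sketch, assuming nothing else of yours is stored in that folder):

# The flash drive is mounted at /boot; this removes the extra-packages folder
# so the manually installed xfsprogs is no longer re-applied at every boot.
rm -r /boot/extra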
AngelEyes (Author) Posted May 11, 2020

Hey, I successfully subscribed to the XFS mailing list but have no idea how I am supposed to interact with anyone on it. If I send a message it just bounces. This is all a bit new to me, sorry 😕
AngelEyes (Author) Posted May 11, 2020

Is it possible to copy the disk content to elsewhere on the array and bin the disk? Sorry, but I am just hoping to find some other options, as I am way out of my depth.
JorgeB Posted May 11, 2020

1 hour ago, AngelEyes said:
Is it possible to copy the disk content to elsewhere on the array and bin the disk?

It should be, since the disk is still mounting, as long as you have available space.
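A minimal sketch of that copy, assuming the source is disk 3 and the destination is another array disk with enough free space (the destination disk number here is just a placeholder for whichever disk applies on your server):

# Copy everything from disk 3 to disk 5, preserving permissions and timestamps;
# add --dry-run first to preview what would be transferred.
rsync -avh /mnt/disk3/ /mnt/disk5/

# Once the copy is verified, the corrupted filesystem on disk 3 can be dealt
# with separately (reformat, replace, etc.).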
zhengqiang Posted October 22, 2020

Did you solve this problem? It's happened to me too.