DTMHibbert Posted April 3, 2023

Hi all,

I'm seeing some potentially worrying errors in the syslog. Today I randomly ran a Fix Common Problems scan, which alerted me to the fact that my syslog was at 100%. I hadn't noticed beforehand, and the server seems to be running perfectly fine. I checked the log and I'm seeing the following errors:

Mar 25 05:01:13 Zeus kernel: XFS (md1p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 25 05:01:13 Zeus kernel: CPU: 5 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

These errors occur every day at the same time, give or take a few seconds (5am). That is roughly the time the CA Backup and Restore plugin started its daily backup; I have since noticed that plugin is now deprecated, so I have of course removed it from my system. Could that be the culprit for these errors?

Ultimately, do I have anything to worry about? Looking up the error on Google points me to a bunch of articles about running checks/scans in maintenance mode. I'll restart the server soon, but I'm currently just finishing a parity check. I have attached the diagnostics as well in case that helps.

Thanks all

zeus-diagnostics-20230403-0916.zip
JorgeB Posted April 3, 2023

Check filesystem on disk1.
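For reference, a read-only check from the console would look roughly like this; this is just a sketch, assuming the array has been started in maintenance mode, and -n means "no modify", so it only reports problems without changing anything:

# array started in maintenance mode first (Main -> Stop -> tick maintenance mode -> Start)
xfs_repair -nv /dev/md1    # disk 1 read-only check; see below about the device name on newer releases

The same check can also be started from the GUI by clicking the disk on the Main tab.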
DTMHibbert Posted April 3, 2023

This is what I'm getting as a result:

xfs_repair -v /dev/md1
/dev/md1: No such file or directory
/dev/md1: No such file or directory
fatal error -- couldn't initialize XFS library

Anything else I can do? What has actually happened? Is there anything I have done wrong, something bad with the drive? Sorry for all the questions, just trying to understand.

Thanks
JorgeB Posted April 3, 2023

With v6.12 it's now /dev/md1p1; you can also use the GUI, that's still the same.
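In other words, the naming change looks like this (just a sketch, using disk 1 as the example):

xfs_repair -v /dev/md1      # Unraid 6.11 and earlier: disk 1 is /dev/md1
xfs_repair -v /dev/md1p1    # Unraid 6.12 and later: disk 1 is /dev/md1p1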
DTMHibbert Posted April 3, 2023

OK, the command worked with /dev/md1p1 and it spat out the following:

root@Zeus:~# xfs_repair -v /dev/md1p1
Phase 1 - find and verify superblock...
        - block cache size set to 1886968 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1216181 tail block 1216181
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Mon Apr 3 16:03:09 2023

Phase           Start           End             Duration
Phase 1:        04/03 16:03:08  04/03 16:03:08
Phase 2:        04/03 16:03:08  04/03 16:03:09  1 second
Phase 3:        04/03 16:03:09  04/03 16:03:09
Phase 4:        04/03 16:03:09  04/03 16:03:09
Phase 5:        04/03 16:03:09  04/03 16:03:09
Phase 6:        04/03 16:03:09  04/03 16:03:09
Phase 7:        04/03 16:03:09  04/03 16:03:09

Total run time: 1 second
done

What next? Thanks for your help as well.
itimpi Posted April 3, 2023

Restart the array in normal mode.
DTMHibbert Posted April 3, 2023

Great, so will that fix whatever was wrong? Also, looking closer at the errors in the system log, it seems the other drives are being reported too (see below):

Mar 26 05:01:12 Zeus kernel: XFS (md2p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md3p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md4p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md5p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md6p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md7p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md8p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
Mar 26 05:01:12 Zeus kernel: XFS (md9p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

The only thing changing is the (md"x"p1), so I'm assuming I just run the previous command on all the reported disks? Any idea why whatever has happened, happened?

Thanks
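If it helps, I was thinking of something along these lines to cover them all (just my rough idea, assuming they are all XFS and the array is in maintenance mode; I would run it with -n first so nothing gets modified):

for i in 2 3 4 5 6 7 8 9; do
  echo "=== checking /dev/md${i}p1 ==="
  xfs_repair -n /dev/md${i}p1    # read-only check; drop -n to actually repair
done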
JorgeB Posted April 3, 2023

Strange that all disks are reporting the same issue; are these new filesystems?
JorgeB Posted April 3, 2023

Also, the errors start at 5:01am for all disks. Is anything scheduled at that time that could be related?
DTMHibbert Posted April 3, 2023

No, it's been up and running for a few years now, adding disks as I go.
DTMHibbert Posted April 3, 2023

I had CA Backup and Restore start at 5am, but I have since removed it due to it not being supported on 6.12 RC2... could this have been the cause?
JorgeB Posted April 3, 2023

Not sure what it does, but it's strange that it would corrupt all disks. Run xfs_repair on all of them and keep monitoring.
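To keep an eye on it afterwards, something like this should show whether the error comes back after the next 5am window (just a quick sketch; the message text is taken from the log lines above):

grep 'xfs_acl_from_disk' /var/log/syslog    # should return nothing once the filesystems are clean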
rutherford Posted February 18

I got a corruption error on md1p1, so I'm going to do an xfs_repair on disk 1 and see if that comes up with anything.

To answer my own question about what md1p1 actually is: https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/ says "Disk 7 will always be /mnt/disk7 and will always be /dev/md7", so md1 is disk 1.

xfs_repair found a single error and fixed it. <shrug>