XFS Syslog Errors



Hi All,

 

So I'm seeing some potentially worrying errors in the syslog. Today I randomly ran Fix Common Problems, which alerted me to the fact that my syslog was at 100%. I hadn't noticed beforehand, and the server seems to be running perfectly fine. I checked the log and I'm seeing the following errors:

 

Mar 25 05:01:13 Zeus kernel: XFS (md1p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 25 05:01:13 Zeus kernel: CPU: 5 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
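
In case it helps, this is roughly how I'm pulling those entries out of the log to see how often they occur and which devices they mention (a quick sketch, assuming the default /var/log/syslog location):

grep 'xfs_acl_from_disk' /var/log/syslog | grep -o 'md[0-9]*p1' | sort | uniq -c   # count the errors per md device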

 

These errors seem to occur every day at the same time, give or take a few seconds (5am in the morning). This is roughly the time the CA Backup and Restore plugin was set to start its daily backup. I have since noticed that plugin is now deprecated, so I have of course removed it from my system. Could that be the culprit for these errors?

 

Ultimately, do I have anything to worry about? I have looked up the error on Google, which points me to a bunch of articles about running checks/scans in maintenance mode.
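
If I'm reading those articles right, the safe first step is a read-only check with the array started in maintenance mode, something along these lines (just a sketch; the exact device name for disk 1 is an assumption on my part):

xfs_repair -n -v /dev/md1   # -n only reports problems, it does not modify the filesystem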

 

I'll restart the server soon, but I'm currently just finishing a parity check. I have attached the diagnostics as well in case that helps.

 

Thanks All :)

zeus-diagnostics-20230403-0916.zip


This is what I'm getting as a result:

 

xfs_repair -v /dev/md1
/dev/md1: No such file or directory
/dev/md1: No such file or directory

fatal error -- couldn't initialize XFS library
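
For reference, listing the md device nodes should show what actually exists on this box (assuming the standard /dev layout):

ls -l /dev/md*   # shows the exact names of the array's md devices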

 

Anything else I can do?

What has actually happened? Is there anything I have done wrong, or something bad with the drive?

 

Sorry for all the questions, just trying to understand.


Thanks 


OK, the command worked with /dev/md1p1 and it spat out the following:

 

root@Zeus:~# xfs_repair -v /dev/md1p1
Phase 1 - find and verify superblock...
        - block cache size set to 1886968 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1216181 tail block 1216181
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Mon Apr  3 16:03:09 2023

Phase           Start           End             Duration
Phase 1:        04/03 16:03:08  04/03 16:03:08
Phase 2:        04/03 16:03:08  04/03 16:03:09  1 second
Phase 3:        04/03 16:03:09  04/03 16:03:09
Phase 4:        04/03 16:03:09  04/03 16:03:09
Phase 5:        04/03 16:03:09  04/03 16:03:09
Phase 6:        04/03 16:03:09  04/03 16:03:09
Phase 7:        04/03 16:03:09  04/03 16:03:09

Total run time: 1 second
done
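
One follow-up I can think of, once the array is started normally again, is checking whether the repair orphaned anything (sketch; I'm assuming disk 1 comes back at the usual /mnt/disk1 mount point):

ls -la /mnt/disk1/lost+found   # xfs_repair only creates this directory if it had to move disconnected files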

 

What next?

 

Thanks for your help as well 


Great, so will that fix whatever was wrong?

 

Also, looking closer at the errors in the system log, it seems the other drives are being reported too (see below):

 

Mar 26 05:01:12 Zeus kernel: XFS (md2p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md3p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md4p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md5p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md6p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md7p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md8p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

Mar 26 05:01:12 Zeus kernel: XFS (md9p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c.  Caller xfs_get_acl+0x125/0x169 [xfs]
Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

 

The only thing changing, it seems, is the (md"x"p1) part, so I'm assuming I just run the previous command on all the reported disks?
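
If that is the right approach, something like the loop below would run it across all of them (sketch only: it assumes the array is in maintenance mode and that disks 1 through 9 are all XFS with the same p1 naming; I'd still read each summary individually):

for d in 1 2 3 4 5 6 7 8 9; do
    xfs_repair -v /dev/md${d}p1   # check/repair each reported data disk in turn
done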

 

Any idea why this happened in the first place?

 

Thanks 

  • 10 months later...

I got a corruption error on md1p1.

https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/

I'm going to do an xfs_repair on disk 1 and see if that comes up with anything.

 

To answer my own question about what the hell md1p1 is:

https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/

"Disk 7 will always be /mnt/disk7 and will always be /dev/md7"

So md1 is disk 1.
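
A quick way to confirm that mapping on a running array is to ask which device backs the mount point (sketch; assumes disk 1 is mounted at the usual /mnt/disk1):

df -h /mnt/disk1   # the Filesystem column shows the backing device, e.g. /dev/md1p1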

 

XFS repair found a single error and fixed it. <shrug>

Edited by rutherford