DTMHibbert

Members
  • Posts: 34
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


DTMHibbert's Achievements

Noob (1/14)

Reputation: 0

  1. Hi all, hopefully someone can help. I just received this notification via Discord from my server (image attached). I checked my system log and can see lots of the following errors:

     May 15 21:21:09 Zeus kernel: critical medium error, dev sda, sector 3571842 op 0x0:(READ) flags 0x84700 phys_seg 28 prio class 2
     May 15 21:21:46 Zeus kernel: fat_get_cluster: 1498 callbacks suppressed
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 233e3e70)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 233e3e70)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 233e3e70)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start b9b677b7)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start b9b677b7)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start b9b677b7)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start cdf9b71d)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start cdf9b71d)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start cdf9b71d)
     May 15 21:21:46 Zeus kernel: FAT-fs (sda1): error, fat_get_cluster: invalid start cluster (i_pos 0, start 9994e31f)

     I'm guessing I need to replace the flash drive with another USB stick, but how? I've downloaded a backup via the Main > Flash page. Can I just copy the contents of this zip file to another USB drive? Thanks
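     For what it's worth, the usual answers point to the Unraid USB Flash Creator, or a manual restore followed by a licence key transfer (the key is tied to the flash drive's GUID). Below is a rough sketch of the manual route from a Linux machine; /dev/sdX, the mount point and the backup filename are placeholders, the backup zip should also contain the make_bootable scripts that ship on the flash drive, and the official docs remain the authoritative reference.

     # Sketch only: manual restore of a flash backup to a new stick.
     # /dev/sdX, /mnt/usb and the zip name are placeholders.
     mkfs.vfat -F 32 -n UNRAID /dev/sdX1      # FAT32, volume label must be UNRAID
     mkdir -p /mnt/usb
     mount /dev/sdX1 /mnt/usb
     unzip zeus-flash-backup.zip -d /mnt/usb  # contents of the Main > Flash backup
     umount /mnt/usb
     syslinux /dev/sdX1                       # make the stick bootable
     parted /dev/sdX set 1 boot on            # set the boot flag on the partition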
  2. Did this work? I'm suffering from really slow performance as well. Browsing SMB shares is horrific, and I'm fairly sure transferring files has taken a hit too, possibly since I updated to 6.12 RC1/2/3.
  3. I had CA Backup and Restore start at 5am, but I have since removed it because it isn't supported on 6.12 RC2... could this have been the cause?
  4. No, it's been up and running for a few years now, adding disks as I go.
  5. Great, so will that fix whatever was wrong? Also, looking more closely at the errors in the system log, it seems the other drives are being reported too (see below):

     Mar 26 05:01:12 Zeus kernel: XFS (md2p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md3p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md4p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md5p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md6p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md7p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md8p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1
     Mar 26 05:01:12 Zeus kernel: XFS (md9p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 26 05:01:12 Zeus kernel: CPU: 0 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

     The only thing that changes seems to be the (md"x"p1), so I'm assuming I just run the previous command on all the reported disks (something like the sketch below)? Any idea why whatever happened, happened? Thanks
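     If the plan is simply to repeat the same repair on each reported device, a small loop such as the following could cover them. This is a sketch, not a prescribed procedure: it assumes the array is started in Maintenance Mode and that md2p1 through md9p1 (taken from the log lines above) are the right devices to target.

     # Sketch: repeat the xfs_repair run on each md device the syslog complained about.
     # Run from the console with the array started in Maintenance Mode.
     for dev in /dev/md{2..9}p1; do
         echo "=== Repairing $dev ==="
         xfs_repair -v "$dev"
     done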
  6. OK, the command worked with /dev/md1p1 and it spat out the following:

     root@Zeus:~# xfs_repair -v /dev/md1p1
     Phase 1 - find and verify superblock...
             - block cache size set to 1886968 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 1216181 tail block 1216181
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...

             XFS_REPAIR Summary    Mon Apr 3 16:03:09 2023

     Phase           Start           End             Duration
     Phase 1:        04/03 16:03:08  04/03 16:03:08
     Phase 2:        04/03 16:03:08  04/03 16:03:09  1 second
     Phase 3:        04/03 16:03:09  04/03 16:03:09
     Phase 4:        04/03 16:03:09  04/03 16:03:09
     Phase 5:        04/03 16:03:09  04/03 16:03:09
     Phase 6:        04/03 16:03:09  04/03 16:03:09
     Phase 7:        04/03 16:03:09  04/03 16:03:09

     Total run time: 1 second
     done

     What next? Thanks for your help as well
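     Since the repair reported moving disconnected inodes to lost+found, one follow-up check worth doing is to see whether anything actually landed there. The path below is an assumption (on a stock setup md1p1 corresponds to /mnt/disk1 once the array is started normally):

     # Sketch: after starting the array normally, check whether any recovered
     # files were placed in lost+found on disk 1 (path is an assumption).
     ls -la /mnt/disk1/lost+found/ 2>/dev/null || echo "no lost+found directory - nothing was moved"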
  7. This is what I'm getting as a result:

     xfs_repair -v /dev/md1
     /dev/md1: No such file or directory
     /dev/md1: No such file or directory
     fatal error -- couldn't initialize XFS library

     Anything else I can do? What has actually happened? Is there anything I have done wrong, or something bad with the drive? Sorry for all the questions, just trying to understand. Thanks
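     For what it's worth, on recent Unraid releases the array devices carry a p1 suffix (/dev/md1p1 rather than /dev/md1), which would explain the "No such file or directory" here. A quick sketch to confirm which node exists and do a read-only check before running the actual repair:

     # Sketch: confirm which device node is present, then run a check-only pass.
     ls /dev/md1 /dev/md1p1 2>/dev/null   # see which node actually exists
     xfs_repair -n -v /dev/md1p1          # -n = report problems only, change nothing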
  8. Hi all, I'm seeing some potentially worrying errors in the syslog. Today I randomly ran Fix Common Problems, which alerted me to the fact that my syslog was at 100%. I hadn't noticed beforehand, and the server seems to be running perfectly fine. I checked the log and I'm seeing the following errors:

     Mar 25 05:01:13 Zeus kernel: XFS (md1p1): Internal error xfs_acl_from_disk at line 43 of file fs/xfs/xfs_acl.c. Caller xfs_get_acl+0x125/0x169 [xfs]
     Mar 25 05:01:13 Zeus kernel: CPU: 5 PID: 18676 Comm: shfs Not tainted 6.1.20-Unraid #1

     These errors seem to occur every day at the same time, give or take a few seconds (5am in the morning). This is roughly the time I had the CA Backup and Restore plugin start its daily backup, which I have since noticed is now deprecated, so I have of course removed it from my system. Could that be the culprit for these errors? Ultimately, do I have anything to worry about? I have looked up the error on Google, which points me to a bunch of articles about running checks/scans in maintenance mode. I'll restart the server soon, but I'm currently just finishing a parity check. I have attached the diagnostics as well in case that helps. Thanks all. zeus-diagnostics-20230403-0916.zip
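     As a quick sanity check before restarting, it may be worth confirming how full the log filesystem actually is and how much of it is this one repeating message. A sketch, assuming the stock /var/log location and default syslog name:

     # Sketch: how full is the log filesystem, what is taking the space,
     # and how many times does the repeating XFS error appear?
     df -h /var/log
     du -sh /var/log/* | sort -h | tail
     grep -c "xfs_acl_from_disk" /var/log/syslog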
  9. Will give this a shot and see how it runs, thanks
  10. Hi all, hoping for some help. Recently I've noticed that my log file is filling up and reaching 100%, which then in turn essentially crashes the system: I'm not able to reach the dashboard or any Dockers. Currently I have a reminder set for every 15 days to perform a clean restart of the server, but I obviously don't want to do this forever. Can someone take a look through my diagnostics files and see if they can spot anything which might be causing this? Thanks. zeus-diagnostics-20220227-0904.zip
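      One way to narrow this down between the scheduled restarts is to see which component is flooding the syslog. A rough one-liner, assuming the default syslog format where the fifth field is the logging source (e.g. "kernel:" or a daemon name):

      # Sketch: count the most frequent log sources to find what is filling syslog.
      awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head -20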
  11. I'm also struggling with the same error whenever I try to turn a Docker container off or on, from either the webUI or Home Assistant.
  12. Sorry, what I mean is that when I look at the log file from the Unraid dashboard (picture attached), there is no time/date of when specific actions took place. In the image attached there is an error highlighted in red; it would be useful to see what time/date this occurred so I could narrow down what possibly caused it. It would also be helpful to see when certain files started uploading, to see how long they took to upload. Thanks
  13. Is it possible to include the date and time in the logs for this?
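      If the log in question comes from a Docker container, Docker itself can prepend timestamps to whatever the application prints, which may be enough to line events up. A sketch from the console (the container name is a placeholder):

      # Sketch: show a container's recent output with Docker-added timestamps.
      docker logs --timestamps --tail 100 my-container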
  14. Yesterday I received a notification during a parity check that Disk 5 had 22 read errors. The parity check completed without any errors, however. I have since run extended SMART tests on all drives and all pass without errors. However, last night I received a notification saying the health of my array had failed due to read errors on Disk 5. How serious is this? I have attached my diagnostics file; if anyone could take a look that would be great. zeus-diagnostics-20200621-1231.zip
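      In the meantime, it may be worth keeping an eye on the SMART attributes that usually accompany read errors (reallocated, pending and offline-uncorrectable sectors). A quick check from the console, with /dev/sdX standing in for whatever device node Disk 5 maps to:

      # Sketch: check the attributes most relevant to read errors on Disk 5.
      # Replace /dev/sdX with the actual device node for that disk.
      smartctl -A /dev/sdX | grep -Ei "reallocated|pending|uncorrect"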