evilmobster

Members
  • Posts: 7
  • Gender: Undisclosed

  1. I read another post about changing the split level setting. Mine was set to "1" in my share config file; I removed that and set it to an empty value (""), and now it appears to be working. Strangely, my other Unraid server is set to 1 and works fine, so I'm not sure why this server needed the change. (A share-config sketch follows after this list.)
  2. I'm writing a large amount of data to the appdata share that is installed by default on Unraid. Strangely, it is only writing the data to the 4th drive and not to any of the others. The 4th drive fills up completely and the application (Syncthing) stops working. I have high-water allocation enabled on this share, so a 2TB drive should fill to 50% before the next drive starts filling to 50%. I have a cache drive but am not using it for this share. Diagnostics attached; thanks in advance. (The sketch after this list also shows a per-disk usage check.) lb-unsevern-diagnostics-20201011-1513.zip
  3. I found out I had a bad SATA controller on my motherboard (this board has two). I removed all the drives from the bad controller and put everything on my LSI RAID card and the working controller. Now it is working flawlessly with high write speeds, and the parity sync finished in 6 hours for the 18TB array. Thanks, everyone, for your help. (A controller-mapping sketch follows after this list.)
  4. Thank you both for your help. I have taken the following steps: removed the LSI card and connected the drives directly to the motherboard, changed the SATA mode to AHCI as suggested, and replaced drive 5 with a new drive. At one point I was getting really high write speeds, but only when I removed drive 5 entirely and did not replace it. Once I replaced it with a new drive, giving me 13 drives including my cache drive, the speeds started to slow again. Now the issue seems to be following drive 7, so I replaced that drive with a new one as well, which did not seem to help. Maybe I'm trying to power too many drives? The crazy thing is that I have not changed my power supply from my old build; this PSU used to power my 24-bay Unraid NAS with no issues, and I have now pared down to a 12-drive array. (A reset-check sketch follows after this list.)
     sd 7:0:6:0: Power-on or device reset occurred
     lb-unsevern-diagnostics-20201010-1251.zip
  5. I'm setting up a new server and it is telling me a parity sync is going to take over a week to complete, even though I have less than 300GB on the entire array. I looked at my logs, suspected a failing hard drive in slot 5, and replaced it, but I'm still getting low write speeds after the replacement. I have attached my diagnostics to this post; can someone take a look and tell me what I'm missing? (A drive-health sketch follows after this list.) lb-unsevern-diagnostics-20201009-2216.zip
  6. Thank you so much for your prompt reply. I removed -n and ran it again. It said to attempt the mount again before using -L, so I did, but the mount failed. I then ran it with -L and the following output was generated; the disk is still listed as unmountable. (The -n/-L sequence is sketched after this list.)
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
     agf_freeblks 121652, counted 121670 in ag 1
     agi_freecount 8, counted 9 in ag 1
     agi_freecount 8, counted 9 in ag 1 finobt
     agi unlinked bucket 9 is 454730313 in ag 0 (inode=454730313)
     sb_ifree 4308, counted 4616
     sb_fdblocks 1346139874, counted 1348594004
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0  - agno = 1  - agno = 2  - agno = 3  - agno = 4  - agno = 5  - agno = 6  - agno = 7  - agno = 8  - agno = 9
             - agno = 10  - agno = 11  - agno = 12  - agno = 13  - agno = 14  - agno = 15  - agno = 16  - agno = 17  - agno = 18  - agno = 19
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0  - agno = 1  - agno = 2  - agno = 3  - agno = 4  - agno = 5  - agno = 6  - agno = 7  - agno = 8  - agno = 9
             - agno = 10  - agno = 11  - agno = 12  - agno = 13  - agno = 14  - agno = 15  - agno = 16  - agno = 17  - agno = 18  - agno = 19
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
     bad hash table for directory inode 104139180 (no leaf entry): rebuilding
     rebuilding directory inode 104139180
     xfs_repair: phase6.c:1314: longform_dir2_rebuild: Assertion `done' failed.
  7. I followed the instructions listed under Checking a File System in the Unraid 6 documentation, and it said to post the results if I did not understand them. I would greatly appreciate it if someone could take a look at my results and tell me whether the information on this drive can be recovered. Background: I have a parity drive, but the failed drive is not being emulated. I installed the drive in question a while ago, and I'm unsure whether any data was actually written to it. Is it possible that no data was on this drive and that is why it is not being emulated by parity? (See the xfs_repair sketch after this list.)
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
             - scan filesystem freespace and inode maps...
     agf_freeblks 121652, counted 121670 in ag 1
     agi_freecount 8, counted 9 in ag 1
     agi_freecount 8, counted 9 in ag 1 finobt
     agi unlinked bucket 9 is 454730313 in ag 0 (inode=454730313)
     sb_ifree 4308, counted 4616
     sb_fdblocks 1346139874, counted 1348594004
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0  - agno = 1  - agno = 2  - agno = 3  - agno = 4  - agno = 5  - agno = 6  - agno = 7  - agno = 8  - agno = 9
             - agno = 10  - agno = 11  - agno = 12  - agno = 13  - agno = 14  - agno = 15  - agno = 16  - agno = 17  - agno = 18  - agno = 19
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0  - agno = 2  - agno = 3  - agno = 1  - agno = 4  - agno = 5  - agno = 6  - agno = 7  - agno = 8  - agno = 9
             - agno = 10  - agno = 11  - agno = 12  - agno = 13  - agno = 14  - agno = 15  - agno = 16  - agno = 17  - agno = 18  - agno = 19
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
     bad hash table for directory inode 104139180 (no leaf entry): would rebuild
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     disconnected inode 454730313, would move to lost+found
     Phase 7 - verify link counts...
     would have reset inode 1601087717 nlinks from 1 to 2
     would have reset inode 454695794 nlinks from 1 to 2
     would have reset inode 454730313 nlinks from 0 to 1
     would have reset inode 459241237 nlinks from 1 to 2
     would have reset inode 480639760 nlinks from 1 to 2
     would have reset inode 861775891 nlinks from 1 to 2
     No modify flag set, skipping filesystem flush and exiting.
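
Share-config sketch (posts 1 and 2): a minimal, hedged way to inspect the split level and per-disk fill from the command line. It assumes the share is the stock appdata share, that Unraid keeps per-share settings in /boot/config/shares/<share>.cfg, and that the split level lives under a shareSplitLevel key; the path and key name follow the posts' description and are not verified against this server, so adjust to match yours.

    # Show the current split level setting for the share (key name assumed).
    grep -i split /boot/config/shares/appdata.cfg

    # Back up the file, then clear the split level as post 1 describes.
    cp /boot/config/shares/appdata.cfg /boot/config/shares/appdata.cfg.bak
    sed -i 's/^shareSplitLevel=.*/shareSplitLevel=""/' /boot/config/shares/appdata.cfg

    # With high-water allocation, a 2TB disk should stop taking new files near 50%;
    # compare per-disk usage to see whether only disk 4 is filling (post 2).
    df -h /mnt/disk*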
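Controller-mapping sketch (post 3): a rough way to see which controller each drive hangs off and whether one of them is throwing errors, assuming a stock Unraid/Linux shell; the grep patterns are illustrative, not a complete diagnostic.

    # List the SATA/SAS controllers in the system.
    lspci | grep -iE 'sata|sas|raid'

    # Map each disk to the controller it is attached to; the by-path names
    # include the PCI address of the controller and port.
    ls -l /dev/disk/by-path/

    # Look for link errors or resets that cluster on one controller's ports.
    grep -iE 'ata[0-9]+.*(error|reset)' /var/log/syslog | tail -n 20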
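Reset-check sketch (post 4): the repeated "Power-on or device reset occurred" line usually means the drive is dropping and reconnecting, which can be cabling or power rather than the disk itself. A hedged way to see how often it happens and whether a given drive is logging CRC or reallocation trouble; /dev/sdX is a placeholder for the suspect device.

    # Count reset events in the current syslog.
    grep -ic 'power-on or device reset' /var/log/syslog

    # SMART attributes that commonly separate cabling/power problems (CRC errors)
    # from a genuinely failing disk (reallocated or pending sectors).
    smartctl -A /dev/sdX | grep -iE 'udma_crc|reallocated|pending'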
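Drive-health sketch (post 5): a parity sync has to read every disk end to end regardless of how little data is on the array, so a single slow or failing disk drags the estimate out. A hedged way to look for the outlier; the device names are placeholders and should be matched to the slots shown in the Unraid GUI.

    # Health summary and attributes for a suspect disk.
    smartctl -H -A /dev/sde

    # Quick sequential read benchmark across the array disks;
    # the drive that is far slower than the rest is usually the problem.
    for d in /dev/sd[b-m]; do
        echo "== $d =="
        hdparm -t "$d"
    done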
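xfs_repair sketch (posts 6 and 7): the two outputs above come from the usual sequence of checking read-only first, attempting the repair, and only zeroing the log as a last resort. A sketch of that sequence, assuming the array is started in maintenance mode and the affected slot is disk 5, so the filesystem is repaired through its md device rather than the raw drive; /dev/md5 is an assumption and must match the actual disk number (newer Unraid releases use names like /dev/md5p1).

    # 1. Read-only check; -n makes no changes (this produced post 7's output).
    xfs_repair -n /dev/md5

    # 2. If problems are reported, try starting the array normally once so the
    #    journal can replay, then run the repair for real.
    xfs_repair /dev/md5

    # 3. Only if the mount still fails and xfs_repair insists the log cannot be
    #    replayed, zero it with -L, accepting that the metadata changes still in
    #    the log are lost (this run produced post 6's output).
    xfs_repair -L /dev/md5

The assertion failure at the end of post 6 means the -L repair aborted in phase 6 before finishing, which is consistent with the disk still showing as unmountable afterwards.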