MrOz

Members
  • Posts: 23
  • Joined
  • Last visited


  1. Has anyone seen this issue? HT7 is showing 40-43% usage all the time. I don't have anything pinned to that CPU. I have rebooted a couple of times and it persists. Disabling Docker made no change; stopping the array made no change. Any ideas on how to figure out what is doing this?
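     If HT7 maps to logical CPU 7 (an assumption -- check how your dashboard numbers the threads), a minimal sketch from the console to see which processes last ran on that core:

         # psr = the CPU a process last ran on; filter to core 7, busiest first
         ps -eo psr,pid,pcpu,comm --sort=-pcpu | awk '$1 == 7 && $3 > 0'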
  2. I had an issue over the weekend where two drives errored at the same time and both were disabled. I tested each but was advised to look at the power connections. Everything seemed fine, and the drives themselves look fine. I have restored the array after moving the drives to separate power cables. So my question to anyone who would like to answer is: how do you power your drives? I have a relatively small array: two parity drives, two SSD cache drives, and five array drives. But I am sure many of you have much larger arrays. I have a 500W desktop power supply, and I needed to add some Y power splitters to be able to power all of my drives. I was thinking of getting a modular power supply and trying to minimize the use of power splitters, maybe with some custom cabling. Also, I don't have a powered GPU in this system. Give me your thoughts. Thanks.
  3. Thanks. It is rebuilding now. The data seemed fine on that drive, but it will still take some time to rebuild. This is not the first time I have had issues with this build even though the drives were fine. Let me ask you this: I have two parity drives, two cache drives, and five data drives, with a 500 watt 80+ Thermaltake power supply. I thought that would be more than enough power since I don't have a GPU to power, but the last time I had issues it also seemed tied to power. This time I did move the power cables of these two drives so they were not on the same set of cables. The last time, it was also the parity drive and this same data drive, and drive health was fine with them back then too. I have never heard of issues with power distribution to the drives in Unraid, but you never know. Thoughts on the power supply?
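     A rough spin-up estimate for context (the per-drive figure is an assumption -- check the datasheets for your actual models, as 12 V spin-up draw varies):

         DRIVES=7      # 2 parity + 5 data disks that may spin up together
         AMPS_12V=2    # a common 3.5" spin-up draw on the 12 V rail
         echo "~$(( DRIVES * AMPS_12V * 12 )) W peak on 12 V at spin-up"   # ~168 W, before SSDs, board, CPU, fans

     On paper a 500 W unit should have headroom; the concern usually raised with splitters is that each cable run and connector has its own current limit, so too many drives on one run can sag at spin-up even when the total wattage is fine.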
  4. Attached are the new diagnostics, but the problem is that the drive is not mounting. It still shows disabled/emulated. ironman-diagnostics-20240223-1106.zip
  5. My parity drive is also still in emulated/disabled mode. I have not run any commands against it. I would appreciate guidance on handling this as well.
  6. I ran xfs_repair -v /dev/mdX, where mdX was replaced with my drive designation. It ran in the command window for quite some time and then the window closed. I then downloaded the SMART report... I realize now that was not helpful. I then ran the repair from the GUI with the -L switch added, and it seemed to run to success:

         Phase 1 - find and verify superblock...
         Phase 2 - using internal log
                 - zero log...
                 - scan filesystem freespace and inode maps...
                 - found root inode chunk
         Phase 3 - for each AG...
                 - scan and clear agi unlinked lists...
                 - process known inodes and perform inode discovery...
                 - agno = 0
                 - agno = 1
                 - agno = 2
                 - agno = 3
                 - process newly discovered inodes...
         Phase 4 - check for duplicate blocks...
                 - setting up duplicate extent list...
                 - check for inodes claiming duplicate blocks...
                 - agno = 0
                 - agno = 1
                 - agno = 3
                 - agno = 2
         Phase 5 - rebuild AG headers and trees...
                 - reset superblock...
         Phase 6 - check inode connectivity...
                 - resetting contents of realtime bitmap and summary inodes
                 - traversing filesystem ...
                 - traversal finished ...
                 - moving disconnected inodes to lost+found ...
         Phase 7 - verify and correct link counts...
         done

     The drive is still in emulated/disabled mode and I would love to bring it back without losing data. Your guidance is appreciated.
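     For anyone following along, the usual order of operations per the xfs_repair man page is to dry-run first and only zero the log as a last resort (mdX is a placeholder, as above):

         xfs_repair -n /dev/mdX   # -n: no-modify check, reports problems without touching the disk
         xfs_repair -v /dev/mdX   # the actual repair, if the check looks sane
         xfs_repair -L /dev/mdX   # last resort: -L zeroes the log and can lose in-flight metadata

     Note that a successful repair fixes the filesystem on the emulated disk but does not by itself clear Unraid's disabled flag; that takes a rebuild.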
  7. I did read the link, and I ran the command in it. I'm not sure what else I can do.
  8. SMART overall-health self-assessment test result: PASSED. What is the best process for rebuilding it? And I will ask again: do I need to rebuild the parity drive first? WDC_WD40EFZX-68AWUN0_WD-WX42D514RN6C-20240222-2158.txt
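     For reference, the same report can be pulled from the console with smartctl, which Unraid ships (sdX below is a placeholder for the actual device):

         smartctl -a /dev/sdX        # full attribute dump plus the health summary quoted above
         smartctl -t short /dev/sdX  # queue a short self-test; results appear in -a output afterwards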
  9. Should I start with the Parity disk or Disk 4?
  10. Also, I would love to hear your suggested process for rebuilding, since I have a parity drive and a data drive both offline right now.
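     The commonly cited sequence for rebuilding a disabled drive onto itself (worth confirming against the current Unraid docs before running it):

         1. Stop the array.
         2. Unassign the disabled disk.
         3. Start the array so the slot registers as missing (the disk stays emulated).
         4. Stop the array again.
         5. Reassign the same disk to the slot.
         6. Start the array; the rebuild onto that disk begins automatically.

     With dual parity, the array should be able to emulate and rebuild two disks at once, so the disabled parity drive and the disabled data drive can be handled in the same pass.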
  11. Thanks again for your help. It looks like you may have had it right: both of the drives were on the same power splitter. I didn't have any option other than to put an SSD on the splitter in place of one of the spinning drives; I figured the power consumption would be lower with the SSD. Attached is the fresh diagnostics. ironman-diagnostics-20240222-1542.zip
  12. Thanks for that. I will tear open the server and take a look at the power connectors. So I can learn to find these types of errors on my own: where do you get these errors from? I am pretty good at diagnosing Windows Server but still struggle with Unraid. Also, what should I do to return it to an online status? I have done this wrong in the past and it was way too much work. Since this has happened before, it makes me want to give up on the technology and move to a self-contained NAS solution.
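     For finding these yourself: the drive errors live in the syslog inside the diagnostics zip. A minimal sketch for skimming it from a console (the internal path is an assumption -- adjust to whatever the archive actually contains):

         unzip -p ironman-diagnostics-20240222-0818.zip '*/logs/syslog.txt' | grep -iE 'i/o error|read error|disabled|ata[0-9]+'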
  13. Ugh. I woke up to array errors this morning. I would like some guidance before I do anything. I have two 4TB parity disks, and one is disabled/emulated. The array consists of five other disks, one of which is also disabled/emulated this morning. I have attached my diagnostics and have not rebooted. I pulled the diagnostics but am not sure where to look in there. Your help is appreciated. ironman-diagnostics-20240222-0818.zip
  14. Thanks. I replaced the data cables but will check the power cables. Both drives failed previously but showed no errors; I mounted them in an external Linux server and copied files from them. I precleared them before adding them back in. I ordered a new drive and am going to install it and see how that goes.
  15. Thanks for the response. ironman-diagnostics-20231026-1038.zip