About srfnmnk


  1. One more oddity: when I go to mount it read-only in UD, it shows up as a "luks" filesystem? Very confused.
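For what it's worth, a "luks" label usually means UD is seeing the outer LUKS container rather than the XFS filesystem inside it. A minimal sketch of checking and unlocking by hand; the device name, mapper name, and mount point below are hypothetical placeholders:

```shell
#!/bin/bash
# Hypothetical device; replace with the actual partition.
DEV=/dev/sdX1

if [ -b "$DEV" ]; then
    # blkid reports TYPE="crypto_LUKS" for an encrypted Unraid disk:
    # the XFS filesystem lives *inside* the container, so UD shows "luks".
    blkid "$DEV"

    # Unlock with the array's encryption passphrase (prompts interactively),
    # then mount the inner filesystem read-only.
    cryptsetup luksOpen "$DEV" tmpdisk
    mount -o ro /dev/mapper/tmpdisk /mnt/tmpdisk
else
    echo "$DEV not present; nothing to do"
fi
```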
  2. One more thing: I have reconstruct write enabled via the Turbo Write plugin. Should I disable this? Could this be causing bad writes? Is there any chance that something is misconfigured that could be causing bad writes?
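Reconstruct write itself shouldn't corrupt data; it computes parity from all data disks at once instead of read-modify-write, which means a flaky disk or cable anywhere in the array participates in every write. To rule it out, it can also be toggled from the console. A sketch only: the mdcmd path and the md_write_method values (0 = read/modify/write, 1 = reconstruct write) are as commonly documented on the forums, so verify them on your release:

```shell
#!/bin/bash
# Sketch: toggle Unraid's write method from the CLI instead of the plugin.
# Path and values are assumptions from forum documentation, not verified here.
MDCMD=/usr/local/sbin/mdcmd

if [ -x "$MDCMD" ]; then
    "$MDCMD" set md_write_method 0   # fall back to read/modify/write
else
    echo "mdcmd not found; not an Unraid shell"
fi
```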
  3. But how does an emulated disk have lost+found items that are missing from the array? In years of using Unraid, when a disk is disabled its contents are emulated in place, meaning those files should be in the directories they belong to, not in lost+found. I'm just completely confused about how the array got into this state. I have dual parity and I had 2 disks get disabled, so why is data missing? That's where I'm lost.
  4. Well, it's still not over. Could the array be corrupted or something? I'm now thinking that a VM or Docker container is causing the issues, because last time I reported all good and couldn't break anything, I had all Docker containers and all VMs disabled. I rebuilt disks and parity several times without issue. Now, after about 1 day of trying to be back to normal operations, two disks went offline again. I tried to rebuild and it completed successfully, but later that evening (after it completed successfully), bam, the same two disks were disabled again. So now what I'd like to do is figure out how to get this repaired again, and then review and troubleshoot my VM/Docker setups to see if something there is causing an issue.

     Currently, if we just look at disk 6: as you can see, it's disabled and emulated. The emulated filesystem has a BUNCH of lost+found entries; how is that? The disk was not mountable, so I started the array in maintenance mode, repaired the disk with -v (-L was not used), and started the array. The filesystem no longer says unmountable for disk 6; it is still disabled/emulated, but there are now all these lost+found entries. Upon a check of the actual array, these files are, in fact, missing from the array and from the share where they should be, as you can see from the screenshot. Is the filesystem metadata of the array corrupted? What are the next steps? Let's assume no power issue at the moment, as I tested that EXTENSIVELY and rearranged mobo components and PSU wires and rails. Thanks again. pumbaa-diagnostics-20200921-0816.zip
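For reference, the usual maintenance-mode repair sequence on the emulated disk looks like this. /dev/md6 is the classic device name for disk 6 (newer releases may use a different suffix, so check yours); files whose directory entries xfs_repair cannot reconstruct are what end up in lost+found:

```shell
#!/bin/bash
# Maintenance-mode XFS repair sketch for emulated disk 6.
# Device name is an assumption; verify it on your Unraid release.
DEV=/dev/md6

if [ -b "$DEV" ]; then
    xfs_repair -n "$DEV"    # dry run first: report problems, change nothing
    xfs_repair -v "$DEV"    # actual repair, verbose
    # xfs_repair -L "$DEV"  # last resort only: zeroes the metadata log
else
    echo "$DEV not present; run from the Unraid console with the array in maintenance mode"
fi
```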
  5. Cool, that's what I figured. Thank you.
  6. Well, there you have it. I can't seem to break it any more. Thank you so much to everyone for all your help. @trurl I'll be circling back to your Docker recommendations now. One last question now that things are healthy again: in the image below, you'll notice that disks 11, 12, and 13 are not in the list. I would like to clean this up (it's been like this for ages; I just never worried about it). When I stop the array I see 3 empty, unassigned disk slots, but I will never fill these since I'm at the max number of disks I ever plan to have. What's the best way to get rid of these? Thanks again.
  7. I can't seem to break it now, knock on wood. I've just reinstalled the LSI 9201-16e, added all my DAS back with the new power config, and kicked off another 4TB drive rebuild. Will check back in when it's done and we'll see; if there are no errors here, I'd say we're good. wowzers (crosses fingers)
  8. Absolutely nuts... well, the parity sync finished without an issue given the new power config. I have now booted out a 4TB disk and am rebuilding it. If this succeeds, I'll reinstall the LSI 9201-16e and do one more rebuild. I will say, there's one more possibility: the SAS expander card. When I had the issue, I had the card installed differently (not in the PCIe slot), since it was being powered by the molex and didn't need the PCIe slot anymore. The way I had it installed in the case made it possible for the PCIe pins to make contact with the metal, so I wrapped the PCIe headers of the card with electrical tape to make sure nothing shorted the pins. I'm wondering if somehow the tape I had was allowing static to cause strange behavior with the card... not sure; just an alternative theory to the power issues.
  9. I like this idea -- will do it. Perhaps a few corrections. I currently have 4 MOLEX/SATA strings from the PSU (modular):
     • 1 3x4 HDD string (MOLEX --> backplane)
     • 1 2x4 HDD string (MOLEX --> backplane)
     • 1 3-drive SSD string (SATA power --> SSDs direct)
     • 1 string to the expander card directly from a modular output on the PSU
     The screenshot below shows where all the outputs are coming from. I have 1 (bottom left) still open; I will probably run another molex to the backplane so that I am powering the backplane 2,2,1 as suggested. The parity check is still running, of course, but is at max speed and progressing.
  10. Right, but a 14.6 W max TDP PCIe card pulling from a molex on a dedicated connection... the only things I can think of are a bad wire, a bad PSU slot, a bad molex connector on the card, or just a bad power rail on the card itself. Do you have any other thoughts/ideas on what could cause it? If you look at the SAS expander datasheet, it shows max TDP is 12 V / 14.6 W. Thanks again @Michael_P
  11. Another update: I removed the SAS expander from PCIe power and put it back on MOLEX power from the SATA slots on the PSU. I have not added the new LSI 9201-16e back in yet. I had 2 SSDs running on one SATA power output from the PSU and 1 SSD on another; I have strung those together on a single wire, all now going into a single output on the PSU. This was to open a slot on the PSU to try another output. I removed the MOLEX cable I was using for the SAS expander card (it was a 4x molex power string) and have replaced it with a 2-output molex string where the first output in the series is connected directly to the PSU. The new molex has been plugged into a different output port on the PSU in case there was an issue with that PSU output.

      All 20 HDDs are on 2 PSU outputs. The Norco 20 has a backplane with 5 rows, each powered by a single molex connector; one PSU output powers 3 and the other powers 2. Running a parity check now to see if I run into any issues. If there are no issues, I may try to force a rebuild to see if that causes issues; if not, I'll have to surmise that one of the changes resolved the issue.

      Additionally, I have pulled the datasheets for the PSU and the SAS expander to determine if there are any strange power requirements/considerations when running from molex; I found nothing unexpected. Max power draw is 14.6 W at 12 V, with no jumpers or anything to switch. The PSU supplies up to 996 W on 12 V, and with 3 SSDs, 20 HDDs, 1 GTX 1060, a Ryzen 3900, and the SAS expander card all running at full tilt it "should" not exceed 592 W per Seasonic's calculator. SAS Expander Data Sheet PSU Data Sheet (1000W) I realize this is not the right forum to troubleshoot power delivery, but I figured I'd keep the main thread here for those who are curious in the future.
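To sanity-check the 12 V budget described above, a rough back-of-the-envelope sum. All per-component wattages below are my assumptions for illustration, not measurements from the thread; simultaneous spin-up of 20 HDDs is the pessimistic case:

```shell
#!/bin/bash
# Rough 12 V rail budget sketch. Every per-component figure here is an
# assumed round number for illustration; check each part's datasheet.
HDD_SPINUP=25      # W per 7200 rpm HDD at spin-up (worst case, assumed)
NUM_HDD=20
SSD_W=3            # W per SATA SSD (assumed)
NUM_SSD=3
GPU=120            # W, GTX 1060/1660 class (assumed)
CPU=105            # W, Ryzen 3900 TDP
EXPANDER=15        # W, RES2SV240 datasheet max 14.6 W, rounded up
RAIL_12V=996       # W available on this PSU's 12 V rail (from the post)

total=$((HDD_SPINUP * NUM_HDD + SSD_W * NUM_SSD + GPU + CPU + EXPANDER))
headroom=$((RAIL_12V - total))
echo "estimated peak draw: ${total} W, headroom: ${headroom} W"
```

Even with pessimistic spin-up numbers the rail has headroom, which points the suspicion at a single connector, wire, or modular output sagging under load rather than total PSU capacity.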
  12. And just like that, the array is healthy and the parity sync is complete. Wow. I will continue to poke around to see what is causing the power fluctuations. My guess is the molex power to the SAS expander; I will try to dedicate a rail or use the PCIe power output from the PSU. @Michael_P thank you so much for chiming in here! That's great to know, and it lines up exactly with my situation. I too have the Norco 20-bay, and I believe I am using 3 rails to power the 5 layers, but I will double check. It seems that using the PSU SATA power output is causing power sags in the SAS expander linked above. I will check back in when I have more details. I will also post a diagram of the PSU setup after I get time to get back into the server. Not entirely sure how to test this other than to create a new hardware config and then run a full parity check to see if there are issues. This is going to be quite a fun ride, but hopefully with the new power from the PCIe slot it will just keep working. Anyone else have ideas on how to test for power sags other than a parity check? @JorgeB -- great insights on the power/hardware issue. I hate that you seem to be right, but props to you, friend.
  13. Back in June (shortly before I started having these issues), I did add a new component. I ran out of PCIe slots on my Gigabyte mobo and thus powered my SAS expander with a molex instead of from the PCIe slot. I did a full power review, and here's what I have running off a Seasonic Prime PD-1000 Platinum. Seems like I have plenty of headroom, but I'm no expert on calculating watts per rail or anything. If anyone has any insights, I'd love to hear if you think there's a better way to arrange the power. Meanwhile, a new config has been launched, everything is mounted, the array seems healthy, and the parity is rebuilding. I'm not entirely sure how to go about debugging where the power issue may be, but I figured I'd revert the newest changes and start from there; perhaps adding the SAS expander to molex and/or adding the LSI 9201-16e to PCIe caused unstable power, so I'm testing that now. After that, I'm not sure. I did the best I could with the PSU calculators and it seems as though I have sufficient power, but I'd love input from someone with experience.
      • Seasonic Prime PD-1000 Platinum
      • GIGABYTE X570 AORUS Master
      • Ryzen 3900
      • H5-25379-00 SAS 9201-16e -- powered via PCIe 4 (temporarily removed)
      • Intel RAID (SAS) Expander Card (RES2SV240) -- powered via PCIe 4 (temporarily moved; was on molex as of June 2020)
      • HighPoint RocketRAID 2720SGL 8-Port SAS -- powered via PCIe 4
      • GTX 1660 -- installed in a PCIe 4 slot, powered via 6-pin from the PSU
      • 1 Sabrent 1TB Rocket NVMe PCIe M.2 2280
      • 3 Samsung 850 Pro 500GB
      • 20 7200 rpm HDDs
      Just had a thought as I was putting this list together. This PSU has power outputs for SATA/IDE/MOLEX, which is what is powering my HDDs and SSDs. To accommodate the new LSI 9201, I moved the SAS expander card to molex (from PCIe power). I was powering it from the IDE/SATA/molex outputs, NOT the CPU/PCI-E rails. Wondering if I should have been powering the SAS expander from the PCIE/CPU rails instead? Thoughts?
  14. OK, will work on hardware troubleshooting today. One more question: I have a suspicion that my parity is inaccurate and we keep trusting it. What would be an approach to invalidate the parity and trust the disks? All the disks have passed SMART tests and seem to have the proper files, but the emulated disks seem to have the wrong information. Is there a way to invalidate the parity and rebuild it from the disks and their data? Since I have a disabled disk, I'm thinking I can create a new config and just have the parity rebuild, but I wanted to confirm. Thanks.
  15. @trurl still struggling. Below is before a reboot. After a reboot and array start, disk 8 is mountable read-only via UD, and in fact all of the files that are in lost+found on the emulated drive are there and accessible. Disk 8 also passes an extended SMART self-test.
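Since the physical disk 8 mounts read-only in UD with the files intact, one way to get them back onto the array is a one-way copy from the UD mount to the share. A sketch only; the source mount point and destination share path below are hypothetical placeholders:

```shell
#!/bin/bash
# Sketch: copy intact files off the old disk 8 (mounted read-only by UD)
# back into the array share. Both paths are hypothetical examples.
SRC=/mnt/disks/disk8_ro    # where UD mounted the old disk read-only
DST=/mnt/user/some_share   # array share the files belong to

if [ -d "$SRC" ] && [ -d "$DST" ]; then
    # -a preserves attributes; --ignore-existing avoids clobbering
    # anything the emulated/rebuilt disk already holds.
    rsync -a --ignore-existing "$SRC"/ "$DST"/
else
    echo "source or destination missing; adjust SRC/DST first"
fi
```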