disposable-alleviation3423

Members
  • 11 posts
  1. Did you find one? I've got the web UI up, but I can't add anything.
  2. This is my second HBA. I guess I'll buy a new board and just throw money at the problem. Thank you for your help; it helps to know I'm not losing my mind.
  3. I have tackled both of those suggestions as part of a previous thread. This new thread is the result of:
     1. Connecting a Noctua fan to the heatsink on the HBA.
     2. Trying all three PCIe slots for the HBA.
     3. Finding that the 8-lane slot works flawlessly (for two weeks) UNTIL I install the GPU in the 16-lane slot.
     My issue seems like a PCIe lane issue, but according to the mobo manual, my configuration should work as installed. For added context, I also have three M.2 drives installed, but again, the manual says those share lanes with the SATA ports, not the PCIe slots. I've added the relevant snips from the manual below. Am I reading this wrong? The configuration that causes the issue is as follows; if I remove the GPU, it works without issue. (A quick way to confirm the negotiated lane widths is sketched after this list.)
     PCIE1 - 2.5Gb/s network card
     PCIE2 - 3070 GPU
     PCIE3 - Empty
     PCIE4 - LSI 9300-8i HBA
     PCIE5 - Empty
     M2_1 - 1TB M.2
     M2_2 - 1TB M.2
     M2_3 - 1TB M.2
  4. I have not tried a different PSU, but it's a 750W unit, so I can't imagine it's running up against the limit. The diagnostics indicate the HBA goes offline, which causes the issue.
  5. Drives have been getting disabled randomly, for no apparent reason, 2-3 times a week. I posted in the forum previously, received advice, and have been testing. I have been running copious tests to try to rule out variables, and here is where I've landed.
     Previously, I had a GPU (3070) in the PCIe x8 slot and the HBA in the PCIe x16 slot. To eliminate variables, I removed the GPU altogether, relocated the HBA to the x8 slot, and ran the server for a full week. Not a single drive was disabled and everything was great. I even threw a parity check at it with no disabled drives.
     Tonight, I reinstalled the GPU, changing nothing else. The server booted up and ran fine for an hour; then a drive was disabled. Either the GPU is the issue or the motherboard does not like having two cards installed. I checked the manual and it states that if cards are installed in both the x16 and x8 slots, both slots will run at x8. I've set both slots to run at PCIe 3.0. According to the motherboard manual (Z390), my current arrangement should work just fine.
     I'd really like to keep my GPU installed for transcoding, but it seems I may not have a choice. Diagnostics attached; hoping someone can point me to a BIOS setting or something that will solve this. (A syslog check that might narrow down what drops first is sketched after this list.) theark-diagnostics-20240312-1915.zip
  6. Okay, sorry for the wait. Here is my report. Previously, I had a GPU (3070) in the PCIe x8 slot and the HBA in the PCIe x16 slot. To eliminate variables, I removed the GPU altogether, relocated the HBA to the x8 slot, and ran the server for a full week. Not a single drive was disabled and everything was great. Tonight, I reinstalled the GPU, changing nothing else. The server booted up and ran fine for an hour; then a drive was disabled. Either the GPU is the issue or the motherboard does not like having two cards installed. I checked the manual and it states that if cards are installed in both the x16 and x8 slots, both slots will run at x8. Diagnostics attached. Do I just have to buy another motherboard that will support two cards? Very confused. theark-diagnostics-20240312-1915.zip
  7. Sorry for the late post. I just switched to fiber and am having the same issue. I have the fiber router set to forward ports 80 and 443 to the server, but I still don't have remote access via Unraid Connect. All other services work fine. What did you have your ISP do for you?
  8. Drives have been getting disabled due to an HBA issue I'm still trying to diagnose. I rebuilt one of the drives the way I always do: stop the array, select no drive for that slot, start, stop, then reselect the drive. The drive rebuilt as normal. The drive shows as operational, but it is reported as unmountable. Following the manual, I get this when I attempt the xfs_repair (a sketch of the actual repair step is after this list):
     Phase 1 - find and verify superblock...
     bad primary superblock - bad CRC in superblock !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     would write modified primary superblock
     Primary superblock would have been modified.
     Cannot proceed further in no_modify mode.
     Exiting now.
     Do I just have to format the drive and rebuild again?
  9. Thanks for the reply. The HBA has new thermal paste and a 40mm Noctua fan attached to the heatsink, and I left the case open with a box fan blowing on it to rule out a heat issue. I have reseated it several times. If the connection were the issue, would it work for several days and then fail? I've used two HBAs now with the same result, both tried in the same slot. I'll try the other slot tonight, but my GPU only fits in the slot it's in now, so if that works I'll have to figure something out. I've noticed these events always occur overnight. I put a UPS on the server to clean the power and prevent any blips. Is it possible that the card or the port has some sleep setting where it powers down due to inactivity? (An ASPM check for exactly that is sketched after this list.) Would that explain why only 1 or 2 of the drives fail instead of all 7?
  10. I'm having a similar issue where a couple of drives get disabled every few days. I rebuild (which takes a couple of days), then everything is fine, till it isn't. Things I've seen in other posts and tried, which did not solve the problem:
      • Replaced the HBA with the specific make and model listed
      • Removed the HBA heatsink, applied fresh thermal paste, reinstalled, and added a fan to the heatsink
      • Replaced the HBA-to-SATA cables
      • Replaced the power cables
      • Reseated everything
      • Replaced the power supply
      I'm at a loss and at my wit's end. Any help would be appreciated. Diagnostics attached. (A firmware check that isn't on this list is sketched below.) theark-diagnostics-20240229-0642.zip
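
On the lane question in post 3: a quick, non-destructive way to check what each slot actually negotiated is to compare LnkCap (what the card supports) with LnkSta (what it got) in lspci. This is only a minimal sketch; the bus address 01:00.0 is a placeholder and will differ on your board, so look it up with the first command.

    # Find the PCI addresses of the HBA, GPU, and NIC
    lspci | grep -Ei 'lsi|sas|nvidia|ethernet'

    # For each address, compare the supported link (LnkCap) with the
    # negotiated link (LnkSta) -- run as root to see the capability fields
    lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'

If the HBA's LnkSta drops in width or generation only when the GPU is installed, that points at the slot/lane arrangement rather than the card itself.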
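
For the drops described in posts 5 and 6: when a drive gets disabled, the kernel log usually shows what gave out first. A sketch of what I'd look for on Unraid, assuming the default log location /var/log/syslog (the 9300-8i uses the mpt3sas driver):

    # HBA driver messages (resets, link drops, task aborts)
    grep -i mpt3sas /var/log/syslog | tail -n 50

    # PCIe root-port and Advanced Error Reporting messages
    grep -E 'pcieport|AER' /var/log/syslog | tail -n 50

If mpt3sas reports the controller resetting or the link going down right before a drive is disabled, that supports the slot/PCIe theory over a cable or drive fault.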
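
On the unmountable disk in post 8: formatting shouldn't be the first resort. The quoted output is from the check-only pass (the -n flag), which never writes anything; running xfs_repair without -n lets it restore the superblock from the verified secondary copy. A sketch of the usual flow, with the array started in Maintenance Mode, assuming the affected disk is disk 1 (the device is /dev/md1 on older Unraid releases and /dev/md1p1 on newer ones, so adjust to what your system shows):

    # Check-only pass (what produced the output above); -n makes no changes
    xfs_repair -n /dev/md1

    # Actual repair: drop -n so the good secondary superblock gets written back
    xfs_repair -v /dev/md1

    # Only if it refuses to run because of a dirty log: -L zeroes the log,
    # which can discard the most recent transactions, so treat it as a last resort
    xfs_repair -vL /dev/md1

Running the repair against the md device (rather than the raw sdX partition) keeps parity in sync, and the same check/repair can be started from the GUI by clicking the disk on the Main tab while in Maintenance Mode.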
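
On the sleep question in post 9: PCIe links can be put into low-power states by ASPM, and a link that doesn't wake cleanly could plausibly look like an overnight HBA dropout. A sketch for checking it, and for ruling it out entirely; the bus address is again a placeholder, and the boot-flag edit assumes the stock Unraid config at /boot/syslinux/syslinux.cfg:

    # Current ASPM policy (the entry in brackets is active)
    cat /sys/module/pcie_aspm/parameters/policy

    # Whether ASPM is enabled on the HBA's link
    lspci -s 01:00.0 -vv | grep -i aspm

    # To rule ASPM out, add pcie_aspm=off to the kernel append line in
    # /boot/syslinux/syslinux.cfg and reboot, e.g.:
    #   append pcie_aspm=off initrd=/bzroot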
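
One more check that isn't on the list in post 10: the IT-mode firmware on the 9300-8i. This assumes you've downloaded Broadcom's sas3flash utility (it isn't bundled with Unraid); the list command only reads the card, so it's safe to run:

    # Show controller, firmware, and BIOS versions for the installed SAS3 HBA
    ./sas3flash -list

Comparing that against the current IT firmware release, and against the mpt3sas driver messages in the syslog, is a cheap sanity check before spending money on another motherboard.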