dojesus

Members
  • Posts: 38

Everything posted by dojesus

  1. Only one drive requires a ROM swap. Thanks for the heads-up.
  2. I believe I've tracked down replacement circuit boards for all the drives on eBay. Thanks for the advice.
  3. Thanks for all the input. Yes, it appears my own idiocy caused this problem: I didn't swap the modular cables, believing them to be standardized.
  4. Well, I was having random shutdowns, and heat wasn't an issue. The PSU was a little long in the tooth, so I decided to replace the Thermaltake with a new 850 W 80 Plus Gold Seasonic. I'd hate to believe I lost 30+ TB of data due to a PSU upgrade.
  5. Yes, it is modular, and I've changed port positions and cables. Even putting the old PSU back in didn't change anything.
  6. 9 mechanical drives: 8 on a backplane and 1 not. It's impossible that all of them failed simultaneously.
  7. I recently replaced my power supply and now no mechanical drives are showing up. I have 8 on a backplane connected to an LSI card and 1 connected to onboard SATA. NONE are showing up in unRAID. I figured there was something wrong with the new PSU and swapped back... same issue. I'm thoroughly stumped on this. All NVMe drives are there. I have 1 SATA SSD connected to a motherboard SATA port, and it shows up. None of my spinning drives are showing... or spinning, as far as I can tell (a quick block-device check is sketched after this list). Any suggestions are welcome, because I've pulled out what little hair I had on this one.
  8. I'm assuming no, because I don't know what you mean. Is that an email notification of errors?
  9. That's the beauty of knowing where to look in the logs... I don't. I meant that nothing in the Plugins tab's list of plugins said it was deprecated. I didn't doubt what you were saying; it's just not obvious to noobs like me.
  10. Thanks for that. There was nothing in the Plugins tab showing it was deprecated... but I removed it nonetheless. Do you think it was the Nvidia plugin causing all my issues? Is that why I lost my boot GUI?
  11. I upgraded from 6.8.3, and upon first reboot I lost the GUI boot that I had a dedicated GPU for. I figured I'd get around to finding out why when I had some free time. In the past couple of weeks, my server has experienced several hard crashes. I assumed my PSU wasn't up to the task and got a new one... same issue. Every time I get my system back up, something is missing or not working. This last time, binhex-nzbget wouldn't start, and when I went to reinstall it, I discovered the CA "Apps" tab was missing. I reinstalled CA and nzbget, but nzbget won't run on a bridged connection; configuring it for host networking lets the app start, but nothing can connect to it and there's no GUI. I'm too much of a noob to know what I should be looking for to fix this. I'm hoping someone can help me with my diagnostics file. Thanks in advance! tower-diagnostics-20211112-0227.zip
  12. It was a royal pain rolling it back to 6.8.3 after having upgraded to 6.9.0 and then 6.9.1. I may be adding a PCIe card to the mix next weekend. I may just jump to 6.9.1, get the crash, download the diagnostics, and roll back to 6.8.3... if y'all want me to do that to help discover what the problem is.
  13. Perhaps my older logs are still there, but I only had today to roll back and finish the parity sync. *edit* I did back up my flash drive before rolling back... would the appropriate info be contained in that backup? tower-diagnostics-20210321-1821.zip
  14. I'm running an X570 board. I was waiting on 6.9.0 so I could pass through my onboard audio to a VM, as doing so would lock up the server on 6.8.3. After the upgrade, audio IS being passed through, but random VM crashes followed. Every CPU core and thread (including the isolated ones) would max out at 100%, RAM usage would do the same, and everything would become unresponsive until the VM crashed, at which point the resources were restored to the server. I upgraded to 6.9.1 to see if this would resolve the problem... it didn't. There is something very wrong with the new kernel drivers, IMO. I've been forced to roll back to 6.8.3 due to failing WAF. If you need anything from me that would help track down the issue, I'm happy to oblige.
  15. Hmm... I did watch the video, but I didn't see the part where it said I was supposed to edit the script. Which part am I editing?
  16. Thanks! I found them and added them manually; however, after running the helper script, it didn't generate the VM. What am I missing?
  17. The user scripts didn't install... any idea why? And can you post the actual scripts so I can add them manually?
  18. Final update on this issue. Clearly the problem was/is hardware related, as the parity drives are disabling again on 6.8.2. Not sure why they worked so long before the issue returned, but it's NOT the software. I've ordered a new SAS controller to replace the failing onboard controller, which will tide me over until my AMD upgrade later this year. Thank you, everyone, for your help and advice.
  19. Except it's not happening in 6.8.2... which is why I'm having a hard time wrapping my head around what the problem is. I see that 6.9 is the same code as 6.8.3 with an upgraded kernel. I expect this problem to return when I try RC1, as it's unlikely the new kernel will "fix" the issue.
  20. Makes sense. Thanks for the response, trurl. Does 6.8.3 have any type of "sleep" command that would shut down the SATA controller and then fail to "wake up" on a write? (A link-power-management check is sketched after this list.) I'm trying to wrap my head around how docker or plugin upgrades would trigger this weird disabling.
  21. Will do. Is there an approximate time frame?
  22. Well, I've been running the same gear since Jan. 2016. The only issues I've had were the database-corruption one that kept me on 6.6.7 for quite some time and this one from 6.8.3. Can you think of a non-software-related thing that would make both drives drop out when they work fine on 6.8.2? I'm open to testing to help prevent future problems.
  23. Just a follow-up: it's been a week since I rolled back to 6.8.2, I've done numerous updates, and the parity drives aren't dropping out anymore. There is clearly something wrong in the 6.8.3 update that is negatively affecting the ASMedia SATA controller... at least my revision of it. I'm hoping a dev sees this, as it may be a minor problem now, but it could snowball into something worse down the road in later updates.
  24. I'm guessing the parity drives rarely spin down; is that incorrect? Why are both parity drives, which sit on the same backplane as all the other drives, the only drives disabling? Are they doing something profoundly different that would cause both of them to stop, or the controller to fail, after a docker update?
  25. Thanks for looking at the log. I did too, but couldn't make sense of it. Well, I've been running the same system for 4 years and the controller may be going bad... but I kind of doubt it, since it goes back to operating fine after restoring the drives and rebuilding parity. The failures ONLY come after I updated to .3 and then update a docker or plugin. I rolled back to .2 and will monitor and report back on whether it stays working or not. Thanks again. *edit* Sorry, missed your question. All drives are on a common backplane in a Silverstone CS-380, with only 2 power legs feeding 8 drives. If it were a power issue, I would imagine I'd lose at least 4 drives at once. Since only the parity drives dropped, and they're from different manufacturers, it seemed likely to be software related. I was one of those bitten by the database corruption bug and stayed at 6.6.7 for a long time. When I did upgrade to 6.8.2, it was smooth sailing, no issues. This happened after my first docker update after upping to .3. It could be coincidental, but I have my doubts, and I guess running it back on .2 for a while will prove it one way or another.
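
For the missing-drives question in post 7: a minimal Python sketch, not from the thread and assuming a Linux/unRAID box with sysfs mounted, that lists what the kernel itself currently sees as block devices. If the spinners don't appear here at all, the problem sits below unRAID (HBA, backplane power, or cabling) rather than in the array configuration.

    import os  # assumes Linux sysfs at /sys/block (standard on unRAID)

    for dev in sorted(os.listdir("/sys/block")):
        # Skip purely virtual devices so only real disks are printed.
        if dev.startswith(("loop", "ram", "md")):
            continue
        try:
            rotational = open(f"/sys/block/{dev}/queue/rotational").read().strip() == "1"
        except OSError:
            rotational = False
        try:
            model = open(f"/sys/block/{dev}/device/model").read().strip()
        except OSError:
            model = "(no model reported)"
        kind = "spinning" if rotational else "ssd/nvme"
        print(f"{dev:12s} {kind:9s} {model}")

If the backplane drives show up here but not in the unRAID GUI, the issue is higher up; if they are missing entirely, check power and the LSI card first.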
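For the SATA "sleep" question in post 20: a minimal sketch, also not from the thread, that prints each SATA host's link power management (ALPM) policy from sysfs. This doesn't establish what changed in 6.8.3, but an aggressive policy (min_power or medium_power) has been known to drop links on some controllers, which can look like a controller that fails to "wake up" on a write, whereas max_performance keeps the links awake.

    import glob  # assumes Linux sysfs; this attribute exists for AHCI/SATA hosts

    for path in sorted(glob.glob("/sys/class/scsi_host/host*/link_power_management_policy")):
        host = path.split("/")[4]  # e.g. "host0"
        try:
            policy = open(path).read().strip()
        except OSError:
            policy = "(unreadable)"
        print(f"{host}: {policy}")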