cyberspectre

Members
  • Content Count

    102
  • Joined

  • Last visited

Community Reputation

10 Good

About cyberspectre

  • Rank
    Member


  1. You guys, this is the weirdest thing. My UnRaid server has a 6800 and a 6800 XT. Both have been working perfectly since February. Yesterday, I installed water blocks on both cards, and since then, reset doesn't work anymore. If I reboot the VM, the display doesn't come back, and one CPU thread assigned to the VM gets stuck at around 87%. Changing the cooler on these cards couldn't possibly cause this, right? The only other thing I did was install an NVMe SSD in the M.2 slot. Do you think that could cause reset to fail for both cards? Edit: False alarm.
  2. Looks like a good plan. You know, I've got a 970 and two matching water blocks for it. If you decide to water-cool your build, DM me. I'd gladly give you these things for cheap so you could have two 970s in there.
  3. Awesome rig, man! Now that you're on UnRaid, I can promise you'll never go back to not using it. It makes so much sense to put your machine's extra muscle to work for you instead of letting it go unused in Windows. Your build actually looks similar to my last one, though with much better hardware. That 5950X is making me feel kinda jelly, I passed on that and did a 3900X in my new rig. Mine's got soft black tubing, too, not to mention purple lighting. We have similar tastes. Which flow meter is that?
  4. Months after replacing the Crucial drive with a Samsung 970 Evo, I'm pleased to say this issue has not happened since. Not even once. The Samsung doesn't miss a beat.
  5. Never mind, got it working on Q35 5.0. Very pleased so far. For posterity, let me say this: it required everything the OP mentioned, plus an extra kernel argument, to make my GTX 970 work again after switching to UEFI mode. That kernel argument is... video=efifb:off
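In case anyone is wondering where that argument actually goes: on UnRaid, kernel arguments live on the append line of the boot entry in /boot/syslinux/syslinux.cfg. This is only a sketch of the stock layout; check it against your own file before editing.

```
# /boot/syslinux/syslinux.cfg -- UnRaid's default boot entry (layout assumed).
# video=efifb:off stops the host kernel from claiming the EFI framebuffer,
# so the GPU can be handed off cleanly to the VM.
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
```

Edit the file from the flash share, then reboot for the change to take effect.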
  6. Which version of Q35 let you successfully install the Radeon software? I'm getting incompatibility messages from the installer on Q35 5.1.
  7. Thanks for this. I happen to be doing the same thing right now, so your post has great timing. I'll try the things you suggested and see if I get anywhere. Should have known it wouldn't be easy... but the card came highly recommended by Level 1 Techs, so I was hopeful.
  8. I just get "Whoops, looks like something went wrong."
  9. Unfortunately, this doesn't work for me. In case anyone's interested, I found a different free invoicing app that can run in a Docker container. It's called Crater. There isn't a pre-made image for it on Docker Hub, though, so, despite messing with it for an hour or two, I have no idea how to install it... They provide instructions, but I'm nervous about following them instead of using the UnRaid UI.
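For anyone else stuck at the same point, the general shape of the "no pre-made image" route is to clone the project's repo and let Docker build the image locally. The repo URL and compose setup below are assumptions based on the project's own instructions; verify them against Crater's documentation before running anything.

```shell
# Hypothetical outline: build Crater locally instead of pulling from Docker Hub.
# Repo URL and compose file are assumptions -- check the project's docs first.
git clone https://github.com/crater-invoice/crater.git
cd crater
docker compose up -d   # builds the image on first run, then starts the stack
```

This bypasses the UnRaid Docker UI entirely, so the container won't show a template there; you manage it from the command line (or add it to the UI afterwards as an existing container).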
  10. Very cool! Now, if only we could have auto checkout. 🤩
  11. Unfortunately, the issue has resurfaced. It is certainly connected to the temperature of the controller. Moving the SSD away from the graphics cards and installing a heat sink helped, but ultimately did not eliminate the problem. Certain activities can cause the controller temperature to spike suddenly, and when it does, it's lights out. To anyone who reads this in the future: do not buy a Crucial P1 NVMe SSD for UnRaid, or for a gaming PC for that matter. It cannot tolerate even moderate heat.
  12. For posterity, I'd like to report that I solved the problem. Using smartctl and nvme-cli, I discovered that even though the disk's main temperature was within the acceptable range, one of its secondary temperature sensors (labeled Temperature Sensor 5) was reading 60-64C at idle and 70C or higher under load. This is most likely the temperature of the controller. Apparently, most NVMe controllers begin throttling at 70C, so the I/O errors make sense. I moved the disk from the M.2 slot to a PCIe adapter and installed a metal heatsink. The controller now runs 20 degrees cooler.
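For anyone wanting to check their own drive: both smartmontools and nvme-cli can print the per-sensor readings. The device path and the sample values below are illustrative, and which sensor corresponds to the controller varies by drive model.

```shell
# On the live system you'd run one of these (requires smartmontools / nvme-cli):
#   smartctl -a /dev/nvme0 | grep -i 'temperature sensor'
#   nvme smart-log /dev/nvme0 | grep -i 'temperature sensor'
# Below, the same filter applied to sample output (values made up):
cat <<'EOF' | grep -i 'temperature sensor'
Temperature:                        38 Celsius
Temperature Sensor 1:               38 Celsius
Temperature Sensor 2:               45 Celsius
Temperature Sensor 5:               62 Celsius
EOF
```

Note that the composite "Temperature:" line is what most dashboards report, which is why a hot controller sensor can go unnoticed until I/O errors appear.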
  13. Thanks johnnie.black. Updating the firmware seemed promising at first, but ultimately made no difference. Since it's clear to me now that this is an issue with the disk itself, I'm going to make a new thread in hardware to get some more opinions.
  14. Did you ever discover what was wrong? I'm having the same issue as we speak.
  15. Nope, it's still happening. Even when it seems stable with a VM running, all I need to do is start a few Docker containers and that will cause it to fail. Basically, heavier I/O triggers it.