jordanmw

Everything posted by jordanmw

  1. Ok, this is a weird one. I have 4 gaming VMs set up, and they worked great with 4x 960s- no issues in any games no matter how long we play or what is thrown at them. I upgraded 2 of the GPUs to 2070s and everything appeared to be great- passed through all devices from those cards to my machines and gaming was great- but only for so long. After gaming for a couple of hours, those 2 machines will go black screen, flipping the monitor on and off. If I unplug the HDMI from one card at that point- the other VM comes back and has no issues- it can play for hours more. The other machine has to be rebooted to come back up- and usually requires a couple of resets to get the GPU back- but it eventually works and can play for several more hours without issue. I can log in remotely to the machine that needs the reboot before rebooting, and can see that the game is still playing and functional. It just won't re-enable the monitor output, and every time I plug it back in (before reboot) it takes out the screen for VM #2. Once it reboots, I can plug both monitors back in and continue as normal. Looking at the logs, here are the errors it shows:
     May 20 20:43:02 Tower kernel: pcieport 0000:40:01.3: device [1022:1453] error status/mask=00000040/00006000
     May 20 20:43:02 Tower kernel: pcieport 0000:40:01.3: [ 6] Bad TLP
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: AER: Corrected error received: 0000:00:00.0
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: device [1022:1453] error status/mask=00000040/00006000
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: [ 6] Bad TLP
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: AER: Corrected error received: 0000:00:00.0
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer,
     It's complaining about this device:
     [1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
     Not sure where to go from here- it looks like everything is passing through correctly. Diag attached. tower-diagnostics-20190522-0844.zip
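     In case it helps whoever looks at the diag, this is roughly how I'd match that bridge to whichever card sits behind it- everything other than the 40:01.3 address from the log is per-system, so treat it as a sketch:
        # Tree view- see which GPU (and its audio function) hangs off the 40:01.3 bridge.
        lspci -tv
        # Bridge details- downstream bus numbers and current link status.
        lspci -s 40:01.3 -vv | grep -iE 'bus:|lnksta'
        # Confirm the GPU functions are bound to vfio-pci for passthrough.
        lspci -nnk | grep -iA3 nvidia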
  2. Just set up the 7dtd container- it's working perfectly, thanks again ich777!
  3. Not sure exactly what I did wrong, but I deleted it and set it back up, and it did put the files in the correct place this time. I don't care much about parity- I have scheduled backups that take care of data security. Nothing runs 24/7; I just have 4 gaming computers used on demand.
  4. That is what I am doing during the install (before/after screenshots of the path settings were attached). Checking those shares afterwards- there is nothing in them and everything still ends up inside the container- maybe I am just used to the steamcache docker that allowed me to pick a disk location for all the data.
  5. Mods- edit GameUserSettings.ini and, under [ServerSettings], set:
        ActiveMods=517605531,519998112,496026322
     Admins- you will need their SteamID, then create the file \ShooterGame\Saved\AllowedCheaterSteamIDs.txt and enter the ID of each of the admins you want to have admin rights.
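     From the Unraid console that part comes down to something like this- the path and the SteamID64 below are placeholders, so point it at wherever your Ark serverfiles actually live and use your admins' real IDs:
        # Placeholder path and ID- adjust to your own serverfiles location.
        ARK=/mnt/user/appdata/ark-se/ShooterGame/Saved
        echo "76561190000000000" >> "$ARK/AllowedCheaterSteamIDs.txt"   # repeat for each admin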
  6. I am also testing out the Ark docker- it does not look like it respects the file locations for the steamcmd and serverfiles directories. I have them mapped directly to one of my disks, but the share is empty and the container still holds all the related files. This should work, no?
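     For reference, this is the kind of mapping I am expecting to work- the container-side paths are my guess at how this image is laid out, and the host paths and image name are just placeholders, so take it as a sketch of what the template settings boil down to rather than anything exact:
        # Sketch only- container paths and image name are placeholders; the Unraid
        # template sets up the same host->container volume mappings.
        docker run -d --name=ark-se \
          -v /mnt/disk3/ark/steamcmd:/serverdata/steamcmd \
          -v /mnt/disk3/ark/serverfiles:/serverdata/serverfiles \
          <ark-image-from-the-template>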
  7. I have an old Shuttle SH55J2 that finally gave up the ghost. Unfortunately it's the motherboard, so I am going to look for replacements and haven't come up with many options. I would love to find an ITX board, but it seems like a pipe dream to find something that will fit in their case. So I am looking for anyone who has a working LGA1156 board for a reasonable price. I need one with 4 DIMM slots and a PCIe x16 slot- I don't really care about brand or other features. Alternately, if someone is looking for an LGA1156 i7 CPU and 4x8GB (16GB) Redline RAM- I may just sell the components.
  8. Out of curiosity, have you tried clocking down your RAM? I get issues if I go beyond the rated 2667.... just a thought.
  9. It may just be an incompatibility with that card. As I said- some others have had major issues getting some older nvidia cards to work. If you can get your hands on a newer card to test, you may prove that to be true. I don't have anything that old to test with, unfortunately. Sorry.
  10. It is purely to dump the bios from the card. As I said- sometimes you can go to https://www.techpowerup.com/vgabios/ and look for your bios there to save yourself some work.
  11. Some have had a lot of luck doing that- but it never worked for me- it always gave me a code 43 error in Device Manager if I installed with VNC, then removed it and added a GPU. You could give it a shot. I have installed Windows 10 maybe 100 times while testing different setup processes- it would have saved me a bunch of time if I had just sysprepped a VM image. My experience was that if at any point I added VNC as the primary video, the GPU would give me code 43 when I added it.
  12. You do need to boot a machine with that GPU installed to dump the bios from the card. I did this by installing Windows 10 on a separate drive that is dedicated to a bare metal install. You can always wipe the drive and add it back to the array when you are done. Just install Win 10 bare metal and dump your bios with GPU-Z. Alternately, you might be able to find one on techpowerup.com that will work. For that matter, for burning in the machine, I usually stress test with bare metal before I start an unraid build. Once you have the bios dumped, you can remove the nvidia header from it, save it as a .rom, and point to that file in your xml for the VM.
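     If you want to sanity-check the header edit from the unraid console instead of a GUI hex editor, something like this works- the 1536-byte offset below is only an example, so find the real one for your dump first:
        # Locate where the actual vbios starts (the 55aa signature just before the "VIDEO" text).
        xxd dumped.rom | grep -m1 -B2 VIDEO
        # Say the 55aa turned out to sit at hex offset 0x600 (1536 bytes)- cut the header off:
        dd if=dumped.rom of=trimmed.rom bs=1 skip=1536
        # The trimmed file should now begin with 55aa.
        xxd trimmed.rom | head -n 1
     The trimmed file is what the rom file='...' line inside the GPU's hostdev section of the VM xml should point at.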
  13. These are the basics- pay attention to the bios dumping and editing. It's kind of an advanced hack since it requires a hex editor- but not too bad. It's a really important process for picky GPUs.
  14. Why don't you try to blacklist the GPU and pass a bios file to it for that machine? That may work better. I had to do that for the GPU that unraid wanted to use for itself. Just a thought. Also, you may need to pass a modified bios to that machine even without blacklisting it.
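     If you go the blacklist route, the card's vendor:device IDs are what you need to stub it away from unraid- something like this digs them out (the IDs shown in the comments are just examples, not yours):
        # List the GPU and its HDMI audio function with their [vendor:device] IDs.
        lspci -nn | grep -iE 'vga|audio'
        # Example (placeholder IDs): 0a:00.0 VGA ... [10de:1b81] / 0a:00.1 Audio ... [10de:10f0]
        # Those IDs then go on the append line in syslinux.cfg, e.g. vfio-pci.ids=10de:1b81,10de:10f0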
  15. I'll add "The Forest" also.... really fun, scary game for 4.
  16. If you PM me your steam ID I may be generous
  17. I would also love a 7 Days to Die option for this. Looks like someone created one just for that- but I would love to have one plugin to rule all my game servers. Ark is on there- so that is great, and TF2 is a staple. I guess the other one that would be useful is Conan Exiles.
  18. I am guessing it is the GTX 650 that is having the issue. I would try swapping slots on those cards and testing to see if the 650 can provide video for any of the VMs. I know that some of the older nvidia cards are a real pain to get working. May I suggest that you run headless instead of dedicating a card to unraid? It's kind of a waste to dedicate a GPU to unraid, since the only real advantage is managing the server without a network. That is how I am currently set up- my last VM is set up to take over unraid's host card when it boots. Then you just manage unraid from the web gui or ssh. I currently have 2x GTX 960s and 2x RTX 2070s set up with 4 game stations on a 1920X with no issues.
  19. I am running the ASRock X399 Taichi with 4 GPUs and have had no real issues. I started with 4x 960s but have since replaced the 2 bottom cards with RTX 2070s and couldn't be happier- I'm getting within 3% of bare metal performance even in the x8 slots. The bottom card gets the best airflow, so I have a nice overclock on that card. I use the top card for unraid unless all 4 gaming workstations are in use- then the last VM takes over the top card on boot. It shouldn't matter where your favorite GPU sits, since the x8 and x16 slots perform very close to the same. My preference is to have the GPU with the best airflow be the primary for my VM. I use the web gui or ssh to manage the array after the 4th machine boots.
  20. Good to see another 7d2d fan on here. I'll have to give this one a shot- I have used Ubuntu in the past for hosting my server but switched to Windows recently. Docker should make things even easier, thanks for the info!
  21. Yeah- I was assuming you were talking about an all-around gaming machine. Light gaming should be pretty solid. I think nuhll has the right idea above- that 1050 is a pretty good candidate if you get a passive version and have decent airflow.
  22. Pretty sure that is impossible. That mini-ITX board only has a single PCIe slot- so only one GPU can be installed, and it will only really be suitable for a single gaming PC. I did see that double-height DIMM- but it seems like a real niche item. I actually got a 256MB double-height DIMM like that back in the Celeron 300 days- it worked as expected but did cause clearance issues with other things.
  23. Not true. The model he has includes "Qtier", which is exactly what I am talking about. And if you look at the linked article in my previous post, it tells you exactly how to find what is spinning up your disks. If you use that, plus Qtier to create your hot/cold storage pools, you can effectively keep disks from spinning up for specific things.
  24. I guess I am not understanding why they all have to be spun up all the time. Usually they spin down if not actively in use. With that many bays, couldn't you just break up your array into "hot" and "cold" storage to keep the cold disks from spinning up unless they are being accessed? There is an article on their site to troubleshoot that issue here: https://www.qnap.com/en-us/how-to/faq/article/why-are-my-nas-hard-drives-not-entering-standby-mode/ You could even have a cache set up so that your hot data resides on SSDs and drives only spin up when the cache needs new data.
  25. Now that is a QNAP that sounds suitable to run unraid- only the higher-end ones really make sense to run it on. Out of curiosity, why did you decide to run unraid on it? It seems like most of the functionality you are using would be available through the QNAP OS. I could understand if there were limitations you were trying to overcome- but I can't really see any benefit, since dockers are already available on QNAP units.