jordanmw

Members
  • Content Count

    193
  • Joined

  • Last visited

Community Reputation

17 Good

3 Followers

About jordanmw

  • Rank
    Advanced Member

  • Personal Text
    Corsair 740 Air case- Taichi X399 Threadripper- 1920X- Corsair 64Gb 8x8Gb 2933Mhz- Evga CLC 280 Evga 1Kw gold PSU- Allegro Pro USB 3.0 to U.2 port/PLX bridge- 2x Evga GTX 960 SSC 4GB- 2x Evga 2070 Black- 2x Plextor 512Gb SSD- 2x WD Black 1Tb NVMe-

Recent Profile Visitors

359 profile views
  1. It is very strange: as soon as VM#4 starts having issues (black screen with a few lines of pixels), VM#3 starts having issues too, with the same black screen, except it flashes a few times before it stops displaying. If I then unplug the monitor from VM#4, VM#3 is completely fine and continues on with no issue. If I wait a while and plug the monitor back in, VM#3 goes back to a black screen. Even if I force-stop VM#4, plugging a monitor into VM#4 still sends VM#3 back to black. After I reset that machine again and log in remotely, VM#4 shows 800x600 resolution and is unresponsive. If I reset it once more and then plug the monitor back in, VM#3 has no issues and its screen doesn't go black; VM#4 then boots just fine and the issue never occurs again. I am thinking this has to be an interrupt issue, since it hits two machines unless one is reset multiple times (a quick host-side check is sketched below). Can anyone make sense of this craziness?
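     A quick host-side way to test the interrupt theory (a sketch; 0a:00.0 is a placeholder for one of the passed-through GPUs' PCI addresses):
         grep -i vfio /proc/interrupts            # shows whether each vfio device sits on MSI or a shared legacy INTx line
         lspci -s 0a:00.0 -vv | grep -A2 'MSI:'   # per GPU: look for "MSI: Enable+" on the assigned cards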
  2. The latest Unraid makes no difference; I will attach a new diag in a few. tower-diagnostics-20190524-1330.zip
  3. Well, I updated to the newest Unraid. Nothing has blown up yet, but I won't know if it helped until we can beat it up for a few hours. I'll report back with results.
  4. But I'm scared.... 😱 Everything was so dialed in before I swapped GPUs.... guess I'll give it a shot, wish me luck....
  5. Well, that didn't help. Added pcie_aspm=off with no result. Anyone have any ideas?
  6. Saw another thread saying this is the fix: pcie_aspm=off. Anyone else have to do this to get things going? Why would a change be needed after upgrading from 960s to 2070s? (Where the flag goes is sketched below.)
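     For anyone else trying it, the flag goes on the kernel append line in the boot config; a minimal sketch of /boot/syslinux/syslinux.cfg, assuming an otherwise stock setup (your other append options may differ):
         label Unraid OS
           menu default
           kernel /bzimage
           append pcie_aspm=off initrd=/bzroot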
  7. Ok, this is a weird one. I have 4 gaming VMs set up that worked great with 4x 960s: no issues in any games, no matter how long we play or what is thrown at them. I upgraded 2 of the GPUs to 2070s and everything appeared to be great. I passed through all devices from those cards to my machines and gaming was great, but only for so long. After gaming for a couple of hours, those 2 machines will go to a black screen, flipping the monitor on and off. If I unplug the HDMI from one card at that point, the other VM comes back, has no issues, and can play for hours more. The other machine has to be rebooted to come back up, and usually requires a couple of resets to get the GPU back, but it eventually works and can play for several more hours without issue. Before rebooting, I can log in remotely to the machine that needs the reboot and see that the game is still running and functional. It just won't re-enable the monitor output, and every time I plug it back in (before the reboot) it takes out the screen on VM#2. Once it reboots, I can plug both monitors back in and continue as normal. Looking at the logs, here are the errors it shows:
     May 20 20:43:02 Tower kernel: pcieport 0000:40:01.3: device [1022:1453] error status/mask=00000040/00006000
     May 20 20:43:02 Tower kernel: pcieport 0000:40:01.3: [ 6] Bad TLP
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: AER: Corrected error received: 0000:00:00.0
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: device [1022:1453] error status/mask=00000040/00006000
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: [ 6] Bad TLP
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: AER: Corrected error received: 0000:00:00.0
     May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
     It's complaining about this device:
     [1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
     Not sure where to go from here; everything looks like it is passing through correctly. Diag attached. tower-diagnostics-20190522-0844.zip
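     For anyone digging into the same errors, a few host-side commands I know of (a sketch; 40:01.3 is the bridge from the log above):
         lspci -nnk -s 40:01.3                          # confirm the [1022:1453] bridge and the driver bound to it
         lspci -s 40:01.3 -vv | grep -i 'lnksta\|aspm'  # link status and ASPM state on that port
         dmesg | grep -i 'aer\|bad tlp'                 # watch whether the corrected Bad TLP spam keeps recurring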
  8. Just set up the 7dtd container- working perfectly, thanks again ich777!
  9. Not sure exactly what I did wrong, but I deleted it and set it back up, and it put the files in the correct place this time. I don't care much about parity; I have scheduled backups that take care of data security. Nothing runs 24/7, I just have 4 gaming computers used on demand.
  10. That is what I am doing during the install (before and after screenshots were attached here). Then, checking those shares, there is nothing in them and everything ends up inside the container. Maybe I am just used to the steamcache docker, which allowed me to pick a disk location for all the data.
  11. Mods: edit GameUserSettings.ini and, under [ServerSettings], set:
          ActiveMods=517605531,519998112,496026322
      Admins: you will need each admin's SteamID, then create the file \ShooterGame\Saved\AllowedCheaterSteamIDs.txt and enter the ID of each admin you want to have admin rights (format sketched below).
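      For reference, AllowedCheaterSteamIDs.txt is just one 64-bit SteamID per line; a sketch with placeholder IDs:
          76561198000000001
          76561198000000002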
  12. I am also testing out the Ark docker. It does not look like it respects the file locations for the steamcmd and serverfiles directories: I have them mapped directly to one of my disks, but the share is empty and the container still holds all the related files. This should work, no? (The kind of mapping I mean is sketched below.)
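      The kind of mapping I mean, as a sketch (the container-side paths, host paths, and image name are placeholders, not the template's actual values):
          docker run -d --name ark-se \
            -v /mnt/disk1/ark/steamcmd:/serverdata/steamcmd \
            -v /mnt/disk1/ark/serverfiles:/serverdata/serverfiles \
            <ark-image>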
  13. I have an old Shuttle SH55J2 that finally gave up the ghost. Unfortunately it's the motherboard, so I am looking for replacements and haven't come up with many options. I would love to find an ITX board, but it seems like a pipe dream to find something that will fit in that case. So I am looking for anyone who has a working LGA1156 board for a reasonable price. I need one with 4 DIMM slots and a PCIe x16 slot; I don't really care about brand or other features. Alternatively, if someone is looking for an LGA1156 i7 CPU and 4x8Gb (16Gb) Redline RAM, I may just sell the components.
  14. Out of curiosity, have you tried clocking down your RAM? I get issues if I go beyond the spec'd 2667... just a thought.
  15. It may just be an incompatibility with that card. As I said, some others have had major issues getting some older Nvidia cards to work. If you can get your hands on a newer card to test, you may prove that to be true. I don't have anything that old to test with, unfortunately. Sorry.