
Posts posted by jordanmw

  1. Ok, this is a weird one.  I have 4 gaming VMs set up, and they worked great with 4x 960s - no issues in any games, no matter how long we play or what is thrown at them.  I upgraded 2 of the GPUs to 2070s and everything appeared to be great - I passed through all devices from those cards to my machines and gaming was great, but only for so long.  After gaming for a couple of hours, those 2 machines go to a black screen, with the monitor flipping on and off.  If I unplug the HDMI from one card at that point, the other VM comes back and has no issues - it can play for hours more.  The other machine has to be rebooted to come back, and usually requires a couple of resets to get the GPU back, but it eventually works and can play for several more hours without issue.  Before rebooting, I can log in remotely to the machine that needs the reboot and see that the game is still running and functional.  It just won't re-enable the monitor output, and every time I plug it back in (before the reboot) it takes out the screen for VM #2.  Once it reboots, I can plug both monitors back in and continue as normal.

     

    Looking at the logs, here are the errors it shows:

    May 20 20:43:02 Tower kernel: pcieport 0000:40:01.3: device [1022:1453] error status/mask=00000040/00006000
    May 20 20:43:02 Tower kernel: pcieport 0000:40:01.3: [ 6] Bad TLP
    May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: AER: Corrected error received: 0000:00:00.0
    May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)
    May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: device [1022:1453] error status/mask=00000040/00006000
    May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: [ 6] Bad TLP
    May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: AER: Corrected error received: 0000:00:00.0
    May 20 20:43:03 Tower kernel: pcieport 0000:40:01.3: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Receiver ID)

     

    It's complaining about this device:

    [1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
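
    For reference, that device line can be pulled up directly from the console with something like:

    lspci -nn -s 40:01.3

    which should print the same [1022:1453] GPP bridge.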

     

    Not sure where to go from here - it looks like everything is passing through correctly.  Diagnostics attached.

     

    tower-diagnostics-20190522-0844.zip

  2. 10 minutes ago, ich777 said:

    Like you would install it on a normal dedicated server.  Navigate to your appdata folder and install them there.

     

    Don't know exactly what you mean; you must set the path on first creation of the docker, otherwise it will not work correctly (on the first start all dockers, not only mine, will create the required directories and permissions for the folders and files they need).

    Delete the docker completely and map the paths when you install it.

    That is what I am doing during the install:

    Before: [screenshot]

    After: [screenshot]

    Then checking those shares - there is nothing in them, and everything ends up inside the container.  Maybe I am just used to the steamcache docker, which allowed me to pick a disk location for all the data.
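
    For reference, the mapping I mean is just a host-path-to-container-path bind.  In plain docker run terms it would look roughly like this (the image name and container path are placeholders, not the template's actual values):

    # example only - adjust to the paths the template actually exposes
    docker run -d --name=ark-example -v /mnt/user/appdata/ark-example:/serverdata example/ark-server

    The point is that anything the server writes under the container path then lands in the mapped share on disk.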

     

     


  3. 2 hours ago, bktaylor said:

    Good Morning,

    I got the Ark docker working but how can I add mods and admins?

     

    Mods-

    Edit GameUserSettings.ini

    Under

    [ServerSettings]
    ActiveMods=517605531,519998112,496026322

     

    Admins-

    You will need their SteamID64, then create the file:

    \ShooterGame\Saved\AllowedCheaterSteamIDs.txt and enter the SteamID of each player you want to have admin rights.
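
    The file is just one SteamID64 per line - for example (placeholder IDs):

    76561198000000001
    76561198000000002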

  4. I have an old Shuttle SH55J2 that finally gave up the ghost.  Unfortunately it's the motherboard, so I have been looking for replacements and haven't come up with many options.  I would love to find an ITX board, but it seems like a pipe dream to find something that will fit in their case.  So I am looking for anyone who has a working LGA1156 board for a reasonable price.  I need one with 4 DIMM slots and a PCIe x16 slot - I don't really care about brand or other features.

     

    Alternatively, if someone is looking for an LGA1156 i7 CPU and 4x8Gb (16Gb) Redline RAM, I may just sell the components.

  5. It may just be an incompatibility with that card.  As I said, some others have had major issues getting older Nvidia cards to work.  If you can get your hands on a newer card to test with, you may prove that to be true.  I don't have anything that old to test with, unfortunately.  Sorry.

  6. Some have had a lot of luck doing that, but it never worked for me - it always gave me a code 43 error in Device Manager if I installed with VNC, then removed it and added a GPU.  You could give it a shot.  I have installed Windows 10 maybe 100 times while testing different setup processes - it would have saved me a bunch of time if I had just sysprepped a VM image (rough command below).

     

    My experience was that if at any point I added VNC as the primary video, the GPU would give me code 43 when I added it.
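
    For what it's worth, the sysprep step is just the stock Windows tool run inside the VM before capturing the image - a rough sketch (elevated prompt; the usual generalize-and-shutdown flags):

    REM run inside the Windows 10 VM, then copy the vdisk to use as your template
    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown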

  7. You do need to boot a machine with that GPU installed to dump the BIOS from the card.  I did this by installing Windows 10 on a separate drive that is dedicated to a bare metal install.  You can always wipe the drive and add it back to the array when you are done.  Just install Win 10 bare metal and dump your BIOS with GPU-Z.  Alternatively, you might be able to find one on techpowerup.com that will work.

     

    For that matter, for burning in the machine, I usually stress test on bare metal before I start an unraid build.  Once you have the BIOS dumped, you can remove the Nvidia header from it, save it as a .rom, and point to that file in your XML for the VM.
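
    A rough sketch of what that looks like in the VM's XML - the rom line goes inside the GPU's hostdev entry, and the addresses and path below are only examples:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/gpu-dumped.rom'/>
    </hostdev>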

  8. Why don't you try blacklisting the GPU and passing a BIOS file to it for that machine?  That may work better.  I had to do that for the GPU that unraid wanted to use for itself.  Just a thought.  Also, you may need to pass a modified BIOS to that machine even without blacklisting it (see the sketch below).
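
    A rough sketch of the blacklisting part, assuming it is done by stubbing the card to vfio-pci from the syslinux append line (the vendor:device IDs are placeholders - use the GPU and its audio function as reported by lspci -nn):

    label Unraid OS
      kernel /bzimage
      append vfio-pci.ids=10de:abcd,10de:dcba initrd=/bzroot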

  9. 18 minutes ago, ich777 said:

    Yep, a Terraria server and a Terraria server with TShock are on their way and will drop today or on Monday (I have to create two dockers because TShock requires the mono package, so that container is pretty big, ~250MB; the vanilla Terraria container is only ~60MB).

     

    It will take some time since I don't own the game 7 Days to Die, and a game purchase is required for a Linux server...
     

     

    Same for Conan Exiles - I don't own the game and can't buy every single game.  In the next few days I will add a Donate button to my dockers so that I can raise funds for new game dockers...

    If you PM me your steam ID I may be generous ;) 

  10. I would also love a 7 Days to Die option for this.  It looks like someone created a docker just for that, but I would love to have one plugin to rule all my game servers.  Ark is on there, so that is great, and TF2 is a staple.

     

    I guess the other one that would be useful is Conan Exiles.

  11. I am guessing it is the GTX 650 that is having the issue.  I would try swapping slots on those cards and testing to see if the 650 can provide video for any of the VMs.  I know that some of the older Nvidia cards are a real pain to get working.  May I suggest that you run headless instead of dedicating a card to unraid?  It's kind of a waste to dedicate a GPU to unraid, since the only real advantage is being able to manage the server without a network.  That is how I am currently set up - my last VM is set up to take over unraid's host card when it boots.  Then you just manage unraid from the web GUI or SSH.  I currently have 2x GTX 960s and 2x RTX 2070s set up as 4 game stations on a 1920X with no issues.

  12. I am running the ASRock X399 Taichi with 4 GPUs and have had no real issues.  I started with 4x 960s but have since replaced the 2 bottom cards with RTX 2070s and couldn't be happier.  I am getting within 3% of bare metal performance even in the x8 slots.  The bottom card gets the best airflow, so I have a nice overclock on that card.  I use the top card for unraid unless all 4 gaming workstations are in use - then the last VM takes over the top card on boot.  It shouldn't matter where your favorite GPU sits, since x8 and x16 slots perform very close to the same.  My preference is to have the GPU with the best airflow be the primary for my VM.  I use the web GUI or SSH to manage the array after the 4th machine boots.

  13. 40 minutes ago, mikeyosm said:

    Was thinking of using the Intel gfx onboard for light gaming. Otherwise, I'll have to wait for new AMD refresh and build something from that - might be more cost effective.

    Yeah - I was assuming you were talking about an all-around gaming machine.  Light gaming should be pretty solid.  I think nuhll has the right idea above - that 1050 is a pretty good candidate if you get a passive version and have decent airflow.

  14. Pretty sure that is impossible.  That mini ITX board only has a single PCIe slot, so only one GPU can be set up, and it will only really be suitable for a single gaming PC.  I did see that double-height DIMM, but it seems like a real niche item.  I actually got a 256Mb double-height DIMM like that back in the Celeron 300 days - it worked as expected but did cause clearance issues with other things.

  15. 6 hours ago, Benson said:

    QNAP OS will just spin them all up or all down, no matter whether you put them in different pools.

    Not true.  The model he has supports "Qtier", which is exactly what I am talking about.  And if you look at the linked article in my previous post, it tells you exactly how to find what is spinning up your disks.  If you use that, along with Qtier, to create your hot/cold storage pools, you could effectively prevent disks from spinning up for specific things.

  16. I guess I am not understanding why they all have to be spun up all the time.  Usually they spin down when not actively in use.  With that many bays, couldn't you just break up your array into "hot" and "cold" storage so drives only spin up when they are actually being accessed?  There is an article on their site for troubleshooting that issue here:

    https://www.qnap.com/en-us/how-to/faq/article/why-are-my-nas-hard-drives-not-entering-standby-mode/

     

    You could even have a cache set up that would let your hot data reside on SSDs, so drives would only spin up when the cache needs new data.

  17. Now that is a QNAP that sounds suitable for running unraid - only the higher-end ones really make sense to run it on.  Out of curiosity, why did you decide to run unraid on it?  It seems like most of the functionality you are using would be available through the QNAP OS.  I could understand if there were limitations you were trying to overcome, but I can't really see the benefit since dockers are already available on QNAP units.
