bat2o

Members
  • Content Count: 8
  • Joined
  • Last visited

Community Reputation

0 Neutral

About bat2o

  • Rank
    Newbie

Recent Profile Visitors

302 profile views
  1. Thanks @sonic6. I also found that this approach was recommended earlier in this forum. Thanks for the pointer. It looks like I might need to get a new router, because I cannot see a way to set this up on my system and couldn't find any recommendations on the web. My setup is a TP-Link Deco mesh network behind a Technicolor C2100T modem/router. Both of these are quite limited. I'll keep searching. I had to use this setup for SWAG because CenturyLink's routers don't let you change the incoming WAN port to a different LAN port (443 -> 1443), unless you can identify th
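The 443 -> 1443 remap described above can be sketched for any replacement router (or Linux box acting as one) that exposes iptables. This is a hedged illustration, not this user's actual setup: the interface name and addresses are placeholders.

```shell
# Sketch: forward incoming WAN port 443 to SWAG listening on LAN port 1443.
# Assumes a Linux-based router; WAN_IF and the LAN IP are placeholders.
WAN_IF=eth0
UNRAID_LAN_IP=192.168.1.10   # placeholder for the unRAID host running SWAG

# Rewrite the destination of inbound WAN traffic on port 443
iptables -t nat -A PREROUTING -i "$WAN_IF" -p tcp --dport 443 \
  -j DNAT --to-destination "$UNRAID_LAN_IP:1443"

# Allow the forwarded traffic through the filter table
iptables -A FORWARD -p tcp -d "$UNRAID_LAN_IP" --dport 1443 -j ACCEPT
```

Consumer gateways like the C2100T typically only offer same-port forwarding in their web UI, which is why a router with this kind of control is being considered.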
  2. I'm using SWAG for Nextcloud. It works great when I connect from an external network, but when I'm on the same network the connection times out. I followed the tutorial by SpaceInvader One (https://youtu.be/I0lhZc25Sro) and am using duckdns.org for my WAN IP. I have used the subdomains approach with duckdns.org and then set up my own domain. Both work externally, but time out when I'm on the same LAN as my unRAID server. I'm guessing it is a DNS issue, but that is beyond me. I haven't been able to find anything to fix this issue, nor have I seen anything in the logs to troubleshoot. Let
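A same-LAN timeout like this usually means the router does not support NAT loopback (hairpin NAT), so clients inside the LAN cannot reach the WAN IP their own DNS query returns. The common workaround is split-horizon DNS: a DNS server on the LAN answers the public hostname with the server's internal address. A minimal dnsmasq sketch, where the hostname and IP are placeholders rather than values from the post:

```shell
# Fragment of /etc/dnsmasq.conf on a LAN DNS server (e.g. a Pi-hole or a
# router that runs dnsmasq). Answers queries for the Nextcloud hostname
# with the unRAID server's LAN IP instead of the public WAN IP, so LAN
# clients skip the NAT-loopback path entirely.
# "mycloud.duckdns.org" and 192.168.1.10 are placeholders for your own values.
address=/mycloud.duckdns.org/192.168.1.10
```

LAN clients must then use that DNS server (typically handed out via DHCP) for the override to take effect; external clients still resolve the public IP as before.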
  3. Replaced my motherboard with an ASRock X570 Phantom Gaming 4. Now it works great.
  4. I have a B450 TOMAHAWK MAX, and it works well except I am having trouble passing through a second GPU.
  5. Attached are the XML files. For reference, Tumbler is meant for GPU1 (29:00) and Pod is meant for GPU2 (25:00). Throughout my trials I have also created new VM XML files. You are correct that it corrupts my vdisk files; after crashes I usually have to replace them with my backup versions. That could be a possibility I'll look into, though I don't believe it is the cause, because I was running the Tumbler VM on GPU1 for over a month with no issues and created the Pod VM through VNC during that time. I only started seeing these issues when I was trying to set up the Pod VM t
  6. I have conducted all the trials. I was able to run unRAID in legacy mode, tried another GPU (Sapphire RX 580), and ran both the primary and secondary GPUs with vbios files, all with similar results. I still believe it has something to do with how unRAID is handling the address. For instance, my latest attempt resulted in a disabled parity drive (diagnostics below). For this latest attempt I did try running both GPUs and comparing them in the log file when the unRAID OS boots. They are similar, but address 25:00 has this line in it: Don't know what that
  7. Update on additional attempts. I removed GPU1 (address 29:00) from its PCIe x16 slot and placed GPU2 into it. It ran a VM with GPU passthrough very well for about an hour. I then moved GPU2 back to its original slot (address 25:00) and did not reconnect GPU1. The VM with GPU passthrough worked well (I ran it for an hour), but crashed like previous attempts when I tried shutting down the VM (diagnostics included). Because GPU2 worked well at address 29:00 and not at 25:00, I believe it has to do with how unRAID and the motherbo
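When behavior differs by slot like this, the IOMMU grouping of the two addresses is worth comparing, since devices in the same group must be passed through together. A small diagnostic that can be run from the unRAID host terminal to list every IOMMU group and its devices (this uses the standard Linux sysfs layout and is not specific to this system):

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices it contains.
# Devices sharing a group with 25:00.0 (but not 29:00.0) would point to a
# chipset-routed slot, a common cause of slot-dependent passthrough failures.
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    # ${d##*/} is the PCI address, e.g. 0000:25:00.0
    echo "  $(lspci -nns "${d##*/}")"
  done
done
```

On many B450 boards the second x16-length slot is wired through the chipset rather than the CPU, which often lands it in a larger IOMMU group than the primary slot.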
  8. I am having trouble passing through a second GPU. My first GPU passthrough works great. Throughout my trials it loses the ability to write to the “domains” disk (on my system this is my cache drive), after which I have to shut down the system and usually reformat the drive in the array. Here is my system: Motherboard: Micro-Star International Co., Ltd - B450 TOMAHAWK MAX (MS-7C02) Processor: AMD Ryzen 7 3700X 8-Core @ 3.6 GHz GPU1: XFX Radeon RX 580 8 GB (Graphics: [1002:67df] 29:00.0 / Sound: [1002:aaf0] 29:00.1) GPU2: SAPPHIRE Radeon RX 550 DirectX 12 100
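A standard step when a passed-through GPU destabilizes the host is binding it to vfio-pci at boot so no host driver ever initializes it. A hedged sketch of how that looks in the boot configuration on the unRAID flash drive, using the RX 580 vendor:device IDs quoted in the post (the RX 550's IDs are not shown above and would come from `lspci -nn`):

```shell
# Fragment of /boot/syslinux/syslinux.cfg on the unRAID flash drive
# (editable from Main -> Flash in the webGUI). vfio-pci.ids is a standard
# kernel parameter: every device matching these vendor:device IDs is
# claimed by vfio-pci at boot instead of the amdgpu host driver.
# 1002:67df / 1002:aaf0 are the RX 580 graphics + HDMI audio functions
# from the post; append the RX 550's IDs once known.
append vfio-pci.ids=1002:67df,1002:aaf0 initrd=/bzroot
```

Note that ID-based binding claims every device with a matching ID, so with two identical cards a per-address bind would be needed instead.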