bat2o

Members
  • Posts: 13
  • Joined
  • Last visited

  1. The current version doesn't have an AppData path. You could set up your own share to manage the Minetest config files. Below is my setup.
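     (The original post's path mapping isn't reproduced above. As a rough sketch of the idea, assuming the linuxserver/minetest container, which in the versions I've seen keeps its data under /config/.minetest, a host share can be mapped onto that path; the share path and default 30000/udp port below are only examples.)

       docker run -d --name=minetest \
         -p 30000:30000/udp \
         -v /mnt/user/appdata/minetest:/config/.minetest \
         lscr.io/linuxserver/minetest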
  2. You download mods from Mods - ContentDB (minetest.net). Place them in the /mods/ folder, restart the server, then edit the 'world.mt' file (/worlds/world) by changing the mod's value to 'true' (see the line sketched below).
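     (For illustration, assuming a mod folder named 'mobs' was copied into /mods/, the corresponding line in /worlds/world/world.mt would look like this; the mod name is only an example.)

       load_mod_mobs = true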
  3. Use this syntax: qemu-img resize --shrink vdisk1.img 60G
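     (As a fuller sketch: check the image and keep a backup before shrinking. The file name is taken from the post; note that the guest's partitions/filesystem must already have been shrunk to fit below the new size, or data will be lost.)

       qemu-img info vdisk1.img                 # confirm the current virtual size
       cp vdisk1.img vdisk1.img.bak             # keep a copy in case the shrink goes wrong
       qemu-img resize --shrink vdisk1.img 60G  # shrink the image to 60G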
  4. I have the same issue, where my router doesn't allow NAT loopback or hairpinning. To access Nextcloud on my home network, I type localhost:444, which then redirects to nextcloud.mydomain.com (like you indicated). After that first redirect I replace "nextcloud.mydomain.com" with "localhost:444" in the URL and it works.
  5. I have been using GPU/HW passthrough, but at times these setups stop working. Thanks to these forums and Squid I've found out that my IOMMU is being disabled. If I go to Tools > System Devices it says "PCI Devices (No IOMMU Groups Available)". This is fixed by going into the BIOS and enabling IOMMU. However, for some reason my system will disable it, and I have to go into the BIOS to re-enable the IOMMU. Does anyone know why my BIOS would change this setting, especially without me making the change? I am using an X570 Phantom Gaming 4 motherboard.
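     (For a quick check from the unRAID shell after boot, as a sketch, these two commands show whether the kernel actually brought the IOMMU up:)

       dmesg | grep -i -e AMD-Vi -e IOMMU   # look for AMD-Vi / IOMMU initialization messages
       ls /sys/kernel/iommu_groups/         # empty output means no IOMMU groups were created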
  6. Thanks @sonic6. I also found that this approach was recommended earlier in this forum. Thanks for the pointer. Looks like I might need to get a new router, because I cannot see a way to set this up on my system and couldn't find any recommendations on the web. My setup is a Deco TP-Link mesh network behind a Technicolor C2100T modem/router. Both of these are quite limited. I'll keep searching.
     I had to use this setup for SWAG because CenturyLink's routers don't let you forward the incoming WAN port to a different LAN port (443 -> 1443) unless you can identify the incoming/remote IP. For my SWAG setup I forward 443 (C2100T) to the mesh network (Deco TP-Link) router, and there I'm able to port forward 443 to 1443.
     Since I couldn't find a way to turn on NAT loopback on either router, I tried a different approach. I assigned all the dockers to a specific LAN IP (192.168.0.xx) so I can do local DNS records through Pi-hole (see the sketch below). I had to assign them their own LAN IPs since it wouldn't let me use ports. This worked for some dockers, but not for Nextcloud, which gave an internal error.
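     (For reference, a sketch of what those Pi-hole local DNS records can look like; Pi-hole's Local DNS > DNS Records page writes entries like these to /etc/pihole/custom.list. The IP below is a placeholder for whichever 192.168.0.xx address the container was given.)

       # /etc/pihole/custom.list  (one "IP hostname" pair per line)
       192.168.0.50  nextcloud.mydomain.com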
  7. I'm using SWAG for Nextcloud. It works great when I connect from an external network, but when I'm on the same network the connection times out. I followed the tutorial by spaceinvader one (https://youtu.be/I0lhZc25Sro) and am using duckdns.org for my WAN IP. I have used the subdomains approach with duckdns.org and then set up my own domain. Both work externally, but time out when I'm on the same LAN as my unRAID server. I'm guessing it is a DNS issue, but that is beyond me. I haven't been able to find anything to fix this issue, nor have I seen anything in the logs to troubleshoot. Let me know what other information you need to diagnose this issue.
  8. Replaced my motherboard with an ASRock X570 Phantom Gaming 4. Now it works great.
  9. I have a B450 TOMAHAWK MAX, and it works well except I am having trouble passing through a second GPU.
  10. Attached are the XML files. For your reference, Tumbler is meant for GPU1 (29:00) and Pod is meant for GPU2 (25:00). Throughout my trials I have also created new VM XML files. You are correct that it corrupts my vdisk files; after crashes I usually have to replace them with my backup versions. That could be a possibility I'll look into, though I don't believe it is the cause, because I was running the Tumbler VM on GPU1 for over a month with no issues and created the Pod VM through VNC during that time. I only started seeing these issues when I was trying to set up the Pod VM on GPU2. VM_XMLs.zip
  11. I have conducted all the trials. I was able to run unRAID in legacy mode, tried another GPU (Sapphire RX 580), and ran the primary and secondary GPUs with vbios files, all with similar results. I still believe it has something to do with how unRAID is handling the address. For instance, my latest attempt resulted in disabling the parity drive (diagnostics below). For this latest attempt I did try running both GPUs and comparing them in the log file when unRAID OS boots. They are similar, but address 25:00 has this line in it: I don't know what that means, but it is different from 29:00, where the primary GPU is located. tower-diagnostics-20200727-1901.zip
  12. Updates on additional attempts. I removed GPU1 (address 29:00) from its PCIe x16 slot and placed GPU2 into it. It ran a VM with GPU passthrough very well, and I ran it for about an hour. I then removed GPU2 from that slot (address 29:00) and placed it in its original slot (address 25:00), without reconnecting GPU1. The VM with GPU passthrough worked well (ran it for an hour), but crashed like previous attempts when I tried shutting down the VM (diagnostics included). Because GPU2 worked well at address 29:00 and not at 25:00, I believe it has to do with how unRAID and the motherboard are handling that address. By the way, in the IOMMU grouping 25:00 is in its own group.
      So I updated the BIOS to the most current version (Version: 7C02v37; Release Date: 2020-06-15). At first it booted unRAID just fine; I tried running the VM and the system crashed again (didn't get a diagnostic file). After the system rebooted, unRAID wouldn't load. Investigating why, I discovered many files had disappeared from the flash drive. I restored the flash drive to a backup version, but unRAID still wouldn't load. I reset the syslinux configuration (syslinux.cfg) to the default to see the boot-up display on my monitor (attached is a photo of that boot screen). It was indicating similar issues as the log files after the VMs crash (i.e., iommu ivhd0: Event logged [IOTLB_INV_TIMEOUT device=25:00.0 ...). So I removed the GPU from that slot (address 25:00), and unRAID is able to boot.
      Syslinux configuration:
        kernel /bzimage
        append initrd=/bzroot
      I have also been trying to boot my system in legacy mode, but haven't been successful. From what I understand, my BIOS settings allow a legacy boot (BIOS Mode: CSM; Boot Mode: Legacy + UEFI). Though looking at my boot options, my system only recognizes the flash drive as a UEFI USB drive. batcave-diagnostics-20200723-1912.zip
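      (For context, a sketch of what the stock unRAID boot entry in syslinux.cfg looks like; the kernel and append lines quoted above are the relevant part of this default block. This is the generic default, not copied from the poster's flash drive.)

        label Unraid OS
          menu default
          kernel /bzimage
          append initrd=/bzroot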
  13. I am having trouble passing through a second GPU. My first GPU passthrough works great. Throughout my trials the system loses the ability to write to the "domains" disk (for my system it is my cache drive), and I have to shut down the system and usually reformat the drive in the array.
      Here is my system:
      - Motherboard: Micro-Star International Co., Ltd - B450 TOMAHAWK MAX (MS-7C02)
      - Processor: AMD Ryzen 7 3700X 8-Core @ 3.6 GHz
      - GPU1: XFX Radeon RX 580 8 GB (Graphics: [1002:67df] 29:00.0 / Sound: [1002:aaf0] 29:00.1)
      - GPU2: SAPPHIRE Radeon RX 550 DirectX 12 100414P4GL 4GB (Graphics: [1002:699f] 25:00.0 / Sound: [1002:aae0] 25:00.1)
      In order to get GPU1 to work I had to include the GPU's vfio-pci.ids in the syslinux configuration: vfio-pci.ids=1002:67df,1002:aaf0,1002:699f,1002:aae0. When I added the GPU2 vfio-pci.ids (1002:699f,1002:aae0) to the syslinux configuration, unRAID wouldn't boot with the PCIe ACS override set to downstream (pcie_acs_override=downstream), so I am running 'both': pcie_acs_override=downstream,multifunction. (The resulting append line is sketched after this post.)
      Below is a list of settings I tried. Each resulted in losing the ability to write to the 'domains' share.
      - Changed domains to reside on disk1
      - Changed the VM's machine type to Q35 per spaceinvader one (https://www.youtube.com/watch?v=QlTVANDndpM&t=509s). Initial trials were with machine type i440fx.
      - Added the 'Graphics ROM BIOS' for GPU2
      - In the VM XML file added multifunction='on' per spaceinvader one (https://www.youtube.com/watch?v=QlTVANDndpM&t=509s).
      - Updated to unRAID 6.9.0-beta25
      I have not tried the following:
      - Changing the 'Server boot mode' to Legacy. I cannot get the system to boot in legacy mode.
      - Swapping the GPUs' locations on the motherboard
      - Trying another GPU
      - Updating the BIOS
      Each trial had varying levels of success, but eventually resulted in similar outcomes. The setup that kept the system working for the longest period had the following settings and diagnostic file:
      - Machine: Q35-4.2
      - vbios: NA
      - Multifunction: on
      - unRAID OS: 6.8.3
      - Diagnostic file: batcave-diagnostics-20200721-2302.zip
      Here is my latest trial, which didn't get far:
      - Machine: Q35-5.0
      - vbios: NA
      - Multifunction: on
      - unRAID OS: 6.9.0-beta25
      - Diagnostic file: batcave-diagnostics-20200723-0918.zip
      Here are some forum threads that describe similar issues. This person seems to have a similar issue, but no resolution was shared: https://forums.unraid.net/topic/86519-unraid-680-1660-gtx-gpu-passthrough/ This person was able to fix it with a BIOS update (not unRAID): https://www.reddit.com/r/VFIO/comments/g5hi4k/going_mad_help_needed_with_gpu_passthrough/ This person got it to work by changing their GPU: https://forums.unraid.net/topic/79134-ryzen-internal-graphics-passthrough/
      batcave-diagnostics-20200721-2302.zip
      batcave-diagnostics-20200723-0918.zip
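      (For reference, a sketch of the resulting syslinux.cfg append line, assembled from the values quoted in the post above rather than copied from the actual flash drive:)

        append initrd=/bzroot vfio-pci.ids=1002:67df,1002:aaf0,1002:699f,1002:aae0 pcie_acs_override=downstream,multifunction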