
testdasi

Members
  • Posts

    2,812
  • Joined

  • Last visited

  • Days Won

    17

Everything posted by testdasi

  1. Too many factors to narrow down, but it's unlikely to be the docker. Check static IP, port forwarding, router settings, network settings, etc.
  2. Obvious thing first: is it "MOVIES", "Movies" or "movies"? Folder names on non-Windows systems are case sensitive.
  3. You have two choices: (a) pass through a USB controller to your VM, either an onboard one (Ryzen / TR4 motherboards have at least 1 available) or a PCIe card; any USB device plugged into a passed-through controller automatically attaches to the VM that owns the controller. (b) Install the libvirt hotplug USB plugin, which lets you manually attach USB devices to the VM after it boots.
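For the hotplug route, what happens under the hood is a standard libvirt USB hostdev attach. A minimal sketch, assuming a device file like the one below (the vendor/product IDs are placeholders; get the real ones from lsusb):

```xml
<!-- usb-device.xml: IDs below are placeholders taken from `lsusb` output -->
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x046d'/>
    <product id='0xc52b'/>
  </source>
</hostdev>
```

You would then attach it to a running VM with something like `virsh attach-device Windows10 usb-device.xml --live` (the VM name is illustrative). The plugin wraps this up in a GUI.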
  4. It won't. Once a GPU is passed through to a VM, it cannot be passed back to the Unraid GUI.
  5. First and foremost, there is no guarantee with PCIe pass-through, so take ALL recommendations with a grain of salt. I generally recommend Gigabyte motherboards to new Unraid users for one feature: the ability to pick which PCIe x16 slot to boot with, called "Initial Display Output" in the BIOS. That simplifies things when you need to dump a vbios, i.e. no need to physically swap GPUs: boot with slot 1 -> dump vbios for slot 2 -> reboot to BIOS to boot with slot 2 -> dump vbios for slot 1. It also gives you flexibility of GPU placement, e.g. if a card is too long, too wide, too big, etc. The RX 570 is one of the known offenders for the reset issue, so the ability to NOT boot with it even when it's in the 1st PCIe slot is priceless. On a side note, for Ryzen and 2x GPU, make sure you get a motherboard with 3x PCIe x16 slots to give you some future flexibility too. And the 2080 Super has 4 devices (GPU + HDMI audio + 2 USB devices) that have to be passed through together, whereas other graphics cards usually have only 2 devices; this tends to catch new users off-guard. You need to stub the USB devices for them to show up in the VM template (the easiest method is to install the VFIO-PCI Config plugin, use it to select the devices, build the vfio-pci.cfg file, then reboot). Good luck.
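To make the stubbing step concrete: the VFIO-PCI Config plugin writes a small config file on the flash drive that binds the selected devices to vfio-pci at boot. A sketch of what it ends up containing, assuming this file layout (the PCI addresses are placeholders; the plugin fills in the real ones for you):

```text
# /boot/config/vfio-pci.cfg (addresses are placeholders)
BIND=0000:0a:00.2 0000:0a:00.3
```

Reboot after the file is built so the devices are claimed by vfio-pci before Unraid's drivers grab them.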
  6. Try changing the vdisk bus to SATA, or delete the old vdisk and create a new one. Also ensure your Unassigned Devices mounted disk has enough free space and is not having problems such as high I/O.
  7. Change this:

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>

     to this:

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
           </source>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
  8. How do you pass the NVMe through? Tools -> System Devices -> copy-paste the PCI Devices and IOMMU Groups section (use the forum code functionality, the </> button next to the smiley button). Also attach the XML of your VM.
  9. Perhaps it's a Remmina settings issue. It works using default settings with Windows RDP.
  10. You have to suspend manually if that setting doesn't work. Alternatively, see my post below for the docker settings that allow Plex to run at the same time as BOINC without being starved by it.
  11. The 8080 port is for the web interface, not the RDP port. See my reply above if you want to connect directly via RDP.
  12. In the docker template, click "Add another Path, Port, Variable, Label or Device", then set Container Port = 3389 and Host Port = 33890 (or whatever port you want to use for RDP access). Then in your RDP client just type [IP of your server]:33890.
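For reference, the Unraid template entry above is equivalent to a plain docker port mapping. A sketch, assuming a hypothetical image name (use the actual RDP-BOINC image from the Apps store):

```shell
# Map host port 33890 to the container's RDP port 3389
# (the image name below is a placeholder, not the real one)
docker run -d --name boinc-rdp -p 33890:3389 some/rdp-boinc-image
```

From a Windows client you could then connect with mstsc /v:[IP of your server]:33890.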
  13. Try Rosetta@home. It's interesting how the F@h work is handed out. My GPU slot immediately had work, but my CPU slot (32 threads) took several hours to get just 1 work unit assigned and has been idle for quite a while now. Rosetta, in contrast, constantly gives me work. Edit: right after saying the above, my CPU F@h slot got new work assigned. Maybe I should complain more LOL
  14. To everyone who has performance issues / unresponsiveness / lag while the BOINC docker is running: go to the Docker template, click "Advanced View" (upper right corner, on the same level as "Update Container") to see the advanced view, and then: (a) select the cores you want BOINC to use, making sure NOT to select CPU 0 + its HT sibling or any core isolated to a VM; (b) add --cpu-shares=64 to the Extra Parameters box, which ensures BOINC does not starve other dockers when they are running, while still allowing BOINC to run full steam when they are not; (c) optionally add --memory=[#]G to limit the docker's RAM usage to [#] gigabytes, which prevents your other apps and VMs from getting killed if BOINC uses too much RAM. Screenshot as an example: cores 16-31 are selected for BOINC and memory is limited to 16GB.
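To put some numbers behind --cpu-shares=64: Docker's default weight is 1024, and shares only matter when the CPU is actually contended. A small sketch of the proportional-weight arithmetic (not Unraid-specific, just how relative shares work out):

```python
DEFAULT_SHARES = 1024  # Docker's default --cpu-shares value

def relative_weight(shares, competing_shares):
    """Fraction of contended CPU time a container gets, given its
    --cpu-shares value and the shares of the other busy containers."""
    return shares / (shares + sum(competing_shares))

# BOINC at --cpu-shares=64 competing with one default-weight container:
print(f"BOINC share under contention: {relative_weight(64, [DEFAULT_SHARES]):.1%}")
# prints roughly 5.9% -- when nothing else wants the CPU, shares are
# ignored and BOINC runs full steam.
```

So Plex (at the default 1024) gets ~94% of a contended core, which matches the "BOINC yields without being paused" behaviour described above.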
  15. There is unlikely to be a cure. COVID-19 is caused by a virus, so the focus would be on a vaccine and on treatment (to alleviate symptoms). And then we have the side burden of dealing with anti-vaxxers and idiots, but no amount of Rosetta@home can help with that.
  16. I just use the same cores reserved for my Plex and use --cpu-shares to ensure that when Plex needs CPU power, it gets 80+%.
  17. There is no read-cache feature in Unraid. If it's just a single stream (i.e. not multiple simultaneous 4K streams) then it's more likely than not a lack of transcoding power (or potentially a failing drive, if it's always files from the same drive).
  18. Will it appear in the Apps store as a new docker or an update to the current one?
  19. What's the requirement to get a GPU to run with F@H docker? I'm guessing I need to install Unraid Nvidia?
  20. I installed RDP BOINC and joined the Unraid Rosetta team. 🖖 Note to others: you can open docker port 3389 for the RDP BOINC docker so you can access directly using RDP from Windows (instead of going through the GUI).
  21. Are you on the latest BIOS? Tools -> Diagnostics -> attach zip file.
  22. The Navi reset bug is a known issue around here. It's best if you compile your own kernel with the Navi patch. Alternatively, a forum member has compiled a custom kernel with various patches, including the Navi one, with an optional kernel upgrade too. Link below; try at your own risk.