Everything posted by testdasi

  1. VFIO-PCI binding will automatically try to bind everything in the same IOMMU group as the device you tick (because it would be pointless to bind only part of an IOMMU group). Your NIC is in the same group as your NVMe, so ACS override is the only way to separate them (see the sketch below). By the way, there's a vfio-pci.cfg file in the config folder of your flash drive - deleting that file will undo all the binding. A graphics card working without drivers is a myth: always install the Nvidia driver to verify your install, including rebooting your VM to check for the reset issue. You might want to watch SpaceInvader One's tutorials on Youtube, which will probably help with things.
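     For reference, ACS override is just a kernel parameter on the append line of your syslinux configuration (Main -> Flash). A minimal sketch, assuming the stock labels (newer versions also expose this as the "PCIe ACS override" dropdown in Settings -> VM Manager):

        label unRAID OS
          menu default
          kernel /bzimage
          append pcie_acs_override=downstream,multifunction initrd=/bzroot

     downstream,multifunction is the most aggressive split; plain downstream may already be enough to separate the NIC from the NVMe.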
  2. Firstly, you can only use the PIA ovpn files with port-forwarding enabled, not all of them. The ones without port-forwarding would have issues ranging from low speed to outright no network. I think right now Canada and Germany are consistently working; the others are hit and miss. I think the docker will only use the openvpn.ovpn file, so what you can do is keep a folder with all the port-forwarding-enabled ovpn files and write a script to randomly pick one and copy it over to the appdata ovpn folder. So let's say you have the ovpn files in /mnt/cache/appdata/deluge/PIA, then the script would be something like this:

        #!/bin/bash
        # Pick one random .ovpn from the PIA folder and make it the active config
        src='/mnt/cache/appdata/deluge/PIA'
        des='/mnt/cache/appdata/deluge/openvpn'
        ls "$src"/*.ovpn | sort -R | tail -1 | while read file; do
            cp --force "$file" "$des/openvpn.ovpn"
        done
        docker restart deluge

     That's how I can switch quickly when one goes down.
  3. Remove your vfio-pci.ids line, install the VFIO-PCI Config plugin (look in the app store) and use that plugin (Settings -> VFIO-PCI CFG) to bind only the device that you need. That's the only way to do it when you have multiple devices with the same ID (see the sketch below for why). FYI, in 6.9.0-beta25 the plugin is built into Unraid and the binding can be done in Tools -> System Devices, so the vfio-pci.ids method probably isn't required anymore.
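     For context, a rough sketch of the difference. The kernel parameter matches by vendor:device ID, so it grabs every device sharing that ID, while the plugin binds by PCI address. The BIND= line below is from memory, so treat the exact format as illustrative:

        # syslinux append line - binds EVERY device with ID 8086:1533
        append vfio-pci.ids=8086:1533 initrd=/bzroot

        # config/vfio-pci.cfg as written by the plugin - binds one device by address
        BIND=0000:04:00.0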
  4. Why not? What does it say in the port-forwarding config page of your router?
  5. Yep. For local networking, IPv6 is a liability (e.g. VPN leaks) and much harder to deal with (e.g. 192.168.0.1 is a lot easier to remember than blablabla:blablabla:blablabla:blablabla). And in terms of port-forwarding, it depends on your router but it's pretty normal to have Internet <--- IPv6 ---> router <--- IPv4 ---> docker. Internet-router and router-device are 2 independent networks.
  6. You have a few shares with Cache = Yes, so start with running the mover to clear data off the cache onto the array. 120GB is very small, so I recommend you don't use Cache = Yes. Then run trim from the command line:

        fstrim -a

     An even more drastic measure, if you get false out-of-space errors, is to run a balance (Main -> Cache -> Balance). This rewrites everything on your SSD, so it will take some time to run and adds to your write cycles.
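     If you'd rather do the first two steps from the console in one go, something like this should work. A sketch assuming the mover script sits at its usual /usr/local/sbin/mover location - check that path on your system before relying on it:

        /usr/local/sbin/mover   # flush shares set to Cache = Yes onto the array
        fstrim -a               # then trim every mounted filesystem that supports it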
  7. Let's try to fix your Windows VM since I'm more familiar with that. From your diagnostics, you only have the RTX 2070 as the single graphics card in the system, hence I don't think you need the Unraid boot GUI / console (i.e. you access and configure Unraid remotely through the network). If that isn't the case then don't proceed, as the tweaks below will require you to configure Unraid remotely through the network with another device. Now:
     Step 1: Update to 6.9.0-beta25. You are on 6.9.0-beta1, which is neither here nor there in terms of datedness. Just note that if you intend to revert, you should not format any SSD with Unraid while on 6.9.0-beta25 because of the new 1MiB alignment, which isn't backward compatible with older versions. 6.9.0-beta25 has the vfio-pci config already built in, so there's no need for that plugin anymore: just go to Tools -> System Devices to check your vfio binding and, if everything is fine, uninstall VFIO-PCI Config. Reboot Unraid and retest your VM with and without vbios - a new kernel sometimes fixes funky issues. Obviously no need to proceed further if it works.
     Step 2: Boot Unraid in legacy mode (Main -> Flash, scroll to the bottom to check your "Server boot mode"; if it's not "Legacy" then untick "Permit UEFI boot mode", save, reboot and check again that it's in Legacy). If it doesn't boot, you will need to change your motherboard BIOS settings, e.g. the boot order. If it still doesn't boot then you need another PC to run the make_bootable .bat file (but I don't think you will need that if you created the USB stick the official way). Retest your VM with and without vbios. Obviously no need to proceed further if it works.
     Step 3: Start a new template with the same settings but pick the Q35-5.0 machine type and NO vbios rom. Do NOT autostart; just save. Then edit the newly created template in xml mode:
     - For vendor id, use exactly 12 characters, e.g. 'a0123456789b'. Using 'none' may not work in some cases, so I would suggest 12 characters instead.
     - Under <features> (or just above </features>) add this:

        <kvm>
          <hidden state='on'/>
        </kvm>
        <ioapic driver='kvm'/>

     - Do the multifunction edits (similar to your Arch Linux VM) - see the sketch after this post.
     Test your VM to see if it fixes things, with and without vbios. Obviously no need to proceed further if it works.
     Step 4: Main -> Flash -> scroll to the Syslinux Configuration section. Add this between "append" and "initrd=/bzroot", only on the 2 lines below "label unRAID OS" and "label unRAID OS GUI Mode" (i.e. don't touch the Safe Mode lines):

        video=efifb:off

     Then save and reboot. You should see nothing on screen after "Loading bzroot... OK". In other words, after this step you will only be able to interact with the Unraid GUI / console through the network, i.e. using another device with a browser, e.g. a laptop / tablet / phone etc. Test your VM again, with and without vbios rom. Obviously no need to proceed further if it works.
     Step 5: If it still doesn't work then you will need to borrow another graphics card to boot Unraid with (i.e. NOT the 2070) in order to dump your own vbios rom of the 2070. That's the only way to guarantee that you have the right vbios rom. Then test again with this new vbios rom. Also test whether your VM works while Unraid boots with the borrowed card. If that's the case, you probably have no choice but to buy a cheap low-end graphics card for Unraid to boot with in order for your VM to work properly with the 2070, e.g. the GT710 is a popular choice that goes for around £30/$30.
     That's as much as I can help.
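     For the multifunction edits, here's a minimal sketch of the usual pattern, assuming the 2070's four functions sit at 08:00.0 through 08:00.3 on the host (the guest-side bus/slot below is illustrative - reuse whatever your template already assigned to function 0):

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
          </source>
          <!-- multifunction='on' goes on function 0 only -->
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
          </source>
          <!-- the other functions reuse the same guest bus/slot; only the function number changes -->
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
        </hostdev>

     Repeat the same pattern for functions 0x2 and 0x3 (the two USB/serial-bus controllers) so the card is passed through as one multifunction device.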
  8. You already rebooted, so there isn't any useful info in the syslog. Wait till the next time you have the issue and extract diagnostics then. Also, your cache drive is very full. That can cause false out-of-space errors and funky issues, especially if you write a lot to the cache.
  9. Please attach diagnostics. Tools -> Diagnostics -> attach zip file to your next post.
  10. You can't run pfSense as a docker, only as a VM. You need to pass through the NIC to the pfSense VM to do the routing; your Unraid server essentially becomes a router in that case. Configuring a pfSense VM is quite a bit more involved though, plus you need to invest in an Intel NIC with multiple ports and a free PCIe slot. You should try using your router first though. My hunch is you won't need any kind of "acceleration". For network-based stuff, I always prefer to run it on the router instead of offloading, unless the apps have native support for VPN (which I don't think your Shield / Firestick do).
  11. There is no such thing as "VPN acceleration". That sounds like snake oil to me. What the device does is simply offload processing from the router, which is usually heavily under-powered (and often underclocked to keep cool). In fact, the device looks like a typical pfSense box. It would "accelerate" VPN as much as a GTX 1070 would "accelerate" gaming compared to a GT 710 - no funky juice needed. "VPN needs" is a rather vague term; perhaps list out the activities you want behind a VPN so we can help you get what you need. I am currently using the docker --net=container parameter to route my PiHole docker through an OpenVPN docker (see the guide quoted below; there's also a minimal sketch after this post). You can basically route any docker through a VPN using that method. There are also plenty of dockers in the CA store with VPN integrated, so you can use those too. That includes Privoxy (i.e. an http proxy) with VPN to route your web browsing. Guide post:
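     A minimal sketch of the --net=container approach, assuming an OpenVPN client container named vpn (the container names and images are illustrative, not from the guide):

        # Start the VPN client container first
        docker run -d --name=vpn --cap-add=NET_ADMIN --device=/dev/net/tun \
            -v /mnt/cache/appdata/vpn:/vpn dperson/openvpn-client

        # Route PiHole through it by sharing the VPN container's network stack
        docker run -d --name=pihole --net=container:vpn pihole/pihole

     Note that with --net=container, any port mappings have to be declared on the vpn container, not on pihole, and if the vpn container is recreated, pihole must be recreated too.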
  12. Did you dump your own vbios? A wrong vbios has been known to cause funky issues, which is probably why you can't find any info.
  13. It's not unusual to be able to boot into Windows without the 2 USB controllers but sooner or later it will stop working with error code 43. That has already been reported on the forum. What you are doing is the software equivalent of splitting the card in half.
  14. The RTX has 4 functions, unlike the 1070, which only has 2 (GPU + HDMI audio). You did the binding right but you missed the 2 USB controllers (08:00.2 and 08:00.3) in your xml.
  15. I don't think you need IPv6 for the docker. Your public IPv6 is for traffic to reach your router. The port mapping from your router to the Plex docker doesn't need IPv6.
  16. Did you do any tweaks to your VM? Upgrading from a 1070 to a 2070 is NOT a drop-in replacement. It's always a good idea to attach diagnostics to all your queries (Tools -> Diagnostics -> attach the full zip file to your next post).
  17. See quoted post below for more info. In short:
     - If you don't have a docker with a custom IP, you don't need it.
     - If you have a docker with a custom IP but on a different bridge from Unraid (e.g. Unraid on br0, docker on br1), you don't need it.
     - If you have a docker with a custom IP on the same bridge as Unraid AND you need Unraid to connect to that docker (e.g. Pi-hole), then you need it.
     Considering you have 4 eth, you can very certainly have a separate bridge for any docker with a custom IP.
  18. No. Enable only if you have a use for it.
  19. 3. Read speed is limited by the speed of the drive on which the file is stored. Write speed in reconstruct write (aka turbo write) is limited by the slowest disk in the array. Write speed in normal mode is roughly half that of the slowest disk in the array, e.g. with a 180 MB/s slowest disk, expect about 180 MB/s turbo writes but only about 90 MB/s normal writes. Cache only improves write speed, never read.
     4. No dedup. It ain't ZFS.
  20. The 4 virsh commands:

        virsh start "VM name"
        virsh reboot "VM name"
        virsh shutdown "VM name"
        virsh destroy "VM name" --graceful

     shutdown / reboot may or may not work depending on the guest OS; hence virsh destroy to kill the VM (--graceful to at least attempt to flush the disk cache first).
  21. Your diagnostics only have logs from 9:19 am today, i.e. you rebooted? You have a bond of 4 eth into a bridge and it looks to me like some of the eth aren't reliable. I would suggest simplifying things instead, e.g. do you really need a 4-eth bond?
  22. I recommend Jellyfin. Emby and Plex lock some features (most notably hardware transcoding) behind a paywall.
  23. No, AMD APUs can't be passed through to a VM. Also no, you shouldn't just get a dedicated GPU and expect that it can be passed through to a VM. If having a VM with passed-through graphics is critical to you, you should consider having 1 low-end card (preferably single-slot width) for Unraid to boot with + 1 card for each VM that may run concurrently (i.e. you said 2 VMs so 2 cards, preferably not wider than double-slot - a lot of cards are "2.5" slots so they block stuff). That would mean all 3 slots in a typical Ryzen non-APU motherboard would be occupied (and I recommend getting a Gigabyte motherboard so you can put the low-end single-slot card in the bottommost slot and boot Unraid with it).