Leaderboard

Popular Content

Showing content with the highest reputation on 03/24/20 in all areas

  1. 2 points
Umm, is this how Skynet gets started?!? We were so preoccupied with whether or not we could, we didn't stop to think if we should. 😜
    1 point
I have 4 or 5 shares set up for my media: 2 "TV" shares and 2 or 3 "Movies" shares. Between the TV and Movies shares I have folders set up so I can export them to the Plex container with either Read-Only or Read/Write permission. For example, the "TV" share has a "wtv" folder (my migrated recordings from Windows Media Center) which I do not want Plex to "accidentally" delete, so it is exported as RO. Then I have a "recordings" folder which I allow Plex to save TV shows to, and that gets exported as RW. My point is that you can export multiple "Paths" to the Plex container rather than your entire user directory. Just add another path when configuring the container, set the host path to your Unraid-side media share/folder (i.e. /mnt/user/Movies) and the container path to what you want it to appear as within Plex (i.e. /Movies). You'll then need to configure your library in Plex and tell it to look in "/Movies" for your movie library, and repeat for each exported path. The reason yours looks the way it does is that you exported your directory "/mnt/user/" to the container as "/data", as seen in the screenshot under Container Path. You can rename it to anything you'd like, and that is what it will appear as in Plex. Just my 2 cents about how I have it set up and what can be done.
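For anyone who prefers to see it spelled out, here's roughly what those GUI path mappings boil down to in plain docker run terms. This is just a sketch with hypothetical share names (the image is the official plexinc/pms-docker mentioned elsewhere in this thread); Unraid's "Add another Path" builds the same -v host:container[:ro] mappings for you:

  # Read-only (:ro) for the protected folders, read/write (the default) for recordings:
  docker run -d --name=plex \
    -v /mnt/user/TV/wtv:/wtv:ro \
    -v /mnt/user/TV/recordings:/recordings \
    -v /mnt/user/Movies:/Movies:ro \
    plexinc/pms-docker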
    1 point
You could always run the procedure manually:
1. Format a stick as FAT32
2. Make sure that it's named UNRAID
3. Download the zip file from Limetech's website
4. Extract the contents of the zip onto the flash drive
5. Right click on make_executable and run it as administrator
Done.
    1 point
Just an FYI... the issue has corrected itself. I believe this was an error related to the massive growth they have seen over the past few weeks.
    1 point
Feel free. There's only 1 ground rule: criticisms aren't allowed. CA's been a 5 year project, and while it's super stable, if I was going to do things all over again, it would not be programmed the way that it is (especially the code from the first year or so).
    1 point
  7. Which is why I am asking - to show a demand
    1 point
Then it looks like it unfortunately can't be passed through.
    1 point
You added a rom file (vbios). Was it there last time you successfully passed through the GPU? You also added 2a:00, which is the USB device. Were you able to successfully pass it through before adding the on-board sound card?
    1 point
AMD Radeon HD 7870, Windows 10 VM. I was able to FINALLY update to the latest AMD Adrenaline drivers! For over a year I'd refused to update my AMD GPU drivers... the last time I tried, it just plain wouldn't work, just like many of you have experienced. Well, with all this recent downtime I decided to give it another try and I managed to figure it out, picking up a couple of tricks along the way (you may already know them). This is all from memory based on what I just did, so I hope it is accurate enough for people; the tricks in the Troubleshooting section are probably the most interesting. And ultimately... will I continue to update my AMD drivers like this every time? HELL NO. Way too much work. This is probably the last time I update them.

Preparation:
1. PLEASE PLEASE PLEASE BACKUP YOUR VIRTUAL MACHINE BEFORE CONTINUING. THIS SHOULD BE A GIVEN.
2. Download the latest AMD driver software for your GPU. Run it and it will extract to C:\AMD.
3. Most of us mapped our Windows user profile documents to our Unraid array. The AMD installer doesn't like that (so dumb!) and will give you an error about Mapped Storage. Just quit out of it. We'll address that by simply creating another temporary user on Windows: Start Menu --> PC Settings --> Accounts --> Family & Other Users --> Add Someone Else to this PC. Go through the prompts saying you don't have a Microsoft account or Live account or whatever until it lets you just add another plain user. Once done, be sure to make that user an Administrator.
4. Download Display Driver Uninstaller (DDU) HERE. Go ahead and extract it somewhere on the VM (not a network mapped location).
5. Log out of your primary user profile, then log in to the new user once just to initialize it. When it is setting up, just uncheck all the bullcrap and click Next.

Uninstalling the current AMD display drivers:
1. Open msconfig (Start Menu --> type 'msconfig' and you'll see it). Go to the Boot tab, select "Safe Boot" and the "Network" option under it, then click Apply. Now when you reboot, Windows will automatically enter Safe Mode.
2. Reboot Windows. Give it 5 minutes. Once it reboots, log in as that new temporary user. See Troubleshooting if you just plain don't see anything (black screen).
3. You are in Safe Mode. Run Display Driver Uninstaller, setting Device Type to GPU and Device to AMD, everything else default. Click the Clean and Restart option.
4. Give it 5 minutes. Once it reboots, log in as that new temporary user. See Troubleshooting if you get a black screen.
5. Now go to C:\AMD and run setup.exe (run as administrator). It will install and should complete. The screen might flicker/change and mouse movement might get weird. Try your best to use the keyboard/mouse to reboot the Windows box the natural Start Menu way, but if you can't, use the Unraid web UI and Stop the VM (Force Stop if regular Stop doesn't work).
6. The Windows box boots back up (see Troubleshooting if it doesn't). Log in to your primary user, open msconfig again and uncheck "Safe Boot". Apply. Reboot.
7. The Windows box boots up (see Troubleshooting if it doesn't). Log in to your primary user and verify the latest AMD Adrenaline drivers are there. You can delete that temporary user now.

Troubleshooting: Throughout this, there were a few times where I hit a wall: a black/blank screen I could not get past. It was frustrating and I experimented a bunch... just a lot of trial and error.
These are the things I tried to get around it, in the order I tried them:
1. With the VM off, go to the Unraid web GUI VM tab and edit the VM. Scroll down and make VNC your graphics card, then click the + symbol and add your AMD GPU as the 2nd graphics card. Click Update and start the VM, then click on the VM and open VNC for it. Hopefully that gets you past the blank/black screen.
2. I originally had my Win10 VM on i440fx-2.1.0 for the "Machine". I changed it to the latest Q35 version and that helped me out. Again, use VNC if necessary.
3. If that doesn't work, you'll need to do more. Make sure the VM is off. For whatever reason (beyond my knowledge) something gets messed up with the Unraid VM configuration. All you need to do is create a new VM template using the exact same settings as your previous one. To be clear, you won't be losing anything in creating this new template: you are pointing it at the exact same vdisk. Have it create and run the VM, use VNC, and hopefully you can get in to move forward with the process.
Update: if you do have to create a new VM template, be sure to use the same UUID value, because Windows activation becomes tied to the machine UUID.
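If the web UI itself is hard to reach when you need that Stop / Force Stop, the same pair can be issued from the Unraid console — a sketch assuming the VM is named "Windows 10":

  # Graceful stop, equivalent to the web UI's Stop button:
  virsh shutdown "Windows 10"
  # Hard power-off, equivalent to Force Stop, if the guest is wedged:
  virsh destroy "Windows 10"

virsh destroy doesn't delete anything; it's just a forced power cut.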
    1 point
Yes you can. In fact, unless you have an actual issue (e.g. an Oculus Rift randomly disconnecting when connected through libvirt, i.e. virtual USB) and/or require true hot-plug, you can just use the virtual USB device. If you install the "libvirt usb hotplug" plugin, you can "warm plug" USB devices to the virtual USB of any VM after the VM boots. No need to reboot the VM, so not cold plug, but you still have to manually replug the device through the Unraid GUI, so not true hot plug either -> hence "warm plug".
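For the curious, a warm plug like that boils down to a libvirt device attach. A minimal sketch from the Unraid console, assuming a VM named "Windows 10" and reusing the Logitech vendor/product IDs that appear in the xml later in this thread (substitute your own device's IDs from lsusb):

  # Describe the USB device by vendor/product ID:
  cat > /tmp/usb-dev.xml <<'EOF'
  <hostdev mode='subsystem' type='usb' managed='no'>
    <source>
      <vendor id='0x046d'/>
      <product id='0xc336'/>
    </source>
  </hostdev>
  EOF
  # Attach it to the running VM (no VM reboot needed):
  virsh attach-device "Windows 10" /tmp/usb-dev.xml --live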
    1 point
  12. Nobody is going to download each of those individual files you attached. Attach the complete diagnostics zip file to your NEXT post.
    1 point
  13. The parity will show as invalid until you have successfully built parity. It sounds as if you did not leave it long enough to complete.
    1 point
  14. Thanks for the heads up. I removed SOLD OUT from the title.
    1 point
No prob. Once you've installed Windows, install the Nvidia driver (download from their website) and untick the GeForce Experience component of the driver if you don't need it. Once the Nvidia driver has been installed, reboot your VM, make sure the GPU still works after the reboot, then shut the VM down. Then go to the Unraid GUI (over the network - you can even use your phone if there's no PC around) to start the VM again, and again make sure the GPU still works. Then reboot the whole server and start the VM (again, make sure the GPU still works). If it still works after all the above steps then congrats, you just managed to pass through the RTX 2070. Then take that working xml and save it somewhere safe (in case you need to return to it in the future). Only then should you start working on passing through the onboard audio (together with the 2070 HDMI audio). Remember that regardless of whether you pass through the onboard audio or not, the 2070 HDMI audio MUST always be passed through together with the GPU.
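One quick way to save that working xml (a sketch assuming your VM is named "Windows 10"; the backup filename is just a suggestion) is from the Unraid console:

  virsh dumpxml "Windows 10" > /boot/win10-working.xml

/boot is the flash drive, so the copy survives reboots and you can paste it back into the template later if an update mangles things.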
    1 point
No. No. Nothing. If the include/exclude section no longer matches what is physically installed, you will need to update it manually. If you move a disk containing a share to a slot that was previously excluded by that share, new files written to that share will no longer be written to that disk slot; instead they will be written to a disk slot that is listed in the include/exclude rules, following the other allocation rules of split level and minimum free space. If you mean the global include/exclude, strange things can happen: in particular, if you move a disk to a slot that is globally excluded, the files, while still on the disk, will not be visible in the user shares. I don't recommend using include/exclude unless you know what you are doing and why. The parity disk(s) are built from the new list of data disks.
    1 point
I would lean towards the Aorus Master. Unless you need Wifi for your bare metal OS, the Wifi adapter is useless with Unraid. I have also found the onboard Wifi unstable when passed through to a VM, albeit on my X399 mobo and not X570. In that case, you might as well have 2 wired LAN ports. Your VM will connect through a bridge to the wired LAN; each VM has its own virtual adapter (100Gb apparently!) connecting to the same bridge. I also bridged both of my wired LAN adapters together, so if there's a problem with 1 port, I just unplug and replug into the other port. With regard to USB controllers, you definitely need to pass through a controller to your Windows VM if you want to use the Oculus Rift - it would randomly disconnect if connected through libvirt USB.
    1 point
You selected 2f:00.4, which is the onboard audio, instead of 24:00.1 (the 2070 HDMI audio). As I said, you need to pass through all 4 devices to stand a fighting chance of passing through the 2070. Also, you need to be less ambitious for now: let's get the 2070 to work first and then work on the onboard audio. Basically one device at a time. Try this new xml. I still think you'll need the vbios for it to work, but let's keep our fingers crossed. Besides changing the device, I also changed your Hyper-V vendor_id tag to give it a 12-character dummy value; I found this to work better than none with regards to error code 43.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>1ee9bb8d-0c00-3e32-97f0-4f92781514f2</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>12</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='14'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='16'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='18'/>
    <vcpupin vcpu='8' cpuset='8'/>
    <vcpupin vcpu='9' cpuset='20'/>
    <vcpupin vcpu='10' cpuset='10'/>
    <vcpupin vcpu='11' cpuset='22'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/1ee9bb8d-0c00-3e32-97f0-4f92781514f2_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='0123456789ab'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='12' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.160-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:ad:47:1e'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x24' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x24' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x24' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x24' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc336'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc539'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>
    1 point
You won't be able to pass the 2 USB controllers to different VMs since they are in the same IOMMU group. That is a great use for your extra PCIe slot: a USB card. You can pass both internal USB controllers to Windows and a PCIe USB card to the Mac, for example. And you don't need multiple NICs for what you are describing. By default VMs use a virtual bridge to communicate directly with the physical Ethernet port; Unraid and all your VMs can use the same NIC.
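If you want to double-check which devices share a group before buying anything, this generic sysfs loop (standard Linux, nothing Unraid-specific) lists every IOMMU group and its members from the console:

  # Print each IOMMU group followed by the lspci description of its devices:
  for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
      echo "  $(lspci -nns "${d##*/}")"
    done
  done

Both USB controllers showing up under the same group number is exactly the situation described above.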
    1 point
So what I ended up doing to fix it was migrating from the old LimeTech docker to the official PlexInc docker, by following Spaceinvader One's video here:
    1 point
This is really not the recommended way to solve your issues. rc.docker gets updated with new versions of Unraid, and an update will certainly break your modified version. It is always possible to make a feature request and propose an improvement. (Sorry, I didn't go through all the details here, so I'm not sure what the actual changes are.)
    1 point
If you can pass through the RTX 2070 after binding and without a vbios then you don't need it. But it's still recommended to dump and use your own vbios because it helps with stability. Just from experience, though, the RTX 2070 usually needs a vbios. Watch SpaceInvader One's tutorial on Youtube for more details on dumping the vbios.
    1 point
You MUST pass through all 4 devices together; they are all part of your RTX 2070. The easiest way to bind devices is to install the VFIO-PCI Config plugin from the app store, then Settings -> VFIO-PCI.CFG -> tick the 4 devices in group 28 -> click Build VFIO-PCI.CFG -> reboot. Note 1: if you add / remove a PCIe device, make sure you disable the vfio-pci.cfg first, because the PCI bus numbering would change. Then 24:00.0 goes in the GPU section, 24:00.1 in the sound card (audio device) section, and 24:00.2/3 in the Other PCI Devices section of the VM template. You are likely also to need to dump your vbios and/or boot Unraid with the "Geforce 210". Watch SpaceInvader One's tutorial for more details on dumping the vbios. Note 2: given you have 2 GPUs, do not download a vbios from TechPowerUp; dump your own instead.
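For reference, the plugin just writes a one-line file to the flash drive at /boot/config/vfio-pci.cfg. From memory it looks roughly like the line below for the 4 devices in group 28, but treat the exact format as an assumption and let the plugin build the file for you:

  BIND=0000:24:00.0 0000:24:00.1 0000:24:00.2 0000:24:00.3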
    1 point
You have to bind and pass through ALL devices in that group, not only the GPU itself. The other devices are part of your GPU.
    1 point
That doesn't look like a permission problem, but maybe I'm missing something. I assume you tried touching the file to make it appear new? Did you also look for dot files in the output directory to see if something was left lying about from a previous failed attempt? I think touching the file is sufficient to invalidate any dot files. What am I talking about...? Right now my encoder is running and I have this:

root@Tower:/mnt/user/Video/rip# find output/* -type f | grep "Yesterday (2019).mkv"
output/1080/.RsjOZG/Yesterday (2019).mkv

I've seen those files get left behind if the encoding failed, or (naturally) if I restarted the container.
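If it helps, both of those checks are one-liners from the console. The paths here follow my example above and are hypothetical; yours will differ:

  # Make the source file look new again so the encoder re-queues it:
  touch "/mnt/user/Video/rip/Yesterday (2019).mkv"
  # Hunt for leftover hidden work directories from failed runs:
  find /mnt/user/Video/rip/output -type d -name '.*'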
    1 point
  26. Next release. That's my next big chunk of work.
    1 point
Yeah, it's deprecated in favour of the lsio boinc app. It does still work, however; it's just an older version of boinc within it.
    1 point
I was having this same issue: unable to open the history page, user page, etc. You can roll back to the last version that worked pretty easily by changing the docker image tag that gets pulled. In my case I'm using v2.1.44-ls35, which seems to be in a good state.
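In the Unraid docker template that just means appending the tag to the Repository field. A sketch, assuming a linuxserver.io image (the -ls35 suffix is their tag convention; substitute your actual image name):

  linuxserver/<image-name>:v2.1.44-ls35

Removing the tag (or setting it back to latest) puts you on the newest release again once it's fixed.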
    1 point
I just resolved my issue. From the docker's console I ran the following commands, which allowed me to log in:

/usr/local/openvpn_as/scripts/sacli --key "vpn.server.daemon.enable" --value "false" ConfigPut
/usr/local/openvpn_as/scripts/sacli --key "vpn.daemon.0.listen.protocol" --value "tcp" ConfigPut
/usr/local/openvpn_as/scripts/sacli --key "vpn.server.port_share.enable" --value "true" ConfigPut
/usr/local/openvpn_as/scripts/sacli start
    1 point