GHunter

Everything posted by GHunter

  1. Drives arrived quickly and packed nicely. Very trustworthy and was a great experience. Thanks! Would definitely buy from @Yousty again!
  2. Thanks. Yeah, I'll take all of them for $100. PM me to tell me how you wish to be paid and I'll reply with my shipping details.
  3. If you still have these, I'm interested in taking them all. Do you have SMART data for them? Thanks.
  4. Good to know. I tried 2 powered hubs that I had and neither one worked. I'll have to try another brand, I guess.
  5. You can run HDMI and USB cables from your unRAID server to your living room to connect to the TV and USB devices. I get mine from Monoprice.com. USB over Ethernet works well too; I have a 75 ft run from my server to the living room and it works well. VNC or RDP won't give you the performance you need for gaming. You'll want separate USB runs for each device you need connected in the living room, as a USB hub won't work.
  6. You can use group policies for users in Windows VMs to disable shutdown so your users cannot shut down the VM. I always disable the sleep and hibernate functions in my VMs too. That leaves reboot as the only option for VM users.
  7. Nested virtualization support was removed due to problems with users that were running antivirus programs in their VMs. There may have been other problems with it enabled, but I'm not sure. If you use the User Scripts plugin, there is a script to turn nested virtualization on and off. You could also enable it manually with the following commands:

For Intel CPUs:
modprobe -r kvm_intel
modprobe kvm_intel nested=1

For AMD CPUs:
modprobe -r kvm_amd
modprobe kvm_amd nested=1
  8. Are you using HDMI or DisplayPort cables from the GPU to the monitor? I set up my Linux Mint VM quite a while ago, but I think I had to use a DisplayPort cable to get it working until I updated the video drivers. Might give that a try. Oh, and I used SeaBIOS and Q35 when I set up my VM.
  9. No spikes during file transfers. The CPU usage graph in Resource Monitor just bounces around 34 to 44 percent. I tried to run CrystalDiskMark the other day and it could not detect a hard drive, probably because I use a vdisk instead of passing through an SSD like you are doing. If you have something else you'd like me to try, let me know and I'll give it a shot. My VM XML only has a manual edit for emulator pinning. Syslinux has pins isolated for VM use only. LT is working on v6.6, which will be on the latest Linux kernel and have updated QEMU and libvirt. This might give you better performance and stability on the latest AMD CPUs. I'd be anxiously waiting for the RC to see if it helps you.

VM XML:

<domain type='kvm'>
  <name>Emby Theater - Living Room</name>
  <uuid>7ea7321d-aaea-4d52-a39f-4c86eb882ba3</uuid>
  <description>Living room VM running Windows 10 and Emby Theater.</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <emulatorpin cpuset='0,4'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/7ea7321d-aaea-4d52-a39f-4c86eb882ba3_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='1' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/VMs/Emby Theater - Living Room/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Programs/DreamSpark/Windows 10/Windows10All.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Programs/Virtualization ISOs/Stable/virtio-win-0.1.141.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0xc'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xd'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xe'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:c0:96:af'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x147a'/>
        <product id='0xe042'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>

syslinux.cfg:

default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 2
label unRAID OS
  menu default
  kernel /bzimage
  append isolcpus=1,2,3,5,6,7 pcie_acs_override=downstream initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append isolcpus=1,2,3,5,6,7 pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest
  10. @Griz Change your AppData Config Path to: /mnt/user/Docker/binhex-krusader I had the same problem and this fixed it.
  11. As a follow-up, I transferred 10 gigs of large files and then 10 gigs of small files back and forth between my VM and unRAID. My CPU usage stayed between 34 and 44 percent, with no stuttering in my VM during file transfers.
  12. I doubt this would be an issue, but you never know. I could run a few tests on my setup this afternoon to see if I have similar problems; I'm in the middle of a parity check right now. My Win 10 VMs were created new a few months ago, all on Q35 and UEFI, each with their own GPU. They all use a raw image for the vdisk, which resides on an SSD cache drive formatted as XFS. My hardware is in my signature. Maybe this will provide useful info, I don't know, but I'll try it in a few hours. My VMs are mostly used to run Emby Theater in other rooms of the house, and they have no problems streaming Blu-ray quality movies at the same time.
  13. This would be nice to have, especially if the IP is dynamic. I was curious about this, and we already know the MAC address for each VM since it is in each VM's XML file, so we can probably get the IP using the arp command or something similar.
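As a rough sketch of that idea (the helper name is mine, not an unRAID feature), the MAC from a VM's XML can be matched against the host's ARP table:

```shell
# Hypothetical helper: given a VM's MAC address (from the <mac address=...>
# element in its XML), print the matching IP from `arp -n` output on stdin.
vm_ip_from_mac() {
    awk -v mac="$1" 'tolower($3) == tolower(mac) { print $1 }'
}

# Usage on the host:
#   arp -n | vm_ip_from_mac 52:54:00:c0:96:af
```

The ARP table only lists hosts the server has talked to recently, so pinging the broadcast address first (or checking the DHCP server's leases) may be needed before the VM shows up.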
  14. Ah, OK. Thanks for pointing that out. I know some plugins are good about handling that problem for us and others aren't. Since this is @Squid's plugin, I should have known better.
  15. Paste your code into a text editor like Notepad++ or EditPad Lite and double-check it; something extra gets included when you copy and paste. Also, convert to Unix/Linux line feeds (LF) instead of Windows carriage returns (CRLF). This is a common problem when copying and pasting Linux scripts on Windows PCs.
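Besides fixing it in an editor, the conversion can be done on the server itself. A minimal sketch using tr (the script filename is a placeholder):

```shell
# Delete every carriage return, converting Windows CRLF line endings to
# Unix LF. Works as a filter, so it fits into a pipeline too.
strip_cr() { tr -d '\r'; }

# Example: clean up a pasted script (myscript.sh is a placeholder name)
#   strip_cr < myscript.sh > myscript.fixed && mv myscript.fixed myscript.sh
```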
  16. Updated from 6.5.3-rc1 last night. No issues to report. Thanks guys!
  17. My difference in parity check speeds on unRAID v6.5.3-rc1 vs. unRAID v6.5.2 is within 2 minutes. It completed in 10 hours and 29 minutes. It's always been fairly consistent throughout the different unRAID 6 versions.
  18. Stop the VM, then left click on the VM name (not the icon). Then you'll see your disk devices listed. In the capacity column, the size can be clicked on and edited.
  19. If you're running unRAID v6.5.2, then the resize function will work in the GUI. If you want to do it via the command line instead:

fallocate -l 100G "/mnt/user/Windows 10/vdisk1.img"

Change the 100G to the size you need and the path to your vdisk image. Note that you should only increase the size; decreasing it can cause problems.
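Since shrinking is the dangerous direction, a small wrapper can refuse any request that isn't actually a growth. A hedged sketch (the function name is mine; stat, numfmt, and fallocate are standard coreutils/util-linux tools):

```shell
# Grow a vdisk image with fallocate, refusing if the requested size is
# not larger than the file's current size.
grow_vdisk() {
    path=$1; size=$2                      # e.g. grow_vdisk vdisk1.img 100G
    cur=$(stat -c %s "$path")             # current size in bytes
    new=$(numfmt --from=iec "$size")      # requested size in bytes
    if [ "$new" -le "$cur" ]; then
        echo "refusing: $size is not larger than the current size" >&2
        return 1
    fi
    fallocate -l "$size" "$path"
}
```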
  20. VM 1: Windows 10 VM with OVMF and Q35 2.11 for the living room. CPU pair 2/6 (host passthrough) with 4GB RAM. Passthrough NVIDIA GT 730 and Formosa USB remote control device (QEMU XHCI USB 3 controller). Boot to Tianocore times:
      unRAID 6.5.2: 5 seconds
      unRAID 6.5.3-rc1: 4 seconds

      VM 2: Windows 10 VM with OVMF and Q35 2.11 for the master bedroom. CPU pair 3/7 (host passthrough) with 4GB RAM. Passthrough NVIDIA GT 710 and Philips USB remote control device (QEMU XHCI USB 3 controller). Boot to Tianocore times:
      unRAID 6.5.2: 7 seconds
      unRAID 6.5.3-rc1: 6 seconds

      VM 3: Windows 10 VM with OVMF and Q35 2.11 for development. CPU pairs 2/6 and 3/7 (host passthrough) with 8GB RAM. Passthrough NVIDIA GTX 750 Ti and Logitech mouse/keyboard (QEMU XHCI USB 3 controller). Boot to Tianocore times:
      unRAID 6.5.2: 12 seconds
      unRAID 6.5.3-rc1: 6 seconds

      Hardware in my signature is up to date. I'll do a parity check tonight and post my results compared to the previous unRAID version.
  21. Yeah. It is the sorting preference, and it dictates the order in which they are started and stopped. Stopping is done in reverse order. The VMs page has the same sorting feature.
  22. Thanks. I don't have any idea where the root of the problem lies and have reported it in multiple places.