Oldslugger

Members · 38 posts

Everything posted by Oldslugger

  1. This may be of help to anyone who has found themselves with a corrupted vdisk, or whose Windows OS has suddenly decided to format the disk or do something else funky with it. In my scenario I had a stable but slow Windows 10 VM with a Samsung NVME and a GTX 1080 Ti passed through, plus a vdisk on the array; no matter what I tried I could never achieve bare-metal speeds, and even at its best it ran at half that. I persisted with it because I had other VMs running. One day, after a Windows update and forced reboot, it would not boot: it froze and could not be fixed by Startup Repair. The only way I could boot it was to remove the vdisk. I stupidly had not backed up the vdisk, as I presumed all the data was already backed up to my Amazon Drive, forgetting that Amazon Drive had changed their service and broken my folder sync. I then attached the vdisk to another VM, and during boot-up Windows 10 decided to write over the disk and relabel it. To my horror, when the VM booted up the drive was completely empty. I feared all my work had been wiped, and as it wasn't a physical disk I assumed I would not be able to recover anything from it. After a quick Google search I found some data recovery programs and tried out EaseUS Data Recovery Wizard Professional. The software is a quick download and install, and to say I was blown away when I ran it is an understatement; the relief of seeing files appear on a drive which, according to Windows, held nothing was out of this world. The great thing is that you can download the trial version, see if it works, and then pay for a licence once you know you are ready to recover. The whole process of discovery and restore took about a day, but I got my data back from a situation I thought was hopeless. Edit: limetech deleted link. After 2 years with Unraid I will go back to bare metal and look for other alternatives, as the performance hit has become significant.
  2. I watched the new machine load up and it didn't perform a format; it booted up quickly.
  3. I have been running a Windows 10 VM with NVME passthrough as my boot and OS drive, plus a second vdisk to store my files. Whilst editing another W10 VM (my main machine was shut down and I was using an iPad to set it up), I accidentally assigned that second vdisk to the new VM. I didn't realise it until the other machine booted up and I went to check the drives, where I could see that the new VM had changed the drive name and all the original folders were gone. I stopped that machine, but now I cannot add the vdisk back to my main machine as Unraid gives me an error. Does anyone know whether the change is permanent, and can it be fixed?
  4. I have a GTX 1080 Ti and managed to pass it through without any issues; I did not need the ROM file. Try deleting the lines highlighted in black and letting KVM reassign them, to see if that works for you (see the sketch below).
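     The highlighted lines are not visible in this quote, but they are typically the guest-side <address type='pci' .../> entries inside each passed-through <hostdev> block. A rough sketch of one entry with that line removed (the source bus/slot values are placeholders for wherever your card actually sits):
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <!-- host address of the GPU; copy this from your own system devices list -->
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <!-- the guest-side address line that used to follow here has been deleted,
            so libvirt/KVM assigns a fresh one when the VM definition is saved -->
     </hostdev>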
  5. The CPU is 2 x Intel Xeon E5-2630 v3. I passed through 4 paired cores and 24 GB of DDR4 ECC RAM, as well as the NVME and the GTX 1080 Ti. What I can't understand is that there is no increase in fps when I drop from UHD to 1080p, even though I am running the latest firmware. I set both CPU and GPU to performance mode and nothing really changes, other than GPU utilisation dropping from 90%+ to around 40%.
  6. Hi, I have watched Gridrunner's videos and managed to pass through an NVME as my OS drive and a GTX 1080 Ti to a Windows 10 VM. The performance is really poor: I cannot get above 60 fps in games such as BF1 and Far Cry 5, or in older games. At times the fps can exceed 200, but once a game loads it drops to between 25 and 50 fps regardless of whether I use 1080p or UHD. I have installed the latest NVIDIA software without any issues. Because the NVME is passed through, I can boot Windows 10 bare metal without the Unraid VM environment and the card then performs as expected; as soon as it boots through Unraid, performance just dies. I have no idea why this is the case; any help would be appreciated.
  7. When I say slow, I mean really, really slow: it is currently taking 10 minutes to open a photo app which normally takes seconds!
  8. Since one of my Windows 10 VMs was crashing intermittently, I thought I would try 6.4 RC18f. So far it has been stable, but all VMs are running very slowly, even the one with NVME passthrough and 24 GB of RAM. Is anyone experiencing similar issues, or am I just unlucky with my config?
  9. Have you tried editing the VM XML with something like the snippet below? After saving it, when you press edit on the VM you should see both ports checked in the Other PCI Devices drop-down list.
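     A minimal sketch of the idea, assuming the two ports are functions 0x0 and 0x1 of the same card (the bus/slot values below are placeholders; use the addresses from your own system devices list):
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <!-- first port/function of the card -->
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
       </source>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <!-- second port/function of the card -->
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
       </source>
     </hostdev>
     This mirrors the pair of hostdev entries in the full XML further down, which pass through function 0x0 and 0x1 of the same device.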
  10. Over the last 2 days my Windows 10 VM has kept crashing intermittently. The VM lives on an NVME which is passed through, along with an AMD W7100 video card. I have updated all the software and it keeps doing it. Here is the log file from just before it crashed again.
  11. Urgent help needed. My Unraid box has been running along nicely, with the acceptable quirk that I could never turn off my W10 NVME machine without having to reboot the whole system. After much nagging by Windows I updated Windows 10; it went to shut down, and normally I would then reboot the system and get it going again, but this is now not happening. I have no idea which part of my VM config has been messed up, but one constant is that when I try to assign the primary vdisk as a block NVME device it keeps giving me an error that the device was not found, even though it appears in my unassigned devices, and it also shows up when I run root@Tower:~# udevadm info -q all -n /dev/nvme0n1 which gives me: I can boot my other VMs, which are on SSDs. My XML is: Any help would be much appreciated.
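     For reference, a block-device vdisk entry in the VM XML usually takes roughly the shape sketched below (the target name and cache mode are only examples; the source should point at whatever device node the NVME appears as, /dev/nvme0n1 in my case):
     <disk type='block' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <!-- raw NVME device node passed in as a block device instead of a vdisk image file -->
       <source dev='/dev/nvme0n1'/>
       <target dev='hdc' bus='virtio'/>
       <boot order='1'/>
     </disk>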
  12. Thank you all for your replies. My system is:
     Asus Z10PE-D16 WS
     64GB DDR4 ECC RAM
     Asus HD5450 (wife's Windows 10 VM); somehow the Samsung EVO 840 500GB is now showing up as a QEMU disk
     AMD FirePro W7100 (my Windows 10 rig); managed to pass through the Samsung 950 Pro 512GB NVME, but it disappears on restart or shutdown
     Windows 10 VM with no GPU, used for the Blue Iris security camera suite; this goes into pause mode constantly and is accessed from my rig via RDP
     I used the new OVMF, which was the only way I could get the NVME to boot now, though I got it to boot in the past without this new BIOS. The NVME is attached to a PCIe card which I checked was resettable, so I am not sure why it is not resetting.
  13. I am not an IT person, but by following guides and searching through the forums I have managed to build an Unraid system that serves 3 Windows 10 VMs. Two of those used to have their SSD and NVME drives passed through and recognised as such by Windows; I am not sure what has happened, but after various Unraid upgrades they became QEMU disks, even though I haven't made any changes to the syslinux or XML files. Following Gridrunner's helpful videos and the subsequent discussions I have managed to pass through my NVME drive and use it as a boot drive for Windows 10. The issue I have is that if I restart Windows 10 the drive disappears, and the only way I can get it back is to reboot the whole Unraid server; any help would be much appreciated. Here are my files: Devices, Syslinux, XML
  14. I managed to pass through an SSD and an NVME to 2 different Windows 10 VMs, which were working great: W10 recognised them and I was able to install Samsung Magician and the drivers. Since I updated to 6.3.3 these drives are now listed as QEMU hard disks. The only reason I noticed was that I was getting high latency on YouTube videos, and when I checked with latency programs they identified the drive issues. I have tried to redo the passthrough but am still getting QEMU drives instead of the SSD. Any suggestions?
  15. Thank you very much Squid, that did the trick!!!
  16. I stupidly assigned the network card used by Unraid to a VM which has autostart enabled, and now I can't edit the VM to make changes: as soon as I start the array the VM starts, I lose the network card, and I can't make changes via the webUI. I tried to locate the VM XML but cannot find it, and I tried locating the libvirt autostart file without success; I looked in /etc/libvirt-/qemu/network/autostart. Would appreciate some urgent help please.
  17. Hi, I have been running Unraid with 2 VMs for 6 months, but recently had a major crash which killed one of the VMs. I would like to clone the remaining Windows 10 VM and move it to a physical machine. How can I do this?
  18. I have 2 onboard LAN ports which use the Intel I210 and a 4-port card using the I350. unRAID recognises all 6 ports and they are all in different IOMMU groups, but I can only bond the 2 onboard LANs. How can I get unRAID to use the 4-port card as well?
  19. I ran into a similar issue with my AMD W7100, until I noticed that the indicator lights on my keyboard were acting strangely. I did a lot of experimenting and found that even though the screen was black, Windows was still trying to load, but the GPU did not get an activation signal. My fix was:
     1) Force stop the VM.
     2) Change the VM to VNC. One of 2 things will happen: you will either boot straight into Windows as normal, or you will get an EFI boot screen. If you get the boot screen, type EXIT, then choose Boot Device, highlight either EFI device or EFI device 1, and it will boot up Windows.
     3) Shut down Windows from within VNC.
     4) Restart your unRAID server.
     5) Edit the VM to boot with your GPU again, and it should now kick in and boot as normal.
     This is an issue with some AMD cards not shutting down properly; it could apply to your card as well, so it is worth a try.
  20. My experience:
     HD6450 - worked straight out of the box, no tweaking required. Used i440fx & SeaBIOS; can handle as many cores and as much memory as you want to throw at it.
     FirePro W7100 - was very frustrating until I realised you can't force stop, but easy once you know what to look out for. Use OVMF & i440fx. No memory or core restriction either.
  21. My system has been stable but a little slow, so I bought a TP-Link TL-SL 2218 managed gigabit switch. My ASUS Z10PE has 2 onboard Intel I210 NICs; using just 1 I can get around 60MB/s transfer, and with the 2 bonded I get 160MB/s. I bought a 4-port I350-based card but cannot get it to bond at all; I have been using LACP, and the switch only sees the onboard NICs as being in a LAG group. Any ideas as to what I have done wrong?
  22. I have the same as you, 512GB, but nowhere near as fast :-(
  23. Will give that a go too, thank you Saarg.
  24. As nobody has answered this, I have finally managed to figure out how to do it.
     The HD6450 card was a breeze to set up using SeaBIOS and i440fx; it works very well and is how I thought unRAID would work. The FirePro W7100 was a dog in comparison; after wasted days and a lot of hair pulling I finally realised what was happening and made some changes. The FirePro will only work with OVMF & i440fx. You must set up the VM and do all the software installation via VNC; once everything is set, power off the VM and then re-edit it (still OVMF & i440fx) to assign the GPU. This way you can assign as many cores and as much memory as you wish (I have 16 cores & 32G RAM). When you start up the VM you can then update the driver via Device Manager.
     The only issue with this card, and possibly similar Radeon cards, is that when you stop the VM it will not reset the card, so when you restart you get a black screen. At this point on many occasions I thought it had crashed, but it hasn't. You will need to force stop the VM and switch to VNC. You will then see the UEFI boot screen; what you need to do is type exit, go to the boot menu, choose boot device, highlight the other device, and it will boot into Windows 10 normally. Then just use Windows to shut the VM down and re-edit the VM to use the GPU once more; this time when it starts, the GPU starts up and everything is normal again. I am sure there are scripts out there to automate this, but I am not a programmer, and I am just happy to have a functional VM which utilises the full power of the FirePro and can run at 4K.
     I have also managed to use LACP, which has nearly doubled my transfer rate from 40-60MB/s to over 110MB/s; extremely useful for large video and photo files. I hope this helps anyone who has struggled like me with these FirePro cards. My XML is below:
<domain type='kvm' id='3'>
  <name>Tuan 10</name>
  <uuid>e66559aa-092f-b006-1486-d5fa01e45ac9</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <vcpupin vcpu='8' cpuset='8'/>
    <vcpupin vcpu='9' cpuset='9'/>
    <vcpupin vcpu='10' cpuset='10'/>
    <vcpupin vcpu='11' cpuset='11'/>
    <vcpupin vcpu='12' cpuset='12'/>
    <vcpupin vcpu='13' cpuset='13'/>
    <vcpupin vcpu='14' cpuset='14'/>
    <vcpupin vcpu='15' cpuset='15'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='16' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisks/Tuan 10/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVdisks/Tuan 10/vdisk2.img'/>
      <backingStore/>
      <target dev='hdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/Win10_1511_English_x64.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio-win-0.1.110.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:ac:25:bb'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Tuan 10.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x05dc'/>
        <product id='0xb049'/>
        <address bus='4' device='6'/>
      </source>
      <alias name='hostdev2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0bda'/>
        <product id='0x0307'/>
        <address bus='3' device='14'/>
      </source>
      <alias name='hostdev3'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x056a'/>
        <product id='0x0027'/>
        <address bus='3' device='8'/>
      </source>
      <alias name='hostdev4'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1b1c'/>
        <product id='0x1b20'/>
        <address bus='3' device='3'/>
      </source>
      <alias name='hostdev5'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0748'/>
        <address bus='3' device='13'/>
      </source>
      <alias name='hostdev6'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>