About Nigel


  1. Two things I have found. If your VM images are in a share with Copy on Write set to Auto, create a new share with this disabled, move/copy the images to the new share, and update your VMs to point at the new images. This reduces the write rate considerably. I made these changes on all 3 VMs, and 2 of the 3 dropped to practically nothing while idle, as you'd expect. The 3rd VM was still stubbornly writing at a constant 3MB/s (~253GB/day). I finally tracked this down to the Origin game client: as soon as I signed out or shut the Origin client down, the excessive writes disappeared.

     Lesson learned for point one - always read the tool tips, as they say to set Copy on Write to No for VM images. I still think something weird is going on, because inside the Windows VM, Origin was only registering ~0.1MB/s of writes, not 3MB/s. My suspicion is that some kind of IO activity is being amplified dramatically at the filesystem layer on btrfs pools. This is on the new build server; now I need an outage window on my older server to try to stop the crazy writing there and see if I can replicate the above success.

     Edit: Also noticed the temperature on my SSD pool has dropped by 2 degrees since making these changes.
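A quick back-of-envelope check of the rates quoted above (a sketch using the figures from the post; "GB" works out as GiB here):

```python
# Sanity-check the write rates quoted above. 3 MB/s on the host vs
# ~0.1 MB/s reported inside the guest suggests roughly 30x apparent
# amplification at the btrfs layer.
SECONDS_PER_DAY = 86400

def mb_per_sec_to_gib_per_day(rate_mib_s: float) -> float:
    return rate_mib_s * SECONDS_PER_DAY / 1024

host_rate = 3.0    # MB/s seen on the Unraid host
guest_rate = 0.1   # MB/s reported inside the Windows VM

print(round(mb_per_sec_to_gib_per_day(host_rate)))  # 253 (GiB/day, matches the post)
print(round(host_rate / guest_rate))                # 30  (apparent amplification)
```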
  2. This has been running since last night (roughly 12 hours). So that's 2 Windows 10 VMs responsible for 583G of writes in 12 hours.
  3. Brand new server (6.8.3) and suddenly noticed 111 million writes to my SSD pool (2 * 2TB WD Blue 3D NAND) after little more than a week.

     Power on hours: 205 (8d, 13h)
     233 NAND_GB_Written_TLC  -O--CK  100  100  ---  -  1393
     234 NAND_GB_Written_SLC  -O--CK  100  100  ---  -  6789

     Both drives are almost identical (as you'd expect). I only have some Windows 10 VMs running, which I stopped and restarted. They seemed to write a lot during Windows boot and then calm down, but over time the rate has increased again. No dockers are running, only VMs, so I do not think this is exclusively a docker issue. The loop2 process is nowhere to be seen in iotop; it's the VMs (which are idle most of the time). This is a disaster in the making and will toast the drives long before the 5 year warranty expires. Where is LT on this thread? Is it worth a new one? This is surely the single most important thing to look at right now, as customers out there with cache pools might be silently ruining their hardware.
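For anyone wanting to project wear from their own `smartctl -A` output, a rough sketch from the figures above (the 500 TBW endurance rating is my assumption for a 2TB WD Blue 3D NAND - check your drive's datasheet, and note the SLC cache writes add further NAND wear on top):

```python
# Rough endurance projection from SMART attribute 233 (NAND_GB_Written_TLC)
# and the power-on hours above. 500 TBW is an assumed rating, not from the post.
power_on_days = 205 / 24          # 205 power-on hours
tlc_written_gb = 1393             # attribute 233, NAND_GB_Written_TLC
rated_tbw_gb = 500 * 1000         # assumed 500 TBW endurance, in GB

daily_gb = tlc_written_gb / power_on_days
print(round(daily_gb))                           # GB of TLC NAND written per day
print(round(rated_tbw_gb / daily_gb / 365, 1))   # years to exhaust the rating at this pace
```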
  4. Wow - this thread is a monster, and I have just spent hours and hours getting a simple Nvidia GT710 passed through to a Windows 10 VM. I'm pretty experienced with building VMs with GPUs, so this came as a surprise. I certainly won't buy another HP for use with Unraid! For clarity, I have a Proliant DL360e Gen 8 running Unraid 6.8.3. In the end, here is what I recommend, which got me working.

     Before you do anything, follow the instructions from 1812 to obtain the HP Proliant build of Unraid. Seriously - don't waste time on RMRR or "Operation not permitted" errors (what I got). Try the HP Proliant build first, as it sorted everything out for me straight away at the Unraid level. Doing a manual patch when you want to upgrade in the future is nothing compared to the pain if you don't use this, IMHO. https://github.com/AnnabellaRenee87/Unraid-HP-Proliant-Edition

     Set your boot options in the Syslinux configuration (replacing the PCI IDs where necessary): pcie_acs_override=id:10de:128b,10de:0e0f vfio_iommu_type1.allow_unsafe_interrupts=1 (possibly you don't need the ACS override - I haven't dared take it out as I have a working solution).

     The main problem I experienced was the code 43 error in Windows. No matter what I tried, I could not find a solution until I read on the forums about GOPupd. I didn't have much luck until I did the following:
     - Installed the GT710 in my Windows 10 desktop.
     - Ran GPU-Z to export the BIOS (my model was not available at techpowerup).
     - Used GOPupd to inject the UEFI code into the BIOS file (https://www.win-raid.com/t892f16-AMD-and-Nvidia-GOP-update-No-requests-DIY.html).
     - Used HxD and the instructions from SpaceInvaderOne to remove the extra header bytes at the top of the rom file.
     - Copied this rom file to my Unraid box.
     - Moved the GT710 back into the Proliant.
     - Added the GPU and its sound card function to the VM using the new rom file.

     Once I had this ROM file, it was all good. I can operate the VM as much as I do on my other servers with GPU passthrough. No restrictions or issues found yet. Thanks to all contributors on this forum and elsewhere, or I would have been lost.
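The HxD header-trimming step can also be scripted. A minimal sketch, on the assumption that the dump carries a vendor header before the option ROM proper (a valid option ROM starts with the signature bytes 55 AA) - verify the trimmed file, e.g. with rom-parser, before handing it to a VM:

```python
# Hedged sketch of the HxD step: trim everything before the first
# 55 AA option-ROM signature. Verify the result before using it.
def strip_rom_header(data: bytes) -> bytes:
    sig = data.find(b"\x55\xaa")      # first option-ROM signature
    if sig < 0:
        raise ValueError("no 55 AA option-ROM signature found")
    return data[sig:]

# usage (paths are examples):
# with open("gt710-gopupd.rom", "rb") as f:
#     trimmed = strip_rom_header(f.read())
# with open("gt710-clean.rom", "wb") as f:
#     f.write(trimmed)
```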
  5. Did you try the HDMI dummy plug suggested by the previous poster? If you want to run headless, you need one.
  6. Bit of a necro here... but I was trying the same and came across this. For the benefit of any new visitors, I managed to get this working. Steps to recreate:
     - Create a Windows 10 VM using the standard Unraid VNC graphics.
     - Connect the DisplayLink adapter to a USB port, with a display emulator dummy plug connected to it.
     - Pass the USB device through to the VM.
     - Install the DisplayLink drivers.
     - Go to display settings and make display 2 (your DisplayLink screen) the primary.
     - Shut down the VM.
     - Edit the VM in XML view to remove the Unraid VNC, i.e. the graphics and video tags.
     - Save and start the VM.

     The VM now has the DisplayLink adapter as primary, and you get access to the resolutions supported by your display emulator and DisplayLink adapter with a suitable VNC solution (I use RealVNC). Performance is also better over RDP and VNC. But it's not the same as installing a hardware GPU, obviously!
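For reference, the graphics and video tags to delete in the VM's XML view look roughly like this (a sketch from a typical libvirt domain definition - your ports, keymap, and model values will differ):

```xml
<!-- Remove both of these elements from <devices> to drop the Unraid VNC console -->
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
</video>
```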
  7. Just wanted to say I ended up just building another Unraid server, but your suggestion did turn up a lot more information than I had, so thanks!
  8. Apologies for a rather late reply - I was looking to see the support level of a Quadro K4000 on the forums and came across this. I use GPU passthrough on Unraid extensively with GeForce cards, and they all need a display emulator dummy plug to function. The symptoms you describe are those of a missing dummy plug. They cost a few £ on eBay and Amazon in the UK. Just plug it in, and you should be able to see the display output. Edit: Further reading suggests the K4000 is too old and may still not work. Anything Kepler onwards should be fine. You will still need a display emulator!
  9. I have a VM on my Unraid box that I wanted to temporarily move to running on either my desktop via VirtualBox, or on an ESX hypervisor. It might be a google fail, but I cannot find instructions anywhere on how to do this. The only threads I find are for importing into Unraid, not exporting out of it. Has anyone done this, or know of a method?
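One avenue worth trying (a sketch, not something I've confirmed on a live box): Unraid vdisks are plain raw images, so qemu-img, which ships with the QEMU tooling Unraid already uses, should be able to convert them for other hypervisors. Paths and VM name here are examples, and the `echo` prefix makes this a dry run - remove it to actually convert:

```shell
# Hedged sketch: convert an Unraid raw vdisk for VirtualBox or ESXi.
# 'echo' makes these dry runs; remove it to run the conversions for real.
src="/mnt/user/domains/MyVM/vdisk1.img"

# VirtualBox: raw -> VDI (VirtualBox's native format)
echo qemu-img convert -p -f raw -O vdi "$src" "${src%.img}.vdi"

# ESXi: raw -> streamOptimized VMDK, suitable for uploading to a datastore
echo qemu-img convert -p -f raw -O vmdk -o subformat=streamOptimized \
  "$src" "${src%.img}.vmdk"
```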
  10. I suspect you'd have a far better experience getting a thin client micro PC running linux or something. It's actually got me interested in a little hobby project: finding out how cheap and small a 4k game streaming client I could build!
  11. Hardware is in my signature. It's a rack mount server that I used for crypto mining during the good years, now retired and upgraded for more interesting use running games and other VMs. I game just fine at 4k - playing Assassins Creed Odyssey at the moment.

     The Moonlight app streams over ethernet, using the hardware compression capability of the card to compress at the server end and decompress at the client end. So it's not quite the perfect quality of connecting directly over DisplayPort, but it still looks great to me. You can use any old graphics card in your client PC that supports h264 or HEVC decompression. Bandwidth for 4k is 80Mbps, though the Moonlight app allows up to 150Mbps (which I use). If you have a gigabit network in your house it's fine; if you run over powerline I can't comment, as that might introduce latency.

     If you need to connect devices like a USB joystick, you can use USB over ethernet (my son uses a steering wheel, for example). The software is called VirtualHere, and you'd treat your PC as the server and your VM as the client. My server is in my garage, so my office is free from noise, and from heat in the summer.
  12. Your question on whether it is mature enough... it sure is. I have 3*1080ti cards, each allocated to a gaming Windows 10 VM for myself and my kids. Check out the excellent videos from Space Invader One on how to do it. I game in 4k, the kids in 1080p. To game, you can use the Moonlight application (free), which uses Nvidia streaming. You can use remote desktop by sharing the mstsc.exe application via the Nvidia GeForce app. You might also need a display emulator (really cheap) plugged into a DisplayPort or HDMI port on the GPU so the VM thinks there is a monitor attached to it.
  13. Bump in case anyone new has any ideas?
  14. I'm trying to import some Oracle Enterprise Linux VMs and they won't boot beyond grub. If I use the native kernel (UEK) it just hangs, but if I try to boot the Red Hat compatible kernel I get more information, as per below. I've tried starting from the VMDK directly and converting it to img, but no luck with either (same problem). I think it's probably down to the boot volume being on LVM, but that is an assumption. The VM is Q35-3.0 with SeaBIOS. The source VMDK is scsi type, and I've tried both that and SATA as the boot bus, with the same error each time. This is a working VM on ESXi exported via an OVF directory structure, so I know the source is sound. Any help would be massively appreciated.
  15. Hi Jon, I did some more experimentation and discovered that the hardware raid was indeed completely ignored by Unraid, so I moved on from that idea pretty quickly. I've now settled on 2*2TB rotational (data+parity) with 2*1TB SSDs in the cache pool, giving me 1TB usable. I figured the cache pool would be a little weird with an odd number of drives (5), so I just went with protected rotational storage (which spins down most of the time). After migrating gaming over to the server I freed up 2*240GB SSDs from other PCs, and they have also gone into the cache, giving me 1.2TB of mirrored space. I really like the ability to add differing sizes to the pool - very flexible. I'm assuming that replacing these smaller drives in the future will be easy if we need more space, but I'm currently running at 750GB free on the SSDs, so I'm comfortable for the moment. I've also got some more work-related VMs to build which will use the SSD for boot and rotational storage for data, so those drives will be used too.

     Everything is nice and quick, and performance is pretty darn close to bare metal from a user experience point of view. Pretty happy customer with Unraid right now. I can see a lot of mixed opinions on the internet about it, but GPU passthrough has made it just about as perfect as possible. If ESX allowed passthrough of consumer GPUs they would have had my custom, so their loss. Now to figure out how to migrate vmdk to Unraid images...
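Incidentally, the 1.2TB figure follows from how btrfs raid1 stores two copies of every chunk across mixed-size drives. A quick sketch of the usual rule of thumb (usable = min(total/2, total minus the largest drive)):

```python
# Rule of thumb for btrfs raid1 usable capacity with mixed drive sizes:
# each chunk exists twice on different devices, so usable space is
# min(total/2, total - largest). Sizes in GB; figures from the post.
def btrfs_raid1_usable(sizes):
    total = sum(sizes)
    return min(total / 2, total - max(sizes))

print(btrfs_raid1_usable([1000, 1000]))            # 1000.0 -> the 1TB pool
print(btrfs_raid1_usable([1000, 1000, 240, 240]))  # 1240.0 -> the ~1.2TB pool
```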