
billington.mark

Everything posted by billington.mark

  1. After seeing that lstopo/hwloc had been added to the GUI in 6.6.1, I decided to give it a go today... The server boots and the array comes up (including docker and VMs), but the login screen never loads properly, and each time it tries, there's a noticeable 0.5-second stutter in the VMs. unraid-diagnostics-20181003-1319.zip Video of what the GUI is doing on the screen: https://youtu.be/Gw_U-OLKIIw
  2. Also, if someone can post (or PM if it's not allowed) a direct link to the 6.6rc1 download (as I don't have access to an unraid GUI), that would be very useful! EDIT: got the URL out of the plg
  3. Update: Resolved this by renaming /boot/config/plugins/dynamix.plg to dynamix.xxx. Upgraded via the GUI, and on boot-up no VMs start. Frustrating, as I run my home router on a pfSense VM hosted by unraid. 'virsh list' on the command line just sits there and doesn't list any domains. Couldn't find anything significant in the syslog when I went through it. Decided to roll back to 6.5.3, and now the array won't start at all, so I'm going to be in trouble when the other half comes home! Diag zips attached. tower-diagnostics-20180905-1313 (rollback to 6.5.3).zip unraid-diagnostics-20180905-1244 (Post upgrade to 6.6rc1).zip
  4. Can you paste your syslinux config (with the device stubbed), and post another diag zip after trying to start the VM with the device passed through?
  5. Can you post your VM's XML? Also, have you stubbed the NVMe device? From looking at your logs, it doesn't look like it has been.
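For anyone else landing on this thread: stubbing a device so the vfio-pci driver claims it at boot is usually done on the flash drive's syslinux append line. A minimal sketch — the vendor:device ID below is a placeholder, so substitute the IDs your own `lspci -nn` reports for the NVMe controller:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=144d:a808 initrd=/bzroot
```

Multiple devices can be listed comma-separated after `vfio-pci.ids=`. Reboot after editing for the stub to take effect.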
  6. I think they're getting at the point that you won't be able to boot from an NVMe device on this card, as it won't be detected as a bootable device in the BIOS... Which, if you're using it with unraid, isn't an issue. That's how I read that response anyway.
  7. Page 4-10 of your motherboard manual clearly says that bifurcation is supported on a couple of your PCIe slots. Check your BIOS settings and see if you have the option to change things on your x16 slots.
  8. So it's arrived and been fitted, and... success! You DO NOT need an X299-based motherboard for this card to work, only a motherboard/CPU capable of splitting an x16 slot into x4x4x4x4. This was found in my PCIe settings in the BIOS. Even better, each NVMe device is still in its own IOMMU group: IOMMU group 47: [144d:a802] 81:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 (rev 01) IOMMU group 48: [144d:a808] 82:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981 The only thing I had to do after installing was update the XML of the VM which has the 970 Evo passed through to it with the new PCIe address. So, I've freed up a PCIe slot, and have the option to add 2 more NVMe devices in the future without much hassle at all.
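If you want to check your own grouping after enabling bifurcation, the groups can be read straight out of sysfs. A small sketch (standard Linux sysfs layout; `lspci` comes from pciutils, and if it's missing only the raw PCI addresses are printed):

```shell
# Walk /sys/kernel/iommu_groups and print each group with its devices.
for g in /sys/kernel/iommu_groups/*; do
  # If the glob matched nothing, IOMMU is off or unsupported on this host.
  [ -e "$g" ] || { echo "No IOMMU groups found (IOMMU disabled or not supported)"; break; }
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    # Describe each device; fall back to the bare PCI address without lspci.
    echo "  $(lspci -nns "${d##*/}" 2>/dev/null || echo "${d##*/}")"
  done
done
```

Each NVMe controller showing up in its own group is what makes per-device passthrough possible.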
  9. From my google-fu on this, it seems PCIe bifurcation is a feature of most CPUs from the last few years; it's just a case of your BIOS giving you the option to configure the x16 slot(s) as x4x4x4x4, which only seems to be available on higher-end gaming or workstation motherboards. It's a public holiday in the UK today, so I'll not get this delivered until later in the week to test, unfortunately.
  10. I get the option to change the lane layout for my x16 slots (x16, x8x8, or x4x4x4x4), so I'm hoping it's supported... I've ordered the part and I'll report back once I've fitted it. For anyone curious, this is the setup I'm using: ASRock EP2C602-4L/D16, 2x E5-2670
  11. So I spotted this today... a single-slot x16 PCIe expansion card for 4x NVMe drives: https://www.scan.co.uk/products/asus-hyper-m2-card-pcie-30-x16-4x-m2-pcie-2242-60-80-110-slots-intel-vroc-support-for-asus-x299-moth https://www.asus.com/uk/Motherboard-Accessory/HYPER-M-2-X16-CARD/ Does anyone have one of these running on a non-X299 board? Would be very, very useful if that's the case! At that price I'm tempted to just get it and see if it works anyway... Ideally it would still split each NVMe into its own IOMMU group too... but I think I'm asking a bit much there!
  12. Like with any high-profile system you rely on... BACKUPS are key. Data on the disks (other than cache) is protected by parity. Backing up other things like VM XMLs, Docker templates, Docker appdata, plugin data, etc. can easily be automated by plugins. On top of that, I have screenshots printed out of which drives are in which 'slots' in the UI, so I can make sure everything is the same if I needed to start from scratch and didn't have access to any data on my drives (in case of a USB failure). I'm pretty confident I could be back up and running as-is without any issues within an hour if my USB drive decided to give up.
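The flash-drive side of that can be automated with a one-liner on a cron schedule. A minimal sketch, assuming the usual Unraid paths (`/boot/config` for the flash config, a `/mnt/user/backups` share as the destination — adjust both to your setup):

```shell
# Archive the flash config to a dated tarball. SRC/DEST can be overridden
# via the environment; the defaults below are assumptions about your layout.
SRC="${SRC:-/boot/config}"
DEST="${DEST:-/mnt/user/backups/flash}"
mkdir -p "$DEST"
# -C changes into the parent dir so the archive contains a clean "config/" tree.
tar -czf "$DEST/flash-backup-$(date +%Y%m%d).tar.gz" \
    -C "$(dirname "$SRC")" "$(basename "$SRC")"
echo "Backed up $SRC to $DEST"
```

Restoring is then just extracting the tarball onto a freshly prepared USB stick.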
  13. You could also try using a newer version of the OVMF BIOS: download from here https://www.kraxel.org/repos/jenkins/edk2/ - edk2.git-ovmf-x64-xxxxxx.noarch.rpm. Extract it with 7-Zip and pull out the OVMF_CODE-pure-efi.fd and OVMF_VARS-pure-efi.fd files, put them in a share somewhere like /mnt/user/VMData/KMDS_Ephesoft/, and assign them in the XML manually (replacing the existing values): <os> <type arch='x86_64' machine='pc-q35-2.9'>hvm</type> <loader readonly='yes' type='pflash'>/mnt/user/VMData/KMDS_Ephesoft/OVMF_CODE-pure-efi.fd</loader> <nvram>/mnt/user/VMData/KMDS_Ephesoft/OVMF_VARS-pure-efi.fd</nvram> </os>
  14. Some interesting info here regarding virtualisation on Ryzen: https://www.redhat.com/archives/vfio-users/2017-March/msg00005.html It seems some patches will be introduced to get things working nicely in the future. Worth mentioning that any patches applied will need to wait for LimeTech to implement in updates, whether that means updating the kernel, libvirt, or QEMU...
  15. The dmidecode command returns EP2C602-4L/D16. I'll have a read through and see what I can come up with. Mark
  16. Hi guys, I don't seem to have the option for fan control in the plugin settings; the enable drop-down is disabled. My fans and RPMs are showing correctly when running ipmi-sensors -t fan. Any ideas where to start with this? I have an ASRock Rack EP2C602-4L/D16
  17. Is there a way to build the install ISO on something other than macOS?
  18. Is there a way to make the installer ISO on Linux or Windows? Can someone link me...
  19. ARM virtualization could maybe open the door for an Android TV VM too... hmm!
  20. I've recently done a server rebuild and come to the point where I want to reinstall this Plex docker... Is it still necessary to set up the temp/transcode folder to reside in memory, so it doesn't fill up the docker image and has faster read/write when transcoding? If so, what variables and settings need to change? Thanks all
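For anyone else rebuilding and wondering the same: the usual approach is to back the container's transcode directory with RAM rather than the docker image. A hedged sketch only — the image name, size, and `/transcode` path are assumptions, so match them to your own template, and add your normal ports, volumes, and variables:

```shell
# Mount a RAM-backed tmpfs at /transcode inside the container so transcode
# scratch files never touch the docker image (size is an assumption).
docker run -d --name plex \
  --mount type=tmpfs,destination=/transcode,tmpfs-size=4g \
  linuxserver/plex
```

Then point Plex's "Transcoder temporary directory" setting at the same `/transcode` path inside the container.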
  21. If this was me, I'd temporarily back up what you wanted to put back on cache post-pool-change into the array, reduce to one device, reformat the cache drive, and copy the data back. Obviously beforehand, disable VMs and/or docker so nothing tries to write to cache while you're reformatting and copying data back. The GUI has always seemed geared towards adding more disks to the array, or more disks to cache, but not the other way around. EDIT: See linky below
  22. See here: https://wiki.archlinux.org/index.php/KVM It seems it's possible, but I'm not sure if the way it's implemented inside UNRAID will allow this to be done; proceed at your own risk
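A quick way to see whether nested virtualization is already on is to read the KVM module parameter. A sketch, assuming an Intel host (AMD hosts use `kvm_amd` in place of `kvm_intel`):

```shell
# Report whether the kvm_intel module currently allows nested guests.
if [ -r /sys/module/kvm_intel/parameters/nested ]; then
  # "Y" or "1" means nested virtualization is enabled.
  cat /sys/module/kvm_intel/parameters/nested
else
  echo "kvm_intel not loaded (AMD host, or KVM unavailable)"
fi
```

Enabling it persistently usually means a `kvm_intel nested=1` module option, but how that survives a reboot on unraid's RAM-based root is exactly the open question above.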
  23. That's interesting... Did you modify the drivers in any way or just run through the normal setup?
  24. I had this battle this weekend. It seems the latest drivers have more checks in them which detect you're running a VM and refuse to start. If you're exclusively using the GUI to manage VM options, make sure you've disabled Hyper-V in the advanced features. After that, add this to your XML (replace the <features> part with the below). You can set the vendor_id value to whatever you want; avoid using anything KVM-, QEMU-, or UNRAID-related, as this value is checked by the nvidia driver once Windows starts up: <features> <acpi/> <apic/> <hyperv> <relaxed state='on'/> <vapic state='on'/> <spinlocks state='on' retries='8191'/> <vendor_id state='on' value='none'/> </hyperv> <kvm> <hidden state='on'/> </kvm> </features> After that's done, driver version 368.81 was the latest I could get to install and start. Later drivers didn't work, as they must have additional checks to see if you're running a VM. Worth mentioning that I was experimenting with fresh installs of Windows each time to work out exactly what I needed to do. I didn't have the battle of having to remove or downgrade old nvidia drivers each time I made a change, so just adding this to the XML and installing the driver in a current "code 43" VM might not work. Good luck