Everything posted by billington.mark
-
[6.6.1] GUI doesn't ever finish loading, causes stutter in VMs
billington.mark commented on billington.mark's report in Stable Releases
Changed Status to Solved -
[6.6.1] GUI doesn't ever finish loading, causes stutter in VMs
billington.mark commented on billington.mark's report in Stable Releases
Can confirm this is resolved in 6.6.2. Thanks all :) -
[6.6.1] GUI doesn't ever finish loading, causes stutter in VMs
billington.mark commented on billington.mark's report in Stable Releases
Also, behavior is the same in safe-mode boot -
[6.6.1] GUI doesn't ever finish loading, causes stutter in VMs
billington.mark commented on billington.mark's report in Stable Releases
Xorg.0.log.old Xorg.0.log Xorg log(s) attached. It's just creating these over and over again, I assume every time the screen refreshes (as shown in the original post) -
[6.6.1] GUI doesn't ever finish loading, causes stutter in VMs
billington.mark commented on billington.mark's report in Stable Releases
No change, regardless of legacy or UEFI boot. -
After seeing that lstopo/hwloc had been added to the GUI in 6.6.1, I decided to give it a go today... The server boots and the array comes up (including Docker and VMs), but the login screen never loads properly, and each time it tries, there's a noticeable 0.5-second stutter in VMs. unraid-diagnostics-20181003-1319.zip Video of what the GUI is doing on the screen: https://youtu.be/Gw_U-OLKIIw
-
[6.6.0-RC1] No VMs start after upgrade - UPDATE:FIXED
billington.mark commented on billington.mark's report in Prereleases
Also, if someone could post (or PM, if that's not allowed) a direct link to the 6.6-rc1 download (as I don't have access to an Unraid GUI), that would be very useful! EDIT: got the URL out of the plg -
[6.6.0-RC1] No VMs start after upgrade - UPDATE:FIXED
billington.mark posted a report in Prereleases
Update: Resolved this by renaming /boot/config/plugins/dynamix.plg to dynamix.xxx. Upgraded via the GUI, and on boot-up no VMs start. Frustrating, as I run my home router on a pfSense VM hosted by Unraid. 'virsh list' on the command line just sits there and doesn't list domains. Couldn't find anything significant in the syslog when I went through it. Decided to roll back to 6.5.3, and now the array won't start at all, so I'm going to be in trouble when the other half comes home! Diag zips attached. tower-diagnostics-20180905-1313 (rollback to 6.5.3).zip unraid-diagnostics-20180905-1244 (Post upgrade to 6.6rc1).zip -
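The rename workaround above can be sketched as a couple of shell commands. This is a hedged illustration, not an official procedure: /boot/config/plugins is the standard Unraid flash layout (adjust the path if you've mounted the flash drive on another machine), and the guard means it does nothing if the file isn't there.

```shell
# Rename dynamix.plg so the plugin manager skips it on the next boot.
# Path assumption: /boot/config/plugins is the usual Unraid flash layout.
PLG=/boot/config/plugins/dynamix.plg
if [ -e "$PLG" ]; then
    mv "$PLG" "${PLG%.plg}.xxx"   # any extension other than .plg stops it loading
    echo "renamed to ${PLG%.plg}.xxx"
else
    echo "no dynamix.plg found; nothing to do"
fi
```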
VM CPU Assignment when creating VMs
billington.mark replied to billington.mark's topic in Feature Requests
Wow... didn't realise it was two years ago that I posted this!! There's so much more that could be done on this front too, like showing how much memory is available to allocate based on NUMA node and CPU assignment, and which PCIe devices are associated with which NUMA node, so the correct CPU cores can be assigned. Only an issue for 2+ CPU builds though. Basically: NUMA node info in the GUI on top of currently assigned CPUs. -
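In the meantime, the NUMA information the request above asks for can be pulled from the command line. A hedged sketch (the PCI address 0000:81:00.0 is only an example, and each guard skips a tool that isn't installed):

```shell
# Per-node CPU and memory layout (if numactl is installed)
if command -v numactl >/dev/null 2>&1; then numactl --hardware; fi
# Full topology tree from hwloc, as text (if lstopo is installed)
if command -v lstopo-no-graphics >/dev/null 2>&1; then lstopo-no-graphics; fi
# Which NUMA node a given PCIe device is attached to (-1 means no NUMA affinity)
node_file=/sys/bus/pci/devices/0000:81:00.0/numa_node
if [ -r "$node_file" ]; then cat "$node_file"; fi
```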
Can you paste your syslinux config (with the device stubbed), and post another diag zip after trying to start the VM with the device passed through?
-
Can you post your VM's XML? Also, have you stubbed the NVMe device? From looking at your logs, it doesn't look like it has been.
-
So it's arrived and fitted, and... success! You DO NOT need an X299-based motherboard for this card to work, only a motherboard/CPU capable of splitting an x16 slot into x4/x4/x4/x4. This was found in my PCIe settings in the BIOS. Even better, each NVMe device is still in its own IOMMU group:

IOMMU group 47: [144d:a802] 81:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 (rev 01)
IOMMU group 48: [144d:a808] 82:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981

The only thing I had to do after installing was update the XML of the VM which has the 970 Evo passed through to it with the new PCIe address. So, I've freed up a PCIe slot, and have the option to add two more NVMe devices in the future without much hassle at all.
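For reference, a listing like the one above can be regenerated with a common shell snippet that walks /sys/kernel/iommu_groups (nothing Unraid-specific about it; on a box with the IOMMU disabled it simply prints nothing):

```shell
# Print every PCI device together with its IOMMU group number.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue          # glob matched nothing: IOMMU is off
    group=${dev%/devices/*}            # .../iommu_groups/47
    group=${group##*/}                 # 47
    echo "IOMMU group $group: $(basename "$dev")"
done
```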
-
From my Google-fu on this, it seems PCIe bifurcation is a feature of most CPUs from the last few years; it's just a case of your BIOS giving you the option to configure the x16 slot(s) as x4/x4/x4/x4, which only seems to be available on higher-end gaming or workstation motherboards. It's a public holiday in the UK today, so I'll not get this delivered until later in the week to test, unfortunately.
-
So I spotted this today... a single-slot x16 PCIe expansion card for 4x NVMe drives: https://www.scan.co.uk/products/asus-hyper-m2-card-pcie-30-x16-4x-m2-pcie-2242-60-80-110-slots-intel-vroc-support-for-asus-x299-moth https://www.asus.com/uk/Motherboard-Accessory/HYPER-M-2-X16-CARD/ Does anyone have one of these running on a non-X299 board? It would be very, very useful if that's the case! At that price I'm tempted to just get it and see if it works anyway... Ideally it would still split each NVMe into its own IOMMU group too... but I think I'm asking a bit much there!
-
Like any high-profile system you rely on... BACKUPS are key. Data on the disks (other than cache) is protected by parity. Backing up other things like VM XMLs, Docker templates, Docker appdata, plugin data, etc. can easily be automated with plugins. On top of that, I have screenshots printed out of which drives are in which 'slots' in the UI, so I can make sure everything is the same if I needed to start from scratch and didn't have access to any data on my drives (in case of a USB failure). I'm pretty confident I could be back up and running as-is, without any issues, within an hour if my USB drive decided to give up.
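As an illustration of how the VM XML part of that can be automated, here's a hedged sketch: /mnt/user/backups is an assumed share name (change it to suit), and the virsh guard means the script does nothing on a machine without libvirt.

```shell
# Dump the XML of every defined VM (running or not) into a dated folder.
DEST=/mnt/user/backups/vm-xml/$(date +%Y%m%d)   # assumed share path
if command -v virsh >/dev/null 2>&1; then
    mkdir -p "$DEST"
    virsh list --all --name | while read -r vm; do
        [ -n "$vm" ] && virsh dumpxml "$vm" > "$DEST/$vm.xml"
    done
else
    echo "virsh not found; skipping VM XML backup"
fi
```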
-
Display using thunderbolt 3 for each users.
billington.mark replied to Yoon Hur's topic in VM Engine (KVM)
You might need an actual graphics card passed through to the VM for the Windows installation, and to install drivers, but I don't see why it wouldn't work. What would the benefit be of using a Thunderbolt 3 card as opposed to a normal GPU? -
Optical drives have always been problematic. The cheat way is to get a SATA-to-USB cable and plug that into a passed-through USB controller, or pass through that USB port.
-
Yes, you will. Otherwise Unraid will assign a driver to it and it'll be "locked" and unable to be passed through. Stubbing the device makes it available for passthrough. See here (the instructions are for a network card, but it's the same principle): Stubbing. You'll need to add: stub_ids=10de:17c8,10de:0fb0 You can get to edit the syslinux.cfg file in the GUI by selecting the "flash" disk from the dashboard. If you don't feel comfortable editing it yourself, paste it here.
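For anyone finding this later, the stub IDs end up on the append line of syslinux.cfg. A sketch of the shape (hedged: the exact kernel parameter name varies by Unraid version, with newer releases using vfio-pci.ids, so follow the linked stubbing guide for yours):

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=10de:17c8,10de:0fb0 initrd=/bzroot
```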
-
Try booting from a Windows ISO and running a startup repair. The boot partitions on the NVMe disk will be different if you originally installed while the device was on the VirtIO bus. If that doesn't work, you'll need to attach the NVMe as a secondary disk on another VM to drag off the files you need, then do a fresh install.
-
Check to see if you need to apply the MSI fix. It's usually the case that it needs to be enabled for passed-through graphics and sound on i440fx VMs. Instructions are in the wiki.
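For anyone searching for it later, the MSI fix is a registry change made inside the Windows guest. A sketch of the key involved (the device instance path is a placeholder; the wiki instructions cover finding the right entry for your passed-through device):

```
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Enum\PCI\<device-instance>\
  Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
    MSISupported (DWORD) = 1
```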
-
You could also try using a newer version of the OVMF BIOS: Download from here: https://www.kraxel.org/repos/jenkins/edk2/ - edk2.git-ovmf-x64-xxxxxx.noarch.rpm Extract it with 7-Zip and pull out the OVMF_CODE-pure-efi.fd and OVMF_VARS-pure-efi.fd files, put them in a share somewhere like /mnt/user/VMData/KMDS_Ephesoft/, and assign them in the XML manually (replacing the existing values):

<os>
  <type arch='x86_64' machine='pc-q35-2.9'>hvm</type>
  <loader readonly='yes' type='pflash'>/mnt/user/VMData/KMDS_Ephesoft/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/mnt/user/VMData/KMDS_Ephesoft/OVMF_VARS-pure-efi.fd</nvram>
</os>