Report Comments posted by Warrentheo
-
I have an EVGA GTX1070 and have 6.7.3-rc2 installed... It has not given me an issue, and I would not expect the slightly updated Linux kernel that came in rc1 to cause that sort of issue... Not much has changed in rc1 and rc2...
-
An MSI GT240 should not need the ROM file added for it; only the newer GTX 10 series and RTX cards should need that... Older cards should boot with no special consideration... You do still need to pass through the entire IOMMU group, which means you need to make sure you also pass through:
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT215 [GeForce GT 240] [10de:0ca3] (rev a2)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GT215 [GeForce GT 240] [1462:1995]
01:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio Controller [10de:0be4] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] High Definition Audio Controller [1462:1995]
Don't know if you did or not...
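To check which devices share a group, a small script like this lists every IOMMU group and its members (a generic sketch, run on the UnRaid host; nothing here is UnRaid-specific):

```shell
#!/bin/sh
# Print each IOMMU group and the PCI devices inside it.
# Everything in the GPU's group must be passed through together.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done
```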
I would recommend moving this to the troubleshooting thread though, since this should not be an issue with 6.7.3 (mine is working fine, and nothing that changed in 6.7.3 should affect this (updated kernel and docker revert))... If you do need to repost this, I would recommend that you include your VM XML file along with the diagnostics...
-
<hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/user/domains/msi1030.dump'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
Try removing
xvga='yes'
from the XML line for your card...
From what I have read, this setting is no longer supported, so this is my best guess...
Next best guess: ask where you got the video BIOS file for the card, and try re-dumping it directly from the UnRaid command line for this specific card if that is not where it came from in the first place...
Third best option, try using the Nvidia plugin for UnRaid (Read the documentation for it before you do)
And the last option is to "Create" a new VM with the same options as the current one and just point it at the same drives and image files; this resets all the settings and sometimes fixes these types of issues...
This is what the line for my EVGA GTX1070 passthrough reads like:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/cache/system/EVGA_GTX1070_FTW_DT_DUMPFROMUNRAIDCOMMANDLINE.rom'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x01' function='0x0'/>
</hostdev>
Edit: This is minor, but renaming your VBIOS file from ".dump" to ".rom" lets it show up in the VM creation WebUI for UnRaid...
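For anyone wondering how to do the command-line dump mentioned above, the usual sysfs method looks like the following sketch (the PCI address and output path are examples, not UnRaid defaults; the primary GPU often needs to be unbound from its driver before the ROM will read cleanly):

```shell
#!/bin/sh
# Dump the VBIOS of the card at 01:00.0 via sysfs (adjust to your card).
DEV=/sys/bus/pci/devices/0000:01:00.0
echo 1 > "$DEV/rom"                         # make the ROM readable
cat "$DEV/rom" > /mnt/user/domains/gpu.rom  # copy it out
echo 0 > "$DEV/rom"                         # lock it again
```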
-
*********************************************
EDIT: Turns out my system really is having a GPU issue, will deal with RMA and retest this fix...
*********************************************
Adding this caused several other issues... It booted but was glitchy at first, and still didn't allow Bluetooth USB passthrough... While playing a game with it on, it suddenly crashed out, and the VM would no longer boot; it seems it breaks my daily-driver Win10 VM, with the WebGUI error message:
Execution error internal error: Unknown PCI header type '127'
Which, from what I can see in other posts, means that it is no longer able to do GPU passthrough correctly:
At first I thought I had killed my video card somehow (didn't think it could be Bluetooth related) and tried several things to fix the issue: manually rebuilding the VM XML didn't work, and completely rebuilding the libvirt image and then manually rebuilding the VMs' XML didn't help either... Just for troubleshooting, I decided to undo the Bluetooth settings:
append fbcon=rotate:1 module_blacklist=bluetooth,btbcm,btintel,btrtl,btusb BIND=01:00.0 pci=noaer initrd=/bzroot
and switched it back to:
append fbcon=rotate:1 BIND=01:00.0 pci=noaer initrd=/bzroot
and after a reboot, I was able to get my daily driver back... I have attached a diagnostics and VM XML file from just before switching off the blacklist in the syslinux config and rebooting...
qw-diagnostics-20190324-1905.zip
-
I have been trying to kill this driver for a while and have been unable to find a workable solution... I even posted the question in General support:
and have been unable to get an answer... The normal solutions I have found on the internet are apparently not working under UnRaid... I am willing to try to help any way I can, but I am stuck...
We need to have a way in place to kill this if needed before 6.7.0 rolls out... Thank you in advance, I really like UnRaid and use it everyday, and your help on this is appreciated 🙂
-
I added that to my go file (reversed the order, because when I tried to type them manually, rmmod said that bluetooth was required by some of the sub-modules, so I tried to unload the sub-modules first):
......
rmmod btusb
rmmod btrtl
rmmod btintel
rmmod btbcm
rmmod bluetooth
# Start the Management Utility
/usr/local/sbin/emhttp &
and it causes these lines right after the winbindd -D part of the boot process:
rmmod: ERROR: Module btusb is not currently loaded
rmmod: ERROR: Module btrtl is not currently loaded
rmmod: ERROR: Module btintel is not currently loaded
rmmod: ERROR: Module btbcm is not currently loaded
/var/tmp/go: line 15: $'r\357\273\277mmod': command not found
Edited the go file with nano from the web cli, so I don't think it is a Windows text file issue...
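Incidentally, `\357\273\277` is the byte sequence EF BB BF, a UTF-8 byte-order mark, so a BOM did sneak into that line of the go file (most likely via a paste). Separately, `modprobe -r` resolves module dependencies itself, so a loop like this sketch avoids having to hand-order the rmmod calls (module names as in the post; errors are ignored when a module is already unloaded):

```shell
#!/bin/sh
# Unload the bluetooth stack; modprobe -r removes dependent modules in
# order, and "|| true" keeps the script going if a module is already gone.
for m in btusb btrtl btintel btbcm bluetooth; do
    modprobe -r "$m" 2>/dev/null || true
done
```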
According to the internet, I added this to my syslinux config as my next attempt:
... bluetooth.blacklist=yes ...
That showed up as this error in my syslog:
bluetooth: unknown parameter 'blacklist' ignored
I can't seem to find a way to kill this for now... Not a huge issue for me at the moment, but it will require me to roll back to 6.6.7 to be able to use Bluetooth in my VM (nothing I can find will let me attach it without a (Code 10) error for now...)
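For what it's worth, `bluetooth.blacklist=yes` fails because the bluetooth module has no `blacklist` parameter; the kernel-wide parameter is `module_blacklist=`. A sketch of the relevant part of a syslinux append line, keeping only the blacklist portion (your other append options stay as they are):

```
append module_blacklist=bluetooth,btbcm,btintel,btrtl,btusb initrd=/bzroot
```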
-
I actually do pass through a 5-port USB card to the VM as well, but every port is in use... My motherboard has a lot of ports, but none of them can be passed through for various reasons... I used the passthrough method for less important devices that were always attached so that I could use some of the motherboard ports...
One question: does this mean that using a USB Bluetooth adapter in Docker versus in VMs is mutually exclusive? Are there command-line commands to switch between them? And bottom line, how much of this can be automated in the WebUI?
For now, what do I add to my syslinux config to kill this driver?
-
Changed Priority to Minor (It only locks up when I attempt to attach the device)
-
Changed Status to Closed
Changed Priority to Other
-
I read in the patch notes that:
Quote:
New vfio-bind method. Since it appears that the xen-pciback/pciback kernel options no longer work, we introduced an alternate method of binding, by ID, selected PCI devices to the vfio-pci driver. This is accomplished by specifying the PCI ID(s) of devices to bind to vfio-pci in the file 'config/vfio-pci.cfg' on the USB flash boot device. This file should contain a single line that defines the devices:
Currently I have the following on my Syslinux Configuration:
kernel /bzimage append vfio-pci.ids=10de:1b81,10de:10f0 pci=noaer modprobe.blacklist=nouveau initrd=/bzroot
Which passes through my NVIDIA card and blacklists it from the nouveau driver... If I am reading that correctly, everyone with passthrough will need to change these after 6.7.0 to something like:
kernel /bzimage append BIND=01:00.0 pci=noaer modprobe.blacklist=nouveau initrd=/bzroot
Am I reading that correctly? Or does this only affect vfio from the booted command line?
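If the release-note quote is accurate, a hypothetical config/vfio-pci.cfg for a GPU plus its HDMI audio function would be a single BIND= line (addresses here are examples; check lspci for your own):

```
BIND=01:00.0 01:00.1
```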
-
Syslog Highlights:
......
Line 2320: May  7 22:47:04 qw root: Fix Common Problems: Error: Array is not started
......
Line 2375: May  7 22:47:04 qw root: Starting go script
Line 2376: May  7 22:47:04 qw root: Starting emhttpd
......
Line 2425: May  7 22:47:06 qw kernel: mdcmd (29): import 28
Line 2426: May  7 22:47:06 qw kernel: mdcmd (30): import 29
Line 2427: May  7 22:47:06 qw kernel: md: import_slot: 29 empty
Line 2428: May  7 22:47:06 qw emhttpd: import 30 cache device: (nvme0n1) Samsung_SSD_960_EVO_500GB_S3EUNB0J430332H
Line 2429: May  7 22:47:06 qw emhttpd: import 31 cache device: (nvme1n1) Samsung_SSD_960_EVO_500GB_S3EUNB0J502744F
Line 2430: May  7 22:47:06 qw emhttpd: import flash device: sda
Line 2431: May  7 22:47:08 qw emhttpd: Starting services...
......
And so the error is being posted before the array is even probed, let alone started... That would seem to point to a version mismatch with the FCP plugin... But that doesn't explain why the original error occurred, with the VM creation tab unable to detect the disks... I wish I had saved the original syslog from that first boot...
-
I re-did the update, and again experienced the same error with FCP... However this time it didn't result in the same issues with VM creation (attached screenshot showing it working as normal)...
I also attached my diagnostics... I will in the meantime be reverting again to 6.5.1 stable...
4 hours ago, Squid said:
Assuming FCP isn't miles out of date, that means that when the tests ran (10 minutes after the Array Started event), /mnt/user didn't exist anymore and the state according to emhttp was not "STARTED"
The array-not-started error was showing immediately after the login prompt appeared on the unRaid console; it definitely didn't take 10 minutes... While this could be an issue with an FCP version mismatch with the new unRaid, I still feel this is unRaid trying to run stuff on a partially, but not fully, initialized array... With my first upgrade to 6.5.2-rc1, this resulted in the VM tab being told that the array was started while it was still unable to parse the list of possible vdisk locations...
-
Working on other stuff, will attempt to replicate soon...
When booting the first time, I got a "Fix Common Problems" red notification for "Array Not Started"... The array seemed to take much longer than normal to start, but did eventually come up... The issue may be related... My current working theory is that part of unRaid booted as if the array was started before it actually was...
-
I also have the same issue: 2 data + 1 parity, everything seems to be working, but it displays that one disk is invalid...
-
5 hours ago, jonp said:
Ok, we are going to add the option to select SCSI as a bus type to storage devices. This will also automatically generate the XML for the virtio-scsi controller that you'll need to talk to these devices. We are NOT adding the discard='unmap' option directly to the GUI VM editor yet. That may be something we do in the future. For now, users will have to use the XML editor mode for the VM to add that special option.
The other remaining missing option is "USB", which just uses the default USB hub as its controller... I don't know of many uses for it, but it might be trivial to add at the same time... I will leave whether to add it up to you...
-
I understand, just letting you guys know the way I use it, as a sort of goal to work toward...
To my understanding there are really two main use cases: the default VirtIO bus for those who want minimum storage-system overhead, or the virtual SCSI bus, mainly to enable trim/discard in the VM... There might be reasons for choosing the SATA bus over one of those, but I don't know what they are... IDE can stay for legacy support...
-
So far, when I create or edit a VM with the web GUI, I immediately open virt-manager and switch the disk back to SCSI (which, done this way, also adds the SCSI controller and discards the un-needed VirtIO controller). I also change the disk cache option to be power-outage safe by changing it from the default of "writeback" to what I feel should be the default, "writethrough"... After this, I can't even change the "Discard" option from virt-manager, so I have to do a simple manual XML edit to add the discard='unmap' option that enables the VM to TRIM/discard the vdisk file...
All this just to get the XML to read:
<driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
<source file='/mnt/cache/domains/MainDeck/MainDrvWin10.SCSI.raw.img'/>
<target dev='sda' bus='scsi'/>
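For reference, the SCSI controller that virt-manager adds alongside such a disk looks roughly like the following sketch (the index and PCI address here are illustrative and will vary per VM):

```xml
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>
```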
Having SCSI just as an option in the WebGUI would shorten quite a bit of this, I put in a feature request for this here:
-
On 3/31/2018 at 5:22 AM, L0rdRaiden said:
OFFTOPIC.
that interface is qemu? can you manage/configure unraid VM with that? how?
That is a Virt-Manager docker container... It is a much easier way of configuring VMs, and it allows a lot more options than the WebGUI... That said, it also includes some minor options that are being deprecated, and other options that should not be used...
On the other hand, it makes using host hardware mostly automatic, instead of the near insanity of trying to do that from the text editor in the WebGUI...
-
I had that USB drive plugged into a hub that is connected to a USB controller that is passed through to a Windows 10 VM... So apparently unRaid sees the drive, then boots the VM, which yanks the drive away from unRaid...
-
4 minutes ago, limetech said:
Please attach diagnostics.zip
Edited primary post and attached there...
-
Posted in "Unraid OS version 6.8.0-rc6 available" (Prereleases):
I also have this bug, rc5 to rc6: VMs show fine until one gets launched (either from the GUI or the command line), then the list goes blank... I only have one VM, so I am willing to rebuild the libvirt partition for testing if needed...
qw-diagnostics-20191118-1759.zip