bastl
Posts: 1267 · Days Won: 3
Report Comments posted by bastl
-
-
"placement='auto'" uses NUMAD which isn't available yet in Unraid.
<memory mode='strict' nodeset='1'/> or <memory mode='preferred' nodeset='1'/>
Both should work in theory. I have a VM set to strict mode and to use RAM from node 1 only, but "numastat -c qemu" shows that with this setting it only uses RAM from node 0. Weird.
Maybe it starts counting the nodes at 1?
Nope!
With "nodeset='2'" it complains that there isn't a node 2.
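For reference, libvirt numbers NUMA nodes starting at 0, matching what `numactl --hardware` reports on the host. A minimal sketch of a `<numatune>` block pinning guest RAM to the second physical node (node 1); whether strict allocation actually takes effect can be checked with `numastat -c qemu` as above:

```xml
<!-- libvirt domain XML fragment; node numbering starts at 0 -->
<numatune>
  <!-- strict: allocation fails if node 1 cannot satisfy it;
       mode='preferred' instead allows spill-over to other nodes -->
  <memory mode='strict' nodeset='1'/>
</numatune>
```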
-
I had a couple of VMs configured as SATA with the vdisks sitting on the BTRFS-formatted cache drive. I stumbled across info on how to make Windows aware that the storage device is an SSD so it supports TRIM. Converting the controller to SCSI is the only way to get this to work. I'm relatively sure I have done that process in some older version of Unraid, but I can't tell which one.
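For anyone trying the same conversion: the usual way to expose TRIM to a Windows guest is a virtio-scsi controller with `discard='unmap'` on the disk driver. A sketch of the relevant domain XML (the source path is illustrative; the guest also needs the vioscsi driver from the virtio-win ISO before switching the bus, or it won't boot):

```xml
<!-- virtio-scsi controller plus a vdisk that passes discard/TRIM through -->
<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source file='/mnt/cache/domains/Win10/vdisk1.img'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

Once Windows sees the disk on the SCSI bus, "Optimize Drives" should identify it as an SSD and allow a manual trim.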
-
Just to keep track of which versions I have tested, in case other users have the same issue.
-
Retested on 6.6.4; still an issue.
-
It could be anything: whether the GPU is the only GPU in the system, passthrough not working in the first slot, OVMF or SeaBIOS, with or without a vBIOS (dumped or downloaded, edited or not), installed with VNC or not, installed using one core or more, Unraid version, BIOS settings, BIOS version... there are soooo many things that can play into solving that issue. There is no one-click solution that works for everyone.
-
@ks2016 did you try using a vBIOS for the card you're passing through? Another thing: for some users, installing Windows with the GPU attached and without ever using VNC is the only way to prevent code 43. You can try this. Also, what happens if you restart the running VM via the Unraid GUI?
-
Stupid question, but do you have a monitor connected to the GPU you're passing through, or do you manage it after the VNC install via RDP or other remote software?
Edit:
I'm asking because I had some weird behaviour with one of my Win10 VMs in the past: if I started the VM with a 1050 Ti passed through and no monitor connected, it brought up the code 43 error. With a monitor connected at startup this didn't happen. I now have my main VM with a 1080 Ti connected via DisplayPort to my monitor and the second VM with the 1050 Ti via HDMI to the same monitor. This works now: even if the monitor is set to DP, the 1050 Ti notices that an HDMI display is connected and starts fine.
-
Is that an AMD system?
-
Sorry, saw your post too late.
-
Try the command line I posted above and report whether it also shows this behaviour.
-
To be clear: you only get code 43 if you restart the VM, but when the VM is powered down and you start it from that state, the Nvidia driver loads fine without the error? I have never seen anyone report that behaviour.
-
Please try to restart your VM from the command line inside the VM and check if the Nvidia driver comes up without code 43.
shutdown /r /f /t 0
-
2 hours ago, limetech said:
Except they didn't include diagnostics.zip
😣
-
Maybe a good solution in case users attach a second vdisk with data or games, to prevent them from accidentally deleting those disks.
-
I can confirm this. I created a Win10 VM with 3x 1GB vdisks. The "Delete VM + Disks" option only removes vdisk1.
-
I can't really reproduce that behaviour on my 1950X. No matter which machine type I use or which cores I give to a VM, I never noticed extremely long boot times. The only thing I've seen a couple of times is the first core maxed at 100% during boot. The VM never finishes booting in this case, and it only happens if I pass through a PCIe device. For example, when creating a fresh Windows VM with GPU passthrough, the VM sometimes shows this behaviour on boot if I give it more than one core. The bug has been reported a couple of times, and the fix is to install Windows with only one core and add more at a later stage.
EDIT:
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
This can also cause your issue. You're telling the VM to only use RAM connected to the first node/die, but as tjb_altF4 already stated, you're mixing cores from both dies.
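To avoid that mismatch, the vCPU pinning and the memory nodeset should point at the same die. A sketch of a consistent pairing for node 0 (core IDs are illustrative; check your actual core-to-node mapping with `numactl --hardware` on the host first):

```xml
<!-- pin vCPUs and guest RAM to the same die (node 0) -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```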
-
Same for me. Restarting a Win7 VM and a Fedora VM works properly now with 6.6.3.
-
4 hours ago, eschultz said:
Looks to just be isolated to Linux VMs.
It also happens for me when restarting a Windows 7 VM from the webui.
-
Where did you post that you had already tested the VisualC++ 2010 Redistributable, or that the files are to be integrated into the setup? By the way, you can't simply integrate required software from other companies into your setup file.
-
That warning also existed in earlier 6.xx builds.
invalid argument: Failed to parse group 'tss'
-
Same issue for me. The upgrade from 6.6.1 went fine, except that a VM set to autostart with Unraid first shows as started, and as soon as I restart the VM, whether from inside the VM or within Unraid, it shows as "not started".
-
Damn. Now I have no idea why your system only starts up in GUI mode. You can test putting your Vega in another PCIe slot; it's in the first one, I guess. Your IOMMU groups might change and you'll have to reassign the card to the VM. Another idea: people have reported that if you don't pass through the audio part of the card to the VM, the reset bug is gone. Maybe worth a test. Also, the next kernel version, 4.19, will have a fix for that bug. Check this forum thread.
There might also be a workaround for the AMD reset bug worth checking out. I can't test it because I have no Vega card, but it looks promising. Resetting the card inside the VM via a script before shutting it down should work; however, Windows Update and its automated restart might bypass the script. Keep that in mind. Check out that step-by-step guide; it might be a good solution until Unraid switches to kernel 4.19.
https://forum.level1techs.com/t/linux-host-windows-guest-gpu-passthrough-reinitialization-fix/121097
-
Can you post your syslinux configuration? Click on Main, then Flash, and scroll down. I bet there is an extra parameter for GUI boot in that config which is missing from the non-GUI boot entry.
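For comparison, this is roughly what the relevant part of a stock Unraid syslinux.cfg looks like (entry names may differ slightly between versions). The GUI entry differs only in the appended `/bzroot-gui` initrd; if `menu default` sits under the GUI entry, the system will boot into GUI mode every time:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label Unraid OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
```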
[6.7.0 rc1] GUI bug
-
in Prereleases
Posted · Edited by bastl
I see the same thing, but it's not new in 6.7. Earlier versions showed the same for me. Everything works fine so far. I can't reproduce when it happens: sometimes it shows "starting services" forever, sometimes not.
Edit: After writing this, I changed the permissions of one of my shares from "private hidden read/write" to "private hidden read only" for one of my users after I deleted some files, and it disappeared.