david279
-
Posts 827 - Days Won 2
Report Comments posted by david279
-
Yeah, I just had this happen to me after doing nothing overnight. All Dockers and VMs were still working, but I couldn't get to the GUI. I had to kill and restart nginx. Added a diagnostic...
-
I'm seeing this on the cache page for my main cache drive. I have another cache pool that doesn't show this, but shows the correct usage info.
-
I had a hard time getting VMs with a GPU passed through past a black screen when using UEFI boot, for some reason. The GPUs used were an RX 580, a GT 1030, and an RTX 2070. It's the one reason I never tried UEFI again. Just my personal experience.
-
The install went well here on my Ryzen system... NVIDIA drivers all installed as well. One thing: I can't click any Dockers or VMs on the dashboard. Nothing happens...
-
Gotcha, but you cannot create an AFP share.
-
Apple dropped AFP in Big Sur, so update accordingly...
-
Did you change the network bridge model in your Windows VM from virtio to virtio-net?
- 1
-
4 minutes ago, J89eu said:
Yes I am
Add this to the end of your VM's XML, before the closing domain line.
It's a QEMU 5.0 bug that affects Windows VMs.
<qemu:commandline>
  <qemu:arg value='-cpu'/>
  <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
</qemu:commandline>
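One caveat worth noting: libvirt only accepts a <qemu:commandline> block if the root <domain> element declares the QEMU XML namespace; otherwise it silently drops the block on save. A minimal sketch of where the snippet sits (the xmlns attribute is the important part; the rest of the definition is elided):

```xml
<!-- The root element must declare the QEMU namespace, or libvirt
     will strip the <qemu:commandline> block when the XML is saved. -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ...the rest of the VM definition... -->
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,topoext=on,...'/>
  </qemu:commandline>
</domain>
```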
- 1
-
58 minutes ago, J89eu said:
Anyone getting Kernel Security Check Failure when trying to boot from Q35 5.0 with this update on a Windows 10 VM?
Are you using a Ryzen processor?
-
1 minute ago, ich777 said:
I use 'br0' for all my VM's and half of my Docker Containers.
I mean the model type for the ethernet bridge
<interface type='bridge'>
  <mac address='52:54:00:36:3e:6d'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
It should look like that.
- 2
-
Do you have any VMs using virtio for the network bridge? Change the network model from virtio to virtio-net.
-
42 minutes ago, alturismo said:
OK, seems I was too fast; I guess it's another issue causing this.
When I read the changelog again:
webgui: VMs: change default network model to virtio-net
This virtio-net does not exist here, I only have those.
So I have chosen virbr0, which is 192.168.122.0/24, whatever that is about, because I never added it.
So, how can I add this virtio-net to Unraid?
As a note, when I add a new VM I also don't have the option; it still defaults to br0 and I can only choose from br0, br1, or virbr0.
Just go into the XML for the VM, find the network section, and edit virtio to virtio-net.
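In other words, the only edit needed inside the <interface> section of the VM's XML is the model type; everything else stays as it is:

```xml
<!-- Before: -->
<model type='virtio'/>
<!-- After: -->
<model type='virtio-net'/>
```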
-
3 minutes ago, mikeyosm said:
Just read this....
https://blog.christophersmart.com/2019/12/18/kvm-guests-with-emulated-ssd-and-nvme-drives/
Anyone tried this and measured performance compared with passthrough nvme?
I tried this recently with a Windows VM. Read performance was way higher than virtio blk/scsi, but write performance was about the same. I just set up a basic Windows VM, so I didn't test gaming or anything like that.
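For anyone who wants to try it, the approach in the linked post can be sketched in libvirt as extra QEMU arguments that define an emulated NVMe controller. The image path, drive id, and serial below are placeholders I made up for illustration, not values from the post:

```xml
<qemu:commandline>
  <!-- Back the emulated NVMe device with a raw image (placeholder path) -->
  <qemu:arg value='-drive'/>
  <qemu:arg value='file=/mnt/user/domains/win10/nvme.img,format=raw,if=none,id=NVME1'/>
  <!-- Expose that drive to the guest as an NVMe controller -->
  <qemu:arg value='-device'/>
  <qemu:arg value='nvme,drive=NVME1,serial=nvme-1'/>
</qemu:commandline>
```

As with any <qemu:commandline> block, the domain element needs the QEMU XML namespace declared for libvirt to keep it.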
-
1 minute ago, Can0nfan said:
Hi @limetech great work team
I didn't see anything about the missing VMs issue from RC6. I'm still running RC5 because of that. Any word if this is resolved in RC7?
Revert libvirt from 5.9.0 to 5.8.0 because 5.9.0 has bug where 'libvirt_list_domains()' returns an empty string when all
From the notes above; I think this is the fix.
- 2
-
13 minutes ago, darthcircuit said:
Is there any way of manually updating qemu to 4.1 in RC5 or upcoming builds? I won't be using qcow2, especially compressed. I just pass through the whole controller for my windows 10 vm. Since 4.0.1 doesnt allow me to use the previous patch:
<qemu:commandline>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.speed=8'/>
  <qemu:arg value='-global'/>
  <qemu:arg value='pcie-root-port.width=16'/>
</qemu:commandline>
Which means my PCIe lanes are only running at 1x speed. That will be a problem.
Are you using an AMD GPU? If not, you could switch the machine type back to i440fx and that would give you the correct speeds.
-
The config file is at /etc/wireguard/ and you can edit it all you want. You just need to stop and start WireGuard for the settings to take effect.
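For reference, the file is plain INI-style text, typically named something like wg0.conf under that directory. A minimal sketch of its shape, with placeholder keys and addresses rather than a working setup:

```ini
[Interface]
; Server side of the tunnel (placeholder key and address)
PrivateKey = <server-private-key>
Address = 10.253.0.1/24
ListenPort = 51820

[Peer]
; One client allowed through the tunnel
PublicKey = <client-public-key>
AllowedIPs = 10.253.0.2/32
```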
-
You want to use "Remote tunneled access" to use WireGuard like a VPN, right?
-
I'm using the same motherboard as the OP and I haven't had a Ryzen crash in a very long time. First, this bug is pretty much limited to first-gen Ryzen chips, and setting the "typical current" idle option in the BIOS should solve it. I had a 1600X and it stayed booted for months without a problem. I have a 2700X now, still using the same motherboard, and still no crashes. I'm also on the newest F25 BIOS, which I flashed Friday; I was on F24 before that with no issues.
-
Everything is all good with my 2700X system after updating. Changed VMs to QEMU 3.1 and they booted right up. USB PCIe card passthrough is still working, along with everything else.
-
1 minute ago, limetech said:
What's the "AMD reboot bug"?
I think he's talking about the problem with rebooting VMs that have certain AMD GPUs passed through. They get stuck and you have to reboot the host to get back into the VM.
-
Just updated to RC3 and no more warnings about the web terminal when I use it. I'd say this is fixed.
[6.12] Unraid webui stop responding then Nginx crash
in Stable Releases
Posted
I have a cloudflared tunnel pointed at some Docker services. I wonder if that is part of the issue.