bastl
Report Comments posted by bastl
-
-
@johnnie.black Quick question, how is copy-on-write set for your appdata and system share?
-
3 hours ago, S1dney said:
reading though some comments it's not entirely clear where (and if) btrfs (with or without encryption and/or pooling) has a relation with this problem also, my guess is yes.
I use a single btrfs drive unencrypted and have the same issue.
3 hours ago, S1dney said:
There has been a lot of reports of certain docker containers (like the official Plex) writing a lot also, so it's easy to mistake that and this bug, also since it's possible you're affected by both of them
Earlier I already reported my findings. For me it doesn't matter which docker container is running, or if all containers are stopped. As soon as I enable Docker in Unraid I see the increased writes. Disabling Docker itself, boom, the problem disappears. Enabling it with no container running, tada, the writes from loop2 are back at 2-5MB/s. Most of the docker containers people are reporting I don't even use. No Plex, no download managers. Sure, you can reduce the amount of writes by disabling a container, but it doesn't change the behaviour. Containers like unifi, netdata or nextcloud, for example, will always produce writes if some monitoring is enabled or mobile devices randomly connect and check for new files. Let's hope someone will figure this out. Maybe the next Unraid with a newer docker engine will already have a fix for this. Who knows.
-
14 hours ago, mf808 said:
pinpoint it to a handful of containers
I have the same issue and I'm not using a single one of these containers. Even with all my containers turned off I see the same 3-5MB/s writes to the cache. The only thing that helps is to completely disable Docker.
-
I stumbled across this thread and quickly checked my server.
Power on hours: 1,119
Data units written: 180,027,782 [92.1 TB]
I'm not using the cache that much. 1 Linux VM that I use from time to time and a couple of dockers. The server idling, doing pretty much nothing, produces up to 5GB of writes in 10 minutes. It varies, sometimes only 1GB, sometimes I see numbers up to 5GB in 10-15 minutes. Disabling all the dockers brings it down to close to 0 writes. It doesn't matter which docker I start; after a couple of minutes a couple of gigs are written to the NVMe cache drive by loop2. Tried it with only DuckDNS and only Bitwarden running.
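For anyone who wants to reproduce numbers like these, this is roughly how a write rate can be sampled without extra tools. The helper names are my own, and the script is a sketch that assumes a standard Linux /sys layout:

```shell
# Rough sketch: field 7 of /sys/block/<dev>/stat is the number of
# 512-byte sectors written since boot, so diffing it over an interval
# gives the amount written in that window.

sectors_written() {
    # print the sectors-written counter from a block device stat file
    awk '{print $7}' "$1"
}

mib_delta() {
    # convert a sector-count difference (before, after) to MiB
    echo $(( ($2 - $1) * 512 / 1024 / 1024 ))
}

# Usage on the server (10-minute sample of loop2):
#   b=$(sectors_written /sys/block/loop2/stat)
#   sleep 600
#   a=$(sectors_written /sys/block/loop2/stat)
#   echo "$(mib_delta "$b" "$a") MiB written by loop2 in 10 min"
```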
-
Not 100% sure, but I guess the docker image path always included the *.img extension. I started with Unraid over 2 years ago, never changed anything in my docker config, and never had to for any Unraid update.
-
@dalben Check how the shares for "appdata" and "system" are configured. I bet they don't exist on your cache device. Adjust your paths like the following:
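The screenshots from the original post aren't preserved here. As a rough sketch, assuming Unraid's default layout (the exact pool name and paths depend on your setup, so verify against your own config):

```
Docker vDisk location:   /mnt/cache/system/docker/docker.img
Default appdata storage: /mnt/cache/appdata/
Libvirt storage:         /mnt/cache/system/libvirt/libvirt.img
```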
-
-
Didn't happen for me on 6.8.0 and started yesterday with the upgrade to 6.8.1. I think we had that issue somewhere in a 6.6 or 6.7 build. Can't remember if it was an RC or stable, but it got fixed really quickly.
-
@joelones I found a solution for the Pfsense issue. Looks like the CPU mode "host-passthrough" is the culprit. Either switch to "Emulated QEMU64" or emulate a Skylake CPU, for example.
Not sure what changed in the current QEMU version that caused this issue. This might also be the reason why a couple of people are reporting that their VMs won't boot up after an update of Unraid. Maybe something @limetech should know about and have a deeper look into.
I'm currently on RC9 and don't know when this issue started. As mentioned earlier, I only fire this VM up when I need it for testing and can't really remember on which Unraid build I used it the last time. Never had an issue with 'host-passthrough' before.
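For reference, a minimal sketch of what that change looks like in the VM's libvirt XML. The model name here is an assumption; pick one that `virsh cpu-models x86_64` lists on your host:

```xml
<!-- Before: pass the host CPU straight through -->
<cpu mode='host-passthrough' check='none'/>

<!-- After: emulate a specific model instead, e.g. Skylake-Client -->
<cpu mode='custom' match='exact' check='partial'>
  <model fallback='forbid'>Skylake-Client</model>
</cpu>
```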
-
Set your power plan in Windows to High Performance, disable hibernation, and check if this helps.
powercfg /hibernate off
-
-
15 hours ago, joelones said:
I experienced the same yesterday. I started up an older Pfsense VM I use from time to time to play around with and test some configs. I never had issues with that VM. I only use virtual devices for it, nothing passed through directly. I tried to restore some older backups and tried different machine type versions, but nothing worked. I get the exact same halt screen in Pfsense as joelones posted. Not sure on which Unraid version I fired up the VM the last time, but nothing was changed within the VM.
-
37 minutes ago, J89eu said:
Anyone have any issues with Q35 VMs breaking with each RC update? Becoming a bit of a pain but I manage to get it working each time... Need to get it working on RC7 still though. Essentially the VM starts but I get no screen output on my GPU passthrough.
Are you talking about every single RC of 6.8? The QEMU version changed a couple of times, and the downgrade from 4.1 to 4.0 is one reason VMs may have broken for you: if you had set up a VM with machine type Q35-4.1, after the downgrade that version was above the highest supported one, Q35-4.0. I've been running my Win10 VM as Q35 for quite a while now and have had no issues so far on all the RCs. I think libvirt is downgraded in the current release; maybe that is causing it for you.
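A sketch of where that machine type lives in the VM XML. Assuming the Q35-4.1/4.0 situation described above, pinning it to a version the installed QEMU still supports is the usual fix:

```xml
<os>
  <!-- pc-q35-4.1 fails if QEMU was downgraded to 4.0; pin a supported version -->
  <type arch='x86_64' machine='pc-q35-4.0'>hvm</type>
</os>
```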
-
Except for maybe finding some security flaws, I don't understand why someone would want to discuss bugs, fixes or features anywhere but in public, where everyone can participate in a solution and also provide help. But whatever; as far as my experience with discovering bugs goes, the forum has always provided me with a solution, and in all cases limetech itself listened when I had issues with an RC version.
Describe your issue so it is reproducible and a solution will be found. 👍
-
-
26 minutes ago, jbartlett said:
<numatune>
<memory mode='interleave' nodeset='0,2'/>
</numatune>
Ok, this setting I also looked into back then when working through all the Red Hat optimisation guides, but I never used it because I never handed more than 1 node over to a single VM.
-
@testdasi I know it always depends on the workload. My question was whether there is a benefit to "tricking" the VM into thinking it runs on multiple nodes
<numa>
  <cell id='0' cpus='0-13' memory='6291456' unit='KiB'/>
  <cell id='1' cpus='14-27' memory='6291456' unit='KiB'/>
</numa>
compared to a VM with the same core count without this tweak. How will Windows react to this, does it change anything?
Without these lines Unraid will automatically assign RAM to the VM. With this setting, you're forcing it into a dual-node config with 6GB each. Or am I wrong?
-
@jbartlett Is there any benefit to emulate a virtual multi node topology? I never used more than a full physical node for a single VM so I never tweaked it like this.
-
5 minutes ago, PSYCHOPATHiO said:
Just reporting the same bug, all VMs were working fine - VMs running are (FreeBSD & 2xWS2016) but once I started a win 10 VM all the VMs disappeared from the list but they are still accessible.
I have sufficient memory to run them all.
There's already an extra thread for this.
Looks like it happens if you start up ALL VMs configured on your server. A temporary fix is to create a small dummy VM which you don't boot up.
-
@cybrnook No, I mean firing up ALL configured VMs. In your case only 1. For @jbartlett 2 are working and on the third they disappear. On @dlandon's screen only 2 are configured and they are gone after starting both.
-
I have 2 running right now and can start up a 3rd with no issues. But I can't fire up all of them at once because a couple share resources. From all the screenshots posted, it sounds like this only appears if ALL configured VMs are spun up, is that right?
-
@dlandon Are both VMs setup to autostart with the server?
-
The update from RC5 went fine for me. Retested the qcow2-on-XFS corruption issue. Looks like it is solved with QEMU 4.1.1. Installing a guest OS in a qcow2 vdisk on an XFS disk doesn't cause vdisk corruption anymore. Manually compressing a qcow2 vdisk also produces no issues now.
One thing to mention that still exists: creating a Win10 VM with default settings enables Hyper-V by default, but you can't change it to off afterwards. Nothing changes if you flip the switch and press update. It stays on.
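For context, the Hyper-V toggle corresponds (as far as I can tell, so treat this as an assumption) to libvirt's <hyperv> enlightenment block in the VM XML; turning it off by hand would mean removing or disabling entries like these:

```xml
<features>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
```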
-
Retested with RC6:
Installing qcow2 VMs on XFS array drives works fine, same for the BTRFS cache drive. No corruption found on the qcow2 vdisks so far with the same tests as before. Already existing qcow2 images with compression, which got corrupted before in RC1-4, have shown no issues so far. Will keep an eye on it over the next couple of days. Compressing an uncompressed qcow2 also no longer produces corrupted vdisks. Looks like the patches in QEMU 4.1.1 fixed my issues.
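The compression and check steps I used are roughly these; a sketch only, and the file names are placeholders:

```shell
# Compress an existing qcow2 vdisk into a new, compressed copy
qemu-img convert -O qcow2 -c vdisk1.img vdisk1-compressed.img

# Verify the resulting image for corruption afterwards
qemu-img check vdisk1-compressed.img
```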
-
@jbartlett Just an idea: switch the slot for your card. Maybe the one you are using is wired to the CPU via the chipset and limits the card. Other devices like USB or network controllers often share the same x4 connection to the CPU. Maybe that's your bottleneck.
-
@jbartlett Which "CPU Scaling Governor" are you using?
-
@jbartlett Maybe you have set some cores which are not directly attached to the memory? Did you manually change the topology for the VM?
<topology sockets='1' cores='8' threads='2'/>
I always do this for all my VMs to match the actual core/thread count. The default is always all selected cores with 1 thread.
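A minimal sketch of how that topology line sits in the wider VM XML; the core counts here are illustrative, assuming 16 pinned vCPUs on an 8-core/16-thread CPU:

```xml
<vcpu placement='static'>16</vcpu>
<cpu mode='host-passthrough'>
  <!-- present the 16 vCPUs as 8 physical cores with 2 threads each -->
  <topology sockets='1' cores='8' threads='2'/>
</cpu>
```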
[6.8.3] docker image huge amount of unnecessary writes on cache
in Stable Releases
Posted
@johnnie.black I asked because I also have my VMs on a BTRFS subvolume with daily snapshots to a UD device. No problems with that so far; only 1 VM is always powered on, and sometimes up to 3 are running. COW on the VM share is also set to auto. But in my case the VMs aren't producing the constant writes. Sure, they do some writing, but not that much.
My appdata share is also set to auto, but the system share with the docker and libvirt images is set to COW off. Not sure when I set it to off, or if it was off by default back when I installed Unraid a couple of years ago. Even with all dockers turned off, I see the writes. As soon as I disable Docker itself, the writes go down. So I assume, for me, it has something to do with the combination of Docker + system share with COW off + BTRFS subvol.
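A quick sketch of how to check the COW flag from the shell; the path is an assumption based on Unraid defaults, and note that chattr +C only affects newly created files, so existing images would have to be recreated after changing it:

```shell
# Show the attributes of the share directory; a 'C' means NOCOW is set
lsattr -d /mnt/cache/system

# Disable copy-on-write for files created in this directory from now on
chattr +C /mnt/cache/system
```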