Report Comments posted by bastl

  1. @johnnie.black I asked because I also have my VMs on a BTRFS subvol with daily snapshots to a UD device. No problems with that so far; only 1 VM is always powered on, and sometimes up to 3 are running. COW on the VM share is also set to auto. But in my case the VMs aren't producing the constant writes. Sure, they do some writing, but not that much.

     

     My Appdata share is also set to auto, but the System share with the docker and libvirt images is set to COW off. I'm not sure when I set it to off, or if it was off by default back when I installed Unraid a couple of years ago. Even with all dockers turned off, I see the writes. As soon as I disable Docker itself, the writes go down. So I assume that for me it has something to do with the combination of Docker + System share with COW off + BTRFS subvol.

  2. 3 hours ago, S1dney said:

    reading through some comments it's not entirely clear where (and if) btrfs (with or without encryption and/or pooling) has a relation with this problem; my guess is yes. 

    I use a single btrfs drive unencrypted and have the same issue.

    3 hours ago, S1dney said:

    There have been a lot of reports of certain docker containers (like the official Plex) writing a lot as well, so it's easy to mix up that and this bug, especially since it's possible you're affected by both of them ;) 

    Earlier I already reported my findings. For me it doesn't matter which docker is up and running, or if all dockers are stopped. As soon as I enable Docker in Unraid, I see the increased writes. Disabling Docker itself: boom, the problem disappears. Enabling it with no container running: tada, the writes from loop2 are back at 2-5 MB/s. Most of the docker containers people have reported I don't even use. No Plex, no download managers. Sure, you can reduce the amount of writes by disabling a docker, but it doesn't change the behaviour. Containers like unifi, netdata or nextcloud, for example, will always produce writes if some monitoring is enabled or mobile devices randomly connect and check for new files. Let's hope someone will figure this out. Maybe the next Unraid with a newer docker engine will already have a fix for this. Who knows.
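
     If you want to watch those writes yourself, iostat from the sysstat package is an easy way; a rough sketch (the interval is just an example, loop2 is the docker image loop device in this thread):

        # per-device throughput in MB/s, refreshed every 10 seconds;
        # watch the loop2 row and the cache device
        iostat -dm 10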

  3. I stumbled across this thread and quickly checked my server.

     

    Power on hours: 1,119

    Data units written: 180,027,782 [92.1 TB]

     

     I'm not using the cache that much: 1 Linux VM that I use from time to time and a couple of dockers. The server at idle, doing pretty much nothing, produces up to 5GB of writes in 10 minutes. It varies; sometimes only 1GB, sometimes I see numbers up to 5GB in 10-15 min. Disabling all the dockers brings it down to close to 0 writes. It doesn't matter which docker I start; after a couple of minutes a couple of gigs are written to the NVMe cache drive by loop2. Tried it with only DuckDNS and only Bitwarden running.
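
     For anyone who wants to check their own numbers: they come from the NVMe SMART data, which smartctl can dump (the device path is just an example and may differ on your system):

        # print SMART/health info for the NVMe cache drive;
        # look for "Power On Hours" and "Data Units Written"
        smartctl -a /dev/nvme0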

  4. @joelones I found a solution for the pfSense issue. Looks like the CPU mode "host-passthrough" is the culprit. Either switch to "Emulated QEMU64" or emulate a Skylake CPU, for example (a rough sketch of the XML change is at the end of this post).

    I'm not sure what changed in the current QEMU version that caused this issue. This might also be the reason why a couple of people are reporting that their VMs won't boot up after an update of Unraid. Maybe something @limetech should know about and have a deeper look into.

     

     I'm currently on RC9 and don't know when this issue started. As mentioned earlier, I only fire this VM up when I need it for testing and can't really remember on which Unraid build I used it last. I never had an issue with 'host-passthrough' before.
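
     To illustrate, this is roughly how the emulated-CPU setting looks in the VM's libvirt XML; a minimal sketch, the exact model string Unraid writes may differ:

        <!-- instead of <cpu mode='host-passthrough' check='none'/> -->
        <!-- use an emulated model, e.g. QEMU64 or a Skylake CPU -->
        <cpu mode='custom' match='exact' check='none'>
          <model fallback='forbid'>Skylake-Client</model>
        </cpu>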

  5. 15 hours ago, joelones said:

    I went from rc7 to rc9 and my pfSense VM does not boot. Had the "GSO" type bug prior. Passing in an Intel Quad NIC to my pfSense VM. Thoughts here?

    [Three screenshots attached showing the pfSense boot halt]

     

    Guess it's a known thing:

     

    I experienced the same yesterday. I started up an older pfSense VM I use from time to time to play around with and test some configs. I never had issues with that VM. I only use virtual devices for it, nothing directly passed through. I tried to restore some older backups and tried different machine type versions, but nothing worked. I get the exact same halt screen on pfSense as joelones posted. I'm not sure on which Unraid version I fired up the VM the last time, but within the VM nothing was changed.

  6. 37 minutes ago, J89eu said:

    Anyone have any issues with Q35 VMs breaking with each RC update? Becoming a bit of a pain but I manage to get it working each time... Need to get it working on RC7 still though. Essentially the VM starts but I get no screen output on my GPU passthrough.

    Are you talking about every single RC for 6.8? The QEMU version changed a couple of times, and the downgrade from 4.1 to 4.0 is one reason a VM may have broken for you: if you set it up as Q35-4.1, after the downgrade that machine type was above the highest supported version, Q35-4.0. I've been running my Win10 VM as Q35 for quite a while now and have had no issues so far on all RCs. I think libvirt is downgraded in the current release; maybe that is causing it for you.
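
    If that's what happened, one workaround is to edit the machine type in the VM's XML down to a version the installed QEMU still supports; a minimal sketch (the exact machine string depends on your template):

        <os>
          <!-- e.g. change machine='pc-q35-4.1' to a version QEMU 4.0 still knows -->
          <type arch='x86_64' machine='pc-q35-4.0'>hvm</type>
        </os>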

  7. Except for maybe finding some security flaws, I don't understand why someone would want to discuss bugs, fixes or features not in public, where everyone can participate in a solution and also provide help. But whatever; as far as my experience with discovering bugs goes, the forum has always provided me a solution, and in all cases limetech itself listened when I had issues in an RC version.

     

    Describe your issue so it is reproducible and a solution will be found. 👍

  8. @testdasi I know it always depends on the workload. My question was whether there is a benefit to "tricking" a VM into thinking it runs on multiple NUMA nodes

        <numa>
          <cell id='0' cpus='0-13' memory='6291456' unit='KiB'/>
          <cell id='1' cpus='14-27' memory='6291456' unit='KiB'/>
        </numa>

    compared to a VM with the same core count without this tweak. How will Windows react to this? Does it change anything?

     

     Without these lines, Unraid will automatically assign RAM to the VM. With this setting, you're forcing it into a dual-node config with 6GB each. Or am I wrong?
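
     For context, this is a minimal sketch of where such a <numa> block sits in the domain XML; the vCPU count, topology and memory values are only illustrative:

        <vcpu placement='static'>28</vcpu>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='14' threads='2'/>
          <numa>
            <!-- the cell memory values have to add up to the VM's total memory -->
            <cell id='0' cpus='0-13' memory='6291456' unit='KiB'/>
            <cell id='1' cpus='14-27' memory='6291456' unit='KiB'/>
          </numa>
        </cpu>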

  9. 5 minutes ago, PSYCHOPATHiO said:

    Just reporting the same bug. All VMs were working fine - the running VMs are FreeBSD & 2x WS2016 - but once I started a Win 10 VM, all the VMs disappeared from the list, though they are still accessible.

    I have sufficient memory to run them all.

    There's already an extra thread for this

     

    Looks like it happens if you start up ALL the VMs configured on your server. A temporary fix is to create a small dummy VM which you don't boot up.

  10. The update from RC5 went fine for me. I retested the qcow2-on-XFS corruption issue, and it looks like it is solved with QEMU 4.1.1. Installing a guest OS in a qcow2 vdisk on an XFS disk doesn't cause vdisk corruption anymore. Manually compressing a qcow2 vdisk also produces no issues now.

     

     One thing that still exists: creating a Win10 VM with default settings enables Hyper-V by default, but you can't change it to off afterwards. Nothing changes if you flip the switch and press update. It stays on.

  11. Retested with RC6:

     

    Installing qcow2 VMs on XFS array drives works fine, same for the BTRFS cache drive. No corruption found on the qcow2 vdisks so far with the same tests as before. Already existing qcow2 images with compression, which got corrupted before in RC1-4, have shown no issues so far. I will keep an eye on it over the next couple of days. Compressing an uncompressed qcow2 also doesn't produce corrupted vdisks. Looks like the patches in QEMU 4.1.1 fixed my issues.
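
     For reference, compressing a qcow2 by hand can be done with qemu-img; a minimal sketch (the file names are just examples):

        # rewrite an uncompressed qcow2 into a compressed copy
        qemu-img convert -O qcow2 -c vdisk1.img vdisk1-compressed.img

        # verify the resulting image for corruption
        qemu-img check vdisk1-compressed.img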