
thecode
Members
Content Count: 15
Joined
Last visited
Community Reputation: 3 Neutral
About thecode
Rank: Member
-
Can someone comment on which versions of Libvirt & QEMU are used in this release? Thanks.
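For anyone who wants to check this on a running system, a quick way (just the standard libvirt client, nothing Unraid specific) is:

    # prints the libvirt library/API versions and the running QEMU hypervisor version
    virsh version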
-
systemd-detect-virt is detecting unraid VMs as QEMU and not KVM
thecode replied to thecode's topic in VM Engine (KVM)
For HassOS I submitted a pull request to support QEMU hypervisors, so that it also loads the guest agent for them. Since HassOS 4.11 (dated 3-Jul-2020) you should not have any problem with this (although the OS still does not detect UNRAID as a KVM hypervisor). When the VM is running, edit the VM in UNRAID, view as XML, and look for this part:
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/HassOS/org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
  <alias name='chann -
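As a quick command-line check of the same thing (a rough sketch; "HassOS" is simply the VM name from my setup, replace it with yours):

    # dump the running VM's live XML and show the guest agent channel
    virsh dumpxml HassOS | grep -A2 "org.qemu.guest_agent.0"
    # a <target ... state='connected'/> line confirms the agent is talking to the host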
Thanks, I searched before but only found the issue I linked to. Anyhow, as far as I can see it is still present on 6.9.0 beta 25, so I hope this gets resolved before the 6.9 release.
-
Today I had the 2nd occurrence of the wsdd process using 100% CPU (1 core, a different one each time). Last time it happened after 70 days of uptime; this time 12 hours after a server restart. There are other reports about it here: I'm not sure if it is related, but there is a fix for a similar problem here: https://github.com/christgau/wsdd/pull/42 Thanks. tower-diagnostics-20201112-1116.zip
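In case it helps others confirm the same symptom, this is roughly how I spot it (plain procps tools; the process may show up as wsdd or as a python interpreter running it):

    # list the busiest processes together with the CPU core they last ran on (psr)
    ps -eo pid,psr,pcpu,etime,args --sort=-pcpu | head -n 5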
-
@limetech which version of wsdd is used in this beta? Today, after 2 months of uptime, I noticed 1 core is at 100%, ... wsdd. I don't know which version is used in Unraid, but I found an old issue which was fixed: https://github.com/christgau/wsdd/pull/42 This was on version 6.8.3, but in this thread people are also reporting it on 6.9 beta 25:
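Since Unraid is Slackware based, the installed package record should name the exact version (a guess on my side, assuming the usual /var/log/packages layout is present on Unraid):

    # the package file name includes the wsdd version string
    ls /var/log/packages/ | grep -i wsdd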
-
Today, after 2 months of uptime, I noticed 1 core is at 100%, ... wsdd. I don't know which version is used in Unraid, but I found an old issue which was fixed: https://github.com/christgau/wsdd/pull/42
-
[6.8.3] docker image huge amount of unnecessary writes on cache
thecode commented on S1dney's report in Stable Releases
I suggest you take it to the Beta25 thread: -
[6.8.3] docker image huge amount of unnecessary writes on cache
thecode commented on S1dney's report in Stable Releases
Here is a nice explanation of why alignment is important: https://www.minitool.com/lib/4k-alignment.html https://www.thomas-krenn.com/en/wiki/Partition_Alignment_detailed_explanation However, I did not find a written reference for why 1 MiB specifically. I did, however, work on the development of a Linux box with an internal eMMC a few years ago, and I remember that the internal controller had a very large erase block size, somewhere between 1-3 MiB. This may again explain that if the FS is not aligned with the erase block size, it will increase the amount of data written to the flash. There is a little mention ab -
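As a concrete illustration of the arithmetic (the device name is just an example): a partition is 1 MiB aligned when its start sector times the 512-byte sector size is a multiple of 1,048,576 bytes, i.e. the start sector is a multiple of 2048:

    # show partition start sectors (in 512-byte units)
    fdisk -l /dev/nvme0n1
    # 2048 sectors x 512 bytes = 1,048,576 bytes = 1 MiB, so a start of 2048 (or any multiple of 2048) is aligned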
[6.8.3] docker image huge amount of unnecessary writes on cache
thecode commented on S1dney's report in Stable Releases
I'm using Netdata and tracking total LBAs via SMART. I write down the number once a day (sometimes more often, sometimes less, depending on free time) and I wrote a nice calculation in Excel that tracks the daily GB written, but the graph posted above looks very detailed and may help to check the influence of different settings quickly. -
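The calculation itself is simple, roughly this (the sample LBA values below are made up; the LBA unit size is vendor dependent, many drives use 512 bytes but some report larger units, so check the datasheet):

    # read the raw value of Total_LBAs_Written (SMART attribute 241)
    smartctl -A /dev/sdX | awk '/Total_LBAs_Written/ {print $NF}'
    # GB written between two daily samples, assuming 512-byte LBA units
    awk -v prev=3906250000 -v now=4296875000 'BEGIN {printf "%.1f GB/day\n", (now-prev)*512/1e9}'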
[6.8.3] docker image huge amount of unnecessary writes on cache
thecode commented on S1dney's report in Stable Releases
@Dephcon what tool did you use for measuring this? I only used text tools until now. -
systemd-detect-virt is detecting unraid VMs as QEMU and not KVM
thecode replied to thecode's topic in VM Engine (KVM)
I have installed Fedora Linux on another machine and under it installed the same guest OS (Debian in this case, but it does not matter). On the VM running under Unraid the output of systemd-detect-virt is "qemu"; on the VM running under Fedora it is "kvm".
Unraid "virsh version":
Compiled against library: libvirt 5.10.0
Using library: libvirt 5.10.0
Using API: QEMU 5.10.0
Running hypervisor: QEMU 4.2.0
Fedora "virsh version":
Compiled against library: libvirt 6.1.0
Using library: libvirt 6.1.0
Using API: QEMU 6.1.0
Running hypervisor: QEMU 4.2.0
I also compared the VMs XM -
systemd-detect-virt is detecting unraid VMs as QEMU and not KVM
thecode replied to thecode's topic in VM Engine (KVM)
Thanks for the link, but it still doesn't explain why under Unraid the VMs detect a QEMU hypervisor and not KVM. QEMU can work in two modes:
1. As an emulator under KVM, responsible only for the emulation of devices (Unraid's usage).
2. As a standalone type 2 hypervisor, performing software virtualization.
Linux machines have an option to detect the type of virtualization in use; while it is not critical, they can detect whether they run under QEMU as a hypervisor or under KVM. I encountered KVM being used as a condition in a service in a few places, since QEMU is almost never used as a hy -
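For reference, these are the checks I compare inside the guests (standard tools, nothing Unraid specific):

    systemd-detect-virt           # prints "qemu" on the Unraid guest, "kvm" on the Fedora one
    lscpu | grep -i "hypervisor"  # shows the hypervisor vendor the guest CPU reports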
If I understand correctly, Unraid uses the KVM hypervisor. I noticed that systemd-detect-virt under Linux VMs detects the hypervisor as QEMU and not as KVM. Looking at the VM templates, the domain type is KVM. I ran into a service which was designed to run only under a KVM environment, and also noticed that HassOS (the Home Assistant operating system), which is based on Alpine Linux, runs the QEMU guest agent only if the system is detected as KVM. I wonder if someone has some information about this, and whether it is possible to let the OS detect this as KVM by changing the VM template.
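For anyone who wants to double-check their own template, a minimal way to confirm the domain type a VM is actually defined with ("MyVM" is a placeholder, replace it with your VM name):

    # the first line of the live XML shows the domain type (kvm vs qemu)
    virsh dumpxml MyVM | grep -m1 "<domain type"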
-
[6.8.3] docker image huge amount of unnecessary writes on cache
thecode commented on S1dney's report in Stable Releases
I'm having the issue with a non-encrypted BTRFS cache pool of two 500GB NVMe drives. Within a week I had about 1 TB written accumulated on the drive. I have moved the VMs' & Dockers' appdata to an XFS SSD and monitored the data closely by taking samples at various points of the day into Excel. To make sure there is no error in the TBW display, I calculated the TBW myself, which shows that the reported figure is correct. I had about 200GB per day on the BTRFS pool and about 14GB per day on the XFS SSD. -
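For the NVMe drives the conversion is fixed by the spec, so the sanity check is straightforward (the data-units figure below is only an example):

    # NVMe reports "Data Units Written" in units of 512,000 bytes (1000 x 512-byte blocks)
    smartctl -A /dev/nvme0 | grep "Data Units Written"
    # ~2,000,000 data units is roughly the 1 TB I saw in the first week
    awk -v units=2000000 'BEGIN {printf "%.2f TB written\n", units*512000/1e12}'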
[6.8.3] docker image huge amount of unnecessary writes on cache
thecode commented on S1dney's report in Stable Releases
I see that the topic is referring to Docker running on cache; I'm not running any Dockers on the cache (at least not ones that I can't stop for a few days). After reading this thread I checked my setup (it is a new setup, the first version used is 6.8.3) and noticed around 40MB/sec of writes to the cache; a new drive has already had 1.5TB written. I'm using two NVMes in a RAID1 cache pool. When I tested the system I had only one drive and did not notice high writes, but I might have missed it. 2 VMs are running on the cache: Windows 10 with Blue Iris that stores data on the array, and HassOS (which us
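A simple way to see the write rate without extra tools is to diff /proc/diskstats over a minute (the device name is an example; field 10 is sectors written, always in 512-byte units):

    # total MB written so far to the cache device; run twice, 60 s apart, and divide the difference by 60
    awk '$3 == "nvme0n1" {printf "%.0f MB written\n", $10*512/1e6}' /proc/diskstats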