Everything posted by thecode

  1. @Dephcon what tool did you use for measuring this? I have only used text tools so far.
  2. I have installed Fedora Linux on another machine and under it installed a VM running the same guest OS (Debian in this case, but it does not matter). On the VM running under Unraid the output of systemd-detect-virt is "qemu"; on the VM running under Fedora it is "kvm". (The comparison commands are sketched after this list.)
     Unraid "virsh version":
       Compiled against library: libvirt 5.10.0
       Using library: libvirt 5.10.0
       Using API: QEMU 5.10.0
       Running hypervisor: QEMU 4.2.0
     Fedora "virsh version":
       Compiled against library: libvirt 6.1.0
       Using library: libvirt 6.1.0
       Using API: QEMU 6.1.0
       Running hypervisor: QEMU 4.2.0
     I also compared the two VMs' XML files; they are almost identical, with only minor differences.
  3. Thanks for the link, but it still doesn't explain why under Unraid the VMs detect a QEMU hypervisor and not KVM. QEMU can work in two modes: 1. as an emulator under KVM, responsible only for device emulation (this is how Unraid uses it); 2. as a standalone type 2 hypervisor performing software virtualization. Linux machines have a way to detect the type of virtualization in use; while it is not critical, they can tell whether they are running under QEMU as a hypervisor or under KVM. I have encountered KVM used as a condition in a service in a few places; since QEMU is almost never used as a standalone hypervisor, most implementations only check for a KVM hypervisor. (A sketch of how to check which mode is actually in use follows this list.) I found the following two articles explaining the differences: https://www.quora.com/Virtualization-What-is-the-difference-between-KVM-and-QEMU https://www.packetflow.co.uk/what-is-the-difference-between-qemu-and-kvm/ I'm going to set up KVM under Fedora to see if there are any differences.
  4. If I understand correctly, Unraid uses the KVM hypervisor, yet I noticed that systemd-detect-virt inside Linux VMs detects the hypervisor as QEMU and not as KVM. Looking at the VM templates, the domain type is KVM. I ran into a service which was designed to run only under a KVM environment, and also noticed that HassOS (the Home Assistant operating system, based on Alpine Linux) runs the QEMU guest agent only if the system is detected as KVM. I wonder if someone has more information about this, and whether it is possible to let the guest OS detect KVM by changing the VM template (the relevant template settings are sketched after this list).
  5. I'm having the issue with a non-encrypted BTRFS cache pool of two 500GB NVMe drives. Within a week about 1 TBW had accumulated on the drives. I moved the VM & Docker appdata to an XFS SSD and monitored the data closely by taking samples at various points of the day into Excel. To make sure there was no error in the TBW display I also calculated the TBW myself, which confirmed that the reported figures are correct: about 200GB per day on the BTRFS pool and about 14GB per day on the XFS SSD (one way to do the calculation is sketched after this list).
  6. I see that the topic refers to Docker running on the cache; I'm not running any Dockers on the cache (at least none that I can't stop for a few days). After reading this thread I checked my setup (it is a new setup, the first version used is 6.8.3) and noticed around 40MB/s of writes to the cache; a new drive has already accumulated 1.5TB of writes. I'm using two NVMe drives in a RAID1 cache pool. When I tested the system I had only one drive and did not notice high writes, but I might have missed it.
     Two VMs are running on the cache: Windows 10 with Blue Iris, which stores its data on the array, and HassOS (which uses MariaDB internally). My first assumption was that the writes were related to the DB inside the HassOS VM, so I installed MariaDB as a Docker and let it store its data on the cache; cache writes dropped from 40MB/s to around 5-6MB/s, which is still high. On the other hand, MariaDB itself only writes about 100-200KB/s to the array.
     Moving forward, I moved the whole HassOS VM + MariaDB data to an unassigned SSD (XFS); cache writes dropped to 1-2MB/s, which is still high, as the Windows VM has most of its services disabled and I doubt it writes that much data. Monitoring the HassOS VM + DB on the SSD using the LBAs-written counter showed about 6GB over 12 hours (around 140KB/s), while the cache, which has nothing on it besides the Win 10 VM, accumulated more than 20GB of writes in the same 12-hour period (one way to sample per-device write rates is sketched after this list).
     I am thinking of moving the Win 10 VM to the unassigned SSD as well, but I have no idea what the next step should be. My original plan was to use the BTRFS mirror on the cache as a sort of fault tolerance, but I doubt it will live long with such a high write rate.
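
For post 2: a minimal sketch of the kind of commands such a comparison involves, not necessarily the exact ones used in the post. The VM name "debian-test" and the file names are placeholders.

    # inside the guest (the same Debian image on both hosts)
    systemd-detect-virt                         # reports "qemu" under Unraid, "kvm" under Fedora

    # on each host
    virsh version                               # libvirt / QEMU versions quoted in the post
    virsh dumpxml debian-test > unraid-vm.xml   # repeat on the Fedora host into fedora-vm.xml
    diff -u unraid-vm.xml fedora-vm.xml         # shows the minor differences between the definitions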
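
For post 3: a minimal sketch of how to check whether a domain is actually using KVM acceleration or plain QEMU (TCG) software emulation. The VM name is a placeholder, and the guest-side result also depends on whether the KVM CPUID signature is exposed to the guest, which is an assumption here rather than something stated in the post.

    # host side: is KVM available and is the domain defined to use it?
    ls -l /dev/kvm                               # the device node exists when the kvm module is loaded
    lsmod | grep kvm                             # kvm_intel or kvm_amd should be listed
    virsh dumpxml debian-test | grep "<domain"   # type='kvm' vs type='qemu'

    # guest side
    grep -c hypervisor /proc/cpuinfo             # non-zero means running under some hypervisor
    systemd-detect-virt                          # "kvm", "qemu", etc.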
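
For post 4: a minimal sketch of the template settings worth inspecting. As far as I understand, the domain type, the CPU mode, and the optional <kvm><hidden state='on'/> feature (often enabled for GPU passthrough, and which masks the KVM signature from the guest) all influence what the guest detects; treat that as an assumption to verify. The VM name "HassOS" is assumed.

    # show the domain type, CPU mode and any KVM-hiding feature in the definition
    virsh dumpxml HassOS | grep -E "<domain |<cpu |<kvm>|hidden"

    # to experiment with the template, edit the domain XML directly
    virsh edit HassOS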
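
For post 5: the post does not say exactly how the TBW was cross-checked; this is a minimal sketch of one way to do it with smartmontools, assuming NVMe drives (each NVMe "data unit" is 1000 x 512 bytes) and that the field layout of the smartctl output matches current versions.

    # snapshot the raw counter (run twice, e.g. 24 hours apart, and compare)
    smartctl -A /dev/nvme0n1 | grep "Data Units Written"

    # convert the counter to gigabytes written
    DUW=$(smartctl -A /dev/nvme0n1 | awk '/Data Units Written/ {gsub(",","",$4); print $4}')
    echo "$((DUW * 512000 / 1000000000)) GB written so far"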
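
For post 6: a minimal sketch of how per-device write rates can be sampled to see where the ~40MB/s goes, assuming the sysstat package (iostat) is available; the device names are placeholders.

    # live write rates in MB/s, 60-second samples: the two cache NVMe devices vs the unassigned SSD
    iostat -m nvme0n1 nvme1n1 sdb 60

    # or take raw before/after snapshots straight from the kernel
    # (field 3 = device name, field 10 = sectors written, in 512-byte sectors)
    awk '{print $3, $10}' /proc/diskstats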