dnLL

Community Answers

  1. dnLL's post in Understanding cache reads/writes was marked as the answer   
    I did some digging thanks to the File Activity plugin combined with iostat.
     
    root@server01:~# iostat -mxd nvme0n1 -d nvme1n1
    Linux 6.1.74-Unraid (server01)  03/17/2024  _x86_64_  (16 CPU)

    Device     r/s  rMB/s  rrqm/s  %rrqm  r_await  rareq-sz    w/s  wMB/s  wrqm/s  %wrqm  w_await  wareq-sz    d/s  dMB/s  drqm/s  %drqm  d_await  dareq-sz   f/s  f_await  aqu-sz  %util
    nvme0n1   1.30   0.10    0.00   0.00     0.64     76.38  94.35   2.97    0.00   0.00     0.72     32.26  25.29   2.76    0.00   0.00     2.61    111.85  4.48     2.10    0.14   8.74
    nvme1n1   1.32   0.10    0.00   0.00     0.63     76.47  98.80   2.97    0.00   0.00     0.51     30.81  25.29   2.76    0.00   0.00     2.44    111.85  4.48     2.09    0.12   8.56
    So, the pool is writing around 3 MB/s consistently according to iostat (one sampling caveat below).
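    One caveat if you're reproducing this: iostat's very first report shows averages since boot, not current activity. To watch live rates, give it an interval and ignore the first report; a minimal sketch, assuming sysstat's iostat and the same device names:

    # Extended stats (-x) in MB/s (-m), sampled every 5 seconds;
    # skip the first report, which averages since boot
    iostat -mx 5 nvme0n1 nvme1n1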
     
    With the File Activity plugin, I noticed the following:

    ** /mnt/user/domains **
    Mar 17 22:49:25 MODIFY => /mnt/cache/domains/vm-linux/vdisk1.img
    Mar 17 22:49:26 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
    ...
    Mar 17 22:49:26 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
    Mar 17 22:49:27 MODIFY => /mnt/cache/domains/vm-windows/vdisk1.img
    ...
    Mar 17 22:49:27 MODIFY => /mnt/cache/domains/vm-windows/vdisk1.img
    Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
    ...
    Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
    Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-dev/vdisk1.img
    ...
    Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-dev/vdisk1.img

    ** Cache and Pools **
    Mar 17 22:44:13 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
    ...
    Mar 17 22:44:29 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
    For instance, lots of writes are coming from my Home Assistant VM running HAOS. That would be because the HA data is hosted locally on the VM rather than on an NFS share, which also explains why I'm getting a lot of writes without a corresponding amount of reads. I might want to work on that, and on similar issues with my other VMs; a quick per-process check is sketched below.
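    To pin down which process is doing the writing (each VM shows up as its own qemu process), something like iotop works too; a minimal sketch, assuming iotop is available (it may need to be installed separately on Unraid):

    # Show only processes actually doing I/O (-o) and accumulate
    # totals since start (-a); run as root
    iotop -oa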
     
    Thought I would post my findings just to help others with similar issues find the problem with the right tools. Obviously, the File Activity plugin doesn't tell me the actual quantity of data being written, but it gives me a good idea of where the writes are going.
  2. dnLL's post in In 2022, is it still good to increase log size? was marked as the answer   
    I'm surprised this hasn't been answered, as it ranks pretty high in search results. Anyways.
     
    The important thing is to understand what we are dealing with here. For instance, here is what my /var/log looks like currently:
     
    root@server:~# df -h /var/log
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs           128M  105M   24M  83% /var/log
    root@server:~# du -ahx /var/log | sort -hr | head
    105M    /var/log
    50M     /var/log/syslog.2
    38M     /var/log/nginx/error.log.1
    38M     /var/log/nginx
    8.3M    /var/log/samba
    7.3M    /var/log/syslog
    5.6M    /var/log/samba/log.rpcd_lsad
    2.7M    /var/log/samba/log.samba-dcerpcd
    1.4M    /var/log/syslog.1
    704K    /var/log/pkgtools
    So, what is going on here? Long uptime, mover logging enabled, and... a bunch of nginx errors. I'm not sure why, but nginx crashed (the webUI became weirdly unresponsive), and it took me a while to notice since I wasn't specifically monitoring that log file and hadn't visited the webUI in a while either. In my case, cleaning things up would be good enough, but at the same time 128M is a very small amount of space. You don't want /var/log to fill up your memory, but you also don't want to lose your syslog in the event of an issue like the nginx one I had. Basically, as long as you have plenty of memory available, it's safe to expand it to, say, 512M or even 1G; by 2023's standards, keeping 1G of logs isn't that bad. A proper syslog server would offer a better solution for long-term logging and archiving. Both the cleanup and the resize are sketched below.
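    For reference, /var/log on Unraid is a tmpfs, so it can be cleaned up and resized live; a minimal sketch (the rotated-log names are the ones from my box above, and 512M is just an example value):

    # Reclaim space by deleting rotated logs that are no longer needed
    rm /var/log/syslog.2 /var/log/nginx/error.log.1

    # Grow the tmpfs in place; a remount only lasts until reboot, so to
    # persist it, re-apply it at boot (e.g. from /boot/config/go)
    mount -o remount,size=512m /var/log

    And if you go the syslog-server route, the classic rsyslog forwarding rule is a one-liner (the collector address here is hypothetical; recent Unraid versions also expose remote syslog in the GUI settings):

    # /etc/rsyslog.conf - forward everything over TCP (@@) to a remote collector
    *.* @@192.168.1.10:514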