dnLL

Members
  • Posts: 216
  • Joined
  • Last visited

dnLL's Achievements: Explorer (4/14)
Reputation: 12
Community Answers: 2

  1. I did some digging thanks to the File Activity plugin combined with iostat.

     root@server01:~# iostat -mxd nvme0n1 -d nvme1n1
     Linux 6.1.74-Unraid (server01)  03/17/2024  _x86_64_  (16 CPU)

     Device   r/s  rMB/s rrqm/s %rrqm r_await rareq-sz   w/s  wMB/s wrqm/s %wrqm w_await wareq-sz   d/s  dMB/s drqm/s %drqm d_await dareq-sz  f/s f_await aqu-sz %util
     nvme0n1 1.30   0.10   0.00  0.00    0.64    76.38 94.35   2.97   0.00  0.00    0.72    32.26 25.29   2.76   0.00  0.00    2.61   111.85 4.48    2.10   0.14  8.74
     nvme1n1 1.32   0.10   0.00  0.00    0.63    76.47 98.80   2.97   0.00  0.00    0.51    30.81 25.29   2.76   0.00  0.00    2.44   111.85 4.48    2.09   0.12  8.56

     So, writing around 3 MB/s consistently according to iostat. With the File Activity plugin, I noticed the following:

     ** /mnt/user/domains **
     Mar 17 22:49:25 MODIFY => /mnt/cache/domains/vm-linux/vdisk1.img
     Mar 17 22:49:26 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
     ...
     Mar 17 22:49:26 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
     Mar 17 22:49:27 MODIFY => /mnt/cache/domains/vm-windows/vdisk1.img
     ...
     Mar 17 22:49:27 MODIFY => /mnt/cache/domains/vm-windows/vdisk1.img
     Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
     ...
     Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
     Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-dev/vdisk1.img
     ...
     Mar 17 22:49:28 MODIFY => /mnt/cache/domains/vm-dev/vdisk1.img

     ** Cache and Pools **
     Mar 17 22:44:13 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2
     ...
     Mar 17 22:44:29 MODIFY => /mnt/cache/domains/vm-automation/haos_ova-8.2.qcow2

     For instance, lots of writes are coming from my Home Assistant VM running HAOS. That would be because the data from HA is hosted locally on the VM rather than on an NFS share. That also explains why I'm getting a lot of writes without necessarily reading much back. I might want to work on that, and on similar issues with my other VMs. Thought I would post my findings just to help others with similar issues use the right tools to potentially find the problem. Obviously with the File Activity plugin I don't get the actual quantity of data being written, but it gives me a good idea.
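     For anyone who wants to put raw numbers on the write rate without installing anything extra, here's a minimal sketch reading the kernel's own counters (nvme0n1 and the 60-second interval are just examples):

     #!/bin/bash
     # Rough write-rate logger based on /proc/diskstats: field 10 is sectors
     # written (512-byte sectors), so the delta per interval gives MiB written.
     DEV=nvme0n1   # example device, adjust to your pool member
     prev=$(awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats)
     while sleep 60; do
         cur=$(awk -v d="$DEV" '$3 == d {print $10}' /proc/diskstats)
         echo "$(date '+%H:%M:%S')  $DEV  $(( (cur - prev) * 512 / 1024 / 1024 )) MiB written in the last 60s"
         prev=$cur
     done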
  2. I have a RAID1 cache pool of NVMe SSDs. I originally set it up with BTRFS but recently switched to ZFS. I'm trying to understand why my write usage is so much higher than my reads. Currently both SSDs are sitting at around 250 TiB written and 8 TiB read per the SMART report, with 395 days of uptime. On my cache are the VMs, the dockers and temporary cache storage for other shares. When I first installed my SSDs, I was plagued by an issue with dockers where I would get super high write usage (multiple TiB per day). This was eventually resolved. Now it's more or less following the same curve, with about 20 TiB of writes per month. That's about the size of my whole array (24 TB), which is obviously ridiculous. My questions are the following:

     1- When the mover is invoked, should there be a 1:1 ratio between the previous writes and the files being moved turning into reads?
     2- When loading VM disk files from the cache, I assume it reads the whole file? Does every read/write within the VM get transferred to the VM disk file (in this case, on my cache)?
     3- Anything particular pertaining to dockers that I should be aware of concerning high write usage? I'm using the folder option for docker storage.

     Sent from my Pixel 7 Pro using Tapatalk
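     If anyone wants to track how fast those SMART counters actually grow between mover runs, a rough sketch like this (device and log paths are examples; per the NVMe spec one data unit is 512,000 bytes) can be dropped into a daily user script:

     #!/bin/bash
     # Append the current "Data Units Written" value to a log so the
     # month-over-month growth is easy to eyeball.
     DEV=/dev/nvme0                      # example device path
     LOG=/boot/logs/nvme-writes.log      # example log location on the flash
     units=$(smartctl -A "$DEV" | awk '/Data Units Written/ {gsub(/,/, "", $4); print $4}')
     echo "$(date '+%F %T')  $DEV  $units data units  (~$(( units * 512000 / 1024**4 )) TiB total)" >> "$LOG"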
  3. Thanks for pointing this out. Obviously when grepping the entire config, I'm not saying "delete every mention of X" but more "hey, this might send you in the right direction". It might not necessarily be the dockers; it could be a forgotten user script or some plugin, hence the wider grep. Sent from my Pixel 7 Pro using Tapatalk
  4. This happened to me as well, but only when the network gets disconnected after boot. Reconnecting the network doesn't fix the issue; however, rebooting with the network correctly plugged in will prevent the errors from showing in the logs. I've been having issues recently on 6.12.7 and 6.12.8 with Unraid crashing when the network gets disconnected (i.e. the router rebooting), so I was doing some tests (but sadly wasn't able to reproduce it). I'm not sure any of these kernel errors are related to the crash issues, but that's my only lead for now. It's probably important to note that I'm using IEEE 802.3ad dynamic link bonding over 2 interfaces (LACP), and it only happens when the connection to both interfaces is lost simultaneously.
  5. Thank you!! I'm aware this is a 7-year-old thread, but I just wanted to report that this was my exact issue: I had an unused path in a docker trying to access my deleted share, and it would just `mkdir` it when it didn't exist, bringing the share back every damn reboot. To help find the issue (replace `share` with your share name):

     root@Tower:~# grep -R "/share" /boot/config

     You might have to exclude some results by adding an additional `| grep -v something` at the end; for example, if you use the Dynamix File Integrity plugin you might get some false positives here.
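     Two variations of the same idea that can help narrow things down (the exclude pattern and paths are examples only; adjust them to whatever noise your system produces):

     # Filter out false positives, e.g. hash exports from a file-integrity plugin:
     grep -R "/share" /boot/config | grep -v "dynamix.file.integrity"
     # Docker templates specifically live under dockerMan on a stock setup:
     grep -R "/share" /boot/config/plugins/dockerMan/templates-user/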
  6. As someone who has never used the alternatives you mention, what is it about the Unraid UI that you don't like? Personally, I think the settings/tools pages could be improved; for some things I never remember whether they're a tool or a setting (it's not always obvious), and the categories within both pages don't really help. The docker and VM pages are pretty good IMO. The dashboard is a lot better now as well. I guess the overall theme could be changed completely to a more traditional and modern vertical menu with a separate mobile UI (or a dedicated official app), but that doesn't bother me too much.
  7. So how does this work exactly? When I reset my BMC, it doesn't update the fans anymore. Nothing shows in the logs; am I supposed to see something every 30s?
  8. Great, very happy to read this. I checked with lsof and, interestingly enough, nothing is returned for /mnt/cache since every docker is stopped. I should have checked with lsof on /dev/loop2; I thought about it too late. Anyways. Sent from my Pixel 7 Pro using Tapatalk
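     For next time, the checks I should have run before giving up (loop2 is just an example device):

     lsof /dev/loop2        # anything still holding the loop device open
     fuser -vm /mnt/cache   # processes still using the mount itself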
  9. Yup, same issue on Unraid 6.12.2 (while trying to reboot to install 6.12.3). I always manually stop my dockers/VMs before stopping the array and rebooting; sadly, the array wouldn't stop.

     Jul 14 23:16:52 server emhttpd: Unmounting disks...
     Jul 14 23:16:52 server emhttpd: shcmd (38816): umount /mnt/cache
     Jul 14 23:16:52 server root: umount: /mnt/cache: target is busy.
     Jul 14 23:16:52 server emhttpd: shcmd (38816): exit status: 32
     Jul 14 23:16:52 server emhttpd: Retry unmounting disk share(s)...
     Jul 14 23:16:57 server emhttpd: Unmounting disks...
     Jul 14 23:16:57 server emhttpd: shcmd (38817): umount /mnt/cache
     Jul 14 23:16:57 server root: umount: /mnt/cache: target is busy.
     Jul 14 23:16:57 server emhttpd: shcmd (38817): exit status: 32
     Jul 14 23:16:57 server emhttpd: Retry unmounting disk share(s)...
     Jul 14 23:17:02 server emhttpd: Unmounting disks...
     Jul 14 23:17:02 server emhttpd: shcmd (38818): umount /mnt/cache
     Jul 14 23:17:02 server root: umount: /mnt/cache: target is busy.
     Jul 14 23:17:02 server emhttpd: shcmd (38818): exit status: 32
     Jul 14 23:17:02 server emhttpd: Retry unmounting disk share(s)...

     Unmounting /dev/loop2 immediately fixes it.
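     Roughly what that manual fix looks like (confirm the loop number with losetup first; on my box the docker image happened to be on loop2):

     losetup -a | grep -i docker   # find which /dev/loopN is backed by docker.img
     umount /dev/loop2             # release it so /mnt/cache is no longer "target is busy"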
  10. This is lovely. It can work as a user script as well. Be sure to read the parameters carefully on the Docker Hub page if you want to be able to stress test with or without completely saturating your CPU/memory.
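      Not the exact container from this thread, but as a generic illustration of the kind of parameters to watch for, a stress-ng run lets you choose how many CPU workers and how much memory to pin (image name and numbers are examples only):

      # 4 CPU workers, 2 memory workers of 2G each, stop after 2 minutes and print a summary:
      docker run --rm -it alexeiled/stress-ng \
          --cpu 4 --vm 2 --vm-bytes 2G --timeout 120s --metrics-brief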
  11. I would really love to hear more opinions on this, just watched @SpaceInvaderOne's video on how to easily reformat the cache pool. Been using BTRFS for a while.
  12. I'm surprised this hasn't been answered, as it comes up pretty high on search engines. Anyways. The important thing is to understand what we are dealing with here. For instance, here is what my /var/log looks like currently:

      root@server:~# df -h /var/log
      Filesystem      Size  Used Avail Use% Mounted on
      tmpfs           128M  105M   24M  83% /var/log
      root@server:~# du -ahx /var/log | sort -hr | head
      105M    /var/log
      50M     /var/log/syslog.2
      38M     /var/log/nginx/error.log.1
      38M     /var/log/nginx
      8.3M    /var/log/samba
      7.3M    /var/log/syslog
      5.6M    /var/log/samba/log.rpcd_lsad
      2.7M    /var/log/samba/log.samba-dcerpcd
      1.4M    /var/log/syslog.1
      704K    /var/log/pkgtools

      So, what is going on here? Long uptime, mover logging enabled, and... a bunch of nginx errors. I'm not sure why, but nginx crashed (the webUI became weirdly unresponsive); it took me a while to notice, as I wasn't specifically monitoring the content of that log file and hadn't visited the webUI in a while either. Anyways, of course cleaning things up here would be good enough; at the same time, 128M is a very small amount of space. You don't want /var/log to fill up your memory, but you also don't want to lose your syslog in the event of an issue similar to the nginx one I had. Basically, as long as you have plenty of memory available, it's safe to expand it to, say, 512M or even 1G; by 2023's standards, keeping 1G of logs isn't that bad. Using a proper syslog server would definitely offer better options for long-term logging and archiving.
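      For reference, growing the tmpfs on a running system is a one-liner (512m is an example size; it only lasts until the next reboot unless you script it, e.g. from the go file):

      mount -o remount,size=512m /var/log
      df -h /var/log   # confirm the new size took effect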
  13. Still a problem 5 years later. I moved back to an XFS docker image because of this. The subvolumes never get deleted; if you stop docker, the subvolumes are just unmounted. The best way is to delete them from your directory and restart from scratch... which sucks.
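      Roughly what the manual cleanup looks like if you do stay on BTRFS (the docker path is an example and <subvolume> is a placeholder; stop the Docker service first, and keep in mind deleting subvolumes is destructive):

      # List leftover subvolumes created by docker's btrfs storage driver:
      btrfs subvolume list /mnt/cache | grep btrfs/subvolumes
      # Delete the ones you are sure are orphaned:
      btrfs subvolume delete /mnt/cache/docker/btrfs/subvolumes/<subvolume>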
  14. Same issue, same fix. Lost 2 hours restoring backups while trying to figure out why moving the vdisk was problematic. Unraid should really add a "type" field for people using the UI rather than the XML; it would make it so obvious. I'm not sure why they wouldn't support creating other types of vDisks (such as qcow2) anyway.
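      For anyone hitting this, the quick way to spot the mismatch (paths are examples): the on-disk image format has to match the driver type in the VM's XML.

      qemu-img info /mnt/user/domains/myvm/vdisk1.qcow2   # reports "file format: qcow2" (or raw)
      # In the VM's XML (virsh edit <vm name>), the corresponding line is:
      #   <driver name='qemu' type='raw'/>     for a raw .img vdisk
      #   <driver name='qemu' type='qcow2'/>   for a qcow2 vdisk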
  15. Didn't think about that, thank you.