Unpack5920

Everything posted by Unpack5920

  1. After some hours of investigation I found the reason for my issue: I'm using a USB 3.2 enclosure, an Icy Box IB-1817M-C31. After pulling its plug, the I/O waits and the CPU utilization were gone. I then did a firmware update of the enclosure, which has solved the issue for now.
  2. Me too. In Grafana I'm investigating exactly the same thing, but tools like iotop, htop, top or btop don't seem to show it. When the CPU is almost idle, it looks like this; as soon as the CPU gets some load, the behavior vanishes, and once the load drops again, it comes back. Interestingly, the behavior correlates with I/O waits. It does not matter whether the Docker service (with all the containers) and the VM Manager are running or not; it is always exactly the same. This is a six-hour perspective:

     My setup:
       Unraid system: Unraid Server Plus, version 6.12.5
       Motherboard: Intel Corporation NUC12WSBi7, Version: M46422-303
       Processor: 12th Gen Intel® Core™ i7-1260P @ 3762 MHz
       All drives are sleeping except my KINGSTON_SKC3000D2048G_50026B768671EA03 (nvme0n1) with the ZFS pool

     Either it is a kernel issue or some kind of ghostly "organizer" process. I tried intel_pstate=passive in syslinux.cfg, but it did not have any effect on this behavior.
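     For cross-checking the Grafana graphs from the console, the classic tools below report I/O wait directly (a sketch; iostat comes from the sysstat package, which on Unraid is typically added via NerdTools):

     # per-device utilization and wait times, refreshed every 5 seconds
     iostat -x 5
     # the "wa" column is the share of CPU time spent waiting on I/O
     vmstat 5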
  3. Thank you Manfreezer, I can confirm it is working for me now with 6.12.3.

     root@TOWER:/# mount | grep tmpfs | grep docker
     tmpfs on /var/lib/docker/containers type tmpfs (rw,relatime,inode64)
     tmpfs on /mnt/cache/system/docker/containers type tmpfs (rw,relatime,inode64)
  4. Thank you for your reply. Sounds good. Regarding GUI/CLI for the ZFS setup, I'll use the GUI as far as possible. Regarding Docker: I have my own file structure and don't use the unRAID Docker capabilities (appdata folder, ...).

     root@jupiter:/mnt/user/docker/filebrowser# ll
     total 12
     drwxrwxrwx 1 nobody users   86 Jan  1 22:11 ./
     drwxrwxrwx 1 nobody users 4096 Jun  9 00:27 ../
     drwxrwxrwx 1 nobody users   97 Jan  1 22:11 config/
     drwxrwxrwx 1 nobody users  109 Jan  1 22:11 database/
     -rw-rw-rw- 1 nobody users 1467 Mar  5 00:25 docker-compose.yml

     Any advice whether to wait for 6.12 or to start now with ZFS Master and other plugins?
  5. In this YouTube video (@27:00) by @SpaceInvaderOne I saw one ZFS drive within the Unraid array. This made me think …

     Unraid Array                                Pool
     +---------------------------------+         +------------+
     | +--------+ +--------+ +-------+ |         | +--------+ |
     | | HDD    | | HDD    | | HDD   | |         | | NVME   | |
     | | 14 TB  | | 14 TB  | | 5 TB  | |         | | 2 TB   | |
     | | parity | | data   | | data  | |zfs send | | mirror | |
     | |        | | XFS    | | ZFS   | |<--------| | & data | |
     | |        | |        | | pool2 | |         | | ZFS    | |
     | |        | |        | |       | |         | | pool1  | |
     | +--------+ +--------+ +-------+ |         | +--------+ |
     +---------------------------------+         +------------+

     Goals:
       • an energy- and cost-efficient way to have parity for my cache pool
       • use of ZFS data compression
       • fast and easy delta data mirroring (zfs send)
       • use of ZFS snapshots in the pool
       • an additional parity within the Unraid array for free
       • an easily expandable cache pool
       • better performance?

     One-time setup:
       Limit ZFS memory usage to 16 GB (set in the go file)
         > sysctl -w vfs.zfs.arc_max=17179869184
       Create a ZFS pool with a single 2 TB NVMe as cache
         > zpool create zfs-cache-pool <disk-name>
       Create a ZFS pool with a single 5 TB HDD within the Unraid array
         > zpool create zfs-array-pool <disk-name>
       Create a dataset "docker" within the zfs-array-pool
         > zfs create zfs-array-pool/docker
       Create a dataset "docker" within the zfs-cache-pool
         > zfs create zfs-cache-pool/docker
       Enable dataset compression
         > zfs set compression=lz4 zfs-cache-pool/docker
       Enable trim for the SSD
         > zpool set autotrim=on zfs-cache-pool

     Cron:
       Each hour, create a snapshot of the docker dataset
         > zfs snapshot zfs-cache-pool/docker@<snapshot-name>
       In the evening, copy the changes of zfs-cache-pool/docker to the zfs-array-pool
         > zfs send zfs-cache-pool/docker@<snapshot-name> | zfs receive zfs-array-pool/docker
       Check the data transfer
         > zfs list -r zfs-array-pool/docker

     My questions:
       Does this work and does it make sense?
       Am I missing something (major downsides)?
       Does this work with Unraid 6.12?
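     A minimal sketch of the evening cron job, combining a fresh snapshot with the replication step (it assumes the pool and dataset names above; the timestamped snapshot name and the incremental-send check are my own additions and untested):

     #!/bin/bash
     # Take a timestamped snapshot of the cache dataset.
     SNAP="zfs-cache-pool/docker@$(date +%Y%m%d-%H%M)"
     zfs snapshot "$SNAP"

     # Find the newest snapshot the array-side dataset already has.
     LAST=$(zfs list -H -t snapshot -o name -s creation -d 1 zfs-array-pool/docker | tail -n 1 | cut -d@ -f2)

     if [ -n "$LAST" ]; then
         # Send only the delta since the last replicated snapshot.
         zfs send -i "@$LAST" "$SNAP" | zfs receive -F zfs-array-pool/docker
     else
         # First run: send the full stream.
         zfs send "$SNAP" | zfs receive -F zfs-array-pool/docker
     fi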
  6. Issue starting iotop. Thank you in advance for your support!

     # iotop
     libffi.so.7: cannot open shared object file: No such file or directory
     To run an uninstalled copy of iotop, launch iotop.py in the top directory

     # ls -l /usr/lib64/libffi*
     lrwxrwxrwx 1 root root    15 Nov  4 22:31 libffi.so -> libffi.so.8.1.2*
     lrwxrwxrwx 1 root root    15 Nov  4 22:31 libffi.so.8 -> libffi.so.8.1.2*
     -rwxr-xr-x 1 root root 43432 Oct 24 20:38 libffi.so.8.1.2*

     iotop-0.6-x86_64-2_SBo.txz is up to date (latest NerdTools 2022.11.01).
  7. Hi, the blacklist is not a good idea. If i915 is in use, you should ensure that Plex is stopped before you make the changes. Try these lines in the console:

     rmmod i915
     echo "options i915 enable_fbc=1 enable_guc=3" > /etc/modprobe.d/i915.conf
     modprobe i915

     cat /etc/modprobe.d/i915.conf should show you the new settings for the driver, which will be loaded with "modprobe i915". Looks like HW acceleration works fine for you. The settings in the "go" file should ensure that the changes are reapplied automatically after a reboot. What kind of error message do you get?
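     To verify that the reloaded module actually picked up the options, the current values can be read back (a sketch; these sysfs parameter files exist on recent kernels, but the exact set varies by kernel version):

     cat /sys/module/i915/parameters/enable_fbc
     cat /sys/module/i915/parameters/enable_guc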
  8. Small feature request: could you please sort the User Scripts list by name? I use script names like:

       backup-share_docker
       backup-share_appdata
       cleanup-delete_ds_store
       power-force_sleep
       ...

     Sorting would group them accordingly. Thank you in advance.
  9. Hi folks, just want to share this one: I had big issues getting HW acceleration to work. This is my setup: ASUS PN41 with Intel Celeron N5100 (a very nice mini PC). The Docker setup and the permissions of /dev/dri were correct, but nevertheless HWA was not working. I spent hours investigating this. Finally I got it. Do the following if you have a similar setup:

     rmmod i915
     echo "options i915 enable_fbc=1 enable_guc=3" > /etc/modprobe.d/i915.conf
     modprobe i915

     Then check Plex again. Works like a charm for me now. For persistence, add the "echo" line to /boot/config/go. Good luck!

     -------

     One thing I forgot to mention: inside the Docker container I also installed "vainfo" to check whether the GPU and driver are working as expected. This seems to install some drivers which help to enable HW acceleration:

     apt install vainfo    (within the Docker container)

     To make this persistent, check this URL for the linuxserver.io container: https://docs.linuxserver.io/general/container-customization

     This is the script I run on each container start:

     #!/bin/bash
     echo "##### Install vainfo packages"
     dpkg -l vainfo > /dev/null 2>&1 || ( apt update && apt install -y vainfo && apt clean )
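     A sketch of the two persistence pieces (the echo line comes from the post itself; the host folder, the script name install-vainfo.sh and the mapping to /custom-cont-init.d are assumptions based on the linuxserver.io documentation linked above):

     # Line to append to /boot/config/go so the module option survives a reboot:
     echo "options i915 enable_fbc=1 enable_guc=3" > /etc/modprobe.d/i915.conf

     # Host side: keep the install script in a folder that is mapped into the
     # container as /custom-cont-init.d (read-only); linuxserver.io images run
     # every script in that directory on container start.
     mkdir -p /mnt/user/appdata/plex/custom-cont-init.d
     cp install-vainfo.sh /mnt/user/appdata/plex/custom-cont-init.d/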