capino

Everything posted by capino

  1. After some more digging, I found this pull request in the VIM GitHub repository: https://github.com/vim/vim/pull/12996 I also found a newer version of VIM (9.0.2127) on https://packages.slackware.com/. This version of VIM needs libsodium version 1.0.19, which can also be found on https://packages.slackware.com/. But after manually installing these two packages, I got the following errors:
vim: /lib64/libc.so.6: version `GLIBC_2.38' not found (required by vim)
vim: /lib64/libm.so.6: version `GLIBC_2.38' not found (required by vim)
This suggests that glibc also needs to be upgraded to version 2.38, but I'm not sure how that would affect the working of UnRaid itself.
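For reference, the GLIBC versions the new binary actually requires can be listed and compared against the glibc installed on the host. This is just a quick sketch: the vim path below is an assumption, and objdump requires binutils to be available.
# list the GLIBC symbol versions the new vim binary was linked against (path is an example)
objdump -T /usr/local/bin/vim | grep -o 'GLIBC_[0-9.]*' | sort -uV
# show the glibc version currently installed on the host
ldd --version | head -n 1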
  2. Did you manage to fix this problem? For some time now I have also had this issue with vim (9.0.1672) on UnRaid 6.12.4.
  3. Reinstalling the plugin seems to resolve the problem (without a reboot in between). During the reinstall, driver version v535.104.05 was downloaded. I still have to reboot the server; that will be done sometime this weekend, since I'm not at home at the moment.
  4. Just updated the plugin to the latest version (2023.08.31), but I cannot update to any driver version anymore. There is a deprecation notification now. When I try to update to the latest driver, I get the following message:
  5. I did some quick testing on 6.12.2 and it seems to be fixed. When opening a docker console, only one pts session is allocated. This allocation is closed when exiting the session by running the exit command on the first try (in 6.11.x I sometimes had to run the exit command twice). The pts allocation is also closed when the browser tab is closed without running the exit command.
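For anyone repeating this test, the number of allocated pty's can be watched live while opening and closing console sessions. A quick sketch with standard tools:
# refresh the count of allocated pseudo terminals every second
watch -n 1 'ls /dev/pts | wc -l'
# the kernel also exposes the number of ptys currently in use
sysctl kernel.pty.nr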
  6. When using a web browser to open a docker container console, multiple pty's are allocated, but not all of them are deallocated on close. This can lead to not being able to open a pty once the number of allocated pty's gets too high, with the following message:
OCI runtime exec failed: exec failed: unable to start container process: open /dev/ptmx: no space left on device: unknown
A reboot is needed to free up the allocated pty's. A workaround is to increase the maximum number of pty's by running:
sysctl -w kernel.pty.max=8192
I have tested this in Firefox, Brave, Safari and Chrome.
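The current limit and usage can be checked before raising the maximum. Since the sysctl change does not survive a reboot, it could be re-applied at boot on Unraid, for example from the go file; whether that fits your setup is an assumption.
# show the current pty maximum and the number currently in use
sysctl kernel.pty.max kernel.pty.nr
# re-apply the workaround at boot by appending it to the go file
echo 'sysctl -w kernel.pty.max=8192' >> /boot/config/go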
  7. I did some additional testing and it looks like, when a docker container console session is opened in a browser, multiple pty's are allocated, but only one gets deallocated on close. When I open an Unraid console session, only one pty is allocated, but sometimes the browser window does not close on running exit (a new pty is opened instead). Running exit in this second (new) session keeps both allocated. Using ssh through a remote terminal does not seem (based on some quick tests for now) to keep pty's allocated.
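To dig further into which processes are still holding those pty's open, something like the following could help; it assumes lsof (or fuser from psmisc) is available on the system.
# list the processes that still have a pseudo terminal open (PID is in the second column)
lsof /dev/pts/*
# alternative if lsof is not installed
fuser -v /dev/pts/*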
  8. After doing loads of Google searches, I finally found the correct syntax and a workaround. It seemed like the pts sessions were not correctly closed or deallocated; the number of pseudo terminals was very high. By running
ls /dev/pts | wc -l
I see there are 3432. As a workaround, I raised the maximum by running
sysctl -w kernel.pty.max=8192
I'm not sure if this is because of something I did while creating my self-built container, or whether it is due to an old kernel bug (https://lkml.org/lkml/2009/11/5/370).
  9. After testing a self-built container, I'm no longer able to open the console of any of the running dockers. I have already removed the self-built container and all unused volumes and images. When trying to connect to the console of a docker, I get the following error:
OCI runtime exec failed: exec failed: unable to start container process: open /dev/ptmx: no space left on device: unknown
I have also tried to stop and start docker, but this did not help. All running dockers seem to be running without problems. Does anybody have any idea what could have caused this issue and how to resolve it? P.S. These are the values of the docker container sizes: Total size 29.3 GB / 7.05 GB / 2.19 GB
  10. I already use "/mnt/pool_name/share" for most configurations in dockers, except for systems where the data is spread over the disk array. Last night I had stopped the Duplicati docker, so that could not be the docker that caused this problem. It looked like the problem last night started at the moment the "Auto Update Applications" app checked for available docker updates; the auto update itself had not started yet.
  11. I thought everything was working as expected, but just a few minutes ago the load went up again and there were a lot of processes in uninterruptible sleep. After stopping Docker (/etc/rc.d/rc.docker stop) the load went down again. I only had a moment to watch docker stats, but no dockers were running at high CPU. I was able to create a diagnostic, which can be found below. I downgraded back to 6.9.2. optimus-diagnostics-20220616-0035.zip
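For anyone hitting the same thing: the processes stuck in uninterruptible sleep, and the kernel function they are waiting in, can be listed with ps. Just a generic sketch:
# show processes in uninterruptible sleep (stat starting with D) plus their kernel wait channel
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'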
  12. Just updated to 6.10.3 and in this version my problem seems to be resolved. I tested by simultaneously running:
- a duplicati backup to OneDrive
- a parity check
- streaming a video
- some other high I/O processes
  13. After upgrading from 6.9.2 to 6.10.2, my server hangs during my backup to OneDrive using the duplicati docker. While the files are being read (about 120GB and 1 million files) I noticed that there are processes in uninterruptible sleep (ps aux with stat D). After a while, the number of processes in uninterruptible sleep gets so high that the server becomes unresponsive. I noticed some of the workers are in an uninterruptible sleep state. At that point, the only thing to do is to stop the duplicati docker, and then everything comes back. If the duplicati docker is not stopped shortly after the load becomes too high, the server becomes unresponsive altogether and only a hard reboot gets unRaid working again. The same happened when upgrading to 6.10.0, so I downgraded back to 6.9.2. Since this also happened in 6.10.2, my server is back on 6.9.2. I imagine the problem has something to do with I/O. Is there a solution for this? Maybe restrict the I/O for the duplicati docker, or something overall?
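On the idea of restricting I/O: Docker itself has per-container block-I/O throttling options that could be tried as Extra Parameters on the duplicati container. This is only a sketch; the device path and numbers are placeholders and would have to match the disk or pool that actually backs the container.
# cap read/write bandwidth of the container on a specific block device (values are examples)
--device-read-bps=/dev/sdX:50mb --device-write-bps=/dev/sdX:50mb
# or lower the container's relative block-I/O priority (10-1000, default 500)
--blkio-weight=100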
  14. It had to do with the fact that Unraid cannot talk to dockers with a static IP. I changed Elasticsearch, MongoDB and Graylog to the host IP and now it works.
  15. I'm running unRaid 6.9.2 and tried the extra parameters, but nothing is landing in Graylog:
--log-driver=syslog --log-opt tag="radarr" --log-opt syslog-address=udp://192.168.1.17:5442
My Graylog server is running as a docker on IP 192.168.1.17. When I do the same from docker on my MacBook, the logs land in Graylog. I also tried the GELF log driver, but I have the same problem within UnRaid, while from the MacBook it works. Does anybody have a solution for this?
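For completeness, the GELF attempt looked roughly like this; the port is an assumption based on Graylog's default GELF UDP input and has to match the input configured in Graylog.
--log-driver=gelf --log-opt gelf-address=udp://192.168.1.17:12201 --log-opt tag="radarr"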
  16. Thanks for the replies. I disabled the mover tuning plugin and stopping the array finished within minutes. The problem is resolved for now.
  17. I will try that this afternoon and let you know.
  18. Attached is the diagnostics without the syslog. optimus-diagnostics-20210511-1748 zonder syslog.zip
  19. This morning I stopped my array, but it first started to move all data from my cache pool before stopping the array. I know that the mover has nothing to do with a parity check, but during the reboot that is part of an update, the array stops while the system is not shut down correctly (probably because some processes are killed during the reboot, so not all unmounts are done in time). So after the reboot the system does a parity check. Here is part of the syslog after I initiated the array stop:
May 11 07:13:22 Optimus root: stopping dockerd ...
May 11 07:13:23 Optimus root: waiting for docker to die ...
May 11 07:13:24 Optimus emhttpd: shcmd (5052298): umount /var/lib/docker
May 11 07:13:26 Optimus cache_dirs: Stopping cache_dirs process 19517
May 11 07:13:27 Optimus cache_dirs: cache_dirs service rc.cachedirs: Stopped
May 11 07:13:27 Optimus unassigned.devices: Unmounting All Devices...
May 11 07:13:27 Optimus emhttpd: shcmd (5052299): /etc/rc.d/rc.samba stop
May 11 07:13:27 Optimus emhttpd: shcmd (5052300): rm -f /etc/avahi/services/smb.service
May 11 07:13:27 Optimus emhttpd: Stopping mover...
May 11 07:13:27 Optimus emhttpd: shcmd (5052302): /usr/local/sbin/mover stop
May 11 07:13:27 Optimus root: mover: started
May 11 07:13:27 Optimus move: move: file /mnt/download_pool/downloads/complete/file1.mkv
May 11 07:13:28 Optimus move: move: file /mnt/download_pool/downloads/complete/file2.mkv
May 11 07:13:29 Optimus move: move: file /mnt/download_pool/downloads/complete/file.mkv
...
May 11 08:04:47 Optimus move: move: file /mnt/download_pool/downloads/incomplete/file899
May 11 08:04:47 Optimus move: move: file /mnt/download_pool/downloads/incomplete/file900
May 11 08:04:47 Optimus root: mover: finished
May 11 08:04:47 Optimus emhttpd: Sync filesystems...
May 11 08:04:47 Optimus emhttpd: shcmd (5052303): sync
May 11 08:06:05 Optimus emhttpd: spinning down /dev/sdc
May 11 08:06:06 Optimus emhttpd: shcmd (5052305): umount /mnt/user0
May 11 08:06:06 Optimus emhttpd: shcmd (5052306): rmdir /mnt/user0
May 11 08:06:06 Optimus emhttpd: shcmd (5052307): umount /mnt/user
May 11 08:06:06 Optimus emhttpd: shcmd (5052308): rmdir /mnt/user
May 11 08:06:06 Optimus emhttpd: shcmd (5052310): /usr/local/sbin/update_cron
May 11 08:06:06 Optimus emhttpd: Unmounting disks...
May 11 08:06:06 Optimus emhttpd: shcmd (5052311): umount /mnt/disk1
May 11 08:06:06 Optimus kernel: XFS (md1): Unmounting Filesystem
May 11 08:06:06 Optimus emhttpd: shcmd (5052312): rmdir /mnt/disk1
May 11 08:06:06 Optimus emhttpd: shcmd (5052313): umount /mnt/disk2
May 11 08:06:07 Optimus kernel: XFS (md2): Unmounting Filesystem
May 11 08:06:07 Optimus emhttpd: shcmd (5052314): rmdir /mnt/disk2
May 11 08:06:07 Optimus emhttpd: shcmd (5052315): umount /mnt/disk3
May 11 08:06:07 Optimus kernel: XFS (md3): Unmounting Filesystem
May 11 08:06:08 Optimus emhttpd: shcmd (5052316): rmdir /mnt/disk3
May 11 08:06:08 Optimus emhttpd: shcmd (5052317): umount /mnt/disk4
May 11 08:06:08 Optimus kernel: XFS (md4): Unmounting Filesystem
May 11 08:06:08 Optimus emhttpd: shcmd (5052318): rmdir /mnt/disk4
May 11 08:06:08 Optimus emhttpd: shcmd (5052319): umount /mnt/disk5
May 11 08:06:09 Optimus kernel: XFS (md5): Unmounting Filesystem
May 11 08:06:09 Optimus emhttpd: shcmd (5052320): rmdir /mnt/disk5
May 11 08:06:09 Optimus emhttpd: shcmd (5052321): umount /mnt/disk6
May 11 08:06:09 Optimus kernel: XFS (md6): Unmounting Filesystem
May 11 08:06:13 Optimus emhttpd: shcmd (5052322): rmdir /mnt/disk6
May 11 08:06:13 Optimus emhttpd: shcmd (5052323): umount /mnt/disk7
May 11 08:06:13 Optimus kernel: XFS (md7): Unmounting Filesystem
May 11 08:06:14 Optimus emhttpd: shcmd (5052324): rmdir /mnt/disk7
May 11 08:06:14 Optimus emhttpd: shcmd (5052325): umount /mnt/disk8
May 11 08:06:14 Optimus kernel: XFS (md8): Unmounting Filesystem
May 11 08:06:14 Optimus emhttpd: shcmd (5052326): rmdir /mnt/disk8
May 11 08:06:14 Optimus emhttpd: shcmd (5052327): umount /mnt/disk9
May 11 08:06:18 Optimus kernel: XFS (md9): Unmounting Filesystem
May 11 08:06:18 Optimus emhttpd: shcmd (5052328): rmdir /mnt/disk9
May 11 08:06:18 Optimus emhttpd: shcmd (5052329): umount /mnt/cache_pool
May 11 08:06:19 Optimus emhttpd: shcmd (5052330): rmdir /mnt/cache_pool
May 11 08:06:19 Optimus emhttpd: shcmd (5052331): umount /mnt/download_pool
May 11 08:06:19 Optimus root: umount: /mnt/download_pool: target is busy.
May 11 08:06:19 Optimus emhttpd: shcmd (5052331): exit status: 32
May 11 08:06:19 Optimus emhttpd: Retry unmounting disk share(s)...
May 11 08:06:24 Optimus emhttpd: Unmounting disks...
May 11 08:06:24 Optimus emhttpd: shcmd (5052332): umount /mnt/download_pool
May 11 08:06:24 Optimus root: umount: /mnt/download_pool: target is busy.
May 11 08:06:24 Optimus emhttpd: shcmd (5052332): exit status: 32
May 11 08:06:24 Optimus emhttpd: Retry unmounting disk share(s)...
May 11 08:06:29 Optimus emhttpd: Unmounting disks...
May 11 08:06:29 Optimus emhttpd: shcmd (5052333): umount /mnt/download_pool
...
May 11 08:09:15 Optimus emhttpd: Unmounting disks...
May 11 08:09:15 Optimus emhttpd: shcmd (5052366): umount /mnt/download_pool
May 11 08:09:15 Optimus emhttpd: shcmd (5052367): rmdir /mnt/download_pool
May 11 08:09:15 Optimus emhttpd: read SMART /dev/sdc
May 11 08:09:15 Optimus root: Stopping diskload
May 11 08:09:15 Optimus kernel: mdcmd (47): stop
May 11 08:09:15 Optimus kernel: md1: stopping
May 11 08:09:15 Optimus kernel: md2: stopping
May 11 08:09:15 Optimus kernel: md3: stopping
May 11 08:09:15 Optimus kernel: md4: stopping
May 11 08:09:15 Optimus kernel: md5: stopping
May 11 08:09:15 Optimus kernel: md6: stopping
May 11 08:09:15 Optimus kernel: md7: stopping
May 11 08:09:15 Optimus kernel: md8: stopping
May 11 08:09:15 Optimus kernel: md9: stopping
(Don't mind the filenames, they are fictional)
  20. I notice that when I stop my array, the system starts the mover. This also happens when I reboot the server after an update, which then results in a parity check. Is it possible to stop this behaviour?