capino

Posts posted by capino

  1. After some more digging, I found this pull request in the VIM GitHub repository.
    https://github.com/vim/vim/pull/12996

    I also found a newer version of VIM (9.0.2127) on https://packages.slackware.com/.

This version of VIM needs libsodium version 1.0.19 (which can also be found on https://packages.slackware.com/).


But after manually installing these two packages, I got the following errors:
    vim: /lib64/libc.so.6: version `GLIBC_2.38' not found (required by vim)
    vim: /lib64/libm.so.6: version `GLIBC_2.38' not found (required by vim)

This suggests that glibc also needs to be upgraded, to version 2.38.
But I'm not sure how that would affect the workings of UnRaid itself.
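
For what it's worth, a quick way to confirm which glibc symbol versions the new binary actually requires (a sketch, assuming objdump from binutils is available; adjust the path to wherever the package installs vim):

objdump -T /usr/bin/vim | grep -o 'GLIBC_[0-9.]*' | sort -Vu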
     

  2. On 2/3/2023 at 7:05 PM, Nex said:

    Getting error after installing vim and trying to use it: vim: symbol lookup error: vim: undefined symbol: PL_current_context

    Did you manage to fix this problem?
For some time now I have also had this issue with vim (9.0.1672) on UnRaid 6.12.4.
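
For context: PL_* symbols come from the Perl library, so the error points at a mismatch between vim's +perl interface and the installed libperl. Something like this should show how vim was built and what it links against (a sketch; if Perl support is loaded dynamically, ldd may show nothing):

vim --version | grep -i perl
ldd "$(which vim)" | grep -i perl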

  3. 1 hour ago, ich777 said:

    Please try if you can download that file from your local PC: Click

     

    Nothing has changed in terms of the download routine.

     

    Please uninstall the plugin, reinstall it and see if that helps (a reboot in between would be also recommended).

Reinstalling the plugin seems to have resolved the problem (without a reboot in between).

During the reinstall, driver version v535.104.05 was downloaded.
I still have to reboot the server; I'll do that sometime this weekend, since I'm not at home at the moment.

I did some additional testing, and it looks like when a Docker container console session is opened in a browser, multiple ptys are allocated, but only one gets deallocated on close.
When I open an UnRaid console session, only one pty is allocated. But sometimes the browser window is not closed when running exit (a new pty is opened instead). Running exit in this second (new) session keeps both allocated.

     

Using SSH from a remote terminal does not seem (based on some quick tests so far) to keep ptys allocated.
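
For anyone who wants to reproduce this: a simple way to watch the pty count live while opening and closing console sessions (a sketch, assuming watch is available on the host):

watch -n 1 'ls /dev/pts | wc -l'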

After doing loads of Google searches, I finally found the correct syntax and a workaround.
It seems like the pts sessions are not correctly closed or deallocated.

The number of allocated pseudo-terminals is very high. Running:

ls /dev/pts | wc -l

shows that there are 3432.

As a workaround, I upped the limit by running:

     

    sysctl -w kernel.pty.max=8192


I'm not sure if this is because of something I did while creating my self-built container, or if it's due to an old kernel bug (https://lkml.org/lkml/2009/11/5/370).
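
One caveat: sysctl -w does not survive a reboot. To make the workaround stick on UnRaid, one option (assuming the standard go file location) is to append it there:

echo 'sysctl -w kernel.pty.max=8192' >> /boot/config/go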

     

After testing a self-built container, I'm no longer able to open the console of any of the running Docker containers.
I already removed the self-built container and all unused volumes and images.
When trying to connect to the console of a container, I get the following error:

    OCI runtime exec failed: exec failed: unable to start container process: open /dev/ptmx: no space left on device: unknown

I have also tried stopping and starting Docker, but this did not help.
All running containers seem to be working without problems.

Does anybody have any idea what could have caused this issue and how to resolve it?

    P.S.

These are the totals from the Docker container sizes overview:
Total size    29.3 GB    7.05 GB    2.19 GB
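
Given that the error is about /dev/ptmx, it might be worth comparing the number of allocated pseudo-terminals against the kernel limit (a sketch; kernel.pty.nr and kernel.pty.max are the standard sysctls):

sysctl kernel.pty.nr kernel.pty.max

If kernel.pty.nr has reached kernel.pty.max, opening a new console will fail with exactly this ptmx error.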

I'm running UnRaid 6.9.2 and tried the extra parameters, but nothing is landing in Graylog.

    --log-driver=syslog --log-opt tag="radarr" --log-opt syslog-address=udp://192.168.1.17:5442
My Graylog server is running as a Docker container on IP 192.168.1.17.
When I do the same from Docker on my MacBook, the logs land in Graylog.
I also tried the GELF log driver: the same problem from UnRaid, while from the MacBook it works.
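
To rule out basic reachability from the UnRaid host itself, a test message can be sent straight to the Graylog input (a sketch, assuming the util-linux logger is available on the host):

logger -n 192.168.1.17 -P 5442 -d "graylog test from UnRaid"

If this message shows up in Graylog, the UDP path is fine and the problem is on the Docker log-driver side.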

     

    Does anybody have a solution for this?

This morning I stopped my array, but it first moved all data off my cache pool before actually stopping the array.
I know that the mover has nothing to do with a parity check, but during the reboot that is part of the update, the array stops without the system shutting down cleanly (probably because some processes are killed during the reboot, so not all unmounts complete in time). As a result, the system runs a parity check after the reboot.

     

Here is part of the syslog after I initiated the array stop.

    May 11 07:13:22 Optimus root: stopping dockerd ...
    May 11 07:13:23 Optimus root: waiting for docker to die ...
    May 11 07:13:24 Optimus emhttpd: shcmd (5052298): umount /var/lib/docker
    May 11 07:13:26 Optimus cache_dirs: Stopping cache_dirs process 19517
    May 11 07:13:27 Optimus cache_dirs: cache_dirs service rc.cachedirs: Stopped
    May 11 07:13:27 Optimus unassigned.devices: Unmounting All Devices...
    May 11 07:13:27 Optimus emhttpd: shcmd (5052299): /etc/rc.d/rc.samba stop
    May 11 07:13:27 Optimus emhttpd: shcmd (5052300): rm -f /etc/avahi/services/smb.service
    May 11 07:13:27 Optimus emhttpd: Stopping mover...
    May 11 07:13:27 Optimus emhttpd: shcmd (5052302): /usr/local/sbin/mover stop
    May 11 07:13:27 Optimus root: mover: started
    May 11 07:13:27 Optimus move: move: file /mnt/download_pool/downloads/complete/file1.mkv
    May 11 07:13:28 Optimus move: move: file /mnt/download_pool/downloads/complete/file2.mkv
    May 11 07:13:29 Optimus move: move: file /mnt/download_pool/downloads/complete/file.mkv
    ...
    May 11 08:04:47 Optimus move: move: file /mnt/download_pool/downloads/incomplete/file899
    May 11 08:04:47 Optimus move: move: file /mnt/download_pool/downloads/incomplete/file900
    May 11 08:04:47 Optimus root: mover: finished
    May 11 08:04:47 Optimus emhttpd: Sync filesystems...
    May 11 08:04:47 Optimus emhttpd: shcmd (5052303): sync
    May 11 08:06:05 Optimus emhttpd: spinning down /dev/sdc
    May 11 08:06:06 Optimus emhttpd: shcmd (5052305): umount /mnt/user0
    May 11 08:06:06 Optimus emhttpd: shcmd (5052306): rmdir /mnt/user0
    May 11 08:06:06 Optimus emhttpd: shcmd (5052307): umount /mnt/user
    May 11 08:06:06 Optimus emhttpd: shcmd (5052308): rmdir /mnt/user
    May 11 08:06:06 Optimus emhttpd: shcmd (5052310): /usr/local/sbin/update_cron
    May 11 08:06:06 Optimus emhttpd: Unmounting disks...
    May 11 08:06:06 Optimus emhttpd: shcmd (5052311): umount /mnt/disk1
    May 11 08:06:06 Optimus kernel: XFS (md1): Unmounting Filesystem
    May 11 08:06:06 Optimus emhttpd: shcmd (5052312): rmdir /mnt/disk1
    May 11 08:06:06 Optimus emhttpd: shcmd (5052313): umount /mnt/disk2
    May 11 08:06:07 Optimus kernel: XFS (md2): Unmounting Filesystem
    May 11 08:06:07 Optimus emhttpd: shcmd (5052314): rmdir /mnt/disk2
    May 11 08:06:07 Optimus emhttpd: shcmd (5052315): umount /mnt/disk3
    May 11 08:06:07 Optimus kernel: XFS (md3): Unmounting Filesystem
    May 11 08:06:08 Optimus emhttpd: shcmd (5052316): rmdir /mnt/disk3
    May 11 08:06:08 Optimus emhttpd: shcmd (5052317): umount /mnt/disk4
    May 11 08:06:08 Optimus kernel: XFS (md4): Unmounting Filesystem
    May 11 08:06:08 Optimus emhttpd: shcmd (5052318): rmdir /mnt/disk4
    May 11 08:06:08 Optimus emhttpd: shcmd (5052319): umount /mnt/disk5
    May 11 08:06:09 Optimus kernel: XFS (md5): Unmounting Filesystem
    May 11 08:06:09 Optimus emhttpd: shcmd (5052320): rmdir /mnt/disk5
    May 11 08:06:09 Optimus emhttpd: shcmd (5052321): umount /mnt/disk6
    May 11 08:06:09 Optimus kernel: XFS (md6): Unmounting Filesystem
    May 11 08:06:13 Optimus emhttpd: shcmd (5052322): rmdir /mnt/disk6
    May 11 08:06:13 Optimus emhttpd: shcmd (5052323): umount /mnt/disk7
    May 11 08:06:13 Optimus kernel: XFS (md7): Unmounting Filesystem
    May 11 08:06:14 Optimus emhttpd: shcmd (5052324): rmdir /mnt/disk7
    May 11 08:06:14 Optimus emhttpd: shcmd (5052325): umount /mnt/disk8
    May 11 08:06:14 Optimus kernel: XFS (md8): Unmounting Filesystem
    May 11 08:06:14 Optimus emhttpd: shcmd (5052326): rmdir /mnt/disk8
    May 11 08:06:14 Optimus emhttpd: shcmd (5052327): umount /mnt/disk9
    May 11 08:06:18 Optimus kernel: XFS (md9): Unmounting Filesystem
    May 11 08:06:18 Optimus emhttpd: shcmd (5052328): rmdir /mnt/disk9
    May 11 08:06:18 Optimus emhttpd: shcmd (5052329): umount /mnt/cache_pool
    May 11 08:06:19 Optimus emhttpd: shcmd (5052330): rmdir /mnt/cache_pool
    May 11 08:06:19 Optimus emhttpd: shcmd (5052331): umount /mnt/download_pool
    May 11 08:06:19 Optimus root: umount: /mnt/download_pool: target is busy.
    May 11 08:06:19 Optimus emhttpd: shcmd (5052331): exit status: 32
    May 11 08:06:19 Optimus emhttpd: Retry unmounting disk share(s)...
    May 11 08:06:24 Optimus emhttpd: Unmounting disks...
    May 11 08:06:24 Optimus emhttpd: shcmd (5052332): umount /mnt/download_pool
    May 11 08:06:24 Optimus root: umount: /mnt/download_pool: target is busy.
    May 11 08:06:24 Optimus emhttpd: shcmd (5052332): exit status: 32
    May 11 08:06:24 Optimus emhttpd: Retry unmounting disk share(s)...
    May 11 08:06:29 Optimus emhttpd: Unmounting disks...
    May 11 08:06:29 Optimus emhttpd: shcmd (5052333): umount /mnt/download_pool
    ...
    May 11 08:09:15 Optimus emhttpd: Unmounting disks...
    May 11 08:09:15 Optimus emhttpd: shcmd (5052366): umount /mnt/download_pool
    May 11 08:09:15 Optimus emhttpd: shcmd (5052367): rmdir /mnt/download_pool
    May 11 08:09:15 Optimus emhttpd: read SMART /dev/sdc
    May 11 08:09:15 Optimus root: Stopping diskload
    May 11 08:09:15 Optimus kernel: mdcmd (47): stop 
    May 11 08:09:15 Optimus kernel: md1: stopping
    May 11 08:09:15 Optimus kernel: md2: stopping
    May 11 08:09:15 Optimus kernel: md3: stopping
    May 11 08:09:15 Optimus kernel: md4: stopping
    May 11 08:09:15 Optimus kernel: md5: stopping
    May 11 08:09:15 Optimus kernel: md6: stopping
    May 11 08:09:15 Optimus kernel: md7: stopping
    May 11 08:09:15 Optimus kernel: md8: stopping
    May 11 08:09:15 Optimus kernel: md9: stopping
    

    (Don't mind the filenames, they are fictional)
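
The "target is busy" retries in the log suggest something was still holding /mnt/download_pool. Next time this happens, a quick check from the console (a sketch, assuming fuser from psmisc is available) should show the culprit before the unmount retries give up:

fuser -vm /mnt/download_pool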

Once in a while I receive the following mail from my poste.io Docker container.

     

    subject: 
    Cron <root@mail> test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
    
    Message: 
    /etc/cron.daily/logrotate:
    pkill: cannot allocate 4611686018427387903 bytes

    Does anybody have any clue why?
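
For what it's worth, 4611686018427387903 is 2^62 - 1, which looks more like an integer overflow inside pkill than a real memory request. If anyone wants to dig further, the pkill call most likely comes from a postrotate script in the container's logrotate configuration; something like this should locate it (a sketch; "poste" is a hypothetical container name, adjust to yours):

docker exec poste grep -rn pkill /etc/logrotate.d /etc/logrotate.conf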

For a few weeks now I've been using GeoIP2, but after the last two container updates, GeoLite2-City.mmdb couldn't be found.

In the container log I see the following message:

[emerg] MMDB_open("/var/lib/libmaxminddb/GeoLite2-City.mmdb") failed - Error opening the specified MaxMind DB file in /config/nginx/nginx.conf:36

After manually running /etc/periodic/weekly/libmaxminddb, everything works again.