Report Comments posted by dlandon

  1. 11 minutes ago, anotherdud3 said:

    I don't fully trust this now, I'll have to go a different way to transfer data; don't have the time to be checking terabytes worth of data to see if it all copied over correctly, and possibly adding duplicates I can't see, too. 

    You shouldn't get duplicates, because Unraid doesn't allow that.  The issue was with file listings, not with files failing to be written or getting corrupted.  File handling was not affected.

     

    If you can, do a transfer that copies only the missing files to save time.  Be sure to upgrade to 6.12.10-rc.1 first.  It does not have the problem.
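
    If rsync is available, copying only the files missing at the destination can be sketched like this.  The temp directories are illustrative stand-ins; in practice you'd point the source at your share and the destination at the UD remote mount.

    ```shell
    #!/bin/sh
    # Sketch: copy only files missing at the destination with rsync.
    # /tmp/src_demo and /tmp/dst_demo stand in for the source share and
    # the remote mount so the sketch runs anywhere.
    mkdir -p /tmp/src_demo /tmp/dst_demo
    printf 'already copied' > /tmp/dst_demo/a.txt
    printf 'changed'        > /tmp/src_demo/a.txt   # exists at destination; skipped
    printf 'missing file'   > /tmp/src_demo/b.txt   # missing; will be copied

    # --ignore-existing skips anything already present at the destination,
    # so only missing files are transferred (existing files are NOT verified).
    rsync -a --ignore-existing /tmp/src_demo/ /tmp/dst_demo/

    cat /tmp/dst_demo/a.txt   # still "already copied" (not overwritten)
    cat /tmp/dst_demo/b.txt   # "missing file"
    ```

    Note that --ignore-existing does not verify files that already exist; if you also want content verification, rsync's --checksum option does that at the cost of reading everything on both sides.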

  2. 9 hours ago, anotherdud3 said:

    Could somebody just clarify?  I've seen a few comments saying the data is still there on the drive and did reappear when rolling back.  I've now updated to 6.12.10-rc1, but the data isn't there; files and folders are still missing, and I'm unable to tell whether they're there and hidden or not.

    Feel like I'm going to be wasting another few hours checking to see if a full transfer has happened. 

     

    The best we can tell, this problem manifests itself in files and folders not showing up in any listing on a remote mount.  Depending on the application accessing the mount and the actions it takes when files and folders don't show, the results can be very weird.  For example, you transfer a file to a remote share, the application doesn't find the file, and it then acts on a file-not-found condition.

     

    The files and folders seem to physically be there, but don't show in listings on the share.  At this point, we can't say for sure whether your files and folders are actually there, short of upgrading to 6.12.10-rc.1 and looking through the directories, or initiating your file transfers again just to be sure.

  3. I've put some time into troubleshooting UD to see if there is something in the way UD mounts remote shares.  There doesn't seem to be anything UD does, or can do, that causes this problem.

     

    I'm not one to play the blame game, but as we get into this, I believe we are going to find it's related to a change in Samba or in the Kernel.  In our beta testing for one of the recent releases, we found an issue with CIFS mounts where 'df' would not report size, used, and free space on a CIFS mount point.  When UD sees a CIFS mount with zero space, it assumes there is a problem with the mount and disables it, so UI browsing and other operations cannot be performed.  That issue ended up being a 'stat' change in the Kernel failing on the CIFS mount.
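
    As a rough illustration (not UD's actual code), the zero-size condition can be detected from 'df' output like this.  The mount point is a local path here so the sketch runs anywhere; UD would be checking the CIFS mount point.

    ```shell
    #!/bin/sh
    # Sketch of the zero-size check described above: if 'df' reports a
    # total size of zero for a mount point, treat the mount as bad.
    MOUNTPOINT=/tmp   # illustrative; substitute the CIFS mount point

    # POSIX 'df -P': line 2, column 2 is the total size in 1K-blocks.
    size=$(df -P "$MOUNTPOINT" | awk 'NR==2 {print $2}')

    if [ "$size" -eq 0 ]; then
        echo "disable mount: df reports zero size"
    else
        echo "mount ok: $size 1K-blocks total"
    fi
    ```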

     

    As has been said, this is related to remote mounts through UD.  It does not affect the NAS file sharing functionality through SMB.  I understand that this is important functionality for many users, and for the moment, downgrading to 6.12.8 is the answer.  We will release a new version of Unraid as soon as we have a fix.

  4. 1 hour ago, Subasically said:

    I too am experiencing this issue using VSCode to SSH into my server to modify some bash file. A restart does fix it but I do not want to restart every time I want to edit some file using VSCode from a different computer.

    I see several things, but cannot offer an explanation.  In the log:

    Jan 31 08:34:23 BASIC-CABLE shfs: error: strcpy_share_path, 455: No such file or directory (2): path too long: /CONTAINER ID   IMAGE                                 COMMAND                  CREATED         STATUS                    PORTS                                                                                                                       NAMES                    SIZE

     

    and the output of df shows:

    Filesystem            Size  Used Avail Use% Mounted on
    rootfs                 16G  1.7G   15G  11% /
    tmpfs                  32M  1.5M   31M   5% /run
    /dev/sda1              29G  981M   28G   4% /boot
    overlay                16G  1.7G   15G  11% /lib
    overlay                16G  1.7G   15G  11% /usr
    devtmpfs              8.0M     0  8.0M   0% /dev
    tmpfs                  16G     0   16G   0% /dev/shm
    tmpfs                 128M  396K  128M   1% /var/log
    tmpfs                 1.0M     0  1.0M   0% /mnt/disks
    tmpfs                 1.0M     0  1.0M   0% /mnt/remotes
    tmpfs                 1.0M     0  1.0M   0% /mnt/addons
    tmpfs                 1.0M     0  1.0M   0% /mnt/rootshare
    /dev/md1p1            5.5T  5.3T  168G  98% /mnt/disk1
    /dev/md2p1            5.5T  5.3T  175G  97% /mnt/disk2
    /dev/md3p1            5.5T  5.3T  171G  97% /mnt/disk3
    /dev/md4p1            5.5T  5.3T  177G  97% /mnt/disk4
    /dev/md5p1            3.7T  3.4T  341G  91% /mnt/disk5
    /dev/md6p1            3.7T  3.4T  322G  92% /mnt/disk6
    /dev/md7p1            3.7T  3.4T  338G  91% /mnt/disk7
    /dev/md8p1            3.7T  3.4T  325G  92% /mnt/disk8
    cache                 1.7T  256K  1.7T   1% /mnt/cache
    /dev/sdb1             257G  249G  7.6G  98% /mnt/docker-cache
    /dev/sdc1             489G   36G  454G   8% /mnt/plex-cache
    cache/data            1.7T  512K  1.7T   1% /mnt/cache/data
    cache/isos            1.7T   32G  1.7T   2% /mnt/cache/isos
    cache/unraid_scripts  1.7T  203M  1.7T   1% /mnt/cache/unraid_scripts
    cache/system          1.7T   30G  1.7T   2% /mnt/cache/system
    cache/domains         1.7T  128K  1.7T   1% /mnt/cache/domains
    cache/Music           1.7T   29G  1.7T   2% /mnt/cache/Music
    cache/Plex DVR        1.7T  1.8G  1.7T   1% /mnt/cache/Plex DVR
    cache/vm-disks        1.7T   16G  1.7T   1% /mnt/cache/vm-disks
    cache/temp            1.7T  128K  1.7T   1% /mnt/cache/temp
    cache/appdata         1.7T  5.1G  1.7T   1% /mnt/cache/appdata
    cache/iCloud-Photos   1.7T  7.7G  1.7T   1% /mnt/cache/iCloud-Photos
    cache/Pre-Rolls       1.7T  451M  1.7T   1% /mnt/cache/Pre-Rolls
    shfs                   37T   35T  2.0T  95% /mnt/user0
    /dev/loop2            270G   16G  255G   6% /var/lib/docker
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/ceaae5d21858eed21b3c02b988e671146808da6ac5659070f89479f8771981cf/merged
    /dev/loop3             10G  4.5M  9.5G   1% /etc/libvirt
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/1fe89828cb1992deac96043a165b0c2d257bee31fbdfba6cde3c12fa78108cbd/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/232e8202b591222a7e925454bd743b305391cf9b1ba5cf8e45f8776b0dd2d3c7/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/3f713ae84e4622598210b94b15793c3260aea4a692cc7d7226b55030c88d7a4f/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/7175475e611234ca5e3da0f392bc31f9db2ca0cddab1e36085a67b82094d12ee/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/39b7105f858ae7dfd92628e545a60bed09bf2fe79d22df47f2b715d566c0bbb1/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/9b4d538da85729710cdc271557e36a0118c0a00fa390174430df405af3f7c9e8/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/3bbc4e85d87c6c86fe05edc45febfd136d2e1cc08e61ea0e994b284db0fdc5c8/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/4ab32eea32431398b12171c4d5dc4b1f1e5284e29507fd487befacc6461e1750/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/71cbdeea3c3dd7e4ee0488597410796130d1a57a100cb50d67cf8365e89ba2a1/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/b9b2da94a87c92e6b24a4c5441d85b3b6b3a46a394f88a2fc7191ad27f1f875b/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/51e6dec694ebd54d74406ddfaecfb0e016166cc87f784b17f032982678de8700/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/b7bab57da934f09a97e8012dc9c48e9d2ac97a19edf5c6a22160b7c87b12eb00/merged
    overlay               270G   16G  255G   6% /var/lib/docker/overlay2/7ab691d92f572e5b56d725dad3662d095d7aaa71f9377b3eddd8a20f94d1e7f2/merged
    tmpfs                 3.2G     0  3.2G   0% /run/user/0

     

  5. 1 hour ago, grenskul said:

    Can we get this reclassified to urgent?

    This is a problem that forces you to restart your server. If it happens and you don't notice you WILL lose data.

    I'm here working on the issue as best I can.  I need more information, like diagnostics, so I can help with troubleshooting.  Unfortunately, we don't see anything common that can be acted on.  If we can find a common theme, we can turn it into an action item for the LT team.

  6. 1 hour ago, robertklep said:

    Here is what I'm seeing in your log:

    Dec 13 11:19:15 Unraid network: update services: 30s
    Dec 13 11:19:16 Unraid root: Installing /boot/extra packages
    Dec 13 11:19:18 Unraid root: Installing: vim-8.2.4256-x86_64-1: Vi IMproved ......................................................................... [  37M]
    Dec 13 11:19:18 Unraid root: Installing: libsodium-1.0.18-x86_64-3: Sodium crypto library ........................................................... [ 620K]
    Dec 13 11:19:18 Unraid root: Installing: mosh-1.4.0-x86_64-4cf: MObile SHell server and client ...................................................... [ 860K]
    Dec 13 11:19:19 Unraid root: Installing: protobuf-21.12-x86_64-1cf: Google's data interchange format ................................................ [  17M]

    and

    Dec 13 14:11:18 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.5:778 for /mnt/user/gaming (/mnt/user/gaming)
    Dec 13 14:11:23 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.5:777 for /mnt/user/gaming (/mnt/user/gaming)
    Dec 13 14:44:15 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:773 for /mnt/user/movies (/mnt/user/movies)
    Dec 13 14:44:15 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:779 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 16:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:790 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 16:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:794 for /mnt/user/movies (/mnt/user/movies)
    Dec 13 16:30:24 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:812 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 20:28:37 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:818 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 21:50:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:824 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 21:50:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:828 for /mnt/user/movies (/mnt/user/movies)
    Dec 13 21:52:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:834 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 21:52:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:838 for /mnt/user/movies (/mnt/user/movies)
    Dec 13 22:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:857 for /mnt/user/tv (/mnt/user/tv)
    Dec 13 22:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:861 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 00:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:881 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 00:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:885 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 00:01:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:899 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 00:01:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:903 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 02:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:921 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 02:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:925 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 02:01:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:931 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 02:01:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:935 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 04:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:953 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 04:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:957 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 06:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:975 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 06:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:979 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 08:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:1002 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 08:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:1006 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 08:20:10 Unraid monitor: Stop running nchan processes
    Dec 14 08:44:29 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:702 for /mnt/user/movies (/mnt/user/movies)
    Dec 14 08:44:29 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:707 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 10:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:717 for /mnt/user/tv (/mnt/user/tv)
    Dec 14 10:00:00 Unraid rpc.mountd[5019]: authenticated mount request from 192.168.23.15:721 for /mnt/user/movies (/mnt/user/movies)

    and then the system was told to shut down

    Dec 15 09:46:09 Unraid shutdown[11136]: shutting down for system halt
    Dec 15 09:46:09 Unraid init: Switching to runlevel: 0
    Dec 15 09:46:09 Unraid init: Trying to re-exec init
    Dec 15 09:47:41 Unraid root: Status of all loop devices
    Dec 15 09:47:41 Unraid root: /dev/loop1: [2049]:12 (/boot/bzfirmware)
    Dec 15 09:47:41 Unraid root: /dev/loop2: [2305]:6442451073 (/mnt/disk1/system/libvirt/libvirt.img)
    Dec 15 09:47:41 Unraid root: /dev/loop0: [2049]:10 (/boot/bzmodules)
    Dec 15 09:47:41 Unraid root: Active pids left on /mnt/*
    Dec 15 09:47:42 Unraid root: Cannot stat /mnt/user: Software caused connection abort
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/740/fd/8: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/740/fd/9: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/740/fd/10: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/740/fd/14: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/740/fd/16: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/858/fd/3: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/858/fd/7: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/7019/fd/8: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/7023/fd/8: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/7024/fd/8: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid root: Cannot stat file /proc/7025/fd/8: Transport endpoint is not connected
    Dec 15 09:47:42 Unraid kernel: ------------[ cut here ]------------
    Dec 15 09:47:42 Unraid kernel: nfsd: non-standard errno: -103
    Dec 15 09:47:42 Unraid kernel: WARNING: CPU: 2 PID: 5015 at fs/nfsd/nfsproc.c:909 nfserrno+0x45/0x51 [nfsd]
    Dec 15 09:47:42 Unraid kernel: Modules linked in: tcp_diag inet_diag bluetooth ecdh_generic ecc tls xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle vhost_net vhost vhost_iotlb xt_comment xt_connmark xt_mark nft_compat nf_tables wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha tun veth xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs macvtap macvlan tap af_packet 8021q garp mrp bridge stp llc igb intel_rapl_msr intel_rapl_common iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel
    Dec 15 09:47:42 Unraid kernel: ghash_clmulni_intel sha512_ssse3 sha256_ssse3 ipmi_ssif sha1_ssse3 wmi_bmof ast drm_vram_helper drm_ttm_helper ttm aesni_intel crypto_simd cryptd drm_kms_helper rapl intel_cstate drm intel_uncore i2c_i801 i2c_algo_bit mei_me agpgart ahci syscopyarea sysfillrect sysimgblt i2c_smbus fb_sys_fops i2c_core libahci mei nvme cp210x pl2303 input_leds joydev led_class usbserial acpi_ipmi intel_pch_thermal nvme_core thermal fan video wmi ipmi_si backlight intel_pmc_core acpi_tad button unix [last unloaded: igb]
    Dec 15 09:47:42 Unraid kernel: CPU: 2 PID: 5015 Comm: nfsd Tainted: P           O       6.1.64-Unraid #1
    Dec 15 09:47:42 Unraid kernel: Hardware name: Supermicro Super Server/X11SCL-IF, BIOS 2.2 10/27/2023
    Dec 15 09:47:42 Unraid kernel: RIP: 0010:nfserrno+0x45/0x51 [nfsd]
    Dec 15 09:47:42 Unraid kernel: Code: c3 cc cc cc cc 48 ff c0 48 83 f8 26 75 e0 80 3d dd c9 05 00 00 75 15 48 c7 c7 b5 c2 d9 a0 c6 05 cd c9 05 00 01 e8 01 39 30 e0 <0f> 0b b8 00 00 00 05 c3 cc cc cc cc 48 83 ec 18 31 c9 ba ff 07 00
    Dec 15 09:47:42 Unraid kernel: RSP: 0000:ffffc9000155fde8 EFLAGS: 00010286
    Dec 15 09:47:42 Unraid kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000027
    Dec 15 09:47:42 Unraid kernel: RDX: 0000000000000002 RSI: ffffffff820d7e01 RDI: 00000000ffffffff
    Dec 15 09:47:42 Unraid kernel: RBP: ffff88814e140180 R08: 0000000000000000 R09: ffffffff82245f10
    Dec 15 09:47:42 Unraid kernel: R10: 00007fffffffffff R11: ffffffff82969256 R12: 0000000000000001
    Dec 15 09:47:42 Unraid kernel: R13: 0000000000000000 R14: ffff88814f6dc0c0 R15: ffffffffa0dbf6c0
    Dec 15 09:47:42 Unraid kernel: FS:  0000000000000000(0000) GS:ffff88845ed00000(0000) knlGS:0000000000000000
    Dec 15 09:47:42 Unraid kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Dec 15 09:47:42 Unraid kernel: CR2: 000014a2aa5dbbd3 CR3: 0000000241028002 CR4: 00000000003706e0
    Dec 15 09:47:42 Unraid kernel: Call Trace:
    Dec 15 09:47:42 Unraid kernel: <TASK>
    Dec 15 09:47:42 Unraid kernel: ? __warn+0xab/0x122
    Dec 15 09:47:42 Unraid kernel: ? report_bug+0x109/0x17e
    Dec 15 09:47:42 Unraid kernel: ? nfserrno+0x45/0x51 [nfsd]
    Dec 15 09:47:42 Unraid kernel: ? handle_bug+0x41/0x6f
    Dec 15 09:47:42 Unraid kernel: ? exc_invalid_op+0x13/0x60
    Dec 15 09:47:42 Unraid kernel: ? asm_exc_invalid_op+0x16/0x20
    Dec 15 09:47:42 Unraid kernel: ? nfserrno+0x45/0x51 [nfsd]
    ### [PREVIOUS LINE REPEATED 1 TIMES] ###
    Dec 15 09:47:42 Unraid kernel: nfsd_access+0xac/0xf1 [nfsd]
    Dec 15 09:47:42 Unraid kernel: nfsd3_proc_access+0x78/0x88 [nfsd]
    Dec 15 09:47:42 Unraid kernel: nfsd_dispatch+0x1a6/0x262 [nfsd]
    Dec 15 09:47:42 Unraid kernel: svc_process_common+0x32f/0x4df [sunrpc]
    Dec 15 09:47:42 Unraid kernel: ? ktime_get+0x35/0x49
    Dec 15 09:47:42 Unraid kernel: ? nfsd_svc+0x2b6/0x2b6 [nfsd]
    Dec 15 09:47:42 Unraid kernel: ? nfsd_shutdown_threads+0x5b/0x5b [nfsd]
    Dec 15 09:47:42 Unraid kernel: svc_process+0xc7/0xe4 [sunrpc]
    Dec 15 09:47:42 Unraid kernel: nfsd+0xd5/0x155 [nfsd]
    Dec 15 09:47:42 Unraid kernel: kthread+0xe4/0xef
    Dec 15 09:47:42 Unraid kernel: ? kthread_complete_and_exit+0x1b/0x1b
    Dec 15 09:47:42 Unraid kernel: ret_from_fork+0x1f/0x30
    Dec 15 09:47:42 Unraid kernel: </TASK>
    Dec 15 09:47:42 Unraid kernel: ---[ end trace 0000000000000000 ]---

    It looks like the system went down without unmounting any of the mounts or dealing with Unraid going offline.

     

    and

    Dec 15 09:47:42 Unraid kernel: traps: mariadbd[17135] general protection fault ip:148623228898 sp:1485faffe850 error:0 in libc.so.6[148623228000+195000]

     

    So this is what I'd do, in the order I recommend:

    • Remove the extra packages being loaded.
    • Remove two unknown plugins:
      • un-get.plg - 2023.11.12  (Unknown to Community Applications).
      • unraid-tmux.plg - plugin: version attribute not present  (Unknown to Community Applications).
    • Find out what is causing the mariadb issue.  Granted, it occurred after the shutdown was initiated, but why is it not being managed by the shutdown, e.g. as a Docker container that shuts down properly?
    • Check your mount parameters for the NFS mounts.  UD uses these mount parameters:
      • soft,relatime,retrans=4,timeo=300.
    • Manage the mounts on a shutdown.  It looks like the NFS mounts are remote mounts from other Linux boxes.  Leaving them mounted when Unraid is shutting down is not a good idea.  Have the clients manage them a bit better.  The above NFS mount options may help.
    • Cut down on the number of NFS mounts.  You are asking Linux to manage an awful lot over a 1Gbit network.  It may be choking.
    • Go back to a more basic system and start building it back a little at a time and check performance as you go.
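
    For reference, an NFS mount command with the UD options listed above would look like the following.  The sketch only echoes the command rather than running it (mounting needs root and a live server), and the server and paths are hypothetical.

    ```shell
    #!/bin/sh
    # Build (but don't execute) an NFS mount command using the UD options
    # listed above.  SRC and DST are hypothetical.
    OPTS="soft,relatime,retrans=4,timeo=300"
    SRC="server:/export/share"           # hypothetical remote export
    DST="/mnt/remotes/server_share"      # hypothetical UD mount point

    CMD="/sbin/mount -t nfs -o rw,$OPTS '$SRC' '$DST'"
    echo "$CMD"
    ```

    The 'soft' option is the important one for shutdown behavior: it lets NFS operations fail with an error after the retries are exhausted instead of hanging forever when the remote server disappears.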
  7. 1 hour ago, robertklep said:

    Posted those about a month ago, then found this thread about what I believe to be the same issue (or at least related) that started back in 2018.

    Can you link me to your diagnostics?

     

    1 hour ago, robertklep said:

    As for the Docker documentation, that's just general information, at least I don't see anything specific to what you shouldn't do to prevent `shfs` from getting into trouble.

    In general, misconfiguring docker access on mount points can cause many different problems.

  8. I'm not seeing a common theme as to why this is happening to some and not others.  What would help troubleshoot this is for someone to run the server in safe mode with the syslog server set to save the log on the flash, run it this way for an extended time, and see if the issue happens.  If it does, provide diagnostics and the log from the flash.

     

    If it doesn't happen, start adding plugins and docker containers back one at a time and see if one of them causes the issue.

     

    While I appreciate that the consensus seems to be to blame Unraid, a plugin or docker container can also cause this issue.  Incorrect docker mapping of /mnt/ shares could possibly cause a problem.

  9. This is a UD issue, not an Unraid issue.  What is the remote server?

     

    The error code -22 corresponds to EINVAL, which means invalid argument. Here are a few things to check:

    • Network Connectivity: Ensure that there is proper network connectivity between your NAS and the NFS server at 192.168.1.8.
    • NFS Server Configuration: Check the NFS server configuration on 192.168.1.8 to ensure that the export at /c/backup is properly configured and accessible.
    • Permissions: Verify that the NAS has the necessary permissions to access the NFS share. Ensure that the user or the client machine's IP is allowed in the NFS server's export configuration.
    • Firewall Settings: Check if there are any firewall settings on either the NAS or the NFS server that might be blocking the NFS traffic.
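
    The -22 in the kernel log is a negated errno value.  If python3 is available (it ships on most NAS distributions), the number can be translated directly; this is just a lookup to confirm the EINVAL reading, not part of any fix.

    ```shell
    # Translate the kernel's error code -22 into its errno name and message.
    python3 -c 'import errno, os; print(errno.errorcode[22], "-", os.strerror(22))'
    # EINVAL - Invalid argument
    ```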

     

  10. This is a UD mount that doesn't specify the NFS version.  Unraid is now set up to negotiate the best protocol version supported by the remote server.

    Oct 13 10:58:18 BackupServer unassigned.devices: Mounting Remote Share 'MEDIASERVER:/mnt/user/Public'...
    Oct 13 10:58:18 BackupServer unassigned.devices: Mount NFS command: /sbin/mount -t 'nfs' -o rw,soft,relatime,retrans=4,timeo=300 'MEDIASERVER:/mnt/user/Public' '/mnt/remotes/MEDIASERVER_Public'

     

    And the resultant mount:

    MEDIASERVER:/mnt/user/Public on /mnt/remotes/MEDIASERVER_Public type nfs4 (rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=300,retrans=4,sec=sys,clientaddr=192.168.1.4,local_lock=none,addr=192.168.1.3)
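
    To confirm which version was negotiated on any system, the options field of /proc/mounts (or of 'mount' output, as above) can be parsed.  The sample line below is taken from the mount shown in this comment; on a live system you would read the lines from /proc/mounts instead.

    ```shell
    #!/bin/sh
    # Extract the negotiated NFS version from a mount entry.
    # Fields are: device, mount point, fstype, options, dump, pass.
    line="MEDIASERVER:/mnt/user/Public /mnt/remotes/MEDIASERVER_Public nfs4 rw,noatime,vers=4.2,rsize=1048576,wsize=1048576,soft,proto=tcp,timeo=300,retrans=4 0 0"

    opts=$(echo "$line" | awk '{print $4}')            # 4th field: mount options
    vers=$(echo "$opts" | tr ',' '\n' | grep '^vers=') # pick out vers=...
    echo "negotiated: $vers"
    # negotiated: vers=4.2
    ```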

     
