CIFS / SMB Stale File Handle



Not sure where to start with this. Running 6.8.0 and noticing "stale file handle" errors on my Linux boxes, which have mounts to Unraid via SMB / CIFS.

 

The stale file handles appear over time; it's not clear what triggers them. I see a few threads re: NFS and issues with mover, but these errors are on SMB shares.

 

Multiple shares, multiple boxes (Ubuntu, Debian). Each share has Cache set to "yes". Each Linux box has Unraid shares mounted in fstab.
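
For reference, each fstab entry looks something like this (server name, share, mount point, and credentials path are placeholders, not my exact config):

//tower/media  /mnt/media  cifs  credentials=/root/.smbcred,uid=1000,gid=1000,_netdev  0  0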

 

If I notice the stale file handle errors I can "umount /path/to/share" and then "mount -a" to restore the connection, but it's only a matter of time before they break again.
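
Roughly, the recovery is the following (the -f / -l fallbacks are only for when a plain umount reports the target is busy):

umount -f /path/to/share || umount -l /path/to/share
mount -a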

 

Thoughts?


Popular Posts

"vers=v1.0" in the mount command (on Linux, i don't know how it handled in windows)

To anyone still having this problem, I managed to resolve it by setting Tunable (support Hard Links) in Settings -> Global Share Settings to No.


Anything in the syslog? If not, you can try increasing Samba's log level:

 

Go to Settings -> SMB -> SMB Extras and add:

 

log level = 3
logging = syslog

Then check the syslog. Note that Samba gets very chatty at this level, so it can fill the log after a few hours.
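
To pull out just the Samba lines, you can filter the syslog, e.g.:

grep -i smbd /var/log/syslog | tail -n 50

(and remember to drop the extra log settings again afterwards).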


I see some out of memory errors where smbd is killed. Perhaps related?

 

 

Dec 21 01:20:38 nas01 kernel: smbd invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
Dec 21 01:20:38 nas01 kernel: smbd cpuset=/ mems_allowed=0
Dec 21 01:20:38 nas01 kernel: CPU: 3 PID: 9871 Comm: smbd Not tainted 4.19.88-Unraid #1
Dec 21 01:20:38 nas01 kernel: Hardware name: Gigabyte Technology Co., Ltd. GA-MA785GT-UD3H/GA-MA785GT-UD3H, BIOS F8 05/25/2010
Dec 21 01:20:38 nas01 kernel: Call Trace:
Dec 21 01:20:38 nas01 kernel: dump_stack+0x67/0x83
Dec 21 01:20:38 nas01 kernel: dump_header+0x66/0x289
Dec 21 01:20:38 nas01 kernel: oom_kill_process+0x9d/0x220
Dec 21 01:20:38 nas01 kernel: ? oom_badness+0x20/0x117
Dec 21 01:20:38 nas01 kernel: out_of_memory+0x3b7/0x3ea
Dec 21 01:20:38 nas01 kernel: __alloc_pages_nodemask+0x920/0xae1
Dec 21 01:20:38 nas01 kernel: alloc_pages_vma+0x13c/0x163
Dec 21 01:20:38 nas01 kernel: __handle_mm_fault+0xa79/0x11b7
Dec 21 01:20:38 nas01 kernel: handle_mm_fault+0x189/0x1e3
Dec 21 01:20:38 nas01 kernel: __do_page_fault+0x267/0x3ff
Dec 21 01:20:38 nas01 kernel: page_fault+0x1e/0x30
Dec 21 01:20:38 nas01 kernel: RIP: 0010:copy_user_generic_string+0x2c/0x40
Dec 21 01:20:38 nas01 kernel: Code: 00 83 fa 08 72 27 89 f9 83 e1 07 74 15 83 e9 08 f7 d9 29 ca 8a 06 88 07 48 ff c6 48 ff c7 ff c9 75 f2 89 d1 c1 e9 03 83 e2 07 <f3> 48 a5 89 d1 f3 a4 31 c0 0f 1f 00 c3 0f 1f 80 00 00 00 00 0f 1f
Dec 21 01:20:38 nas01 kernel: RSP: 0018:ffffc90004ffbb78 EFLAGS: 00010202
Dec 21 01:20:38 nas01 kernel: RAX: 000055cfb82ea363 RBX: ffff8883bdc98882 RCX: 0000000000000073
Dec 21 01:20:38 nas01 kernel: RDX: 0000000000000004 RSI: ffff8883bdc98a8e RDI: 000055cfb82ea000
Dec 21 01:20:38 nas01 kernel: RBP: 0000000000000000 R08: 0000000000078320 R09: 00000000000005a8
Dec 21 01:20:38 nas01 kernel: R10: 00000000000005a8 R11: ffff88841dc20000 R12: 00000000000005a8
Dec 21 01:20:38 nas01 kernel: R13: ffffc90004ffbd18 R14: 00000000000005a8 R15: 0000000000000882
Dec 21 01:20:38 nas01 kernel: copyout+0x22/0x27
Dec 21 01:20:38 nas01 kernel: copy_page_to_iter+0x157/0x2ab
Dec 21 01:20:38 nas01 kernel: skb_copy_datagram_iter+0xe8/0x19d
Dec 21 01:20:38 nas01 kernel: tcp_recvmsg+0x7f8/0x9d5
Dec 21 01:20:38 nas01 kernel: inet_recvmsg+0x95/0xbb
Dec 21 01:20:38 nas01 kernel: sock_read_iter+0x74/0xa8
Dec 21 01:20:38 nas01 kernel: do_iter_readv_writev+0x110/0x146
Dec 21 01:20:38 nas01 kernel: do_iter_read+0x87/0x15c
Dec 21 01:20:38 nas01 kernel: vfs_readv+0x6b/0xa3
Dec 21 01:20:38 nas01 kernel: ? handle_mm_fault+0x189/0x1e3
Dec 21 01:20:38 nas01 kernel: do_readv+0x6b/0xe2
Dec 21 01:20:38 nas01 kernel: do_syscall_64+0x57/0xf2
Dec 21 01:20:38 nas01 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Dec 21 01:20:38 nas01 kernel: RIP: 0033:0x14decd38452d
Dec 21 01:20:38 nas01 kernel: Code: 28 89 54 24 1c 48 89 74 24 10 89 7c 24 08 e8 fa 39 f8 ff 8b 54 24 1c 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 13 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 2f 44 89 c7 48 89 44 24 08 e8 2e 3a f8 ff 48
Dec 21 01:20:38 nas01 kernel: RSP: 002b:00007ffd0ce6cef0 EFLAGS: 00000293 ORIG_RAX: 0000000000000013
Dec 21 01:20:38 nas01 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 000014decd38452d
Dec 21 01:20:38 nas01 kernel: RDX: 0000000000000001 RSI: 000055cdccd4dc18 RDI: 0000000000000024
Dec 21 01:20:38 nas01 kernel: RBP: 00000000ffffffff R08: 0000000000000000 R09: 0000000000000000
Dec 21 01:20:38 nas01 kernel: R10: 0000000000005a92 R11: 0000000000000293 R12: 000055cdccd4dc08
Dec 21 01:20:38 nas01 kernel: R13: 000055cdccd4dc10 R14: 000055cdccd4dc18 R15: 000055cdccd4db50
Dec 21 01:20:38 nas01 kernel: Mem-Info:
Dec 21 01:20:38 nas01 kernel: active_anon:3801310 inactive_anon:33851 isolated_anon:0
Dec 21 01:20:38 nas01 kernel: active_file:4035 inactive_file:6040 isolated_file:1
Dec 21 01:20:38 nas01 kernel: unevictable:0 dirty:766 writeback:3727 unstable:0
Dec 21 01:20:38 nas01 kernel: slab_reclaimable:19669 slab_unreclaimable:19358
Dec 21 01:20:38 nas01 kernel: mapped:44183 shmem:277294 pagetables:10253 bounce:0
Dec 21 01:20:38 nas01 kernel: free:33113 free_pcp:0 free_cma:0
Dec 21 01:20:38 nas01 kernel: Node 0 active_anon:15205240kB inactive_anon:135404kB active_file:16140kB inactive_file:24160kB unevictable:0kB isolated(anon):0kB isolated(file):4kB mapped:176732kB dirty:3064kB writeback:14908kB shmem:1109176kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 8570880kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
Dec 21 01:20:38 nas01 kernel: Node 0 DMA free:15880kB min:68kB low:84kB high:100kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15964kB managed:15880kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Dec 21 01:20:38 nas01 kernel: lowmem_reserve[]: 0 2458 15264 15264
Dec 21 01:20:38 nas01 kernel: Node 0 DMA32 free:61644kB min:10872kB low:13588kB high:16304kB active_anon:2706500kB inactive_anon:6040kB active_file:200kB inactive_file:160kB unevictable:0kB writepending:0kB present:2863684kB managed:2780924kB mlocked:0kB kernel_stack:128kB pagetables:3576kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Dec 21 01:20:38 nas01 kernel: lowmem_reserve[]: 0 0 12806 12806
Dec 21 01:20:38 nas01 kernel: Node 0 Normal free:54928kB min:56640kB low:70800kB high:84960kB active_anon:12498040kB inactive_anon:129364kB active_file:16780kB inactive_file:23748kB unevictable:0kB writepending:18504kB present:13369344kB managed:13113892kB mlocked:0kB kernel_stack:13760kB pagetables:37436kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Dec 21 01:20:38 nas01 kernel: lowmem_reserve[]: 0 0 0 0
Dec 21 01:20:38 nas01 kernel: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15880kB
Dec 21 01:20:38 nas01 kernel: Node 0 DMA32: 495*4kB (UME) 429*8kB (UME) 407*16kB (UME) 323*32kB (UME) 213*64kB (UME) 108*128kB (UME) 35*256kB (UME) 5*512kB (UME) 1*1024kB (E) 0*2048kB 0*4096kB = 62260kB
Dec 21 01:20:38 nas01 kernel: Node 0 Normal: 429*4kB (UME) 140*8kB (UME) 2623*16kB (UME) 293*32kB (UME) 4*64kB (M) 3*128kB (M) 3*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 55588kB
Dec 21 01:20:38 nas01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Dec 21 01:20:38 nas01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Dec 21 01:20:38 nas01 kernel: 287533 total pagecache pages
Dec 21 01:20:38 nas01 kernel: 0 pages in swap cache
Dec 21 01:20:38 nas01 kernel: Swap cache stats: add 0, delete 0, find 0/0
Dec 21 01:20:38 nas01 kernel: Free swap  = 0kB
Dec 21 01:20:38 nas01 kernel: Total swap = 0kB
Dec 21 01:20:38 nas01 kernel: 4062248 pages RAM
Dec 21 01:20:38 nas01 kernel: 0 pages HighMem/MovableOnly
Dec 21 01:20:38 nas01 kernel: 84574 pages reserved
Dec 21 01:20:38 nas01 kernel: 0 pages cma reserved
Dec 21 01:20:38 nas01 kernel: Tasks state (memory values in pages):
Dec 21 01:20:38 nas01 kernel: [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Dec 21 01:20:38 nas01 kernel: [   1160]     0  1160     4165      882    69632        0         -1000 udevd
Dec 21 01:20:38 nas01 kernel: [   1402]     0  1402    53879      650    69632        0             0 rsyslogd
Dec 21 01:20:38 nas01 kernel: [   1487]     0  1487     2090     1137    49152        0             0 haveged
Dec 21 01:20:38 nas01 kernel: [   1541]    81  1541      954      525    45056        0             0 dbus-daemon
Dec 21 01:20:38 nas01 kernel: [   1550]    32  1550      835      500    45056        0             0 rpcbind
Dec 21 01:20:38 nas01 kernel: [   1555]    32  1555     1818     1433    57344        0             0 rpc.statd
Dec 21 01:20:38 nas01 kernel: [   1585]    44  1585    19330     1079    65536        0             0 ntpd
Dec 21 01:20:38 nas01 kernel: [   1592]     0  1592      618       23    40960        0             0 acpid
Dec 21 01:20:38 nas01 kernel: [   1610]     0  1610      633      423    49152        0             0 crond
Dec 21 01:20:38 nas01 kernel: [   1614]     0  1614      630      360    40960        0             0 atd
Dec 21 01:20:38 nas01 kernel: [   4658]     0  4658    25242     6078   176128        0             0 php
Dec 21 01:20:38 nas01 kernel: [   5680]     0  5680      643      425    40960        0             0 agetty
Dec 21 01:20:38 nas01 kernel: [   5681]     0  5681      643      432    40960        0             0 agetty
Dec 21 01:20:38 nas01 kernel: [   5682]     0  5682      643      435    40960        0             0 agetty
Dec 21 01:20:38 nas01 kernel: [   5683]     0  5683      643      402    40960        0             0 agetty
Dec 21 01:20:38 nas01 kernel: [   5684]     0  5684      643      425    40960        0             0 agetty
Dec 21 01:20:38 nas01 kernel: [   5685]     0  5685      643      427    40960        0             0 agetty
Dec 21 01:20:38 nas01 kernel: [   5699]     0  5699     6314     2106    90112        0             0 slim
Dec 21 01:20:38 nas01 kernel: [   5702]     0  5702    33936     5929   184320        0             0 Xorg
Dec 21 01:20:38 nas01 kernel: [   6410]     0  6410    87285      999   102400        0             0 emhttpd
Dec 21 01:20:38 nas01 kernel: [   7206]    61  7206     1629      835    49152        0             0 avahi-daemon
Dec 21 01:20:38 nas01 kernel: [   7208]    61  7208     1533       69    49152        0             0 avahi-daemon
Dec 21 01:20:38 nas01 kernel: [   7217]     0  7217     1179       27    49152        0             0 avahi-dnsconfd
Dec 21 01:20:38 nas01 kernel: [   7529]     0  7529    24519     2787   172032        0             0 php-fpm
Dec 21 01:20:38 nas01 kernel: [   7551]     0  7551     3307     1052    61440        0             0 ttyd
Dec 21 01:20:38 nas01 kernel: [   7554]     0  7554    37347     2473    73728        0             0 nginx
Dec 21 01:20:38 nas01 kernel: [   9745]     0  9745      946      718    40960        0             0 diskload
Dec 21 01:20:38 nas01 kernel: [  10301]     0 10301    36250      192    65536        0             0 shfs
Dec 21 01:20:38 nas01 kernel: [  10314]     0 10314   224282    10428   323584        0             0 shfs
Dec 21 01:20:38 nas01 kernel: [  10503]     0 10503    13041     3743   135168        0             0 smbd
Dec 21 01:20:38 nas01 kernel: [  10506]     0 10506    12582     2009   126976        0             0 smbd-notifyd
Dec 21 01:20:38 nas01 kernel: [  10507]     0 10507    12584     1864   126976        0             0 cleanupd
Dec 21 01:20:38 nas01 kernel: [  10508]     0 10508     8835     1801   102400        0             0 nmbd
Dec 21 01:20:38 nas01 kernel: [  10515]     0 10515      628       26    40960        0             0 wsdd
Dec 21 01:20:38 nas01 kernel: [  10518]     0 10518    16812     6961   167936        0             0 winbindd
Dec 21 01:20:38 nas01 kernel: [  10521]     0 10521    21174     9895   204800        0             0 winbindd
Dec 21 01:20:38 nas01 kernel: [  10642]     0 10642   213521    18304   339968        0          -500 dockerd
Dec 21 01:20:38 nas01 kernel: [  10657]     0 10657   157950     8407   258048        0          -500 containerd
Dec 21 01:20:38 nas01 kernel: [  11103]     0 11103    27660     2468    90112        0             0 unbalance
Dec 21 01:20:38 nas01 kernel: [  11104]     0 11104    17000     4670   167936        0             0 winbindd
Dec 21 01:20:38 nas01 kernel: [  11176]     0 11176    27276     2703    81920        0          -999 containerd-shim
Dec 21 01:20:38 nas01 kernel: [  11193]     0 11193       49        1    28672        0             0 s6-svscan
Dec 21 01:20:38 nas01 kernel: [  11292]     0 11292       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  11481]     0 11481    25831      942    65536        0          -500 docker-proxy
Dec 21 01:20:38 nas01 kernel: [  11493]     0 11493    25831      943    65536        0          -500 docker-proxy
Dec 21 01:20:38 nas01 kernel: [  11516]     0 11516       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  11518]    99 11518     1126       18    49152        0             0 sh
Dec 21 01:20:38 nas01 kernel: [  11525]    99 11525   599590   185741  2293760        0             0 Plex Media Serv
Dec 21 01:20:38 nas01 kernel: [  11526]     0 11526    26924     2065    77824        0          -999 containerd-shim
Dec 21 01:20:38 nas01 kernel: [  11554]     0 11554       49        4    28672        0             0 s6-svscan
Dec 21 01:20:38 nas01 kernel: [  11655]     0 11655       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12173]    99 12173   426917    39906   749568        0             0 Plex Script Hos
Dec 21 01:20:38 nas01 kernel: [  12460]     0 12460       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12462]     0 12462       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12463]     0 12463       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12464]     0 12464       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12465]     0 12465       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12468]     0 12468       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12469]     0 12469       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12470]     0 12470       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12684]     2 12684      124        1    28672        0             0 s6-fdholderd
Dec 21 01:20:38 nas01 kernel: [  12695]     0 12695     3740      168    73728        0             0 nginx
Dec 21 01:20:38 nas01 kernel: [  12724]    99 12724  2046120   225275  2670592        0             0 CrashPlanServic
Dec 21 01:20:38 nas01 kernel: [  12743]   122 12743     3852      283    61440        0             0 nginx
Dec 21 01:20:38 nas01 kernel: [  12744]   122 12744     3852      283    61440        0             0 nginx
Dec 21 01:20:38 nas01 kernel: [  12768]     0 12768       49        1    28672        0             0 s6-supervise
Dec 21 01:20:38 nas01 kernel: [  12991]    99 12991    93109      444   258048        0             0 Plex Tuner Serv
Dec 21 01:20:38 nas01 kernel: [  13501]     0 13501    20696     3178   208896        0             0 Xvfb
Dec 21 01:20:38 nas01 kernel: [  13620]     0 13620       51        9    28672        0             0 forstdin
Dec 21 01:20:38 nas01 kernel: [  13637]     0 13637       49        1    28672        0             0 forstdin
Dec 21 01:20:38 nas01 kernel: [  13652]    99 13652    29655      632   274432        0             0 openbox
Dec 21 01:20:38 nas01 kernel: [  13670]     0 13670      397       72    49152        0             0 tailstatusfile
Dec 21 01:20:38 nas01 kernel: [  13689]     0 13689      379        1    49152        0             0 tail
Dec 21 01:20:38 nas01 kernel: [  15628]     0 15628    14539     1147   147456        0             0 x11vnc
Dec 21 01:20:38 nas01 kernel: [  15645]    99 15645      396        1    45056        0             0 sh
Dec 21 01:20:38 nas01 kernel: [  15667]    99 15667   154640     8768  1363968        0             0 crashplan
Dec 21 01:20:38 nas01 kernel: [  16596]    99 16596    86805     1636   606208        0             0 crashplan
Dec 21 01:20:38 nas01 kernel: [  17066]    99 17066    93291     6042   712704        0             0 crashplan
Dec 21 01:20:38 nas01 kernel: [  18807]    99 18807   165479    19765  3506176        0             0 crashplan
Dec 21 01:20:38 nas01 kernel: [  15431]     0 15431     2280      660    49152        0         -1000 sshd
Dec 21 01:20:38 nas01 kernel: [  15480]     0 15480      631      431    45056        0             0 inetd
Dec 21 01:20:38 nas01 kernel: [  15505]     0 15505    37473     2155    77824        0             0 nginx
Dec 21 01:20:38 nas01 kernel: [   9770]  1008  9770  1394427   820800  7274496        0             0 smbd
Dec 21 01:20:38 nas01 kernel: [   9871]  1008  9871  2592103  2017939 16863232        0             0 smbd
Dec 21 01:20:38 nas01 kernel: [   8628]  1000  8628   696554   164021  1703936        0             0 smbd
Dec 21 01:20:38 nas01 kernel: [  17343]  1006 17343    21152     5602   200704        0             0 smbd
Dec 21 01:20:38 nas01 kernel: [  29268]     0 29268      397        1    45056        0             0 sh
Dec 21 01:20:38 nas01 kernel: [  29294]     0 29294    28603      830   262144        0             0 yad
Dec 21 01:20:38 nas01 kernel: [  25914]     0 25914    16812     4308   167936        0             0 winbindd
Dec 21 01:20:38 nas01 kernel: [  12875]    99 12875   239478    11712   458752        0             0 Plex Script Hos
Dec 21 01:20:38 nas01 kernel: [  13447]    99 13447   238667    11266   479232        0             0 Plex Script Hos
Dec 21 01:20:38 nas01 kernel: [  13950]    99 13950   237839     9628   446464        0             0 Plex Script Hos
Dec 21 01:20:38 nas01 kernel: [  17864]     0 17864      397       30    49152        0             0 tailstatusfile
Dec 21 01:20:38 nas01 kernel: [  17911]     0 17911      379        1    45056        0             0 stat
Dec 21 01:20:38 nas01 kernel: [  17931]     0 17931      114        1    32768        0             0 sh
Dec 21 01:20:38 nas01 kernel: [  17982]     0 17982      379        1    45056        0             0 md5sum
Dec 21 01:20:38 nas01 kernel: [  18056]     0 18056      379        1    45056        0             0 cut
Dec 21 01:20:38 nas01 kernel: [  18311]     0 18311      612      173    40960        0             0 sleep
Dec 21 01:20:38 nas01 kernel: [  18396]     0 18396      961      691    45056        0             0 sh
Dec 21 01:20:38 nas01 kernel: [  18397]     0 18397      661      203    45056        0             0 timeout
Dec 21 01:20:38 nas01 kernel: [  18398]     0 18398     1524      670    45056        0             0 lsblk
Dec 21 01:20:38 nas01 kernel: Out of memory: Kill process 9871 (smbd) score 508 or sacrifice child
Dec 21 01:20:38 nas01 kernel: Killed process 9871 (smbd) total-vm:10368412kB, anon-rss:8054248kB, file-rss:4kB, shmem-rss:17504kB
Dec 21 01:20:38 nas01 kernel: oom_reaper: reaped process 9871 (smbd), now anon-rss:0kB, file-rss:0kB, shmem-rss:6380kB
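
One way to keep an eye on smbd's memory use, to see whether a process balloons again before the next kill (standard procps tools, nothing Unraid-specific):

watch -n 60 'ps -C smbd -o pid,user,rss,vsz,comm --sort=-rss | head'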

 


Having the same issue since switching to unRaid 6.8.0.

I have an Ubuntu 18.04 based machine which mounts 5 shares from the unRaid server. On unRaid 6.7.0 those mounts were rock solid, even surviving when the server went to sleep and woke up hours later.

From what I have read in some forums, explicitly switching the Samba protocol version to 1.0 could solve the problem. I will try that now.


I have the same issue, however it seems to be unrelated to how long a share is mounted. The issue appears after writing a file (or files) to the share. I then have to remount the share to fix it, after which I can do exactly one more operation (a file or folder copy) before it goes stale again.

 

The common factor seems to be usage of the cache drive. I only experience this issue on shares which use the cache drive.


I just switched my Ubuntu VM from NFS to SMB and I got this problem too; the Windows VMs are still fine.

However, Steam can still see and use the share, I just can't browse it anymore. umount and mount will "fix" it for some time...

Now I changed the share to data array only and it's fine...

Really hope this will be fixed in a later Unraid version...


For about a month, I was seeing this once or twice a day on my Linux Mint VM and couldn't figure out the cause (having to umount/mount to correct it each time). I tried NFS, various SMB versions, and SMB options; all resulted in the same issue. As others have mentioned, it didn't seem to depend on how long the share had been mounted, but rather on the time since the previous write to that mount.

 

I never discovered the source of the issue, but I have since switched to autofs so that the share is unmounted and remounted dynamically. Since doing so, I've only hit this issue once, and that was simply because I was inside the mounted share on the command line and autofs couldn't unmount it.
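
The autofs setup is roughly this (mount point, map file, server, and share names are placeholders for my real ones):

# /etc/auto.master
/mnt/unraid  /etc/auto.unraid  --timeout=60

# /etc/auto.unraid
media  -fstype=cifs,credentials=/root/.smbcred  ://tower/media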

 

My setup uses a cache drive for writes on one of these shares but not the other, and both experienced the problem. The VM itself runs off the cache drive, in case that's relevant at all. I also run the Dynamix Cache Directories plugin, but have it exclude one of these shares (the one that uses the cache disk for writes). Again, it still happens to both shares.

 

This has also only occurred on the 6.8 and 6.8-rc releases for me, but I built this server new and went straight to rc1 a few months ago, so I never tried running it on 6.7 or older.

4 hours ago, WashingtonMatt said:

I started having this issue on 6.8.0 with an Ubuntu VM with multiple cifs mounts. Adding "vers=1.0" to the mount options seems to have cleared it up. Not the preferred solution though...

I know, but disabling cache use on the share also works (I disabled NetBIOS, so no SMBv1 here). I prefer to lose some write speed rather than security...

  • 2 months later...
On 1/6/2020 at 9:20 PM, kvn said:

I have the same issue, however it seems to be unrelated to how long a share is mounted. The issue appears after writing a file (or files) to the share. I then have to remount the share to fix it, after which I can do exactly one more operation (a file or folder copy) before it goes stale again.

 

The common factor seems to be usage of the cache drive. I only experience this issue on shares which use the cache drive.

I'm having this exact issue with mounted shares on my Proxmox server.

 

When I copy a file to an Unraid share that uses the cache drive, the share drops from Proxmox; I then run umount -f on that mount and it connects again.

Then as soon as the mover moves the file to the array, the same thing happens again.

 

When I disable use of the cache drive for the same share, the issue goes away.

16 minutes ago, gberg said:

When I disable use of the cache drive for the same share, the issue goes away.

This is exactly what I am seeing. NFS / CIFS makes no difference. Disabling cache on the share stopped the problem.

 

I think it's related to mover.


I had the same issue and found you guys; here's what fixed it for me:

sudo mount -t cifs -o username=<username>,vers=1.0 //server/share /path/to/mount

# Where <username> is my username, of course; it then prompted for the password, and I could move files back and forth again.
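
If you'd rather skip the password prompt (e.g. to put the mount in fstab), a credentials file should work the same way; a sketch, with the file path being whatever you pick:

# /root/.smbcred (chmod 600)
username=<username>
password=<password>

sudo mount -t cifs -o credentials=/root/.smbcred,vers=1.0 //server/share /path/to/mount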
