Report Comments posted by Jarsky
-
15 hours ago, Interstellar said:
Yep. Confirmed.
It seems this affects older SMB clients. Both my Ubuntu VMs were 18.04 with the default Linux 4.15 kernel, which had Samba version 4.7.2. I upgraded to HWE (the hardware enablement stack), which upgraded my Linux kernel to 5.2. That subsequently upgraded my Samba version to 4.11.x, and I'm no longer getting panics on UnRAID.
I'm not too familiar with macOS, but UnRAID should have a new 4.12.x Samba in its next build, I assume (6.9.0-RC1?), so it should be fixed then if it's related to the Samba bug I linked above.
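To confirm which versions are in play on a given client, something like this can be run on the VM (a sketch; it assumes the Samba client tools may or may not be installed and falls back to a message if not):

```shell
# Print the client kernel version, and the Samba client version if available
uname -r
smbclient --version 2>/dev/null || echo "smbclient not installed"
```

On an affected 18.04 VM this would show a 4.15.x kernel and a Samba 4.7.x client; after the HWE upgrade, a 5.x kernel and Samba 4.11.x.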
-
On 7/1/2020 at 2:32 AM, Interstellar said:
This issue for me might be related, as I've noticed SMB shares dropping offline on my Macs too.
It looks like it might be. It works fine from Windows; it only seems to affect straight SMB.
Check your /var/log/samba/log.smbd and you should see something like this, complaining about "can not close with outstanding aio requests":
[2020/07/02 05:25:48.276594, 0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
  assert_no_pending_aio: fsp->num_aio_requests=1
[2020/07/02 05:25:48.276625, 0] ../../source3/lib/util.c:829(smb_panic_s3)
  PANIC (pid 8790): can not close with outstanding aio requests
[2020/07/02 05:25:48.276730, 0] ../../lib/util/fault.c:222(log_stack_trace)
  BACKTRACE:
   #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
   #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
   #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
   #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
   #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
   #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
   #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
   #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
   #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
   #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
   #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
   #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
   #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
   #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
   #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
   #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
   #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
   #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
   #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
   #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
   #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
   #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
   #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
   #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
   #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
   #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
   #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
   #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
   #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
   #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
   #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
   #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
   #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
   #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]
[2020/07/02 05:25:48.286467, 0] ../../source3/lib/dumpcore.c:315(dump_core)
  dumping core in /var/log/samba/cores/smbd
[2020/07/02 05:30:22.622989, 0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
  assert_no_pending_aio: fsp->num_aio_requests=1
[2020/07/02 05:30:22.623013, 0] ../../source3/lib/util.c:829(smb_panic_s3)
  PANIC (pid 25740): can not close with outstanding aio requests
[2020/07/02 05:30:22.623078, 0] ../../lib/util/fault.c:222(log_stack_trace)
  BACKTRACE:
   [... identical 34-frame backtrace repeated ...]
[2020/07/02 05:30:22.635989, 0] ../../source3/lib/dumpcore.c:315(dump_core)
  dumping core in /var/log/samba/cores/smbd
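As a quick sketch, you can grep the log for the panic signature; here the log line is embedded as sample data so the command is self-contained (in practice, point grep at /var/log/samba/log.smbd):

```shell
# Count occurrences of the aio panic signature (sample line embedded;
# replace the pipeline input with the real log file in practice)
sample='PANIC (pid 8790): can not close with outstanding aio requests'
printf '%s\n' "$sample" | grep -c 'outstanding aio requests'
# prints 1
```

A non-zero count means smbd has been panicking on close with pending aio, matching the bug described here.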
-
P.S. I did find this, which is possibly related: https://bugzilla.samba.org/show_bug.cgi?id=14301
It seems related, given the error "outstanding aio requests".
It's due for a fix in the next 4.12.x build (UnRAID 6.9.0-beta22 is on version 4.12.3).
-
While everyone is mentioning updating the VM network driver to virtio-net, I thought I would also mention an issue I was having.
I was having major problems with apps becoming unresponsive in Linux VMs when trying to save to SMB shares mounted from my UnRAID host. When I checked dmesg, I was seeing errors such as this:
CIFS VFS: Close unmatched open
With the upgraded Linux kernel, don't forget to change your /etc/fstab mounts on any Linux VMs to cifs with vers=3.0.
The errors stopped after making the change and remounting the shares. Also, for security, remove SMB 1.0/CIFS support from Programs & Features on any Windows machines that access your UnRAID shares.
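For illustration, a minimal sketch of such an fstab entry (the server name, share, mount point, and credentials path are all placeholders):

```
# /etc/fstab — example CIFS mount forcing SMB 3.0 (placeholder names)
//tower/share  /mnt/share  cifs  vers=3.0,credentials=/root/.smbcred,iocharset=utf8  0  0
```

Note that changing vers= requires a full unmount and remount (`umount /mnt/share && mount /mnt/share`) to renegotiate the protocol; a plain `-o remount` isn't enough.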
-
18 minutes ago, Marshalleq said:
This is why I have both. Unraid for storage of minor accessed files, ZFS for critical data, VM's and dockers.
This is how I've been running as well: using the ZFS plugin with unassigned drives as a mirrored NVMe ZFS pool just for Docker/VMs/ISOs. All the shares and backups are on the main UnRAID array. It's the best of both worlds, really, where you don't require high synchronous reads for general data.
[6.9.0-beta22] SMBD Panic
-
in Prereleases
Posted
This should probably be a new thread, as it isn't related to the smbd panic I reported. The smbd panic has a worse effect, in that it terminates communication (i.e. your network shares become unreachable) and times out.
Your issue seems to be related to an incorrect boolean value (i.e. something isn't correctly defined as true/false or yes/no).
That shouldn't cause a daemon crash; at most, one of your shares won't work.
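For illustration (the share name and path here are hypothetical), Samba boolean parameters in smb.conf accept yes/no, true/false, or 0/1; a value outside that set is what produces this kind of parse complaint:

```
[media]
    path = /mnt/user/media
    read only = no        ; valid booleans: yes/no, true/false, 0/1
    browseable = yes
```

Running `testparm` against the config will flag invalid parameter values without having to restart smbd, so it's a quick way to find which share definition has the bad boolean.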