  • [6.9.0-beta22] SMBD Panic


    Jarsky
    • Minor

    I wonder if anyone else is having this issue?

     

    I have several Ubuntu VMs mounting my primary share from the UnRAID pool via SMB.

    One of them in particular, which runs qBittorrent, has high writes (up to approximately 100 MB/s). Since upgrading to the 6.9.0 build (from 6.8.3), the speed constantly drops as smbd appears to panic, which drops the connection.

     

    From my Ubuntu VM I can see the connection to smbd drop, which blocks the app for more than 2 minutes:

    [18358.622226] cifs_vfs_err: 23 callbacks suppressed
    [18358.622229] CIFS VFS: Server tower has not responded in 180 seconds. Reconnecting...
    [18363.876639] CIFS VFS: Free previous auth_key.response = 00000000ee7c66f6
    [18367.456112] INFO: task qbittorrent-nox:16780 blocked for more than 120 seconds.
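
    (To catch the drop as it happens, kernel messages can be followed live from the client. This is a standard util-linux dmesg invocation; the filter string is just a suggestion.)

    # Follow kernel messages live and show only CIFS client errors
    sudo dmesg -w | grep -i "CIFS VFS"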

    When this occurs, I see the errors below in the SMB logs on UnRAID:

     

    [2020/06/30 18:16:34.725129,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
      assert_no_pending_aio: fsp->num_aio_requests=1
    [2020/06/30 18:16:34.725152,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 28606): can not close with outstanding aio requests
    [2020/06/30 18:16:34.725236,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
       #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
       #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
       #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
       #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
       #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
       #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
       #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
       #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
       #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
       #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
       #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
       #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
       #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
       #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
       #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
       #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
       #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
       #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
       #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
       #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
       #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
       #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
       #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
       #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
       #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
       #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
       #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
       #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
       #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
       #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
       #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
       #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
       #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]

     

     

    I have tried adjusting my mount command to SMB3 and also disabling the cache, but the problem persists. Here's my /etc/fstab:

     

    //tower/share /mnt/share cifs vers=3.0,cache=none,credentials=/home/user/.smbcredentials,uid=1000,gid=1010,iocharset=utf8,noperm 0 0
    //tower/plexmediaserver /mnt/plexmediaserver cifs vers=3.0,cache=none,credentials=/home/user/.smbcredentials,uid=1000,gid=1010,iocharset=utf8,noperm 0 0
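
    (The same options can be tested interactively before committing them to fstab. A minimal sketch, assuming the paths above; /mnt/test is a hypothetical mount point, and the credentials file uses the standard mount.cifs username=/password= key format.)

    # Manual test mount mirroring the fstab options above
    sudo mkdir -p /mnt/test
    sudo mount -t cifs //tower/share /mnt/test \
        -o vers=3.0,cache=none,credentials=/home/user/.smbcredentials,uid=1000,gid=1010,iocharset=utf8,noperm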
    

     




    Recommended Comments

    Interstellar:

    This issue for me might be related, as I've noticed SMB shares dropping offline on my Macs too.

     

    Jun 30 15:27:54 NAS kernel: 
    Jun 30 15:28:05 NAS emhttpd: Starting services...
    Jun 30 15:28:05 NAS emhttpd: shcmd (3068): /etc/rc.d/rc.samba restart
    Jun 30 15:28:05 NAS nmbd[19158]: [2020/06/30 15:28:05.087241,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Jun 30 15:28:05 NAS nmbd[19158]:   Got SIGTERM: going down...
    Jun 30 15:28:05 NAS winbindd[19168]: [2020/06/30 15:28:05.087260,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:05 NAS winbindd[19168]:   Got sig[15] terminate (is_parent=1)
    Jun 30 15:28:05 NAS winbindd[19170]: [2020/06/30 15:28:05.087287,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:05 NAS winbindd[19170]:   Got sig[15] terminate (is_parent=0)
    Jun 30 15:28:05 NAS winbindd[19280]: [2020/06/30 15:28:05.089948,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:05 NAS winbindd[19280]:   Got sig[15] terminate (is_parent=0)
    Jun 30 15:28:07 NAS root: Starting Samba:  /usr/sbin/smbd -D
    Jun 30 15:28:07 NAS root:                  /usr/sbin/nmbd -D
    Jun 30 15:28:07 NAS smbd[31726]: [2020/06/30 15:28:07.335241,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:07 NAS smbd[31726]:   daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
    Jun 30 15:28:07 NAS root:                  /usr/sbin/wsdd 
    Jun 30 15:28:07 NAS nmbd[31731]: [2020/06/30 15:28:07.349211,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:07 NAS nmbd[31731]:   daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
    Jun 30 15:28:07 NAS root:                  /usr/sbin/winbindd -D
    Jun 30 15:28:07 NAS winbindd[31741]: [2020/06/30 15:28:07.395123,  0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
    Jun 30 15:28:07 NAS winbindd[31741]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Jun 30 15:28:07 NAS winbindd[31741]: [2020/06/30 15:28:07.395717,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:07 NAS winbindd[31741]:   daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
    Jun 30 15:28:07 NAS emhttpd: shcmd (3076): smbcontrol smbd close-share 'Backup'
    Jun 30 15:28:10 NAS emhttpd: Starting services...
    Jun 30 15:28:10 NAS emhttpd: shcmd (3078): /etc/rc.d/rc.samba restart
    Jun 30 15:28:10 NAS nmbd[31731]: [2020/06/30 15:28:10.584782,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Jun 30 15:28:10 NAS nmbd[31731]:   Got SIGTERM: going down...
    Jun 30 15:28:10 NAS winbindd[31741]: [2020/06/30 15:28:10.584818,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:10 NAS winbindd[31743]: [2020/06/30 15:28:10.584817,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:10 NAS winbindd[31743]:   Got sig[15] terminate (is_parent=0)
    Jun 30 15:28:10 NAS winbindd[31741]:   Got sig[15] terminate (is_parent=1)
    Jun 30 15:28:14 NAS root: Starting Samba:  /usr/sbin/smbd -D
    Jun 30 15:28:14 NAS root:                  /usr/sbin/nmbd -D
    Jun 30 15:28:14 NAS smbd[31832]: [2020/06/30 15:28:14.246073,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:14 NAS smbd[31832]:   daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
    Jun 30 15:28:14 NAS root:                  /usr/sbin/wsdd 
    Jun 30 15:28:14 NAS nmbd[31837]: [2020/06/30 15:28:14.260205,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:14 NAS nmbd[31837]:   daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
    Jun 30 15:28:14 NAS root:                  /usr/sbin/winbindd -D
    Jun 30 15:28:14 NAS winbindd[31847]: [2020/06/30 15:28:14.305319,  0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
    Jun 30 15:28:14 NAS winbindd[31847]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Jun 30 15:28:14 NAS winbindd[31847]: [2020/06/30 15:28:14.305909,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:14 NAS winbindd[31847]:   daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
    Jun 30 15:28:14 NAS emhttpd: shcmd (3086): smbcontrol smbd close-share 'disk1'
    Jun 30 15:28:25 NAS smbd[31912]: [2020/06/30 15:28:25.749942,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jun 30 15:28:25 NAS smbd[31912]:   lp_bool(no): value is not boolean!
    Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.315970,  0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.316194,  0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 192.168.122.1
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.316330,  0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 10.10.1.150
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   *****

     

    11 hours ago, Jarsky said:

    P.S. I did find this, which is possibly related: https://bugzilla.samba.org/show_bug.cgi?id=14301

    Seems related, given the error "outstanding aio requests".

    Due to be fixed in the next 4.12.xx build (UnRAID 6.9.0-beta22 is on version 4.12.3).

     

     

    Thank you for chasing down the source of this problem. I can't tell which patch is the proper one, or else I'd just add it, but 4.12.4 is due out on July 2:

     

    https://lists.samba.org/archive/samba-announce/2020/000523.html
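
    (To confirm which Samba build a server is actually running before and after the update, smbd can report its version directly; this is a standard smbd flag.)

    # Print the running Samba version (6.9.0-beta22 should report 4.12.3)
    smbd --version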

     

    On 7/1/2020 at 2:32 AM, Interstellar said:

    This issue for me might be related, as I've noticed SMB shares dropping offline on my Macs too.

    It looks like it might be. It works fine to Windows; it only seems to affect straight SMB clients.

     

    Check your /var/log/samba/log.smbd and you should see it complaining about "can not close with outstanding aio requests".
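
    (A quick grep for the panic string saves scrolling the whole file; the path is the stock UnRAID Samba log location.)

    grep -n "outstanding aio requests" /var/log/samba/log.smbd

    The full entries look like this: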

     

    [2020/07/02 05:25:48.276594,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
      assert_no_pending_aio: fsp->num_aio_requests=1
    [2020/07/02 05:25:48.276625,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 8790): can not close with outstanding aio requests
    [2020/07/02 05:25:48.276730,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
       #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
       #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
       #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
       #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
       #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
       #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
       #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
       #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
       #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
       #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
       #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
       #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
       #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
       #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
       #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
       #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
       #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
       #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
       #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
       #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
       #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
       #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
       #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
       #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
       #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
       #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
       #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
       #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
       #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
       #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
       #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
       #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
       #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]
    [2020/07/02 05:25:48.286467,  0] ../../source3/lib/dumpcore.c:315(dump_core)
      dumping core in /var/log/samba/cores/smbd
    [2020/07/02 05:30:22.622989,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
      assert_no_pending_aio: fsp->num_aio_requests=1
    [2020/07/02 05:30:22.623013,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 25740): can not close with outstanding aio requests
    [2020/07/02 05:30:22.623078,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
       #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
       #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
       #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
       #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
       #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
       #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
       #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
       #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
       #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
       #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
       #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
       #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
       #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
       #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
       #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
       #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
       #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
       #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
       #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
       #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
       #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
       #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
       #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
       #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
       #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
       #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
       #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
       #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
       #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
       #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
       #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
       #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
       #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]
    [2020/07/02 05:30:22.635989,  0] ../../source3/lib/dumpcore.c:315(dump_core)
      dumping core in /var/log/samba/cores/smbd

     

     

    Share this comment


    Link to comment
    Share on other sites

    Interstellar:

    Yep. Confirmed.

     

    [2020/06/30 14:48:23.705048,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 11793): can not close with outstanding aio requests
    [2020/06/30 14:48:23.705241,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14f39d639e39] [sp=0x7ffce1f867f0]
       #1 smb_panic_s3 + 0x23 [ip=0x14f39d15bf73] [sp=0x7ffce1f87130]
       #2 smb_panic + 0x2f [ip=0x14f39d63a04f] [sp=0x7ffce1f87150]
       #3 create_file_default + 0x71f [ip=0x14f39d46a1cf] [sp=0x7ffce1f87260]
       #4 close_file + 0xc3 [ip=0x14f39d46ab53] [sp=0x7ffce1f87270]
       #5 file_close_conn + 0x5a [ip=0x14f39d41031a] [sp=0x7ffce1f87490]
       #6 close_cnum + 0x61 [ip=0x14f39d488ed1] [sp=0x7ffce1f874b0]
       #7 smbXsrv_tcon_disconnect + 0x4b [ip=0x14f39d4b485b] [sp=0x7ffce1f875f0]
       #8 smbXsrv_tcon_disconnect + 0x3d2 [ip=0x14f39d4b4be2] [sp=0x7ffce1f87640]
       #9 dbwrap_unmarshall + 0x186 [ip=0x14f39c1956b6] [sp=0x7ffce1f87660]
       #10 dbwrap_unmarshall + 0x3bb [ip=0x14f39c1958eb] [sp=0x7ffce1f87720]
       #11 dbwrap_traverse + 0x7 [ip=0x14f39c193f37] [sp=0x7ffce1f87750]
       #12 smbXsrv_session_global_traverse + 0x790 [ip=0x14f39d4b3860] [sp=0x7ffce1f87760]
       #13 smbXsrv_open_cleanup + 0x4bf [ip=0x14f39d4b7a9f] [sp=0x7ffce1f877b0]
       #14 smbd_exit_server_cleanly + 0x10 [ip=0x14f39d4b8050] [sp=0x7ffce1f87810]
       #15 exit_server_cleanly + 0x14 [ip=0x14f39ca78284] [sp=0x7ffce1f87820]
       #16 no_acl_syscall_error + 0x42 [ip=0x14f39d47f4d2] [sp=0x7ffce1f87830]
       #17 tevent_common_invoke_signal_handler + 0x92 [ip=0x14f39ca297b2] [sp=0x7ffce1f87840]
       #18 tevent_common_check_signal + 0xf3 [ip=0x14f39ca29943] [sp=0x7ffce1f87880]
       #19 tevent_wakeup_recv + 0xe4a [ip=0x14f39ca2b82a] [sp=0x7ffce1f879a0]
       #20 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14f39ca29c07] [sp=0x7ffce1f87a00]
       #21 _tevent_loop_once + 0x94 [ip=0x14f39ca24df4] [sp=0x7ffce1f87a20]
       #22 tevent_common_loop_wait + 0x1b [ip=0x14f39ca2509b] [sp=0x7ffce1f87a50]
       #23 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14f39ca29ba7] [sp=0x7ffce1f87a70]
       #24 smbd_process + 0x7a7 [ip=0x14f39d4862f7] [sp=0x7ffce1f87a90]
       #25 samba_tevent_glib_glue_create + 0x2291 [ip=0x55a0d0735eb1] [sp=0x7ffce1f87b20]
       #26 tevent_common_invoke_fd_handler + 0x7d [ip=0x14f39ca2570d] [sp=0x7ffce1f87bf0]
       #27 tevent_wakeup_recv + 0x1097 [ip=0x14f39ca2ba77] [sp=0x7ffce1f87c20]
       #28 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14f39ca29c07] [sp=0x7ffce1f87c80]
       #29 _tevent_loop_once + 0x94 [ip=0x14f39ca24df4] [sp=0x7ffce1f87ca0]
       #30 tevent_common_loop_wait + 0x1b [ip=0x14f39ca2509b] [sp=0x7ffce1f87cd0]
       #31 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14f39ca29ba7] [sp=0x7ffce1f87cf0]
       #32 main + 0x1b2f [ip=0x55a0d072fc1f] [sp=0x7ffce1f87d10]
       #33 __libc_start_main + 0xeb [ip=0x14f39c735e5b] [sp=0x7ffce1f880c0]
       #34 _start + 0x2a [ip=0x55a0d072fffa] [sp=0x7ffce1f88180]
    [2020/06/30 14:48:23.724916,  0] ../../source3/lib/dumpcore.c:315(dump_core)
      dumping core in /var/log/samba/cores/smbd
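
    (Until the fixed Samba build lands, one possible stop-gap, untested here, is to disable Samba's async I/O on the server so no aio requests can be outstanding at close time. "aio read size" and "aio write size" are standard smb.conf parameters; on UnRAID they could go in Settings -> SMB -> SMB Extras.)

    # Stop-gap until Samba 4.12.4: disable async I/O server-side
    aio read size = 0
    aio write size = 0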

     




