• [6.9.0-beta22] SMBD Panic


    Jarsky
    • Minor

    I wonder if anyone else is having this issue?

     

    I have several Ubuntu VMs mounting my primary share from the UnRAID pool via SMB.

    One of them in particular, which runs qBittorrent, sustains high writes (up to approximately 100 MB/s). Since upgrading to the 6.9.0 build (from 6.8.3), the speed constantly drops as smbd appears to panic, which drops the connection.

     

    From my Ubuntu VM I can see the connection to smbd drop, which blocks the app for more than 2 minutes:

    [18358.622226] cifs_vfs_err: 23 callbacks suppressed
    [18358.622229] CIFS VFS: Server tower has not responded in 180 seconds. Reconnecting...
    [18363.876639] CIFS VFS: Free previous auth_key.response = 00000000ee7c66f6
    [18367.456112] INFO: task qbittorrent-nox:16780 blocked for more than 120 seconds.
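
    For reference, these client-side stalls can be watched live from the VM with standard util-linux dmesg flags (the grep pattern here is only illustrative):

    dmesg --follow --ctime | grep -iE 'cifs|blocked for more than'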

    When this occurs, I see the errors below in UnRAID's SMB logs:

     

    [2020/06/30 18:16:34.725129,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
      assert_no_pending_aio: fsp->num_aio_requests=1
    [2020/06/30 18:16:34.725152,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 28606): can not close with outstanding aio requests
    [2020/06/30 18:16:34.725236,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
       #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
       #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
       #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
       #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
       #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
       #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
       #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
       #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
       #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
       #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
       #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
       #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
       #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
       #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
       #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
       #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
       #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
       #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
       #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
       #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
       #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
       #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
       #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
       #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
       #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
       #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
       #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
       #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
       #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
       #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
       #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
       #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
       #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]

     

     

    I have tried adjusting my mount options to SMB3 and also disabling the cache, but the result is the same. Here's my /etc/fstab:

     

    //tower/share /mnt/share cifs vers=3.0,cache=none,credentials=/home/user/.smbcredentials,uid=1000,gid=1010,iocharset=utf8,noperm 0 0
    //tower/plexmediaserver /mnt/plexmediaserver cifs vers=3.0,cache=none,credentials=/home/user/.smbcredentials,uid=1000,gid=1010,iocharset=utf8,noperm 0 0
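
    To try different options without editing fstab each time, the share can also be mounted one-off; a minimal sketch using the same share and mount point as above:

    sudo mount -t cifs //tower/share /mnt/share -o vers=3.0,cache=none,credentials=/home/user/.smbcredentials,uid=1000,gid=1010,iocharset=utf8,noperm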
    

     




    User Feedback

    Recommended Comments

    My issue might be related, as I've noticed SMB shares dropping offline on my Macs too.

     

    Jun 30 15:27:54 NAS kernel: 
    Jun 30 15:28:05 NAS emhttpd: Starting services...
    Jun 30 15:28:05 NAS emhttpd: shcmd (3068): /etc/rc.d/rc.samba restart
    Jun 30 15:28:05 NAS nmbd[19158]: [2020/06/30 15:28:05.087241,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Jun 30 15:28:05 NAS nmbd[19158]:   Got SIGTERM: going down...
    Jun 30 15:28:05 NAS winbindd[19168]: [2020/06/30 15:28:05.087260,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:05 NAS winbindd[19168]:   Got sig[15] terminate (is_parent=1)
    Jun 30 15:28:05 NAS winbindd[19170]: [2020/06/30 15:28:05.087287,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:05 NAS winbindd[19170]:   Got sig[15] terminate (is_parent=0)
    Jun 30 15:28:05 NAS winbindd[19280]: [2020/06/30 15:28:05.089948,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:05 NAS winbindd[19280]:   Got sig[15] terminate (is_parent=0)
    Jun 30 15:28:07 NAS root: Starting Samba:  /usr/sbin/smbd -D
    Jun 30 15:28:07 NAS root:                  /usr/sbin/nmbd -D
    Jun 30 15:28:07 NAS smbd[31726]: [2020/06/30 15:28:07.335241,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:07 NAS smbd[31726]:   daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
    Jun 30 15:28:07 NAS root:                  /usr/sbin/wsdd 
    Jun 30 15:28:07 NAS nmbd[31731]: [2020/06/30 15:28:07.349211,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:07 NAS nmbd[31731]:   daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
    Jun 30 15:28:07 NAS root:                  /usr/sbin/winbindd -D
    Jun 30 15:28:07 NAS winbindd[31741]: [2020/06/30 15:28:07.395123,  0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
    Jun 30 15:28:07 NAS winbindd[31741]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Jun 30 15:28:07 NAS winbindd[31741]: [2020/06/30 15:28:07.395717,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:07 NAS winbindd[31741]:   daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
    Jun 30 15:28:07 NAS emhttpd: shcmd (3076): smbcontrol smbd close-share 'Backup'
    Jun 30 15:28:10 NAS emhttpd: Starting services...
    Jun 30 15:28:10 NAS emhttpd: shcmd (3078): /etc/rc.d/rc.samba restart
    Jun 30 15:28:10 NAS nmbd[31731]: [2020/06/30 15:28:10.584782,  0] ../../source3/nmbd/nmbd.c:59(terminate)
    Jun 30 15:28:10 NAS nmbd[31731]:   Got SIGTERM: going down...
    Jun 30 15:28:10 NAS winbindd[31741]: [2020/06/30 15:28:10.584818,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:10 NAS winbindd[31743]: [2020/06/30 15:28:10.584817,  0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
    Jun 30 15:28:10 NAS winbindd[31743]:   Got sig[15] terminate (is_parent=0)
    Jun 30 15:28:10 NAS winbindd[31741]:   Got sig[15] terminate (is_parent=1)
    Jun 30 15:28:14 NAS root: Starting Samba:  /usr/sbin/smbd -D
    Jun 30 15:28:14 NAS root:                  /usr/sbin/nmbd -D
    Jun 30 15:28:14 NAS smbd[31832]: [2020/06/30 15:28:14.246073,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:14 NAS smbd[31832]:   daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
    Jun 30 15:28:14 NAS root:                  /usr/sbin/wsdd 
    Jun 30 15:28:14 NAS nmbd[31837]: [2020/06/30 15:28:14.260205,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:14 NAS nmbd[31837]:   daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
    Jun 30 15:28:14 NAS root:                  /usr/sbin/winbindd -D
    Jun 30 15:28:14 NAS winbindd[31847]: [2020/06/30 15:28:14.305319,  0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
    Jun 30 15:28:14 NAS winbindd[31847]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Jun 30 15:28:14 NAS winbindd[31847]: [2020/06/30 15:28:14.305909,  0] ../../lib/util/become_daemon.c:135(daemon_ready)
    Jun 30 15:28:14 NAS winbindd[31847]:   daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
    Jun 30 15:28:14 NAS emhttpd: shcmd (3086): smbcontrol smbd close-share 'disk1'
    Jun 30 15:28:25 NAS smbd[31912]: [2020/06/30 15:28:25.749942,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jun 30 15:28:25 NAS smbd[31912]:   lp_bool(no): value is not boolean!
    Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.315970,  0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.316194,  0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 192.168.122.1
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.316330,  0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
    Jun 30 15:28:37 NAS nmbd[31837]:   *****
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 10.10.1.150
    Jun 30 15:28:37 NAS nmbd[31837]:   
    Jun 30 15:28:37 NAS nmbd[31837]:   *****

     

    Link to comment
    11 hours ago, Jarsky said:

    P.S. I did find this, which is possibly related: https://bugzilla.samba.org/show_bug.cgi?id=14301

    It seems related, given the error "outstanding aio requests".

    A fix is due in the next 4.12.x build (UnRAID 6.9.0-beta22 ships Samba version 4.12.3).

     

     

    Thank you for chasing down the source of this problem. I can't really tell which patch is the proper one, or else I'd just add it, but 4.12.4 is due out on July 2:

     

    https://lists.samba.org/archive/samba-announce/2020/000523.html
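
    For reference, the Samba build a given UnRAID release is running can be confirmed from a terminal on the server, which makes it easy to tell once the fix has landed:

    smbd --version
    # e.g. "Version 4.12.3" on 6.9.0-beta22; the aio fix is expected in 4.12.4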

     

    Link to comment
    On 7/1/2020 at 2:32 AM, Interstellar said:

    My issue might be related, as I've noticed SMB shares dropping offline on my Macs too.

    It looks like it might be. Windows clients work fine; it only seems to affect straight SMB.

     

    Check your /var/log/samba/log.smbd and you should see something like this, complaining about "can not close with outstanding aio requests":

     

    [2020/07/02 05:25:48.276594,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
      assert_no_pending_aio: fsp->num_aio_requests=1
    [2020/07/02 05:25:48.276625,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 8790): can not close with outstanding aio requests
    [2020/07/02 05:25:48.276730,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
       #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
       #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
       #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
       #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
       #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
       #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
       #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
       #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
       #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
       #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
       #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
       #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
       #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
       #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
       #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
       #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
       #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
       #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
       #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
       #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
       #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
       #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
       #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
       #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
       #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
       #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
       #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
       #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
       #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
       #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
       #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
       #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
       #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]
    [2020/07/02 05:25:48.286467,  0] ../../source3/lib/dumpcore.c:315(dump_core)
      dumping core in /var/log/samba/cores/smbd
    [2020/07/02 05:30:22.622989,  0] ../../source3/smbd/close.c:648(assert_no_pending_aio)
      assert_no_pending_aio: fsp->num_aio_requests=1
    [2020/07/02 05:30:22.623013,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 25740): can not close with outstanding aio requests
    [2020/07/02 05:30:22.623078,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14abc3605e39] [sp=0x7fffb7a7f030]
       #1 smb_panic_s3 + 0x23 [ip=0x14abc3127f73] [sp=0x7fffb7a7f970]
       #2 smb_panic + 0x2f [ip=0x14abc360604f] [sp=0x7fffb7a7f990]
       #3 create_file_default + 0x71f [ip=0x14abc34361cf] [sp=0x7fffb7a7faa0]
       #4 close_file + 0xc3 [ip=0x14abc3436b53] [sp=0x7fffb7a7fab0]
       #5 file_close_user + 0x35 [ip=0x14abc33dc485] [sp=0x7fffb7a7fcd0]
       #6 smbXsrv_session_logoff + 0x4d [ip=0x14abc347dfdd] [sp=0x7fffb7a7fcf0]
       #7 smbXsrv_session_logoff + 0x3e2 [ip=0x14abc347e372] [sp=0x7fffb7a7fd40]
       #8 dbwrap_unmarshall + 0x186 [ip=0x14abc21606b6] [sp=0x7fffb7a7fd60]
       #9 dbwrap_unmarshall + 0x3bb [ip=0x14abc21608eb] [sp=0x7fffb7a7fe20]
       #10 dbwrap_traverse + 0x7 [ip=0x14abc215ef37] [sp=0x7fffb7a7fe50]
       #11 smbXsrv_session_logoff_all + 0x5c [ip=0x14abc347e52c] [sp=0x7fffb7a7fe60]
       #12 smbXsrv_open_cleanup + 0x4d2 [ip=0x14abc3483ab2] [sp=0x7fffb7a7fea0]
       #13 smbd_exit_server_cleanly + 0x10 [ip=0x14abc3484050] [sp=0x7fffb7a7ff00]
       #14 exit_server_cleanly + 0x14 [ip=0x14abc2a44284] [sp=0x7fffb7a7ff10]
       #15 smbd_server_connection_terminate_ex + 0x111 [ip=0x14abc345fe91] [sp=0x7fffb7a7ff20]
       #16 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x14abc3462ca9] [sp=0x7fffb7a7ff50]
       #17 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a7ffc0]
       #18 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a7fff0]
       #19 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a80050]
       #20 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a80070]
       #21 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a800a0]
       #22 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a800c0]
       #23 smbd_process + 0x7a7 [ip=0x14abc34522f7] [sp=0x7fffb7a800e0]
       #24 samba_tevent_glib_glue_create + 0x2291 [ip=0x563fff42feb1] [sp=0x7fffb7a80170]
       #25 tevent_common_invoke_fd_handler + 0x7d [ip=0x14abc29f070d] [sp=0x7fffb7a80240]
       #26 tevent_wakeup_recv + 0x1097 [ip=0x14abc29f6a77] [sp=0x7fffb7a80270]
       #27 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14abc29f4c07] [sp=0x7fffb7a802d0]
       #28 _tevent_loop_once + 0x94 [ip=0x14abc29efdf4] [sp=0x7fffb7a802f0]
       #29 tevent_common_loop_wait + 0x1b [ip=0x14abc29f009b] [sp=0x7fffb7a80320]
       #30 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14abc29f4ba7] [sp=0x7fffb7a80340]
       #31 main + 0x1b2f [ip=0x563fff429c1f] [sp=0x7fffb7a80360]
       #32 __libc_start_main + 0xeb [ip=0x14abc2700e5b] [sp=0x7fffb7a80710]
       #33 _start + 0x2a [ip=0x563fff429ffa] [sp=0x7fffb7a807d0]
    [2020/07/02 05:30:22.635989,  0] ../../source3/lib/dumpcore.c:315(dump_core)
      dumping core in /var/log/samba/cores/smbd
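
    A quick way to count how often the panic has fired is to grep the same log (the pattern is taken verbatim from the messages above):

    grep -c 'can not close with outstanding aio requests' /var/log/samba/log.smbd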

     

     

    Link to comment

    Yep. Confirmed.

     

    [2020/06/30 14:48:23.705048,  0] ../../source3/lib/util.c:829(smb_panic_s3)
      PANIC (pid 11793): can not close with outstanding aio requests
    [2020/06/30 14:48:23.705241,  0] ../../lib/util/fault.c:222(log_stack_trace)
      BACKTRACE:
       #0 log_stack_trace + 0x39 [ip=0x14f39d639e39] [sp=0x7ffce1f867f0]
       #1 smb_panic_s3 + 0x23 [ip=0x14f39d15bf73] [sp=0x7ffce1f87130]
       #2 smb_panic + 0x2f [ip=0x14f39d63a04f] [sp=0x7ffce1f87150]
       #3 create_file_default + 0x71f [ip=0x14f39d46a1cf] [sp=0x7ffce1f87260]
       #4 close_file + 0xc3 [ip=0x14f39d46ab53] [sp=0x7ffce1f87270]
       #5 file_close_conn + 0x5a [ip=0x14f39d41031a] [sp=0x7ffce1f87490]
       #6 close_cnum + 0x61 [ip=0x14f39d488ed1] [sp=0x7ffce1f874b0]
       #7 smbXsrv_tcon_disconnect + 0x4b [ip=0x14f39d4b485b] [sp=0x7ffce1f875f0]
       #8 smbXsrv_tcon_disconnect + 0x3d2 [ip=0x14f39d4b4be2] [sp=0x7ffce1f87640]
       #9 dbwrap_unmarshall + 0x186 [ip=0x14f39c1956b6] [sp=0x7ffce1f87660]
       #10 dbwrap_unmarshall + 0x3bb [ip=0x14f39c1958eb] [sp=0x7ffce1f87720]
       #11 dbwrap_traverse + 0x7 [ip=0x14f39c193f37] [sp=0x7ffce1f87750]
       #12 smbXsrv_session_global_traverse + 0x790 [ip=0x14f39d4b3860] [sp=0x7ffce1f87760]
       #13 smbXsrv_open_cleanup + 0x4bf [ip=0x14f39d4b7a9f] [sp=0x7ffce1f877b0]
       #14 smbd_exit_server_cleanly + 0x10 [ip=0x14f39d4b8050] [sp=0x7ffce1f87810]
       #15 exit_server_cleanly + 0x14 [ip=0x14f39ca78284] [sp=0x7ffce1f87820]
       #16 no_acl_syscall_error + 0x42 [ip=0x14f39d47f4d2] [sp=0x7ffce1f87830]
       #17 tevent_common_invoke_signal_handler + 0x92 [ip=0x14f39ca297b2] [sp=0x7ffce1f87840]
       #18 tevent_common_check_signal + 0xf3 [ip=0x14f39ca29943] [sp=0x7ffce1f87880]
       #19 tevent_wakeup_recv + 0xe4a [ip=0x14f39ca2b82a] [sp=0x7ffce1f879a0]
       #20 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14f39ca29c07] [sp=0x7ffce1f87a00]
       #21 _tevent_loop_once + 0x94 [ip=0x14f39ca24df4] [sp=0x7ffce1f87a20]
       #22 tevent_common_loop_wait + 0x1b [ip=0x14f39ca2509b] [sp=0x7ffce1f87a50]
       #23 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14f39ca29ba7] [sp=0x7ffce1f87a70]
       #24 smbd_process + 0x7a7 [ip=0x14f39d4862f7] [sp=0x7ffce1f87a90]
       #25 samba_tevent_glib_glue_create + 0x2291 [ip=0x55a0d0735eb1] [sp=0x7ffce1f87b20]
       #26 tevent_common_invoke_fd_handler + 0x7d [ip=0x14f39ca2570d] [sp=0x7ffce1f87bf0]
       #27 tevent_wakeup_recv + 0x1097 [ip=0x14f39ca2ba77] [sp=0x7ffce1f87c20]
       #28 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14f39ca29c07] [sp=0x7ffce1f87c80]
       #29 _tevent_loop_once + 0x94 [ip=0x14f39ca24df4] [sp=0x7ffce1f87ca0]
       #30 tevent_common_loop_wait + 0x1b [ip=0x14f39ca2509b] [sp=0x7ffce1f87cd0]
       #31 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14f39ca29ba7] [sp=0x7ffce1f87cf0]
       #32 main + 0x1b2f [ip=0x55a0d072fc1f] [sp=0x7ffce1f87d10]
       #33 __libc_start_main + 0xeb [ip=0x14f39c735e5b] [sp=0x7ffce1f880c0]
       #34 _start + 0x2a [ip=0x55a0d072fffa] [sp=0x7ffce1f88180]
    [2020/06/30 14:48:23.724916,  0] ../../source3/lib/dumpcore.c:315(dump_core)
      dumping core in /var/log/samba/cores/smbd

     

    Edited by Interstellar
    Link to comment
    15 hours ago, Interstellar said:

    Yep. Confirmed.

     

    It seems this affects older SMB clients. Both my Ubuntu VMs were 18.04 with the default Linux 4.15 kernel, which shipped Samba 4.7.2. I installed the HWE (hardware enablement) stack, which upgraded my Linux kernel to 5.2. That in turn upgraded my Samba version to 4.11.x, and I'm no longer getting panics on UnRAID.
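
    For anyone wanting to do the same, installing the HWE stack on 18.04 looks roughly like this (standard Ubuntu meta-package name; verify the kernel after rebooting):

    sudo apt update
    sudo apt install --install-recommends linux-generic-hwe-18.04
    sudo reboot
    # after the reboot:
    uname -r    # should now report a 5.x HWE kernel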

     

    I'm not too familiar with macOS, but UnRAID should have a new 4.12.x Samba in its next build I assume (6.9.0-RC1?), so it should be fixed then, if it's related to the Samba bug I linked above.

    Link to comment

    Seems to be resolved in beta24.

     

    Getting these messages in the syslog though...

     

    Jul  9 03:00:15 NAS smbd[25571]: [2020/07/09 03:00:15.531790,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul  9 03:00:15 NAS smbd[25571]:   lp_bool(no): value is not boolean!
    Jul  9 03:09:56 NAS smbd[28278]: [2020/07/09 03:09:56.762635,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul  9 03:09:56 NAS smbd[28278]:   lp_bool(no): value is not boolean!
    Jul  9 03:24:34 NAS smbd[32488]: [2020/07/09 03:24:34.417671,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul  9 03:24:34 NAS smbd[32488]:   lp_bool(no): value is not boolean!
    Jul  9 03:30:19 NAS smbd[1791]: [2020/07/09 03:30:19.449985,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul  9 03:30:19 NAS smbd[1791]:   lp_bool(no): value is not boolean!

     

    Link to comment
    2 hours ago, Interstellar said:

    Getting these messages in the syslog though...

    Please post output of

    testparm -sv

    (this will reveal your share names; you can edit those out if you want, I only need to see the Global section)

    Link to comment

    Sorry for the delay... been away!

     

    [global]
    	abort shutdown script = 
    	add group script = 
    	additional dns hostnames = 
    	add machine script = 
    	addport command = 
    	addprinter command = 
    	add share command = 
    	add user script = 
    	add user to group script = 
    	afs token lifetime = 604800
    	afs username map = 
    	aio max threads = 100
    	algorithmic rid base = 1000
    	allow dcerpc auth level connect = No
    	allow dns updates = secure only
    	allow insecure wide links = No
    	allow nt4 crypto = No
    	allow trusted domains = Yes
    	allow unsafe cluster upgrade = No
    	apply group policies = No
    	async smb echo handler = No
    	auth event notification = No
    	auto services = 
    	binddns dir = /var/lib/samba/bind-dns
    	bind interfaces only = No
    	browse list = Yes
    	cache directory = /var/cache/samba
    	change notify = Yes
    	change share command = 
    	check password script = 
    	cldap port = 389
    	client ipc max protocol = default
    	client ipc min protocol = default
    	client ipc signing = default
    	client lanman auth = No
    	client ldap sasl wrapping = sign
    	client max protocol = default
    	client min protocol = SMB2_02
    	client NTLMv2 auth = Yes
    	client plaintext auth = No
    	client schannel = Yes
    	client signing = default
    	client use spnego principal = No
    	client use spnego = Yes
    	cluster addresses = 
    	clustering = No
    	config backend = file
    	config file = 
    	create krb5 conf = Yes
    	ctdbd socket = 
    	ctdb locktime warn threshold = 0
    	ctdb timeout = 0
    	cups connection timeout = 30
    	cups encrypt = No
    	cups server = 
    	dcerpc endpoint servers = epmapper, wkssvc, rpcecho, samr, netlogon, lsarpc, drsuapi, dssetup, unixinfo, browser, eventlog6, backupkey, dnsserver
    	deadtime = 10080
    	debug class = No
    	debug encryption = No
    	debug hires timestamp = Yes
    	debug pid = No
    	debug prefix timestamp = No
    	debug uid = No
    	dedicated keytab file = 
    	default service = 
    	defer sharing violations = Yes
    	delete group script = 
    	deleteprinter command = 
    	delete share command = 
    	delete user from group script = 
    	delete user script = 
    	dgram port = 138
    	disable netbios = No
    	disable spoolss = Yes
    	dns forwarder = 
    	dns proxy = Yes
    	dns update command = /usr/sbin/samba_dnsupdate
    	dns zone scavenging = No
    	domain logons = No
    	domain master = Auto
    	dos charset = CP850
    	dsdb event notification = No
    	dsdb group change notification = No
    	dsdb password event notification = No
    	enable asu support = No
    	enable core files = Yes
    	enable privileges = Yes
    	encrypt passwords = Yes
    	enhanced browsing = Yes
    	enumports command = 
    	eventlog list = 
    	get quota command = 
    	getwd cache = Yes
    	gpo update command = /usr/sbin/samba-gpupdate
    	guest account = nobody
    	homedir map = auto.home
    	host msdfs = Yes
    	hostname lookups = No
    	idmap backend = tdb
    	idmap cache time = 604800
    	idmap gid = 
    	idmap negative cache time = 120
    	idmap uid = 
    	include system krb5 conf = Yes
    	init logon delay = 100
    	init logon delayed hosts = 
    	interfaces = 
    	iprint server = 
    	keepalive = 300
    	kerberos encryption types = all
    	kerberos method = default
    	kernel change notify = Yes
    	kpasswd port = 464
    	krb5 port = 88
    	lanman auth = No
    	large readwrite = Yes
    	ldap admin dn = 
    	ldap connection timeout = 2
    	ldap debug level = 0
    	ldap debug threshold = 10
    	ldap delete dn = No
    	ldap deref = auto
    	ldap follow referral = Auto
    	ldap group suffix = 
    	ldap idmap suffix = 
    	ldap machine suffix = 
    	ldap max anonymous request size = 256000
    	ldap max authenticated request size = 16777216
    	ldap max search request size = 256000
    	ldap page size = 1000
    	ldap passwd sync = no
    	ldap replication sleep = 1000
    	ldap server require strong auth = Yes
    	ldap ssl = start tls
    	ldap ssl ads = No
    	ldap suffix = 
    	ldap timeout = 15
    	ldap user suffix = 
    	lm announce = Auto
    	lm interval = 60
    	load printers = No
    	local master = Yes
    	lock directory = /var/cache/samba
    	lock spin time = 200
    	log file = 
    	logging = syslog@0
    	log level = 1
    	log nt token command = 
    	logon drive = 
    	logon home = \\%N\%U
    	logon path = \\%N\%U\profile
    	logon script = 
    	log writeable files on exit = No
    	lpq cache time = 30
    	lsa over netlogon = No
    	machine password timeout = 604800
    	mangle prefix = 1
    	mangling method = hash2
    	map to guest = Bad User
    	max disk size = 0
    	max log size = 5000
    	max mux = 50
    	max open files = 16424
    	max smbd processes = 0
    	max stat cache size = 512
    	max ttl = 259200
    	max wins ttl = 518400
    	max xmit = 16644
    	mdns name = netbios
    	message command = 
    	min receivefile size = 0
    	min wins ttl = 21600
    	mit kdc command = 
    	multicast dns register = No
    	name cache timeout = 660
    	name resolve order = lmhosts wins host bcast
    	nbt client socket address = 0.0.0.0
    	nbt port = 137
    	ncalrpc dir = /var/run/samba/ncalrpc
    	netbios aliases = 
    	netbios name = BBDG-NAS
    	netbios scope = 
    	neutralize nt4 emulation = No
    	NIS homedir = No
    	nmbd bind explicit broadcast = Yes
    	nsupdate command = /usr/bin/nsupdate -g
    	ntlm auth = ntlmv1-permitted
    	nt pipe support = Yes
    	ntp signd socket directory = /var/lib/samba/ntp_signd
    	nt status support = Yes
    	null passwords = Yes
    	obey pam restrictions = No
    	old password allowed period = 60
    	oplock break wait time = 0
    	os2 driver map = 
    	os level = 100
    	pam password change = No
    	panic action = 
    	passdb backend = smbpasswd
    	passdb expand explicit = No
    	passwd chat = *new*password* %n\n *new*password* %n\n *changed*
    	passwd chat debug = No
    	passwd chat timeout = 2
    	passwd program = 
    	password hash gpg key ids = 
    	password hash userPassword schemes = 
    	password server = *
    	perfcount module = 
    	pid directory = /var/run
    	preferred master = Auto
    	prefork backoff increment = 10
    	prefork children = 4
    	prefork maximum backoff = 120
    	preload modules = 
    	printcap cache time = 750
    	printcap name = /dev/null
    	private dir = /var/lib/samba/private
    	raw NTLMv2 auth = No
    	read raw = Yes
    	realm = 
    	registry shares = No
    	reject md5 clients = No
    	reject md5 servers = No
    	remote announce = 
    	remote browse sync = 
    	rename user script = 
    	require strong key = Yes
    	reset on zero vc = No
    	restrict anonymous = 0
    	root directory = 
    	rpc big endian = No
    	rpc server dynamic port range = 49152-65535
    	rpc server port = 0
    	samba kcc command = /usr/sbin/samba_kcc
    	security = USER
    	server max protocol = SMB3
    	server min protocol = NT1
    	server multi channel support = No
    	server role = auto
    	server schannel = Yes
    	server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbindd, ntp_signd, kcc, dnsupdate, dns
    	server signing = default
    	server string = NAS
    	set primary group script = 
    	set quota command = 
    	share backend = classic
    	show add printer wizard = No
    	shutdown script = 
    	smb2 leases = Yes
    	smb2 max credits = 8192
    	smb2 max read = 8388608
    	smb2 max trans = 8388608
    	smb2 max write = 8388608
    	smbd profiling level = off
    	smb passwd file = /var/lib/samba/private/smbpasswd
    	smb ports = 445 139
    	socket options = TCP_NODELAY
    	spn update command = /usr/sbin/samba_spnupdate
    	stat cache = Yes
    	state directory = /var/lib/samba
    	svcctl list = 
    	syslog = 1
    	syslog only = No
    	template homedir = /home/%D/%U
    	template shell = /bin/false
    	time server = No
    	timestamp logs = Yes
    	tls cafile = tls/ca.pem
    	tls certfile = tls/cert.pem
    	tls crlfile = 
    	tls dh params file = 
    	tls enabled = Yes
    	tls keyfile = tls/key.pem
    	tls priority = NORMAL:-VERS-SSL3.0
    	tls verify peer = as_strict_as_possible
    	unicode = Yes
    	unix charset = UTF-8
    	unix extensions = No
    	unix password sync = No
    	use mmap = Yes
    	username level = 0
    	username map = 
    	username map cache time = 0
    	username map script = 
    	usershare allow guests = No
    	usershare max shares = 0
    	usershare owner only = Yes
    	usershare path = /var/lib/samba/usershares
    	usershare prefix allow list = 
    	usershare prefix deny list = 
    	usershare template share = 
    	utmp = No
    	utmp directory = 
    	winbind cache time = 300
    	winbindd socket directory = /var/run/samba/winbindd
    	winbind enum groups = No
    	winbind enum users = No
    	winbind expand groups = 0
    	winbind max clients = 200
    	winbind max domain connections = 1
    	winbind nested groups = Yes
    	winbind normalize names = No
    	winbind nss info = template
    	winbind offline logon = No
    	winbind reconnect delay = 30
    	winbind refresh tickets = No
    	winbind request timeout = 60
    	winbind rpc only = No
    	winbind scan trusted domains = Yes
    	winbind sealed pipes = Yes
    	winbind separator = \
    	winbind use default domain = No
    	winbind use krb5 enterprise principals = No
    	wins hook = 
    	wins proxy = No
    	wins server = 
    	wins support = No
    	workgroup = WORKGROUP
    	write raw = Yes
    	wtmp directory = 
    	fruit:nfs_aces = no
    	fruit:encoding = native
    	fruit:locking = none
    	fruit:metadata = netatalk
    	fruit:resource = file
    	fruit:aapl = yes
    	idmap config * : range = 3000-7999
    	idmap config * : backend = tdb
    	access based share enum = No
    	acl allow execute always = Yes
    	acl check permissions = Yes
    	acl group control = No
    	acl map full control = Yes
    	administrative share = No
    	admin users = 
    	afs share = No
    	aio read size = 0
    	aio write behind = 
    	aio write size = 4096
    	allocation roundup size = 0
    	available = Yes
    	blocking locks = Yes
    	block size = 1024
    	browseable = Yes
    	case sensitive = Yes
    	check parent directory delete on close = No
    	comment = 
    	copy = 
    	create mask = 0777
    	csc policy = manual
    	cups options = 
    	default case = lower
    	default devmode = Yes
    	delete readonly = No
    	delete veto files = No
    	dfree cache time = 0
    	dfree command = 
    	directory mask = 0777
    	directory name cache size = 100
    	dmapi support = No
    	dont descend = 
    	dos filemode = No
    	dos filetime resolution = No
    	dos filetimes = Yes
    	durable handles = Yes
    	ea support = Yes
    	fake directory create times = No
    	fake oplocks = No
    	follow symlinks = Yes
    	force create mode = 0000
    	force directory mode = 0000
    	force group = 
    	force printername = No
    	force unknown acl user = No
    	force user = 
    	fstype = NTFS
    	guest ok = No
    	guest only = No
    	hide dot files = No
    	hide files = 
    	hide new files timeout = 0
    	hide special files = No
    	hide unreadable = No
    	hide unwriteable files = No
    	hosts allow = 
    	hosts deny = 
    	include = /etc/samba/unassigned-shares/CCTV2.conf
    	inherit acls = No
    	inherit owner = no
    	inherit permissions = No
    	invalid users = root
    	kernel oplocks = No
    	kernel share modes = Yes
    	level2 oplocks = Yes
    	locking = Yes
    	lppause command = 
    	lpq command = lpq -P'%p'
    	lpresume command = 
    	lprm command = lprm -P'%p' %j
    	magic output = 
    	magic script = 
    	mangled names = illegal
    	mangling char = ~
    	map acl inherit = No
    	map archive = No
    	map hidden = No
    	map readonly = yes
    	map system = No
    	max connections = 0
    	max print jobs = 1000
    	max reported print jobs = 0
    	min print space = 0
    	msdfs proxy = 
    	msdfs root = No
    	msdfs shuffle referrals = No
    	nt acl support = Yes
    	ntvfs handler = unixuid, default
    	oplocks = Yes
    	path = 
    	posix locking = Yes
    	postexec = 
    	preexec = 
    	preexec close = No
    	preserve case = Yes
    	printable = No
    	print command = lpr -r -P'%p' %s
    	printer name = 
    	printing = bsd
    	printjob username = %U
    	print notify backchannel = No
    	queuepause command = 
    	queueresume command = 
    	read list = 
    	read only = Yes
    	root postexec = 
    	root preexec = 
    	root preexec close = No
    	short preserve case = Yes
    	smbd async dosmode = No
    	smbd getinfo ask sharemode = Yes
    	smbd max async dosmode = 0
    	smbd search ask sharemode = Yes
    	smb encrypt = default
    	spotlight = No
    	spotlight backend = noindex
    	store dos attributes = Yes
    	strict allocate = No
    	strict locking = Auto
    	strict rename = No
    	strict sync = Yes
    	sync always = No
    	use client driver = No
    	use sendfile = Yes
    	valid users = 
    	veto files = /._*/.DS_Store/
    	veto oplock files = 
    	vfs objects = 
    	volume = 
    	wide links = Yes
    	write list = 

     

     

     

    Link to comment
    18 hours ago, Interstellar said:

    Sorry for the delay... been away!

    Those messages are probably being caused by a plugin.
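
    (A quick way to see what's installed, assuming the standard Unraid flash layout, is to list the plugin directory:)

    ls /boot/config/plugins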

    Link to comment

    I haven't installed anything new recently, so I'll have to investigate.

     

    Everything appears to work, though, so I think I'll just leave it for now!

     

    Thanks!

    Edited by Interstellar
    Link to comment

    Did you find out what was causing this?

    I am running into this on 6.9.0-beta25

    Jul 22 11:14:04 Unraid smbd[32706]: [2020/07/22 11:14:04.494323,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:04 Unraid smbd[32706]:   lp_bool(no): value is not boolean!
    Jul 22 11:14:16 Unraid smbd[7368]: [2020/07/22 11:14:16.215700,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:16 Unraid smbd[7368]:   lp_bool(no): value is not boolean!
    Jul 22 11:14:17 Unraid smbd[7392]: [2020/07/22 11:14:17.397938,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:17 Unraid smbd[7392]:   lp_bool(no): value is not boolean!
    Jul 22 11:14:18 Unraid smbd[7736]: [2020/07/22 11:14:18.053278,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:18 Unraid smbd[7736]:   lp_bool(no): value is not boolean!

     

    Link to comment
    7 hours ago, steini84 said:

    Did you find out what was causing this?

    I am running into this on 6.9.0-beta25

    
    Jul 22 11:14:04 Unraid smbd[32706]: [2020/07/22 11:14:04.494323,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:04 Unraid smbd[32706]:   lp_bool(no): value is not boolean!
    Jul 22 11:14:16 Unraid smbd[7368]: [2020/07/22 11:14:16.215700,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:16 Unraid smbd[7368]:   lp_bool(no): value is not boolean!
    Jul 22 11:14:17 Unraid smbd[7392]: [2020/07/22 11:14:17.397938,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:17 Unraid smbd[7392]:   lp_bool(no): value is not boolean!
    Jul 22 11:14:18 Unraid smbd[7736]: [2020/07/22 11:14:18.053278,  0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 22 11:14:18 Unraid smbd[7736]:   lp_bool(no): value is not boolean!

     

    This is not a "panic", aka a "crash". It's a config setting that is not set correctly. Do you have anything in your config/smb-extra.conf file? Do the messages still appear if you boot in 'safe mode'?

    Link to comment

    I have not tried booting into safe mode, but I will try when nobody is home and using the server.

     

    The only thing I have is:

    cat /boot/config/smb-extra.conf
    veto files = /._*/.DS_Store/

    I have tried restarting Samba after removing this line, but it made no difference.
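
    As a sanity check, testparm can also be pointed at a specific file to see whether it parses cleanly (note the file is then parsed outside its normal include context):

    testparm -s /boot/config/smb-extra.conf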

    Link to comment

    @limetech

    I have this "error" too, as do a few others in this post. On beta25 right now.

     

     

    Jul 29 17:54:21 Unraid smbd[25083]: [2020/07/29 17:54:21.267529, 0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 29 17:54:21 Unraid smbd[25083]: lp_bool(no): value is not boolean!
    Jul 29 17:54:24 Unraid smbd[25083]: [2020/07/29 17:54:24.237445, 0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 29 17:54:24 Unraid smbd[25083]: lp_bool(no): value is not boolean!
    Jul 29 17:54:24 Unraid smbd[25083]: [2020/07/29 17:54:24.253927, 0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 29 17:54:24 Unraid smbd[25083]: lp_bool(no): value is not boolean!
    Jul 29 17:54:24 Unraid smbd[25083]: [2020/07/29 17:54:24.272429, 0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 29 17:54:24 Unraid smbd[25083]: lp_bool(no): value is not boolean!
    Jul 29 17:54:25 Unraid smbd[25083]: [2020/07/29 17:54:25.691188, 0] ../../lib/param/loadparm.c:415(lp_bool)
    Jul 29 17:54:25 Unraid smbd[25083]: lp_bool(no): value is not boolean!
    Jul 29 18:00:41 Unraid kernel: kvm: already loaded the other module

     

    Link to comment

    This should probably be a new thread, as it isn't related to the smbd panic I reported. The smbd panic actually has a negative effect in that it terminates communication (i.e. your network shares become unreachable) and times out.

     

    This seems to be related to an incorrect boolean value (i.e. something isn't correctly defined as true/false or yes/no).

    This shouldn't cause a daemon crash; at most, one of your shares isn't working.
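
    For reference, Samba boolean parameters accept yes/no, true/false, or 0/1; anything else should produce exactly this lp_bool warning. A contrived smb.conf fragment (the share name is made up):

    [example]
        path = /mnt/user/example
        # valid boolean value:
        read only = yes
        # "read only = maybe" would log: lp_bool(maybe): value is not boolean!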

    Link to comment

    FYI, I installed 6.9.0-beta29 and the warnings seem to have stopped:

     

    root@Unraid:~# cat /var/log/syslog | grep -i lp_bool
    root@Unraid:~#

    I also cleaned up some things related to my normal user account, since that ability was removed ("only root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')").

    Link to comment



