Interstellar

Members
  • Content Count

    587
  • Joined

  • Last visited

Community Reputation

1 Neutral

About Interstellar

  • Rank
    Advanced Member

Converted

  • Gender
    Undisclosed

  1. Seems to be resolved in beta24. Getting these messages in the syslog though...

     Jul 9 03:00:15 NAS smbd[25571]: [2020/07/09 03:00:15.531790, 0] ../../lib/param/loadparm.c:415(lp_bool)
     Jul 9 03:00:15 NAS smbd[25571]: lp_bool(no): value is not boolean!
     Jul 9 03:09:56 NAS smbd[28278]: [2020/07/09 03:09:56.762635, 0] ../../lib/param/loadparm.c:415(lp_bool)
     Jul 9 03:09:56 NAS smbd[28278]: lp_bool(no): value is not boolean!
     Jul 9 03:24:34 NAS smbd[32488]: [2020/07/09 03:24:34.417671, 0] ../../lib/param/loadparm.c:415(lp_bool)
     Jul 9 03:24:34 NAS smbd[32488]: lp_bool(no): value is not boolean!
     Jul 9 03:30:19 NAS smbd[1791]: [2020/07/09 03:30:19.449985, 0] ../../lib/param/loadparm.c:415(lp_bool)
     Jul 9 03:30:19 NAS smbd[1791]: lp_bool(no): value is not boolean!
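     (The lp_bool warning itself just means Samba was handed a value for a yes/no parameter that it couldn't parse as a boolean. A minimal way to hunt down which parameter is responsible - assuming Samba's standard testparm tool is present and the generated config is at /etc/samba/smb.conf, which may differ on your build - is:)

         # Re-parse the live Samba config and show any parameter warnings:
         testparm -s /etc/samba/smb.conf 2>&1 | less
         # testparm warns about every value it cannot interpret, which should
         # point at whichever setting is producing lp_bool(no).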
  2. Yep. Confirmed.

     [2020/06/30 14:48:23.705048, 0] ../../source3/lib/util.c:829(smb_panic_s3)
       PANIC (pid 11793): can not close with outstanding aio requests
     [2020/06/30 14:48:23.705241, 0] ../../lib/util/fault.c:222(log_stack_trace)
       BACKTRACE:
        #0 log_stack_trace + 0x39 [ip=0x14f39d639e39] [sp=0x7ffce1f867f0]
        #1 smb_panic_s3 + 0x23 [ip=0x14f39d15bf73] [sp=0x7ffce1f87130]
        #2 smb_panic + 0x2f [ip=0x14f39d63a04f] [sp=0x7ffce1f87150]
        #3 create_file_default + 0x71f [ip=0x14f39d46a1cf] [sp=0x7ffce1f87260]
        #4 close_file + 0xc3 [ip=0x14f39d46ab53] [sp=0x7ffce1f87270]
        #5 file_close_conn + 0x5a [ip=0x14f39d41031a] [sp=0x7ffce1f87490]
        #6 close_cnum + 0x61 [ip=0x14f39d488ed1] [sp=0x7ffce1f874b0]
        #7 smbXsrv_tcon_disconnect + 0x4b [ip=0x14f39d4b485b] [sp=0x7ffce1f875f0]
        #8 smbXsrv_tcon_disconnect + 0x3d2 [ip=0x14f39d4b4be2] [sp=0x7ffce1f87640]
        #9 dbwrap_unmarshall + 0x186 [ip=0x14f39c1956b6] [sp=0x7ffce1f87660]
        #10 dbwrap_unmarshall + 0x3bb [ip=0x14f39c1958eb] [sp=0x7ffce1f87720]
        #11 dbwrap_traverse + 0x7 [ip=0x14f39c193f37] [sp=0x7ffce1f87750]
        #12 smbXsrv_session_global_traverse + 0x790 [ip=0x14f39d4b3860] [sp=0x7ffce1f87760]
        #13 smbXsrv_open_cleanup + 0x4bf [ip=0x14f39d4b7a9f] [sp=0x7ffce1f877b0]
        #14 smbd_exit_server_cleanly + 0x10 [ip=0x14f39d4b8050] [sp=0x7ffce1f87810]
        #15 exit_server_cleanly + 0x14 [ip=0x14f39ca78284] [sp=0x7ffce1f87820]
        #16 no_acl_syscall_error + 0x42 [ip=0x14f39d47f4d2] [sp=0x7ffce1f87830]
        #17 tevent_common_invoke_signal_handler + 0x92 [ip=0x14f39ca297b2] [sp=0x7ffce1f87840]
        #18 tevent_common_check_signal + 0xf3 [ip=0x14f39ca29943] [sp=0x7ffce1f87880]
        #19 tevent_wakeup_recv + 0xe4a [ip=0x14f39ca2b82a] [sp=0x7ffce1f879a0]
        #20 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14f39ca29c07] [sp=0x7ffce1f87a00]
        #21 _tevent_loop_once + 0x94 [ip=0x14f39ca24df4] [sp=0x7ffce1f87a20]
        #22 tevent_common_loop_wait + 0x1b [ip=0x14f39ca2509b] [sp=0x7ffce1f87a50]
        #23 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14f39ca29ba7] [sp=0x7ffce1f87a70]
        #24 smbd_process + 0x7a7 [ip=0x14f39d4862f7] [sp=0x7ffce1f87a90]
        #25 samba_tevent_glib_glue_create + 0x2291 [ip=0x55a0d0735eb1] [sp=0x7ffce1f87b20]
        #26 tevent_common_invoke_fd_handler + 0x7d [ip=0x14f39ca2570d] [sp=0x7ffce1f87bf0]
        #27 tevent_wakeup_recv + 0x1097 [ip=0x14f39ca2ba77] [sp=0x7ffce1f87c20]
        #28 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x14f39ca29c07] [sp=0x7ffce1f87c80]
        #29 _tevent_loop_once + 0x94 [ip=0x14f39ca24df4] [sp=0x7ffce1f87ca0]
        #30 tevent_common_loop_wait + 0x1b [ip=0x14f39ca2509b] [sp=0x7ffce1f87cd0]
        #31 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x14f39ca29ba7] [sp=0x7ffce1f87cf0]
        #32 main + 0x1b2f [ip=0x55a0d072fc1f] [sp=0x7ffce1f87d10]
        #33 __libc_start_main + 0xeb [ip=0x14f39c735e5b] [sp=0x7ffce1f880c0]
        #34 _start + 0x2a [ip=0x55a0d072fffa] [sp=0x7ffce1f88180]
     [2020/06/30 14:48:23.724916, 0] ../../source3/lib/dumpcore.c:315(dump_core)
       dumping core in /var/log/samba/cores/smbd
  3. This issue for me might be related, as I've noticed SMB shares dropping offline on my Macs too.

     Jun 30 15:27:54 NAS kernel:
     Jun 30 15:28:05 NAS emhttpd: Starting services...
     Jun 30 15:28:05 NAS emhttpd: shcmd (3068): /etc/rc.d/rc.samba restart
     Jun 30 15:28:05 NAS nmbd[19158]: [2020/06/30 15:28:05.087241, 0] ../../source3/nmbd/nmbd.c:59(terminate)
     Jun 30 15:28:05 NAS nmbd[19158]: Got SIGTERM: going down...
     Jun 30 15:28:05 NAS winbindd[19168]: [2020/06/30 15:28:05.087260, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
     Jun 30 15:28:05 NAS winbindd[19168]: Got sig[15] terminate (is_parent=1)
     Jun 30 15:28:05 NAS winbindd[19170]: [2020/06/30 15:28:05.087287, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
     Jun 30 15:28:05 NAS winbindd[19170]: Got sig[15] terminate (is_parent=0)
     Jun 30 15:28:05 NAS winbindd[19280]: [2020/06/30 15:28:05.089948, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
     Jun 30 15:28:05 NAS winbindd[19280]: Got sig[15] terminate (is_parent=0)
     Jun 30 15:28:07 NAS root: Starting Samba: /usr/sbin/smbd -D
     Jun 30 15:28:07 NAS root: /usr/sbin/nmbd -D
     Jun 30 15:28:07 NAS smbd[31726]: [2020/06/30 15:28:07.335241, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jun 30 15:28:07 NAS smbd[31726]: daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
     Jun 30 15:28:07 NAS root: /usr/sbin/wsdd
     Jun 30 15:28:07 NAS nmbd[31731]: [2020/06/30 15:28:07.349211, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jun 30 15:28:07 NAS nmbd[31731]: daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
     Jun 30 15:28:07 NAS root: /usr/sbin/winbindd -D
     Jun 30 15:28:07 NAS winbindd[31741]: [2020/06/30 15:28:07.395123, 0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
     Jun 30 15:28:07 NAS winbindd[31741]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
     Jun 30 15:28:07 NAS winbindd[31741]: [2020/06/30 15:28:07.395717, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jun 30 15:28:07 NAS winbindd[31741]: daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
     Jun 30 15:28:07 NAS emhttpd: shcmd (3076): smbcontrol smbd close-share 'Backup'
     Jun 30 15:28:10 NAS emhttpd: Starting services...
     Jun 30 15:28:10 NAS emhttpd: shcmd (3078): /etc/rc.d/rc.samba restart
     Jun 30 15:28:10 NAS nmbd[31731]: [2020/06/30 15:28:10.584782, 0] ../../source3/nmbd/nmbd.c:59(terminate)
     Jun 30 15:28:10 NAS nmbd[31731]: Got SIGTERM: going down...
     Jun 30 15:28:10 NAS winbindd[31741]: [2020/06/30 15:28:10.584818, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
     Jun 30 15:28:10 NAS winbindd[31743]: [2020/06/30 15:28:10.584817, 0] ../../source3/winbindd/winbindd.c:244(winbindd_sig_term_handler)
     Jun 30 15:28:10 NAS winbindd[31743]: Got sig[15] terminate (is_parent=0)
     Jun 30 15:28:10 NAS winbindd[31741]: Got sig[15] terminate (is_parent=1)
     Jun 30 15:28:14 NAS root: Starting Samba: /usr/sbin/smbd -D
     Jun 30 15:28:14 NAS root: /usr/sbin/nmbd -D
     Jun 30 15:28:14 NAS smbd[31832]: [2020/06/30 15:28:14.246073, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jun 30 15:28:14 NAS smbd[31832]: daemon_ready: daemon 'smbd' finished starting up and ready to serve connections
     Jun 30 15:28:14 NAS root: /usr/sbin/wsdd
     Jun 30 15:28:14 NAS nmbd[31837]: [2020/06/30 15:28:14.260205, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jun 30 15:28:14 NAS nmbd[31837]: daemon_ready: daemon 'nmbd' finished starting up and ready to serve connections
     Jun 30 15:28:14 NAS root: /usr/sbin/winbindd -D
     Jun 30 15:28:14 NAS winbindd[31847]: [2020/06/30 15:28:14.305319, 0] ../../source3/winbindd/winbindd_cache.c:3203(initialize_winbindd_cache)
     Jun 30 15:28:14 NAS winbindd[31847]: initialize_winbindd_cache: clearing cache and re-creating with version number 2
     Jun 30 15:28:14 NAS winbindd[31847]: [2020/06/30 15:28:14.305909, 0] ../../lib/util/become_daemon.c:135(daemon_ready)
     Jun 30 15:28:14 NAS winbindd[31847]: daemon_ready: daemon 'winbindd' finished starting up and ready to serve connections
     Jun 30 15:28:14 NAS emhttpd: shcmd (3086): smbcontrol smbd close-share 'disk1'
     Jun 30 15:28:25 NAS smbd[31912]: [2020/06/30 15:28:25.749942, 0] ../../lib/param/loadparm.c:415(lp_bool)
     Jun 30 15:28:25 NAS smbd[31912]: lp_bool(no): value is not boolean!
     Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.315970, 0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
     Jun 30 15:28:37 NAS nmbd[31837]: *****
     Jun 30 15:28:37 NAS nmbd[31837]:
     Jun 30 15:28:37 NAS nmbd[31837]: Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 172.17.0.1
     Jun 30 15:28:37 NAS nmbd[31837]:
     Jun 30 15:28:37 NAS nmbd[31837]: *****
     Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.316194, 0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
     Jun 30 15:28:37 NAS nmbd[31837]: *****
     Jun 30 15:28:37 NAS nmbd[31837]:
     Jun 30 15:28:37 NAS nmbd[31837]: Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 192.168.122.1
     Jun 30 15:28:37 NAS nmbd[31837]:
     Jun 30 15:28:37 NAS nmbd[31837]: *****
     Jun 30 15:28:37 NAS nmbd[31837]: [2020/06/30 15:28:37.316330, 0] ../../source3/nmbd/nmbd_become_lmb.c:397(become_local_master_stage2)
     Jun 30 15:28:37 NAS nmbd[31837]: *****
     Jun 30 15:28:37 NAS nmbd[31837]:
     Jun 30 15:28:37 NAS nmbd[31837]: Samba name server NAS is now a local master browser for workgroup WORKGROUP on subnet 10.10.1.150
     Jun 30 15:28:37 NAS nmbd[31837]:
     Jun 30 15:28:37 NAS nmbd[31837]: *****
  4. Background: I had another thread open (which I've now deleted) as I thought the SMB message I got at the time was relevant.

     Short story: I previously used to back up directly to /mnt/disk1/Backup/XXX and this seemingly worked just fine. However, a few months ago these backups started to fail regularly, especially when backing up my Mac's boot drive, which has some very long and deep file paths. After some testing I found that going via the user share "Backup" appears to work almost perfectly, with only minor errors (i.e. the backup completes rather than failing completely). If I move back to "/mnt/disk1/Backup/XXX" things start to fail again. This applies to both rsync and CCC. On some occasions the SMB mount point is actually ejected (along with all the others).

     These are the errors that I can see in the syslog during the backups:

     Jun 1 18:34:13 TOWER-NAS smbd[24819]: [2020/06/01 18:34:13.839583, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:34:13 TOWER-NAS smbd[24819]: lp_bool(no): value is not boolean!
     Jun 1 18:35:21 TOWER-NAS smbd[29422]: [2020/06/01 18:35:21.358475, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:35:21 TOWER-NAS smbd[29422]: lp_bool(no): value is not boolean!
     Jun 1 18:37:52 TOWER-NAS smbd[9607]: [2020/06/01 18:37:52.683904, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:37:52 TOWER-NAS smbd[9607]: lp_bool(no): value is not boolean!
     Jun 1 18:38:13 TOWER-NAS smbd[11372]: [2020/06/01 18:38:13.076973, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:38:13 TOWER-NAS smbd[11372]: lp_bool(no): value is not boolean!
     Jun 1 18:39:28 TOWER-NAS smbd[17024]: [2020/06/01 18:39:28.851723, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:39:28 TOWER-NAS smbd[17024]: lp_bool(no): value is not boolean!
     Jun 1 18:42:19 TOWER-NAS smbd[31108]: [2020/06/01 18:42:19.593322, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:42:19 TOWER-NAS smbd[31108]: lp_bool(no): value is not boolean!
     Jun 1 18:42:52 TOWER-NAS smbd[429]: [2020/06/01 18:42:52.433449, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:42:52 TOWER-NAS smbd[429]: lp_bool(no): value is not boolean!
     Jun 1 18:43:18 TOWER-NAS smbd[3406]: [2020/06/01 18:43:18.728991, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:43:18 TOWER-NAS smbd[3406]: lp_bool(no): value is not boolean!
     Jun 1 18:47:55 TOWER-NAS smbd[429]: [2020/06/01 18:47:55.535195, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:47:55 TOWER-NAS smbd[429]: lp_bool(no): value is not boolean!
     Jun 1 18:48:58 TOWER-NAS smbd[30389]: [2020/06/01 18:48:58.740944, 0] ../../lib/param/loadparm.c:414(lp_bool)
     Jun 1 18:48:58 TOWER-NAS smbd[30389]: lp_bool(no): value is not boolean!

     The problem with going via the user share is that the performance is much slower. These are the SMB settings at the moment:

     veto files = /._*/.DS_Store/
     case sensitive = yes
     #unassigned_devices_start
     #Unassigned devices share includes
     include = /tmp/unassigned.devices/smb-settings.conf
     #unassigned_devices_end
     [global]
     log level = 0
     logging = syslog

     Enhanced macOS interoperability: Yes
     Enable NetBIOS: Yes
     Enable WSD: Yes

     I originally thought it was something to do with macOS, but given that it works via a user share and not a disk share, that points back at UnRAID. There is nothing else in the log that hints at any other problems. Cache_dirs has been tried on and off, as well as at different search levels. I've also tried 6.0beta1 - same problems. Why on earth would the user share work OK but not the disk share!? Thanks...!
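     (To make the comparison concrete: the two backup targets differ only in which SMB share is mounted on the Mac. A minimal rsync sketch, assuming the disk share is mounted at /Volumes/disk1 and the user share at /Volumes/Backup - the mount points and source folder here are illustrative, not my exact setup:)

         # Via the disk share (the path that fails intermittently for me):
         rsync -avh --progress ~/Documents/ "/Volumes/disk1/Backup/XXX/"
         # Via the user share (the path that completes, with only minor errors):
         rsync -avh --progress ~/Documents/ "/Volumes/Backup/XXX/"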
  5. Bit late to the party, but upgraded with no issues thus far... Keep up the good work.
  6. Also having this issue now... my CCC backups kept failing and I thought it wasn’t UnRAID until I stumbled across this... I'm fairly sure this is affecting me too. All of the above has had zero effect for me. One folder with 22k files in it (a time-lapse from 9 months ago) either shows nothing or an incorrect number of files in Finder. Going to roll back to 6.7.2 to test...
  7. Regarding cache_dirs, is there a way to permanently exclude subdirectories? I need "_CCC SafetyNet" excluded... I can't seem to find a way of doing this?
  8. Just before I open a bug topic, can people using Safari check whether, when you open a sub-window (e.g. the log), it asks you to log in again? Safari: I get the login window on every pop-up (e.g. the 'Log Summary' button), which, when you log in, just takes you back to the home 'Dashboard' page again... Chrome: works as intended, I assume (all works as per pre-6.8). I use Safari 99.999% of the time, so this is a minor annoyance. (This happens on both my iMac and a newly installed MBP, so it isn't a cache issue.)
  9. Updated this morning. System totally locked up after about 10 mins. Nothing in the console or the flash drive syslog. Forced a reboot - everything OK 8 hours later, so hoping it is a one-off!
  10. Update. Managed to isolate Docker/VMs in the end thanks to the following threads: Essentially your second/third/fourth NIC needs to be set as follows in UnRAID's network settings:

      UnRAID Network Settings
        Enable bonding: No
        Enable bridging: No
        IPv4 address assignment: None

      UnRAID Docker Settings (example - use your own IP ranges)
        Subnet: 10.10.2.0/24
        Gateway: 10.10.2.1
        DHCP pool: 10.10.2.100/27 (32 hosts)

      Then in each of your docker containers you need to pick eth1 (or br1 if you picked 'Enable bridging' = Yes above). Then add a fixed IP if you want; otherwise it'll be assigned an IP from the range set above. Now all my Dockers can communicate with each other, the outside world and UnRAID (and vice-versa)! A rough command-line equivalent is sketched below.
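      (For reference, UnRAID builds this as a custom Docker network on the chosen interface. The sketch below is a rough hand-rolled equivalent using plain Docker commands, not UnRAID's actual scripts - the macvlan driver, interface/network name "eth1", the nginx container and the IP values are just assumptions carried over from the example settings above.)

          # Create a macvlan network on eth1 matching the example ranges (assumed values):
          docker network create -d macvlan \
            --subnet=10.10.2.0/24 --gateway=10.10.2.1 \
            --ip-range=10.10.2.100/27 \
            -o parent=eth1 eth1
          # Attach a container, optionally pinning a fixed IP inside the subnet:
          docker run -d --name=demo --network=eth1 --ip=10.10.2.101 nginx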
  11. Still struggling with this if anyone has any ideas? Cheers.
  12. Any ideas why I'm now getting this error in the preview window? (The only change I've made is upgrading from 6.7.1 to 6.7.2.)

      tput: unknown terminal "tmux-256color"

      Thoughts? Cheers.
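      (For anyone else hitting this: the message generally just means tput can't find a terminfo entry called "tmux-256color" on the machine it runs on. A possible workaround - assuming the error comes from a script run in the web terminal, which is a guess on my part - is to check for the entry and fall back to one that does exist:)

          # Check whether the terminfo entry is present at all:
          infocmp tmux-256color >/dev/null 2>&1 && echo present || echo missing
          # If missing, point TERM at a widely available fallback before running the script:
          export TERM=screen-256color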
  13. Updated from 6.6.5 (just shy of 90 days uptime...). All seems to have gone OK. Updated all my plugins and dockers beforehand, and again afterwards for those plugins that required 6.7.0. I'll be staying at this version for another 90 days 😂
  14. As a side note, I hate monochrome icons. When the sidebar icons changed from colour to monochrome in OSX, it started taking me longer to work out which one I want. Style over functionality!!!