Report Comments posted by dsmith44
-
17 hours ago, ljm42 said:
So wait_daemon() currently waits up to 15 seconds. If you remove your "sleep 10" and bump wait_daemon() up to 30 does that solve the problem?
Yes, that does indeed fix the issue, so I went digging.
I have DOCKER_OPTS set in /boot/config/docker.cfg.
Removing it solves the problem, and as I no longer need remote access I'm fine, but this could potentially bite others.
It's the -H tcp://0.0.0.0:2375 option that causes the delay for me, somewhere between 15 and 30 seconds.
Thanks
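For anyone else hitting this, the offending setting in /boot/config/docker.cfg looks something like the line below. Treat it as illustrative: the tcp endpoint is what I actually had; anything else in your DOCKER_OPTS is your own.

```sh
# /boot/config/docker.cfg (excerpt, illustrative)
# The tcp listener is what delays creation of /var/run/docker.sock
DOCKER_OPTS="-H tcp://0.0.0.0:2375"
```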
-
I set up an inotify watch on /var/run and ran a tail on /var/log/syslog:
Jun 11 23:06:47 unraid emhttpd: shcmd (362): /etc/rc.d/rc.docker start
Jun 11 23:06:47 unraid root: starting dockerd ...
/var/run/ CREATE dockerd.pid
Jun 11 23:06:47 unraid avahi-daemon[14531]: Server startup complete. Host name is unraid.local. Local service cookie is 39991689.
Jun 11 23:06:48 unraid avahi-daemon[14531]: Service "unraid" (/services/ssh.service) successfully established.
Jun 11 23:06:48 unraid avahi-daemon[14531]: Service "unraid" (/services/smb.service) successfully established.
Jun 11 23:06:48 unraid avahi-daemon[14531]: Service "unraid" (/services/sftp-ssh.service) successfully established.
Jun 11 23:07:02 unraid root: Docker not running, can't start containers
/var/run/ CREATE docker.sock
You can see that the CREATE of docker.sock comes after start_containers complains that Docker isn't running.
Looking through rc.docker, this puzzles me, as wait_daemon is called in start_network, so I'm at a bit of a loss.
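For reference, the watch itself was nothing fancy; roughly this (flags from memory, and inotifywait comes from the inotify-tools package):

```sh
# Print an event for every file created under /var/run...
inotifywait -m -e create /var/run &
# ...while following the Docker start sequence in the syslog
tail -f /var/log/syslog
```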
-
There is some kind of race condition here.
If I add a sleep 10 into the rc.docker script before start_containers, then disabling and re-enabling Docker works.
is_docker_running is failing because /var/run/docker.sock doesn't exist yet.
In my testing, the socket appears less than 2s after start_containers complains that Docker isn't running.
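Rather than a fixed sleep, start_containers could poll for the socket itself. A minimal sketch of that kind of wait loop (the function name and the 30 s ceiling are mine, not values from the stock rc.docker):

```sh
# Poll for a unix socket instead of sleeping a fixed time.
# Returns 0 once the socket exists, 1 after the timeout.
wait_for_socket() {
  sock="${1:-/var/run/docker.sock}"
  limit="${2:-30}"
  i=0
  while [ "$i" -lt "$limit" ]; do
    [ -S "$sock" ] && return 0
    sleep 1
    i=$((i + 1))
  done
  return 1
}
```

With something like this, start_containers would only complain once the wait genuinely times out, instead of racing dockerd.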
-
I ran unraid-api in debug mode and got the following; the last line repeats forever.
Connecting to internal for the first time.
⌨️ INTERNAL:CONNECTING
⌨️ INTERNAL:CONNECTED
☁️ RELAY:CONNECTING
<ws> my_servers[4fc0a96c-XXXX-400b-XXXX-9055331fa536] connected.
☁️ RELAY:200:OK:RECONNECTING:NOW
☁️ RELAY:200:OK:RECONNECTING:NOW
☁️ RELAY:200:OK:RECONNECTING:NOW
<ws> upc[58c97493-XXXX-40f6-XXXX-219c2d8ce450] connected.
📒 Checking "https://unraiddev.x.y.com" for CORS access.
✔️ Origin check passed, granting CORS!
Found API key for my_servers "unraid_XXXXXXXXXXXXXXXX"
☁️ RELAY:200:OK:RECONNECTING:NOW
☁️ RELAY:200:OK:RECONNECTING:NOW
☁️ RELAY:200:OK:RECONNECTING:NOW
-
Something isn't happening, as I don't have 'My servers' on the forum header.
-
No, Unraid is running in a VM, but it has its own IP address and there is no proxy, reverse or otherwise, between it and the internet.
-
@jonp any clues as to why we all have a krb.conf (v4 if memory serves) instead of a krb5.conf in /etc?
I think these are created by emhttp?
Dean
-
OK, I've fixed this for me.
Running strace on the net ads join command showed it referencing /var/cache/samba/smb_krb5/krb5.conf.SHORTDOMAIN, which contains the line
include /etc/krb5.conf
That file doesn't exist on my system; instead I have /etc/krb.conf.
A symlink later and I can join the domain properly.
I'm adding this to /boot/config/go for now:
# Fix missing /etc/krb5.conf
if [ ! -f /etc/krb5.conf ] && [ -f /etc/krb.conf ]; then
  ln -s /etc/krb.conf /etc/krb5.conf
fi
-
I have the exact same issue.
Have tested in gui-safe mode with all plugins disabled to no effect.
The domain controller is a Windows Server 2019 Datacenter VM running on a different Linux host.
Tried joining with both a normal user and a domain admin - no difference.
A command-line join gives loads of Kerberos errors I can't work out:
root@unraid:~# net ads join -U domainadmin
Enter domainadmin's password:
smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
kerberos_kinit_password_ext: kerberos init context failed (Included profile file could not be read)
kerberos_kinit_password [email protected] failed: Included profile file could not be read
smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
smb_krb5_init_context_common: Krb5 context initialization failed (Included profile file could not be read)
secrets_domain_info_kerberos_keys: kerberos init context failed (Included profile file could not be read)
secrets_store_JoinCtx: secrets_domain_info_password_create(pw) failed for XXXXX - NT_STATUS_UNSUCCESSFUL
libnet_join_joindomain_store_secrets: secrets_store_JoinCtx() failed NT_STATUS_UNSUCCESSFUL
Failed to join domain: This machine is not currently joined to a domain.
root@unraid:~#
-
This is a critical issue for me; a NAS that can't act as a NAS is somewhat pointless.
It seems to be a Samba 4.9 issue, which Unraid 6.7 introduces.
https://bugzilla.samba.org/show_bug.cgi?id=13697
As suggested in multiple places (https://lists.samba.org/archive/samba/2018-September/218485.html),
running
net -s /dev/null groupmap add sid=S-1-5-32-546 unixgroup=nobody type=builtin
fixes the problem.
I've added this to my /boot/config/go script.
However, I'm not 100% comfortable with this, as it's not clear whether it will cause other issues, and the mapping created doesn't actually match the errors. I'll run rc5 like this for now and see if anything crops up.
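One refinement if you're putting this in /boot/config/go: guard it so repeated boots don't attempt a duplicate mapping. A sketch (the grep guard is my own addition; check what net groupmap list prints on your system first):

```sh
# Add the BUILTIN\Guests mapping only if it isn't already present
if ! net -s /dev/null groupmap list | grep -q 'S-1-5-32-546'; then
  net -s /dev/null groupmap add sid=S-1-5-32-546 unixgroup=nobody type=builtin
fi
```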
-
Yes, sorry, 6.7.0-rc5.
Not sure where the rc7 came from.
-
Reverting to 6.6 makes all my shares accessible again, so there is an issue with 6.7.0-rc7.
-
I also have this issue.
A domain un-join and re-join after a reboot doesn't rectify it either.
This is with rc7.
The final lines in the syslog, with additional SMB logging enabled, are:
Mar 3 10:20:21 unraid smbd[7310]: check_ntlm_password: authentication for user [userid] -> [userid] -> [DOMAIN\userid] succeeded
Mar 3 10:20:21 unraid smbd[7310]: [2019/03/03 10:20:21.011860, 2] ../source3/auth/token_util.c:713(finalize_local_nt_token)
Mar 3 10:20:21 unraid smbd[7310]: WARNING: Failed to create BUILTIN\Administrators group! Can Winbind allocate gids?
Mar 3 10:20:21 unraid smbd[7310]: [2019/03/03 10:20:21.012217, 2] ../source3/auth/token_util.c:732(finalize_local_nt_token)
Mar 3 10:20:21 unraid smbd[7310]: WARNING: Failed to create BUILTIN\Users group! Can Winbind allocate gids?
Mar 3 10:20:21 unraid smbd[7310]: [2019/03/03 10:20:21.012569, 2] ../source3/auth/token_util.c:774(finalize_local_nt_token)
Mar 3 10:20:21 unraid smbd[7310]: Failed to create BUILTIN\Guests group NT_STATUS_ACCESS_DENIED! Can Winbind allocate gids?
So it looks like some winbind problem.
Note the behaviour is identical from Windows and Mac hosts, so this isn't down to a recent Windows update.
It happened when moving to the 6.7.0-rc releases.
I will probably revert to 6.6
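The "Can Winbind allocate gids?" warnings usually mean winbind has no idmap range to allocate BUILTIN groups from. I haven't verified this on Unraid, but on stock Samba the usual fix is an smb.conf fragment along these lines (the range values are illustrative):

```ini
[global]
    ; Give the default idmap backend a writable gid range
    idmap config * : backend = tdb
    idmap config * : range = 3000-7999
```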
-
since 6.12 hard freezes caught a call trace
in Stable Releases
Posted
I am also seeing this issue. I've switched to ipvlan, but in the past I needed macvlan for some reason, so I'll need to audit my many containers as to why.
Diagnostics attached; kernel panic below.
Hopefully an upstream fix will appear in the next release anyway.
unraid-diagnostics-20230629-0950.zip