The server notifies me in the bottom left that it is "Array Started•Starting services..." and it has been like this for over 12 hours.



5 minutes ago, PeteAsking said:

I don't notice anything wrong and everything is working, but is it normal for Unraid to tell me it's starting services forever? Can I see what has got stuck somehow?

It is not normal :( 

 

You could try navigating to a different page in the webGUI in case it is just a refresh issue.
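If the message never clears, one way to look for the stuck service yourself is to watch the syslog and check what is still running. A hedged sketch, assuming SSH or local terminal access and the stock Unraid log location:

# Follow the system log for service start/stop messages
tail -n 100 -f /var/log/syslog

# Look for a startup script that is still running; a long-lived rc.*
# or smbd/winbindd process hints at where things are stuck
ps -ef | grep -E 'emhttpd|rc\.' | grep -v grep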


Just checked again and the message has gone away on its own, very strange. I did change the setting on my disk array to auto-start (it was off), changed 3 VMs to auto-start, and set some Docker containers to auto-start. I had them all off, with array auto-start disabled, since I had turned it off before updating to the latest Unraid version. Perhaps this has something to do with it.
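For what it's worth, the array auto-start setting is stored on the flash drive; a hedged way to check it from the terminal, assuming the stock config path:

# startArray="yes" means the array comes up on boot, which in turn
# triggers any VM/Docker auto-starts
grep 'startArray' /boot/config/disk.cfg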

 

  • 3 weeks later...

Just encountered this as well after changing a share's minimum free space. Everything seems to be working; the last log messages were about winbind, but I'm not sure they're related.

 

Dec 25 18:40:35 Node  wsdd2[11454]: 'Terminated' signal received.
Dec 25 18:40:35 Node  winbindd[11458]: [2022/12/25 18:40:35.498412,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Dec 25 18:40:35 Node  winbindd[11458]:   Got sig[15] terminate (is_parent=1)
Dec 25 18:40:35 Node  wsdd2[11454]: terminating.
Dec 25 18:40:35 Node  winbindd[11460]: [2022/12/25 18:40:35.498462,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Dec 25 18:40:35 Node  winbindd[11460]:   Got sig[15] terminate (is_parent=0)
Dec 25 18:40:35 Node  winbindd[12972]: [2022/12/25 18:40:35.498567,  0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
Dec 25 18:40:35 Node  winbindd[12972]:   Got sig[15] terminate (is_parent=0)
Dec 25 18:40:37 Node root: Starting Samba:  /usr/sbin/smbd -D
Dec 25 18:40:37 Node  smbd[15089]: [2022/12/25 18:40:37.670343,  0] ../../source3/smbd/server.c:1741(main)
Dec 25 18:40:37 Node  smbd[15089]:   smbd version 4.17.3 started.
Dec 25 18:40:37 Node  smbd[15089]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Dec 25 18:40:37 Node root:                  /usr/sbin/wsdd2 -d 
Dec 25 18:40:37 Node  wsdd2[15103]: starting.
Dec 25 18:40:37 Node root:                  /usr/sbin/winbindd -D
Dec 25 18:40:37 Node  winbindd[15104]: [2022/12/25 18:40:37.722342,  0] ../../source3/winbindd/winbindd.c:1440(main)
Dec 25 18:40:37 Node  winbindd[15104]:   winbindd version 4.17.3 started.
Dec 25 18:40:37 Node  winbindd[15104]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
Dec 25 18:40:37 Node  winbindd[15106]: [2022/12/25 18:40:37.724476,  0] ../../source3/winbindd/winbindd_cache.c:3116(initialize_winbindd_cache)
Dec 25 18:40:37 Node  winbindd[15106]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
Dec 25 18:41:52 Node  sshd[16971]: Connection from 10.250.0.3 port 57350 on 192.168.20.249 port 22 rdomain ""
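For anyone comparing logs: the excerpt above is just Samba (smbd, winbindd, wsdd2) being stopped and restarted, which Unraid does whenever share settings change. A hedged way to watch that restart sequence live, assuming the stock syslog path:

# Follow only the Samba-related service messages as they happen
tail -f /var/log/syslog | grep -E 'smbd|winbindd|wsdd2|Starting Samba'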

 

 

 

node-diagnostics-20221225-1845.zip

 

 

EDIT: and just like that, after about 10 minutes it's gone. No entries in the logs or anything... very strange.

  • 4 weeks later...
1 minute ago, DontWorryScro said:

I too am seeing this right now. I did update to 6.11.5 a week or so ago, and I did change the minimum free space requirement on a share, but I can't be sure if this has been there all along or if I just happened to notice it this morning.

 

[attached screenshot: AthkVpE.png, showing the "Starting Services" message]

Have you tried an array stop and then start? That cleared it for me. A reboot would probably also work.

  • 3 weeks later...
On 1/17/2023 at 11:20 AM, wgstarks said:

Have you tried an array stop and then start? That cleared it for me. A reboot would probably also work.

Just chiming in to report this issue as well! Rebooting temporarily resolves the issue, but it randomly returns, and my users are reporting flaky file access (crashes, slow loading, etc.). This only seems to have become an issue since "upgrading" to v6.11.5.

 

I am also getting a random error in my logs: "kernel: traps: lsof[*****] general protection fault ip:************* error:0 in libc-2.36.so[**********]". This has been quite frustrating over the last few weeks, as random things I do "resolve" the issue, so I have no idea what the actual cause or solution is!
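If you want to see how often that fault fires, a quick hedged check against the stock syslog location:

# Count occurrences and show the most recent few
grep -c 'general protection fault' /var/log/syslog
grep 'general protection fault' /var/log/syslog | tail -n 5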

(BTW, I just noticed that we all seem to be running Supermicro boards (mine is an X10)... I wonder if that is related?)

 

EDIT #2:
Yet another observation: I had the "Starting Services" message at the bottom, then I hit mover (to move some local machine image backups from the cache to the array), and it seems like the "Starting Services" message is gone! AHHHHHHHHHH!!! The randomness continues!
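For reference, mover can also be kicked off from the terminal; a hedged sketch, assuming the stock script location:

# Same action as pressing the Move button in the webGUI
/usr/local/sbin/mover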

 

6 hours ago, miicar said:

I am also getting a random error in my logs: "kernel: traps: lsof[*****] general protection fault ip:************* error:0 in libc-2.36.so[**********]". This has been quite frustrating over the last few weeks, as random things I do "resolve" the issue, so I have no idea what the actual cause or solution is!

 

I've gotten this on and off for years across various versions. It doesn't seem to affect anything, as far as I can tell.


It's interesting how others have said this is sporadic and they can't figure out what is going on. Once I *kind of* figured out my SMART scanning issue, that message went away.

 

I am wondering if smartctl doing a long scan of the drives (which can take upwards of 12 hours) is the cause of the services message?

4 minutes ago, aglyons said:

I am wondering if smartctl doing a long scan of the drives (which can take upwards of 12 hours) is the cause of the services message?

Unraid never does this on its own. If you want the Extended SMART test to be run, you have to initiate it manually from a disk's settings page.
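If you want to check whether a self-test is actually running on a disk, smartctl can report it directly (hedged example; /dev/sdX is a placeholder for your device):

# Show the self-test log and any test currently in progress
smartctl -l selftest /dev/sdX

# Show capabilities, including the estimated extended self-test duration
smartctl -c /dev/sdX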

  • 1 month later...
On 3/19/2023 at 6:45 PM, xorinzor said:

I can reproduce this whenever I change the "Use cache pool" setting of shares.

If I change the "Export" field, for example, it resolves itself.

 

edit: I am on 6.11.5

Exact same scenario here: 6.11.5, the "STARTING SERVICES" message came on persistently after modifying a share's "Use cache pool" setting, and the message went away when I changed the share's "Export" option.
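If it happens again, one hedged way to see what state emhttpd thinks it is in (assuming the usual emhttp runtime files) is to check var.ini:

# mdState shows the array state emhttpd currently reports (e.g. STARTED)
grep -i 'mdState' /var/local/emhttp/var.ini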

