Why is shutdown always this painful?



Hi, I am going to rant here, and anyone who wants to slate me for it, go ahead. But after so many years of programming, why is the shutdown process so painful and messy? Why do I have to use commands, google, and do research just to bloody shut down the server?

 

Can you not just have the server kill all processes when I issue the shutdown command? Come on guys, this is the most basic thing, and 10 years after inception we are still having to figure out why we cannot shut down the server.


OK, during the above I had to use the power button and go through a parity sync again after the hard shutdown. I have been testing Unraid; I like how it works overall and just purchased it. While trying to install the key, it asked me to stop the array. I clicked the stop array button and now it is once again saying "Array Stopping•Retry unmounting user share(s)...". This is what I meant last time. I am not sure what commands I need to run, or why I even have to go through this process just to stop the array. Where do I start troubleshooting? And just to be clear, I am angry with the Unraid software, but I appreciate your effort to help, so don't take my anger personally.


In addition to the recommendation from @trurl above, some general remarks:

 

...after forcibly pulling the power while the array was online, going through a parity sync/rebuild is normal and expected, don't you agree?

 

...the message "Array Stopping•Retry unmounting user share(s)..." indicates that shares are still being accessed by other processes.

Some of these might not be local processes but could be "external" ones (remote clients, or local VMs accessing the shares, still writing to the array and behaving badly by not shutting down or disconnecting). That is why it can take some time to reach a state where a clean shutdown is possible.
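If you want to check this from the Unraid console, and assuming the shares are exported over SMB, the standard Samba tools will show whether a remote client is still holding something open (a rough sketch, not the only way to do it):

# show which clients are currently connected to which shares
smbstatus -S

# show files that clients currently have open or locked
smbstatus -L

If either list is non-empty while the array is trying to stop, that client is a likely reason the unmount keeps being retried.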

 

Tell us more about your setup.

Are you running Docker containers or VMs, and if yes, which ones? Also, do you have a cache disk/pool, and how are the shares for VMs, docker.img, and appdata configured with respect to how the cache is used?

If these shares are located not only on the cache but also on the array, that is, in my experience, the main reason stopping the array takes so long.
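A quick way to check this from the console (assuming the default Unraid mount points /mnt/cache and /mnt/diskN) is to look for the share directories on the individual array disks as well as on the cache:

# where do appdata, system and domains physically live, and how big are they?
du -sh /mnt/cache/appdata /mnt/disk*/appdata 2>/dev/null
du -sh /mnt/cache/system  /mnt/disk*/system  2>/dev/null
du -sh /mnt/cache/domains /mnt/disk*/domains 2>/dev/null

If anything shows up under /mnt/disk*, part of that share still lives on the array, and a running Docker service will keep those array disks busy.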

Nov  1 13:09:25 Tower emhttpd: Unmounting disks...
Nov  1 13:09:25 Tower emhttpd: shcmd (222): umount /mnt/cache
Nov  1 13:09:25 Tower root: umount: /mnt/cache: target is busy.
Nov  1 13:09:25 Tower emhttpd: shcmd (222): exit status: 32
Nov  1 13:09:25 Tower emhttpd: Retry unmounting disk share(s)...
Nov  1 13:09:26 Tower root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
### [PREVIOUS LINE REPEATED 3 TIMES] ###

Do you have any other browsers or tabs accessing your server? Any computer with open files on the server?

 

Do you have a command line open anywhere with any disk (cache?) as the current working directory?
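If you want to see exactly what is keeping the cache busy, and assuming lsof/fuser are available on your console, you can run something like this while the array is stuck in "Retry unmounting...":

# list processes that have files open on the cache filesystem
lsof /mnt/cache

# alternative: show the PIDs (and access type) using the mount point
fuser -vm /mnt/cache

Anything listed there (a shell whose working directory is on the cache, a container, an open SMB session) will make umount report "target is busy", which is exactly what the log above shows.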

10 minutes ago, Ford Prefect said:


...after forcibly pulling the power while the array was online, going through a parity sync/rebuild is normal and expected, don't you agree?

 

I agree it should do this. What I don't agree with is Unraid making it so difficult to shut down the server that you are forced to use the power button and, in turn, have to re-run the parity sync.

 

========================================

 

...the message "Array Stopping•Retry unmounting user share(s)..." indicates that shares are still being accessed by other processes.

Some of these might not be local processes but could be "external" ones (remote clients, or local VMs accessing the shares, still writing to the array and behaving badly by not shutting down or disconnecting). That is why it can take some time to reach a state where a clean shutdown is possible.

 

So if I understood this correctly, if we wait a little while it may stop the array by itself? Interesting. I will give it 5 minutes or so next time to see if it helps.

=======================================

 

Tell us more about your setup.

Are you running Docker containers or VMs, and if yes, which ones? Also, do you have a cache disk/pool, and how are the shares for VMs, docker.img, and appdata configured with respect to how the cache is used?

 

No VMs installed, but I am running Plex, Krusader, unBALANCE, Nextcloud, MariaDB and a few other applications suggested by Spaceinvader One.

Specs:

12 x 8TB HDDs, with 2 parity and 10 in the array

2 x 1TB NVMe drives as cache

Shares:

appdata: prefer cache

domains: prefer cache

isos: yes (cache)

system: prefer cache


6 minutes ago, trurl said:


Do you have any other browsers or tabs accessing your server? Any computer with open files on the server?

Yes, I sometimes have Unraid and Krusader etc. open in different tabs. Would that be the cause?

 

Do you have a command line open anywhere with any disk (cache?) as the current working directory?

I started the telnet session after the array refused to stop, so I don't believe this part is related to the issue.

 

 

7 minutes ago, Propro009 said:

So if I understood this correctly, if we wait a little while it may stop the array by itself? Interesting. I will give it 5 minutes or so next time to see if it helps.

...yes, unRaid should do that, as already indicated by others.

 

Did you run the mover after you set these shares to "prefer"?

Only after that, and only if no application had the files open while the mover ran, are the files transferred to the cache.

Stop all Docker containers, and even disable the Docker service temporarily, before running the mover for this task.

Check whether there are any remains left on the array (via the CLI, navigate to a disk share, e.g. /mnt/disk1/appdata ... the share directory should only exist on /mnt/cache, or at least should hold no files or sub-directories on *any* array disk).
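For example, the whole sequence from the console could look roughly like this (a sketch, assuming the default mount points and the stock mover script in /usr/local/sbin):

# 1. stop all containers and disable the Docker service (Settings -> Docker in the GUI)

# 2. run the mover manually and wait for it to finish
mover

# 3. check that nothing of these shares is left on any array disk
ls -la /mnt/disk*/appdata /mnt/disk*/system /mnt/disk*/domains 2>/dev/null

# 4. if the listing is empty, switch the shares to cache "only" in the GUI
#    and re-enable the Docker service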

Then, after that, you could change the cache setting to "only" and apply it, before restarting the Docker service and the Docker apps.

 

See if this helps.

2 hours ago, Ford Prefect said:


Thank you, I will give it a try once the server finishes copying some media files.

