BillyJ Posted February 21, 2015 I seem to be waiting anywhere up to 5 minutes on "Mounting Disks" and at least 2 minutes for "Syncing Filesystem" when shutting down. My mates' unRAID systems seem to stop and start in 10-15 seconds.
CHBMB Posted February 21, 2015 My experience is like your friends': about 10 seconds on all three different systems I've had. You might want to examine your system to find out if there's a problem; I'm sure an unRAID guru will offer some advice when they see your post.
itimpi Posted February 21, 2015 The amount of time to mount disks can vary considerably depending on whether you did a successful tidy shutdown, and also on how much data you tend to write. A forced close is VERY likely to lead to extended mount times, as any transaction logs need to be replayed. The sync time on shutdown can also depend on whether you have been writing data to the array just before initiating the shutdown. If you have, then you have to wait while any buffers in RAM are flushed to disk. The more RAM you have, the longer this can take. If the server has been idle for some time, then the data is almost certainly already flushed, but you have to wait while disks are spun up so they can be synced.
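A rough way to see how much unflushed data is sitting in RAM before a shutdown is to read the kernel's dirty-page counters. This is a generic Linux check, not anything unRAID-specific:

```shell
# Show how much written data is still cached in RAM, waiting to hit disk.
# Large "Dirty"/"Writeback" values just before shutdown mean a longer sync.
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Optionally flush it yourself first and see how long that takes:
time sync
```

If Dirty is only a few kB, the final sync during shutdown should be near-instant; if it is hundreds of MB, expect a wait while it drains to the (possibly spun-down) disks.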
BillyJ Posted February 21, 2015 Author
itimpi wrote: "The amount of time to mount disks can vary considerably depending on whether you did a successful tidy shutdown..."
I had just moved 60GB of 2MB video files off the cache to the array, then proceeded to reboot. I always invoke the mover before a shutdown or reboot to clear the cache. With beta14 I've just formatted to XFS from BTRFS, and it seems like stops and starts are close to about 10 seconds; I'll wait and see if this goes back up to 5 minutes when I kick off my NVR. I could have stuffed something up with the BTRFS pooling I attempted a while back. Thank you.
Helmonder Posted February 21, 2015
BillyJ wrote: "I always invoke mover before a shut down or reboot to clear the cache..."
Invoking the mover is not needed. How do you shut down or reboot? Through the web interface or from the command line... or even (shudder to think) by pressing the button on your system?
Swixxy Posted February 22, 2015 I used to have this problem. Since moving from plugins to Docker only for all my apps, it has been fine. I believe one of the plugins would not shut down and would hang a drive, stopping the unmount, which would in turn cause delays on the startup. Are you using plugins?
BillyJ Posted February 22, 2015 Author
Swixxy wrote: "Are you using plugins?"
I was only using the APC UPS plugin, but I have removed that since beta14.
Helmonder wrote: "Invoking the mover is not needed. How do you shutdown or reboot?"
I'd usually just click the Move Now button and wait until all the data is over on the array. Mostly, through the web interface, I choose Stop array. After 5 or so minutes I have been known to remote in via terminal and use the reboot command... failing that, after 10 minutes (happens maybe a few times a month) I would log in to IPMI and click power cycle to force it.
Swixxy Posted February 22, 2015 Does it get stuck on a particular disk unmounting? The easiest way to check would be to terminal in and run:
cd /mnt/user/
ls
and see which disk is still up, since it unmounts from disk1 upwards in logical order. Or check the syslog and see which disk it's hanging on. It could be a bad disk, or something running on that disk refusing to stop.
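That check can also be done by reading the mount table directly instead of relying on ls. A sketch, assuming the usual unRAID layout where array disks appear as /mnt/disk1, /mnt/disk2, and so on (adjust the paths if yours differ):

```shell
# List everything still mounted under /mnt/ - since unRAID unmounts from
# disk1 upwards, the lowest-numbered entry still present is the stuck one.
awk '$2 ~ /^\/mnt\// {print $2}' /proc/mounts

# Cross-check the syslog for the disk the unmount is hanging on
# (path as on unRAID; skipped silently if the file is not there):
[ -f /var/log/syslog ] && grep -i unmount /var/log/syslog | tail -n 5 || true
```

Reading /proc/mounts avoids the case where ls itself hangs on a dying disk.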
BillyJ Posted February 22, 2015 Author
Swixxy wrote: "Does it get stuck on a particular disk unmounting?..."
Thanks, will give it a shot next time.
Helmonder Posted February 27, 2015
BillyJ wrote: "Thanks, will give it a shot next time."
These will also give you info:
ps -elf | grep disk
ps -elf | grep user
For me, in most cases it was cache_dirs taking longer than I had patience for to stop working...
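Building on those ps commands, a sketch for spotting whatever is keeping a disk busy during the unmount. The /mnt/disk1 path here is a stand-in for whichever disk the syslog says is hanging:

```shell
# Any process whose command line mentions an array disk or user share:
ps -elf | grep -E '/mnt/(disk|user)' | grep -v grep

# If lsof is available, it lists the actual files held open under the
# mount point, which is more precise than grepping process names:
command -v lsof >/dev/null && lsof /mnt/disk1 2>/dev/null || true
```

Killing (or cleanly stopping) whatever shows up here before stopping the array is usually what lets the unmount finish quickly.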
BillyJ Posted February 27, 2015 Author
Helmonder wrote: "ps -elf | grep disk and ps -elf | grep user will also give you info..."
Thanks for this info. I have some bigger problems than the speed of the unmount, although I'm sure the last couple of unmounts that led me to post this may in fact have been the CPU stall issue.
This topic is now archived and is closed to further replies.