Raptor2k Posted February 2, 2019

Hello support folks ;), I have the following problem and need your help resolving it. Since I installed MariaDB and Nextcloud on my NAS, it never reaches the sleep state, because every minute there is some activity on my array:

Feb 2 15:55:13 nas s3_sleep: Disk activity on going: sdc
Feb 2 15:55:13 nas s3_sleep: Disk activity detected. Reset timers.
Feb 2 15:56:13 nas s3_sleep: Disk activity on going: sdc
Feb 2 15:56:13 nas s3_sleep: Disk activity detected. Reset timers.
Feb 2 15:57:13 nas s3_sleep: Disk activity on going: sdc
Feb 2 15:57:13 nas s3_sleep: Disk activity detected. Reset timers.
Feb 2 15:58:13 nas s3_sleep: Disk activity on going: sdc
... etc.

My first attempts to solve the problem:

Step 1) Move all Docker containers to the cache drive.
Step 2) Use the Mover schedule to move the appdata, system, and vm folders to the cache drive.
Step 3) Use the "Spin Down" button in the Unraid web GUI > Main.

Now the S3 Sleep plugin starts the countdown for the sleep state:

Feb 2 16:26:44 nas kernel: mdcmd (51): spindown 0
Feb 2 16:26:45 nas kernel: mdcmd (52): spindown 1
Feb 2 16:26:46 nas kernel: mdcmd (53): spindown 2
Feb 2 16:26:46 nas emhttpd: shcmd (112): /usr/sbin/hdparm -y /dev/sda
Feb 2 16:26:46 nas root:
Feb 2 16:26:46 nas root: /dev/sda:
Feb 2 16:26:46 nas root: issuing standby command
Feb 2 16:27:14 nas s3_sleep: All monitored HDDs are spun down
Feb 2 16:27:14 nas s3_sleep: Extra delay period running: 10 minute(s)

For additional analysis...
A screenshot of Glances, and a snippet from the File Activity plugin:

File Activity
** Disk 1
Feb 02 14:08:06 CREATE => /mnt/disk1/1114285956.tmp
Feb 02 14:08:06 OPEN => /mnt/disk1/1114285956.tmp
Feb 02 14:08:06 OPEN => /mnt/disk1/1114285956.tmp
Feb 02 14:08:06 DELETE => /mnt/disk1/1114285956.tmp
** Disk 2
Feb 02 14:08:06 CREATE => /mnt/disk2/1114683235.tmp
Feb 02 14:08:06 OPEN => /mnt/disk2/1114683235.tmp
Feb 02 14:08:06 OPEN => /mnt/disk2/1114683235.tmp
Feb 02 14:08:06 DELETE => /mnt/disk2/1114683235.tmp
** Cache
Feb 02 14:08:06 CREATE => /mnt/cache/1682956116.tmp
Feb 02 14:08:06 OPEN => /mnt/cache/1682956116.tmp
Feb 02 14:08:06 OPEN => /mnt/cache/1682956116.tmp
Feb 02 14:08:06 DELETE => /mnt/cache/1682956116.tmp
Feb 02 14:15:02 OPEN => /mnt/cache/n/nextcloud.log
Feb 02 15:43:20 OPEN => /mnt/cache/system/docker/docker.img
Feb 02 15:45:02 OPEN => /mnt/cache/n/nextcloud.log
Feb 02 15:47:35 ATTRIB,ISDIR => /mnt/cache/n
Feb 02 15:47:35 ATTRIB,ISDIR => /mnt/cache/n/
Feb 02 16:00:01 OPEN => /mnt/cache/n/nextcloud.log
Feb 02 16:30:01 OPEN => /mnt/cache/n/nextcloud.log

My S3 Sleep plugin settings:
- Wait for disk array: Yes, exclude cache
- 10 minutes delay for sleep state
- Wait for network inactivity: Yes, medium traffic (100 kbits)

I would be very happy if somebody could help me; I have been trying to resolve this problem for over two weeks. Thank you very much for your support. Kind regards and greetings from Germany, Raptor2k aka Tom.

nas-diagnostics-20190202-1604.zip
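For anyone curious how "Disk activity on going: sdc" can be determined at all: activity detection of this kind generally works by sampling the kernel's per-device I/O counters and checking whether they moved between samples. The sketch below is a hypothetical illustration of that idea (not the actual s3_sleep implementation), using hand-made /proc/diskstats-style snapshots so it runs anywhere:

```shell
#!/bin/bash
# Sketch of disk-activity detection (not s3_sleep's real code): compare
# two /proc/diskstats snapshots and print devices whose sectors-read
# (field 6) or sectors-written (field 10) counters changed.

# changed_disks BEFORE AFTER -- print names of devices with changed counters.
changed_disks() {
  awk 'NR==FNR { before[$3] = $6 " " $10; next }
       ($3 in before) && before[$3] != $6 " " $10 { print $3 }' "$1" "$2"
}

# Demo with two hand-made snapshots (field 3 = device name); on a real
# system you would snapshot /proc/diskstats itself a minute apart.
b=$(mktemp); a=$(mktemp)
printf '8 0 sdb 10 0 500 0 0 0 900\n8 32 sdc 10 0 500 0 0 0 900\n' > "$b"
printf '8 0 sdb 10 0 500 0 0 0 900\n8 32 sdc 12 0 640 0 0 0 955\n' > "$a"
changed_disks "$b" "$a"   # prints "sdc" -- only its counters moved
rm -f "$b" "$a"
```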
Squid Posted February 2, 2019

> Feb 02 14:08:06 CREATE => /mnt/disk2/1114683235.tmp
> Feb 02 14:08:06 OPEN => /mnt/disk2/1114683235.tmp
> Feb 02 14:08:06 OPEN => /mnt/disk2/1114683235.tmp
> Feb 02 14:08:06 DELETE => /mnt/disk2/1114683235.tmp

This is from Fix Common Problems running a scan. You can set it to avoid spin-ups, which stops the tests on drives that are spun down.
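The "avoid spin ups" behaviour can be sketched like this (a hypothetical illustration, not FCP's actual code): query the drive's power state with `hdparm -C` and skip the write test when the drive reports standby. The stubbed `hdparm` at the end only exists so the sketch runs on any machine.

```shell
#!/bin/bash
# Hypothetical guard (not Fix Common Problems' real code): skip the
# test on a drive that already reports "standby", so the scan itself
# never spins it up.
check_spun_down() {
  hdparm -C "$1" 2>/dev/null | grep -q standby
}

scan_disk() {
  if check_spun_down "$1"; then
    echo "$1: spun down, skipping test"
  else
    echo "$1: active, running test"
  fi
}

# Stubbed hdparm so the sketch runs anywhere; delete this stub to
# query a real drive (requires root).
hdparm() { printf 'drive state is:  standby\n'; }
scan_disk /dev/sdc   # prints "/dev/sdc: spun down, skipping test"
```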
Raptor2k Posted February 2, 2019 (Author)

2 hours ago, Squid said:
> Feb 02 14:08:06 CREATE => /mnt/disk2/1114683235.tmp
> Feb 02 14:08:06 OPEN => /mnt/disk2/1114683235.tmp
> Feb 02 14:08:06 OPEN => /mnt/disk2/1114683235.tmp
> Feb 02 14:08:06 DELETE => /mnt/disk2/1114683235.tmp
> This is from Fix Common Problems running a scan. You can have it set to avoid spin ups to stop the tests on drives which are spun down.

Ah, thank you. So I deactivated the recommended settings. And this setting caused the every-minute activity on my array? I will test it and report back ASAP.
Squid Posted February 2, 2019

1 hour ago, Raptor2k said:
> And this setting was the every-minute activity on my array?

Nope. It only ran as often as you had FCP scheduled to run (in this case, weekly).

I would guess that when you originally set up your system you did not have a cache drive, started playing around with Docker apps, and then added a cache drive afterwards. My reasonable guess is that your docker.img file is sitting on disk 1 (currently sdc):

DOCKER_IMAGE_FILE="/mnt/cache/system/docker/docker.img"
# Share exists on cache,disk1

Stop the Docker service (Settings > Docker) and the VM service (Settings > VMs). Then go to Main > Array Operations > Move Now. Refresh the page every once in a while, and when mover stops running, re-enable Docker and VMs.
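One quick way to confirm that guess is to search every disk for copies of docker.img. On the actual NAS that would be something like `find /mnt/disk* /mnt/cache -name docker.img`; the sketch below wraps the same search in a function and demonstrates it against a throwaway directory standing in for /mnt:

```shell
#!/bin/bash
# List every copy of docker.img under a mount root -- two hits (cache
# plus an array disk) would explain the constant sdc activity.
find_docker_imgs() {
  find "$1" -name docker.img -type f 2>/dev/null | sort
}

# Demo against a throwaway directory standing in for /mnt (on a real
# Unraid box, pass /mnt instead):
mnt=$(mktemp -d)
mkdir -p "$mnt/disk1/system/docker" "$mnt/cache/system/docker"
touch "$mnt/disk1/system/docker/docker.img" "$mnt/cache/system/docker/docker.img"
find_docker_imgs "$mnt"   # prints both copies, duplicate confirmed
rm -rf "$mnt"
```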
Raptor2k Posted February 2, 2019 (Author)

30 minutes ago, Squid said:
> My reasonable guess is that your docker.img file is sitting on disk 1 (currently sdc) ... Stop the docker service (settings / docker) and the VM service (settings VMs). Then go to Main, Array Operations, Move Now.

Yep, you are right. I played around first without a cache drive. Strangely, I have tried the mover workaround, but system/docker.img is not removed by Unraid. Here are my settings:

The excluded-disk setting was a desperate attempt :/ ...
Squid Posted February 2, 2019

You've wound up with a duplicated docker.img somehow (it's hard for me to replicate that circumstance). To solve it:

1. Settings > Docker: disable the service.
2. Check off and delete the image.
3. From the command prompt (or terminal button): rm /mnt/user/system/docker.img
4. Settings > Docker: re-enable the service.
5. Apps tab, Previous Apps section: check off all of the apps you had installed previously, then hit Install Multi.

A minute or two later, and you're back in business.
Raptor2k Posted February 2, 2019 (Author)

OK, I have followed your 5-step guide. Now I will wait 15 minutes...

Feb 2 22:06:24 nas s3_sleep: Disk activity detected. Reset timers.
Feb 2 22:07:24 nas s3_sleep: Disk activity on going: sdc
Feb 2 22:07:24 nas s3_sleep: Disk activity detected. Reset timers.

OMG, thank you for your support! It works!:

Feb 2 22:21:25 nas kernel: mdcmd (84): spindown 0
Feb 2 22:21:25 nas s3_sleep: Disk activity on going: sdc
Feb 2 22:21:25 nas s3_sleep: Disk activity detected. Reset timers.
Feb 2 22:21:25 nas kernel: mdcmd (85): spindown 1
Feb 2 22:22:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:22:25 nas s3_sleep: Extra delay period running: 10 minute(s)
Feb 2 22:23:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:23:25 nas s3_sleep: Extra delay period running: 9 minute(s)
Feb 2 22:24:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:24:25 nas s3_sleep: Extra delay period running: 8 minute(s)
Feb 2 22:25:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:25:25 nas s3_sleep: Extra delay period running: 7 minute(s)
Feb 2 22:26:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:26:25 nas s3_sleep: Extra delay period running: 6 minute(s)
Feb 2 22:27:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:27:25 nas s3_sleep: Extra delay period running: 5 minute(s)
Feb 2 22:28:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:28:25 nas s3_sleep: Extra delay period running: 4 minute(s)
Feb 2 22:29:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:29:25 nas s3_sleep: Extra delay period running: 3 minute(s)
Feb 2 22:30:25 nas s3_sleep: All monitored HDDs are spun down
Feb 2 22:30:25 nas s3_sleep: Extra delay period running: 2 minute(s)
Squid Posted February 2, 2019

1 hour ago, Raptor2k said:
> OMG, thank you for your support

No problem... And thank YOU.
Raptor2k Posted February 4, 2019 (Author)

Unfortunately, I was too optimistic. Strangely, when my array wakes up from standby, the S3 Sleep plugin detects disk activity on sdc. But when I spin up my drives manually and Unraid spins them down again due to inactivity, the S3 Sleep plugin does start the countdown for the sleep state. How can we detect the disk activity that interrupts the S3 Sleep plugin?

Feb 4 07:55:43 nas s3_sleep: s3_sleep process ID 15868 started, To terminate it, type: s3_sleep -q
Feb 4 07:55:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 07:55:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 07:56:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 07:56:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 07:57:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 07:57:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 07:58:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 07:58:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 07:59:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 07:59:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 08:00:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 08:00:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 08:01:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 08:01:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 08:02:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 08:02:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 08:03:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 08:03:43 nas s3_sleep: Disk activity detected. Reset timers.
...
Feb 4 08:14:43 nas s3_sleep: Disk activity detected. Reset timers.
Feb 4 08:15:23 nas emhttpd: req (10): cmd=/plugins/open.files/scripts/killprocess&arg1=11795&csrf_token=****************
Feb 4 08:15:23 nas emhttpd: cmd: /usr/local/emhttp/plugins/open.files/scripts/killprocess 11795
Feb 4 08:15:43 nas s3_sleep: Disk activity on going: sdc
Feb 4 08:15:43 nas s3_sleep: Disk activity detected. Reset timers.

nas-diagnostics-20190204-0810.zip
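For the "which files keep the disk busy" question, tools like the File Activity plugin or fatrace (run as root) show per-process detail. A low-tech, hypothetical alternative: drop a timestamp marker file and periodically list everything modified since it. A sketch, demonstrated against a throwaway directory:

```shell
#!/bin/bash
# Low-tech sketch: list files modified since a reference marker was
# created -- a rough way to see what keeps resetting s3_sleep's timers.
# (fatrace or the File Activity plugin give per-process detail instead.)

# changed_since DIR MARKER -- print files in DIR newer than MARKER.
changed_since() {
  find "$1" -newer "$2" -type f 2>/dev/null
}

# Demo against a throwaway directory (on the NAS, watch /mnt/disk1):
dir=$(mktemp -d)
marker=$(mktemp)
sleep 1                           # ensure a later mtime on coarse filesystems
touch "$dir/suspect.log"
changed_since "$dir" "$marker"    # prints the path of suspect.log
rm -rf "$dir" "$marker"
```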
zeus83 Posted December 18, 2019

Try the solution from here: https://forums.unraid.net/topic/34889-dynamix-v6-plugins/?do=findComment&comment=522886

Unraid doesn't support S3 sleep officially, so it seems it won't put disks into standby again after wake-up, because it thinks they are already in standby.
Raptor2k Posted December 21, 2019 (Author)

Thanks, zeus83, for your answer. I could resolve this problem by setting the appdata share to "Prefer" (prefer the cache drive) for saving data.
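With the share set to "Prefer", mover migrates appdata onto the cache over time. A quick hypothetical check that nothing lingers on the array disks (the /mnt layout is assumed; the demo uses a throwaway directory so it runs anywhere) could look like:

```shell
#!/bin/bash
# Sketch: report which disks under a mount root still hold an appdata
# directory -- after "Prefer" plus a mover run, only the cache should
# show up, meaning the array can stay spun down.
appdata_locations() {
  for d in "$1"/disk* "$1"/cache; do
    [ -d "$d/appdata" ] && echo "$d/appdata"
  done
  return 0   # the loop's last test may be false; don't leak that status
}

# Demo against a throwaway directory standing in for /mnt:
mnt=$(mktemp -d)
mkdir -p "$mnt/disk1" "$mnt/cache/appdata"
appdata_locations "$mnt"   # prints only the cache copy
rm -rf "$mnt"
```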