Is default setting for "Tunable (poll_attributes)" ok for managing array wear and tear?



I have noticed that the disks in my array are almost always all spun up, and looking at the logs I can see that this seems to be because of the default interval set by UnRaid for polling each drive for SMART data. This is every 1800 seconds (or 30 minutes). Separately, I have also set the default spin-down delay to 30 minutes to avoid too much stopping and starting. With these two settings combined, there really isn't much time when individual drives are actually spun down.

 

What is typical best practice for these settings (and any others that are similarly relevant) in terms of power efficiency and long term disk health?
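One thing worth knowing when investigating this: you can check a drive's power state without waking it, because `hdparm -C` sends CHECK POWER MODE, which does not spin the disk up. Below is a minimal sketch; the `check_state` helper is my own (hypothetical, not an Unraid tool), and `/dev/sdX` is a placeholder device name.

```shell
#!/bin/sh
# Interpret the one-line state reported by `hdparm -C` without
# touching the hardware. (Hypothetical helper for illustration.)
check_state() {
  case "$1" in
    *standby*)        echo "spun down" ;;
    *active*|*idle*)  echo "spun up" ;;
    *)                echo "unknown" ;;
  esac
}

# On a live system (placeholder device, run as root):
#   hdparm -C /dev/sdX
# typical output: "drive state is:  standby"
check_state "drive state is:  standby"
```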

unRaid Disk Activity.jpg

5 hours ago, nametaken_thisonetoo said:

default interval set by UnRaid for polling each drive for SMART data. This is every 1800 seconds (or 30 minutes).

This won't cause disk spin-up; you need to identify what is writing to / reading from the array. I even set this to 3 minutes, but my disks stay spun down 99% of the time.
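One way to identify what is reading from or writing to the array is to list which processes hold files open under it. `lsof +D` recurses a directory tree; the small awk helper below is my own sketch (not an Unraid tool) and just dedupes the output to command/path pairs:

```shell
#!/bin/sh
# Collapse raw lsof output to unique "COMMAND path" pairs so repeated
# opens by the same process show up once. NR > 1 skips the header row.
summarize_lsof() {
  awk 'NR > 1 { print $1, $NF }' | sort -u
}

# On a live Unraid box (run as root; /mnt/user is the stock array mount):
#   lsof +D /mnt/user | summarize_lsof
```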


Thanks @Vr2Io, although I'm really pretty lost as to what would be causing so much activity. At the time that screenshot was taken there was one user watching Plex remotely, Deluge downloading to the cache drive, and that's it. My setup is pretty straightforward - just Plex and a few *arr related Dockers. No VMs and not much else at all.

 

I've attached my diagnostics if you or anyone else can offer some insight it would be much appreciated.

themagiceye-diagnostics-20210623-0923.zip

On 6/24/2021 at 2:42 AM, Vr2Io said:

FYR, I don't use a cache pool, and all my VMs / Dockers are hosted on an SSD under UD (Unassigned Devices), so I may have fewer spin-up issues.

Please also make sure no other network client is waking up the network shares.

 

Apologies @Vr2Io, but I'm a bit of a noob and not entirely sure what you're referring to in this post.

 

What I have done is double check the logs for activity related to drive spin up and down, and 99% of the time it is for the purposes of reading the drive SMART info. Perhaps I'm not looking at the correct log or missing something entirely? I also haven't noticed anything in the logs about anything else on the network that might be waking up the drives, but admittedly I'm very vague on what that would look like in logs or elsewhere.
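For anyone combing the syslog the same way: a quick filter that shows only the spin-up/spin-down events and skips the "read SMART" noise can help. The grep pattern below is an assumption about the exact log wording (it varies a little between Unraid versions), and the helper itself is just my sketch:

```shell
#!/bin/sh
# Show only spin-up / spin-down events from a syslog file, skipping
# the "read SMART" lines that merely follow a spin-up.
spin_events() {
  grep -iE 'spinning (up|down)|spindown|spinup' "$1"
}

# Typical use on Unraid:
#   spin_events /var/log/syslog
```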

 

I have attached recent diagnostics - hoping you or someone else might be able to take a look and see what I might be missing.

themagiceye-diagnostics-20210627-1126.zip

2 hours ago, nametaken_thisonetoo said:

Perhaps I'm not looking at the correct log or missing something entirely?

Those "read SMART" messages are not the cause of disk spin-up. When Unraid detects a disk is asleep it won't read its SMART data, but once it detects the disk has spun up, Unraid will read/poll it.

 

You need to use trial and error to find out what causes the array/cache disks to spin up. One approach: first stop most unnecessary services/applications, then, once there are no more unexpected spin-ups, start the services/applications one by one until spin-up happens again.

  

Some problems found, as below:

 

(1) Docker storage uses the array, in the path "/mnt/user". As mentioned, my Docker/VMs are hosted on a UD SSD.

 

Your docker.cfg

DOCKER_ENABLED="yes"
DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"
DOCKER_IMAGE_SIZE="20"
DOCKER_APP_CONFIG_PATH="/mnt/user/appdata/"
DOCKER_APP_UNRAID_PATH=""
DOCKER_CUSTOM_NETWORKS=" "
DOCKER_LOG_ROTATION="yes"
DOCKER_LOG_SIZE="50m"
DOCKER_LOG_FILES="1"
DOCKER_AUTHORING_MODE="no"
DOCKER_USER_NETWORKS="remove"
DOCKER_TIMEOUT="10"

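On stock Unraid, a `/mnt/user/...` path can land on array disks unless the backing share lives only on the cache. One common approach (assuming a cache pool exists, which was not the case in Vr2Io's UD-SSD setup) is to keep the `system` and `appdata` shares on the cache and point the Docker paths at it directly, e.g.:

```shell
# Example only - assumes a cache pool mounted at /mnt/cache and that the
# system/appdata shares are set to stay on cache (so mover won't move them):
DOCKER_IMAGE_FILE="/mnt/cache/system/docker/docker.img"
DOCKER_APP_CONFIG_PATH="/mnt/cache/appdata/"
```

With the image and appdata on the cache/SSD, routine Docker I/O no longer touches (and wakes) the array disks.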
 

(2) The periodic task below may wake up disks (not sure):

Jun 24 05:42:24 TheMagicEye flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup.php update

 

(3) Your cache was full:

Jun 24 03:08:51 TheMagicEye shfs: share cache full
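A full cache means new writes can overflow to (and wake) the array, so it is worth checking how full the pool is and whether the mover is keeping up. A tiny `df` parser, generic POSIX and nothing Unraid-specific; the mount point `/mnt/cache` is the stock Unraid location:

```shell
#!/bin/sh
# Print the percent-used figure for whatever filesystem backs a path.
cache_pct_used() {
  df -P "$1" | awk 'NR == 2 { gsub(/%/, "", $5); print $5 }'
}

# On Unraid (assumption: cache pool mounted at /mnt/cache):
#   cache_pct_used /mnt/cache
# The mover can also be run manually from Main -> Move Now.
```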

 

(4) ATA7 & ATA11 show periodic errors:

Jun 14 08:24:45 TheMagicEye kernel: ata7.00: status: { DRDY }
Jun 14 08:24:45 TheMagicEye kernel: ata7.00: failed command: READ FPDMA QUEUED
Jun 14 08:24:45 TheMagicEye kernel: ata7.00: cmd 60/40:30:68:ef:6b/05:00:c6:00:00/40 tag 6 ncq dma 688128 in
Jun 14 08:24:45 TheMagicEye kernel:         res 40/00:b0:a8:d4:6b/00:00:c6:00:00/40 Emask 0x2 (HSM violation)
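`HSM violation` / `failed command: READ FPDMA QUEUED` resets like these often point at cabling, power, or the SATA controller rather than the disk itself. To see at a glance which ports are affected across the whole log, here is a small grep helper (my own sketch, not an Unraid tool):

```shell
#!/bin/sh
# List the unique ATA ports that logged a failed command in a syslog file.
failed_ports() {
  grep -oE 'ata[0-9]+\.[0-9]+: failed command' "$1" | cut -d: -f1 | sort -u
}

# On Unraid:
#   failed_ports /var/log/syslog
# Then map ataN back to a drive (e.g. via the device paths shown by
# `ls -l /sys/block`) and check that drive's SMART report.
```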

 

