Michael H

  1. as mentioned a few replies before, the server does not have sata ports. i do however have 2 sas ports directly on the motherboard, each supporting 4 drives via my backplane. so i now have the 4 array disks on the motherboard, and the cache and the ud disk still on the raid controller, as i only had one fitting sas cable, which i borrowed from another server. the 4 disks stayed down for half an hour (new record) but then spun up almost in sync, one after another. i spun them down, they immediately spun up again. now they are down and i am monitoring. the server still vomits the udev messages into the logs (probably because of the cache and the ud disk on the raid controller). so my next step is to buy 2 sas cables to use both ports on the motherboard. i lose 4 slots for disks for now, but if this ends up resolving the issue, the energy bill savings alone would fund a new lsi card in about a year.
  2. that ud disk is the sas disk. as all disks are attached via a backplane, i would doubt that it's a cable or power supply issue - otherwise it would affect all disks, wouldn't it? anyway, i put it in a different slot - same result. so i removed it from the server, and still it spams: i also opened the machine, reseated every cable, the raid card itself, the disks... still the same... as that sas disk is directly forwarded to a windows vm, shouldn't i see something there as well? as it is the disk for my surveillance, i should be getting corrupted files, or not? i don't see any such issues. edit: it gets worse. now even the parity won't stay down. i guess it's a hardware issue after all... welp...
  3. Here are the diagnostics and a snippet of the relevant part of the log, which repeats over and over: hp-diagnostics-20220922-2145.zip
  4. so unraid spun them down on its own and then back up about a minute later. really not ideal...
  5. i don't think that's the issue, because i would think both the seagates and the reds support spindown. at least for the seagate we can be sure, because it does stay down (as parity). also, this issue has been there since the server was set up on 6.8.x. nevertheless, i set the spindown timer to 15 minutes for all disks - i had it set to never for the reds, because they would spin up again anyhow, so if i can't save on energy costs, why strain the disks with countless spin-downs and spin-ups. what is weird: after setting the timer to 15 minutes, unraid spins a disk down, and 5 seconds later there is a smart read and it is up again: also, can you tell me why the kernel loves to print out the disks all the time... especially for the passed-through unassigned device: it pollutes the log quite a bit. maybe this is the issue?
  6. as part of the array, the new seagate now reports correctly. however, it still spins up like the wd's do, so it's not a disk- or controller-specific problem, as the same disk stays down when used as parity. also, although it's empty, removing it would mean recalculating parity again, which i don't want to do. wish i had known that sooner.
  7. all the others report correctly. the new seagate also does while it is active; just in standby it does not. adding it to the array now to see if that makes a difference - the preclear will finish in 11 hours.
  8. new discovery to pass along: for the new seagate disk i get

     hdparm -C /dev/sdg

     /dev/sdg:
      drive state is: unknown

     strange, isn't it? maybe that's why it spins up even sooner than the wd red
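For reference, a tiny wrapper like the sketch below (the script name and device path are just examples, not anything from the post) makes it easy to grab the state word out of hdparm's output. "active/idle", "standby" and "sleeping" are the normal answers; "unknown" means hdparm could not interpret the drive's reply to the CHECK POWER MODE command, which fits the odd behaviour described above.

```shell
#!/bin/bash
# Sketch: report just the power state of a drive via `hdparm -C`.
# parse_state strips hdparm's "drive state is:" prefix and keeps the state word.
parse_state() {
    awk -F': *' '/drive state is/ {print $2}'
}

if [ $# -ge 1 ]; then
    # hypothetical usage: ./drivestate.sh /dev/sdg
    hdparm -C "$1" | parse_state
fi
```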
  9. i added a seagate disk like the parity drive, but there is data on it, so i wanted to erase it with your preclear plugin. however, it does not show the option to do so - as i understood from your support thread, that's because there is a partition and data on the disk. so how do i clear it before adding it to the array? but for now, even just sitting there in unassigned devices, i can spin it down and it spins back up at will... so can we call it a failure already?
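One common way to make a preclear-style tool see a disk as blank again is to drop its filesystem and partition-table signatures, e.g. with util-linux's wipefs. The sketch below is an assumption about what would work here, not the plugin's documented procedure, and it is destructive if run for real, so it defaults to a dry run:

```shell
#!/bin/bash
# Sketch only -- DESTRUCTIVE if confirmed. Assumes util-linux's wipefs is available.
# Removing the signatures (and zeroing the start of the disk for good measure)
# is usually enough for clearing tools to treat the disk as empty again.
# Pass the device AND the literal word "really" to actually wipe;
# anything else only prints what would be done.
wipe_disk() {
    dev="$1"; confirm="$2"
    if [ "$confirm" = "really" ]; then
        wipefs -a "$dev"                          # erase all known signatures
        dd if=/dev/zero of="$dev" bs=1M count=16  # zero the first 16 MiB
    else
        echo "dry run: would run 'wipefs -a $dev' and zero the first 16 MiB"
    fi
}

if [ $# -ge 1 ]; then
    wipe_disk "$@"   # hypothetical usage: ./wipe.sh /dev/sdg really
fi
```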
  10. yes, i should be able to do that, i have a few disks lying around. however, they are not the kind that i would entrust with my data in the long run, so is there any way to remove a disk from the array afterwards? i guess it should have data on it, just to create a real-world scenario.
  11. thanks for taking it to LT. the spindown plugin does not help. the spindown itself works fine and the drives in question are sata, not sas drives.
  12. @dlandon updated to rc5, booted in safe mode, disabled libvirt and docker, spun down the disks - same issue. also, your open files plugin never reported anything (which is to be expected, as the read and write counters also don't go up).

      @SimonF never mind the sas drive, it is passed through directly to a vm and will never have to be spun down.

      hdparm -C correctly reports the drive as standby when it is spun down. so i wrote a script to see if the smart read is done while the drive still reports as standby, and what is curious is that it indeed reports active a few seconds before the smart status is read. but then the drive does not respond for a good 10 seconds (my script reads every second). so i guess you are right that smart is only read when the drive is active. which leaves the question why it is active when there is no plugin, docker or vm running and no access to the shares... i hate this problem.

      here are the new diagnostics: hp-diagnostics-20220918-2012.zip
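A polling script along the lines described above could look roughly like this (a sketch, not the exact script from the post; the device path, interval and log format are placeholders). The idea is to log every state transition with a timestamp, so it can be correlated with emhttpd's "read SMART" lines in the syslog:

```shell
#!/bin/bash
# Sketch: poll a drive's power state once per second and log only the
# transitions, to see whether the drive goes active *before* the SMART read.
state_of() {
    # empty output if hdparm fails or the device does not exist
    hdparm -C "$1" 2>/dev/null | awk -F': *' '/drive state is/ {print $2}'
}

poll_drive() {
    dev="$1"; last=""
    while true; do
        now=$(state_of "$dev")
        if [ "$now" != "$last" ]; then
            echo "$(date '+%b %d %H:%M:%S') $dev: '$last' -> '$now'"
            last="$now"
        fi
        sleep 1
    done
}

if [ $# -ge 1 ]; then
    poll_drive "$1"   # hypothetical usage: ./poll.sh /dev/sdb
fi
```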
  13. i think i'm being misunderstood. i know that smart is only read after the disk is spun up (theoretically). my point is that unraid does not correctly detect that the disk is spun down, and hence issues the smart command, which REALLY wakes the disk up. to fully prove this theory (although in my mind it is certain) i want to completely block smart. however, setting the poll interval to 0, renaming the smart commands in sbin and using "none" as the scheduler have all not helped in really disabling smart. so regardless of whether my theory is correct or not, i would say it's at least a bug that i am not able to disable smart. here are more recent diagnostics: hp-diagnostics-20220916-1034.zip
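If the goal is SMART reads that cannot wake a sleeping drive (rather than disabling SMART outright), smartmontools itself has a guard for this: smartctl's `-n standby` option makes it bail out instead of touching a drive that is in standby. A sketch of how that could be used, assuming smartctl is installed and with the device path as a placeholder:

```shell
#!/bin/bash
# Sketch: read SMART attributes only if the drive is already awake.
# With "-n standby", smartctl skips a drive that is in standby/sleep instead
# of spinning it up; it then exits with a nonzero status (2 by default --
# note that status 2 can also mean the device failed to open at all).
smart_if_awake() {
    dev="$1"
    smartctl -n standby -A "$dev"
    rc=$?
    if [ "$rc" -eq 2 ]; then
        echo "$dev appears to be in standby; left it alone"
    fi
    return 0
}

if command -v smartctl >/dev/null && [ $# -ge 1 ]; then
    smart_if_awake "$1"   # hypothetical usage: ./smart.sh /dev/sdb
fi
```

Whether unraid's emhttpd can be told to use such a guard is a separate question, but the flag is handy for manual testing of the theory above.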
  14. thanks for trying to help!! a few more odd things to consider: i already have the poll interval set to 0, but unraid does not care and i still see the spin-ups and smart calls (and temperatures). i also tried to brute-force it into not reading smart by renaming smartctl_type, smartctl and smartd, but it still spins up and reads. for now i did not dare to delete them, but it gets more and more tempting. i even set the scheduler to none so that it would not be executed, and the temps disappeared. i dared to hope, but was disappointed again:

      Sep 15 23:07:40 hp emhttpd: shcmd (24249): echo none > /sys/block/sdc/queue/scheduler
      Sep 15 23:07:40 hp emhttpd: shcmd (24250): echo none > /sys/block/sdd/queue/scheduler
      Sep 15 23:07:40 hp emhttpd: shcmd (24251): echo none > /sys/block/sdb/queue/scheduler
      Sep 15 23:07:40 hp kernel: mdcmd (36): set md_num_stripes 1280
      Sep 15 23:07:40 hp kernel: mdcmd (37): set md_queue_limit 80
      Sep 15 23:07:40 hp kernel: mdcmd (38): set md_sync_limit 5
      Sep 15 23:07:40 hp kernel: mdcmd (39): set md_write_method
      Sep 15 23:07:46 hp emhttpd: spinning down /dev/sdd
      Sep 15 23:07:48 hp emhttpd: spinning down /dev/sdb
      Sep 15 23:18:18 hp emhttpd: read SMART /dev/sdb
      Sep 15 23:18:31 hp emhttpd: read SMART /dev/sdd

      as far as i can tell from the thread you linked, there is also no increase in the read/write counters there, and it spins up anyway... no solution in it as far as i can see, just like in the other similar threads. as you are a respected community member, maybe you can take this to the unraid people to be looked at as the bug it seems to be? also, if you want to remote into my server to try things and poke around, we can surely arrange that. i know how difficult it is to diagnose a problem remotely.
  15. can you tell me how to disable smart completely? or which command unraid uses to monitor smart, so i can check the disk spin-up with that?