Michael H

  1. Just while I have @dlandon here... it seems the log spam only appears with your (awesome!) plugin installed; when I uninstall it, it stops. I'm not saying your plugin is causing it, just that it makes the issue visible, so that may be something you could take to Limetech. Let me know if you need anything from me.
  2. Update for now: I completely removed the RAID card and used the SAS ports on the motherboard. The disks stay down now, but the log still spits out entries roughly every 30 seconds. Since this is not the SAS disk that was doing it before, but a completely different disk that is usually not present in the server and is not connected via the RAID card either, I guess the UD plugin does not like that it is passed physically to a Windows VM (/dev/disk/by-id/xxxxx). As this is just a nuisance when viewing the log and not a bug in Unraid, all that is left is to thank you for sticking with me and offering suggestions, and to apologize for wasting everyone's time.
  3. As mentioned a few replies back, the server has no SATA ports. It does, however, have two SAS ports directly on the motherboard, each supporting four drives via my backplane. So the four array disks are on the motherboard now, while the cache and the UD disk are still on the RAID controller, as I only had one fitting SAS cable, which I borrowed from another server. The four disks stayed down for half an hour (a new record), but then spun up almost in sync, one after another. I spun them down and they immediately spun up again. Now they are down and I am monitoring. The server still vomits the udev entries into the log (probably because of the cache and the UD disk on the RAID controller). So my next step is to buy two SAS cables and use both ports on the motherboard. I lose four disk slots for now, but if this ends up resolving the issue, the energy bill savings alone would fund a new LSI card in about a year.
  4. That UD disk is the SAS disk. As all disks are attached via a backplane, I doubt it is a cable or power supply issue; otherwise it would affect all disks, wouldn't it? Anyway, I put it in a different slot: same result. So I removed it from the server, and it still spams the log. I also opened the machine and reseated every cable, the RAID card itself, and the disks... still the same. Since that SAS disk is forwarded directly to a Windows VM, shouldn't I see something there as well? As it is the disk for my surveillance footage, I should be getting corrupted files, shouldn't I? I don't see any such issues. Edit: it gets worse. Now even the parity disk won't stay down. I guess it's a hardware issue after all... welp...
  5. Here are the diagnostics and a snippet of the relevant part of the log, which repeats over and over: hp-diagnostics-20220922-2145.zip
  6. So Unraid spun them down on its own and then back up about a minute later. Really not ideal...
  7. I don't think that's the issue, because both the Seagates and the Reds should support spindown. At least for the Seagate we can be sure, because the one used as parity does stay down. Also, this issue has existed since the server was set up on 6.8.x. Nevertheless, I set the spindown timer to 15 minutes for all disks. I had it set to never for the Reds, because they would spin up again anyway, so if I can't save on energy costs, why strain the disks with countless spin-downs and spin-ups? What is weird is that after setting the timer to 15 minutes, Unraid spins a disk down and 5 seconds later there is a SMART read and it is up again. Also, can you tell me why the kernel loves to print out the disks all the time, especially for the passed-through unassigned device? It pollutes the log quite a bit. Maybe this is the issue?
  8. As part of the array, the new Seagate now reports correctly. However, it still spins up like the WDs do, so it is not a disk- or controller-specific problem, since the same model stays down as parity. Also, although it is empty, removing it would mean recalculating parity again, which I don't want to do. Wish I had known that sooner.
  9. All the others report correctly. The new Seagate also does when it is active; just in standby it does not. Adding it to the array now to see if that makes a difference - preclear will finish in 11 hours.
  10. New discovery to pass along: for the new Seagate disk, hdparm -C /dev/sdg reports "drive state is: unknown". Strange, isn't it? Maybe that's why it spins up even sooner than the WD Reds. (A short manual spin-down/status check is sketched after this list.)
  11. I added a Seagate disk of the same model as the parity drive, but there is data on it, so I wanted to erase it with your preclear plugin. However, it does not show the option to do so; as I understood from your support thread, that is because there is a partition and data on the disk. So how do I clear it before adding it to the array? (One generic way to wipe the old partition table is sketched after this list.) But for now, even just sitting there in Unassigned Devices, I can spin it down and it spins back up at will... so can we call it a failure already?
  12. Yes, I should be able to do that; I have a few disks lying around. However, they are not the kind I would entrust with my data in the long run, so is there any way to remove a disk from the array afterwards? I guess it should have data on it, just to create a real-world scenario.
  13. Thanks for taking it to LT. The spindown plugin does not help; the spindown itself works fine, and the drives in question are SATA, not SAS drives.
  14. @dlandon I updated to rc5, booted in safe mode, disabled libvirt and Docker, spun down the disks, and got the same issue. Also, your Open Files plugin never reported anything (which is to be expected, as the read and write counters don't go up either). @SimonF never mind the SAS drive; it is passed through directly to a VM and will never have to be spun down. hdparm -C correctly reports the drive as standby when spun down. So I wrote a script to check whether the SMART read happens while the drive still reports as standby (a minimal sketch of that kind of script follows after this list). What is curious is that the drive does indeed report active a few seconds before the SMART status is read, but then it does not respond for a good 10 seconds (my script reads every second). So I guess you are right that SMART is only read when the drive is active, which leaves the question why it is active when there is no plugin, Docker container, or VM running and no access to the shares... I hate this problem. Here are the new diagnostics: hp-diagnostics-20220918-2012.zip
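
Below is a minimal sketch of the kind of polling script mentioned in post 14: it checks one drive's power state every second and logs each change, so a spin-up can be lined up against the SMART reads in the syslog. The device path /dev/sdX and the exact output parsing are assumptions, not the script actually used in the thread.

    #!/bin/bash
    # Poll the power state of one drive once per second and log state changes.
    # /dev/sdX is a placeholder for the drive under test (e.g. /dev/sdg).
    DEV=/dev/sdX
    PREV=""
    while true; do
        STATE=$(hdparm -C "$DEV" 2>/dev/null | awk -F': *' '/drive state/ {print $2}')
        if [ "$STATE" != "$PREV" ]; then
            echo "$(date '+%F %T')  $DEV  $STATE"
            PREV="$STATE"
        fi
        sleep 1
    done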
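
The manual spin-down test referenced in posts 7 and 10 can be reproduced from the console; a sketch, assuming the drive under test is /dev/sdX (hdparm ships with Unraid):

    # Put the drive into standby immediately.
    hdparm -y /dev/sdX

    # Query the power state afterwards. A drive that honours the command
    # normally reports "active/idle" or "standby"; the new Seagate in
    # post 10 returned "unknown" instead.
    hdparm -C /dev/sdX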
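
For the question in post 11 about clearing a disk that still carries a partition, one generic approach (not necessarily what was done in the thread, and destructive to everything on the target disk) is to wipe its filesystem and partition-table signatures from the console:

    # Double-check the device name first; the wipe is irreversible.
    lsblk /dev/sdX

    # Remove all filesystem and partition-table signatures from the disk.
    wipefs -a /dev/sdX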