Michael H
-
Update for now: I completely removed the RAID card and am using the SAS ports on the motherboard. The disks stay down now, but the log still spits out entries roughly every 30 seconds.

Since this is not the SAS disk that was doing it before but a completely different disk, one that usually isn't in the server and was never connected via the RAID card, I guess the UD plugin just doesn't like that it is passed through physically to a Windows VM (/dev/disk/by-id/xxxxx). As this is only a nuisance when viewing the log and not a bug in Unraid, all that's left is to thank you for sticking with me and giving suggestions, and to apologize for wasting everyone's time.
-
As mentioned a few replies before, the server does not have SATA ports. I do, however, have 2 SAS ports directly on the motherboard, each supporting 4 drives via my backplane. So I have the 4 array disks on the motherboard now, with the cache and the UD disk still on the RAID controller, as I only had one fitting SAS cable, which I borrowed from another server.

The 4 disks stayed down for half an hour (new record), but then spun up almost in sync, one after another. I spun them down; they immediately spun up again. Now they are down and I am monitoring. The server still vomits the udev entries into the log (probably because of the cache and the UD disk on the RAID controller).

So my next step is to buy 2 SAS cables to use both ports on the motherboard. I lose 4 drive slots for now, but if this ends up resolving the issue, the energy bill savings alone would fund a new LSI card in about a year.
-
That UD disk is the SAS disk. As all disks are attached via a backplane, I doubt it's a cable or power supply issue; otherwise it would affect all disks, wouldn't it? Anyway, I put it in a different slot, same result. So I removed it from the server, and still it spams:

I also opened the machine and reseated every cable, the RAID card itself, the disks... still the same. As that SAS disk is forwarded directly to a Windows VM, shouldn't I see something there as well? Since it is the disk for my surveillance, I should be getting corrupted files, no? I don't see any such issues.

edit: It gets worse. Now even the parity won't stay down. I guess it's a hardware issue after all... welp...
-
I don't think that's the issue, because both the Seagates and the Reds should support spindown. At least for the Seagate we can be sure, because it does stay down (as parity). Also, this issue has been there since the server was set up on 6.8.x.

Nevertheless, I set the spindown timer to 15 minutes for all disks. I had it set to never for the Reds, because they would spin up again anyhow, so if I can't save on energy costs, why strain the disks with countless spin-downs and spin-ups? What is weird: after setting the timer to 15 minutes, Unraid spins the disk down, and 5 seconds later a SMART read wakes it up again:

Also, can you tell me why the kernel loves to print out the disks all the time, especially for the passed-through unassigned device? It pollutes the log quite a bit. Maybe this is the issue?
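For anyone following along, the spin-down and the state check can be done by hand from the console. A minimal sketch, assuming a Linux box with hdparm installed; /dev/sdX is a placeholder for the real device:

```shell
#!/bin/sh
# Extract the state word from `hdparm -C` output, which looks like:
#   /dev/sdX:
#    drive state is:  standby
# ("active/idle" when spun up).
parse_state() {
  awk '/drive state is/ { print $NF }'
}

# On the server itself (needs a real disk, so not runnable here):
#   hdparm -y /dev/sdX               # request immediate standby
#   hdparm -C /dev/sdX | parse_state # -> standby
```

Worth noting: smartctl has a `-n standby` option that skips the SMART read entirely while the drive is spun down, which avoids waking a disk just to poll its attributes.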
-
As part of the array, the new Seagate now reports correctly. However, it still spins up just like the WDs do, so it's not a disk- or controller-specific problem, since the same disk model stays down as parity. Also, although it's empty, removing it would require me to rebuild parity again, which I don't want to do. Wish I had known that sooner.
-
I added a Seagate disk like the parity drive, but there is data on it, so I wanted to erase it with your preclear plugin. However, it does not show the option to do so; as I understood from your support thread, that's because there is a partition and data on the disk. So how do I clear it before adding it to the array?

But for now, even just sitting there in Unassigned Devices, I can spin it down and it spins back up at will... so can we call it a failure already?
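For the record, one generic way to make a disk look blank again is to wipe the old partition table and filesystem signatures before handing it to the plugin. A minimal sketch, not plugin-specific advice; /dev/sdX is a placeholder and the operation is destructive, so triple-check the device name:

```shell
#!/bin/sh
# Zero the first MiB of the target, which covers the MBR/GPT header
# and typical filesystem superblock locations. DESTRUCTIVE.
zero_header() {
  dd if=/dev/zero of="$1" bs=1M count=1 conv=notrunc 2>/dev/null
}

# On the server (after checking the device name twice!):
#   zero_header /dev/sdX
```

`wipefs -a /dev/sdX` is the more surgical alternative, as it removes only known signatures; and since GPT keeps a backup table at the end of the disk, a full preclear remains the thorough option.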
-
@dlandon I updated to rc5, booted in safe mode, disabled libvirt and Docker, spun down the disks, and got the same issue:

Also, your Open Files plugin never reported anything (which is to be expected, as the read and write counters also don't go up).

@SimonF Never mind the SAS drive; it is passed through directly to a VM, so it will never have to be spun down. hdparm -C correctly reports the drive as standby when spun down. So I wrote a script to see if the SMART read happens while the drive still reports as standby. What is curious is that it indeed reports active a few seconds before the SMART status is read, but then the drive does not respond for a good 10 seconds (my script reads every second). So I guess you are right that SMART is only read when the drive is active. Which leaves the question why it is active when there is no plugin, Docker container, or VM running and no access to the shares... I hate this problem.

Here are the new diagnostics: hp-diagnostics-20220918-2012.zip
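In case anyone wants to replicate the watcher script mentioned above, this is roughly what it does. A sketch, not the exact script I ran; the device name and sample count are placeholders:

```shell
#!/bin/sh
# Once a second, log a timestamp plus the drive's power state, so the
# moment it flips from standby to active/idle can be lined up with
# the SMART read in the syslog.
watch_state() {
  dev=$1
  n=$2  # number of one-second samples
  i=0
  while [ "$i" -lt "$n" ]; do
    state=$(hdparm -C "$dev" 2>/dev/null | awk '/drive state is/ { print $NF }')
    printf '%s %s\n' "$(date '+%H:%M:%S')" "${state:-unknown}"
    i=$((i + 1))
    sleep 1
  done
}

# Usage on the server:
#   watch_state /dev/sdX 600   # sample for 10 minutes
```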