timekiller

Members
  • Posts: 48

About timekiller

  • Birthday 10/21/1974

timekiller's Achievements

Rookie (2/14) · 1 Reputation

  1. Yup, I know "PUIS" because that's how Highpoint refers to it. I said "staggered spin up" because that's a term I figured more people would recognize. I've been searching, but either this is not a common feature or it's just not documented well; I can't find much... (A quick hdparm check for PUIS support is sketched below this list.)
  2. My Unraid server has 21 drives and the power supply can't handle spinning up all the drives at once. I'm currently using a Highpoint Rocket 750 (40-port) SATA card, but it appears to be having issues and won't be supported in the next release of Unraid. I'm looking for a card that supports staggered spin up and ideally up to 30 drives. If I have to buy two 16-port cards, that would work too, but the key is that it MUST support staggered spin up. Before it's mentioned: no, I don't let my drives spin down when idle, so yes, the only time they all spin up at once is at boot. Yes, I can upgrade my power supply, but when I hit 30 drives I'm not sure a single power supply will have enough juice.
  3. Hmm, I suppose I could try swapping in my old controllers to test. If it's the Highpoint, that would be unfortunate since I bought it on eBay a few months ago. Though at least that would give me an excuse to get something that will be supported in the next version of Unraid. Thanks @trurl
  4. Quick update: I powered down and reseated cables. Powered up and immediately disks 7 and 8 had problems again - same as before, disk8 disabled, disk7 read errors. I powered down again and connected disk7 and disk8 to different ports on the SATA card. Powered up and disk8 is immediately offline again, but disk7 looks OK. The data rebuild can now continue, as there is enough parity information to rebuild. Fingers crossed that disk7 stays OK - only time will tell. If disk9 completes the data rebuild (in about 3 days) then I can power down, swap out disk8, and see what's up (probably have to RMA it).
  5. Thanks @trurl, I was dealing with some other (non-Unraid) stuff. Everything you said makes total sense. Going to shut down and check cabling now. Fortunately I am backed up - I sync everything to Google Cloud regularly, and the replaced drives have not been wiped yet, so worst case I can swap them in and create a fresh array and I shouldn't lose anything. Also, yes, I can see everything you were talking about on my main screen:
  6. Hmm, actually I just realized disk8 is the one that is disabled/emulated, not disk7. Now I'm even more concerned.
  7. I am in the process of replacing drives. I removed a 4TB drive and replaced it with a 12TB drive. I'm 32% into a data rebuild and now Unraid is reporting that a drive is disabled, contents emulated. My logs show a ton of messages like:
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979624
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979632
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979640
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979648
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979656
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979664
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979672
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979680
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979688
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979696
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979704
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979712
     Jan 6 10:31:30 Storage kernel: md: disk7 read error, sector=5384979720
     I know I need to address this, but I'm nervous about doing anything while the data rebuild is running. Fortunately I do have 2 parity drives, so I should not lose any data, but I'll feel a lot better when the data rebuild is complete (in 2 days!). What has me concerned right now is that I can't write to any share under /mnt/user. I can read from the array, and if I write to a specific disk at /mnt/disk#/<share> I can see the new content when I access the share through /mnt/user/<share>. What I need to know is: Is it safe to let the data rebuild continue? Am I better off shutting down and seeing what is up with disk7? If I shut down and it turns out disk7's SATA cable is loose or something else like that, would I create more issues with the disk suddenly coming back? The nightmare scenario I'm imagining is that a new file is written while disk7 is offline, then I shut down and get disk7 back online, and when I boot back up the parity drives will have the wrong calculations based on disk7 and the data rebuild will be corrupted. Is this a valid concern? Diagnostics attached: storage-diagnostics-20210106-1339.zip
  8. Are you asking if I've backed up 122TB of data, disabled encryption, and copied the data back to test disk performance? No. I have not done that. Is there any way to test the read/write speeds of individual disks without stopping the array and testing each disk? I had thought about writing a test file to each `/mnt/disk#`, but realized that with the array started I'm still getting hit with parity calculations, so even if I'm writing to one disk, I'm reading from all of them and writing to the 2 parity drives. (A read-only per-disk speed check is sketched below this list.) I'm wondering if anyone else has seen a performance hit when dealing with this many drives? Thinking about biting the bullet and ditching the 10 4TB drives in favor of 4 12TB drives. I'd get a capacity bump and get rid of 6 spinning drives, which should help parity performance as well as heat and power draw.
  9. Bump. Nothing on this? My server is basically unusable.
  10. I have an issue where Unraid is becoming unusable with extremely high load, caused by shfs spinning out of control. This seems to be due to Radarr doing file operations, but I'm not 100% sure. What I'm seeing is that the load on the server starts climbing (getting up to 150 at times), and all other actions that require disk access just hang. I can't even `ls` a directory. I've done all I can think of to handle it, but nothing has worked. Things I've tried:
      • Convert cache from RAID1 to RAID0 - I have (2) 1TB NVMe drives for cache. My Plex appdata folder takes up 700GB alone, so very little space was left for the data cache. I converted to RAID0 so I have 2TB of cache, but that didn't help.
      • Move NZBGet, Sonarr, and Radarr off the Unraid server. When content is downloaded is when it's the worst, especially if NZBGet has to repair a file. I thought that by moving these services to another system, I could offload that work so Unraid would only have to be available to receive the copied file (over an NFS share).
      These have not solved the problem. I still regularly see the load spike up, and when that happens I can't do anything with the server until the load drops. No file access, no Plex, nothing. It's truly infuriating. I do have a lot of drives (21 including 2 parity drives), but I feel like this has gotten way worse recently. I thought maybe I have a failing drive that is causing the parity functions to hang, but I am not seeing the SMART errors I would expect. I'm at my wits' end and need help! (A load-spike capture script is sketched below this list.)
      Unraid 6.8.3 · 21 spinning drives (19 data + 2 parity) · 136TB capacity · 2x 1TB NVMe cache drives in RAID0
      Diagnostics attached: storage-diagnostics-20201207-0935.zip
  11. Oh no!! I literally just bought this card less than a month ago. This is truly terrible news for me...
  12. Just tried 6.9.0-beta35 and discovered my Highpoint Rocket 750 RAID card is not recognized. Hopefully support will be added soon. (I was trying to enable NVIDIA hardware decoding in Plex since the old way is apparently deprecated now?!?)
  13. Thanks, that's what I figured but wanted to see if there was another possibility
  14. Lost power and docker didn't come back up with the server. If I log in and try to start docker manually I get:
      root@Storage:/var/log# /etc/rc.d/rc.docker start
      no image mounted at /var/lib/docker
      I checked and the docker image does exist at /mnt/user/system/docker/docker.img (a few read-only checks are sketched below this list). Diagnostics attached: storage-diagnostics-20201123-0802.zip
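
For the PUIS / staggered spin-up question in posts 1 and 2, here is a minimal, read-only sketch of how one might check whether each drive even advertises the Power-Up In Standby feature. It assumes the drives appear as ordinary /dev/sdX devices and that hdparm is available; the /dev/sd? glob is just a placeholder for however the drives are enumerated.

```
# Print whether each SATA device reports the Power-Up In Standby (PUIS)
# feature in its IDENTIFY data. hdparm -I dumps the drive's capability
# list; grep filters the relevant line. Read-only - nothing is changed.
for dev in /dev/sd?; do
  echo "== $dev =="
  hdparm -I "$dev" 2>/dev/null | grep -i "power-up in standby"
done
```

Even if a drive lists the feature, actually getting staggered spin-up also depends on the controller issuing the spin-up command at power-on, so a drive-side check like this is only half the picture.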
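
For the per-disk performance question in post 8, one read-only way to compare drives without stopping the array is to time a buffered read from each underlying device; reads do not touch parity, so a slow disk should stand out on its own. This is only a rough sketch: /dev/sd? is a placeholder glob and hdparm is assumed to be installed.

```
# Time a short buffered sequential read from each device.
# hdparm -t reads for roughly three seconds and reports MB/s;
# it writes nothing, so the parity drives are not involved.
for dev in /dev/sd?; do
  printf '%s ' "$dev"
  hdparm -t "$dev" | grep "Timing buffered"
done
```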
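
For the shfs load spikes in post 10, a small watcher script could be left running to capture what is actually blocking when the load climbs. The threshold, log path, and intervals below are arbitrary choices, and iostat comes from the sysstat package, which may need to be installed separately.

```
#!/bin/bash
# When the 1-minute load average crosses a threshold, snapshot the busiest
# processes, any tasks stuck in uninterruptible sleep (state D, usually
# waiting on disk), and per-device I/O statistics.
THRESHOLD=30              # arbitrary load threshold
LOG=/boot/load-spike.log  # assumed log location (flash drive)

while true; do
  load=$(cut -d' ' -f1 /proc/loadavg)
  if [ "${load%.*}" -ge "$THRESHOLD" ]; then
    {
      echo "=== $(date) load=$load ==="
      top -b -n1 | head -25
      echo "--- tasks in D state ---"
      ps -eo state,pid,cmd | awk '$1 == "D"'
      echo "--- iostat ---"
      iostat -x 1 2 | tail -40
    } >> "$LOG"
    sleep 60
  fi
  sleep 5
done
```

If the D-state list is dominated by shfs while one device sits near 100% utilization in the iostat output, that would point at a single slow disk rather than at shfs itself.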
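
For the "no image mounted at /var/lib/docker" error in post 14, here are a few read-only checks one might run before restarting the docker service, assuming the image really lives at the path mentioned in that post:

```
# Does the user-share path resolve, and is the image file a sane size?
ls -lh /mnt/user/system/docker/docker.img

# Is the image already attached to a loop device?
losetup -a | grep docker.img

# Is anything currently mounted at /var/lib/docker?
mount | grep /var/lib/docker

# Any loop- or docker-related messages around the failed start?
grep -iE 'loop|docker' /var/log/syslog | tail -50
```

If the image file itself looks fine, the next thing to confirm is whether the array (and therefore /mnt/user) actually came back up cleanly after the power loss.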