gandalf15 Posted November 8, 2023 (edited)

Hi fellow "unraiders",

Since my data was not important in the past, I did not use a parity drive. That has changed now, basically because I had a drive left over and assigned it as parity. This is the state of the parity sync after almost 2 days:

As you can see, it is really slow. Some information:

- There are lots of tiny files (only a few KB), but also big files (100+ GB). In total it is a 284 TB array.
- The parity drive is a Seagate Exos 20TB (X20 model ST20000NM007D); all drives are either 16 TB or 20 TB.
- The parity drive was precleared at an average speed of around 200 MB/s.
- I can't stop the array for the parity sync (and won't be able to for future parity checks), and some data is written during the sync (not often, though).
- Currently it reads at 350-450 MB/s on all drives and writes at 12-19 MB/s.
- The drives (15 in total) sit in a hot-swappable server case with a backplane, connected over SAS2 to an HBA through an expander.

I read that it shouldn't take this long, but is it possible that it is so slow because I have A LOT of tiny files? To be more precise, I run a Storj node that uses around 15 TB of the data.

I also attached the diagnostics. Thank you for any help (even if you just tell me this is normal).

Edited November 9, 2023 by gandalf15 (deleted diagnostics for privacy)
JorgeB Posted November 9, 2023

There's something else reading from disks 1 and 2.
gandalf15 Posted November 9, 2023 Author

So the slowdown is due to that? As I said, I can't stop the server for a parity sync/check. So from your response I take it that this is normal, given the ongoing access to the server?
JorgeB Posted November 9, 2023

Anything else accessing the disks will slow down the parity sync/check considerably.
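To see for yourself which disks something else is reading during the sync, you can compare two snapshots of `/proc/diskstats` (a generic Linux approach; `iostat -xm 5` from the sysstat package shows the same thing continuously). A minimal sketch:

```shell
#!/bin/sh
# Show per-disk read throughput by sampling /proc/diskstats twice.
# On Unraid this helps spot which array disks are being read by
# something other than the parity sync.
interval=2
snap() { awk '$3 ~ /^sd[a-z]+$/ { print $3, $6 }' /proc/diskstats; }
snap > /tmp/ds1
sleep "$interval"
snap > /tmp/ds2
# field 6 is sectors read (512 bytes each); print the delta as MB/s
join /tmp/ds1 /tmp/ds2 | awk -v t="$interval" \
    '{ printf "%s %.1f MB/s read\n", $1, ($3 - $2) * 512 / 1048576 / t }'
```

Any disk showing sustained reads well above the parity-sync rate is being hit by another workload (the Storj node would be a prime suspect here).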
gandalf15 Posted November 9, 2023 Author

35 minutes ago, JorgeB said:
Anything else accessing the disks will slow down the parity sync/check considerably.

Thank you. So basically parity is nothing for me (I think I read that a normal 20 TB drive takes about 2 days). I can't stop everything for 2 days all the time.
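The "about 2 days" figure assumes the sync runs near the drive's sequential speed. At the 12-19 MB/s write rate reported above, simple arithmetic shows why it drags on (a rough back-of-the-envelope sketch, using decimal units as drive vendors count):

```shell
#!/bin/sh
# Rough parity-sync duration for a 20 TB drive at two sustained speeds.
size_mb=20000000   # 20 TB = 20,000,000 MB (decimal)
for speed in 15 200; do
    hours=$(( size_mb / speed / 3600 ))
    echo "${speed} MB/s -> ~${hours} hours"
done
```

At around 200 MB/s that is a bit over a day, matching the "about 2 days" figure; at 15 MB/s it is roughly 370 hours, i.e. more than two weeks.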
JorgeB Posted November 9, 2023

You can also do it in parts, during no-use hours.
gandalf15 Posted November 9, 2023 Author

Basically there is 24/7 use. I guess it will just have to run alongside normal operation. Or is there any harm if the drives spin for that long?
JorgeB Posted November 9, 2023

There's no extra harm, it's just slower.
itimpi Posted November 9, 2023

7 hours ago, gandalf15 said:
Thank you. So basically parity is nothing for me (I think I read that a normal 20 TB drive takes about 2 days). I can't stop everything for 2 days all the time.

You can use the Parity Check Tuning plugin to only run the checks in increments during what is normally idle time. This minimises the impact on normal use at the expense of a longer total elapsed time.
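For the curious, what the plugin automates can be sketched as a pair of cron entries that pause and resume the check on a schedule. This is only an illustration, not a replacement for the plugin: it assumes that the `mdcmd` tool on your Unraid version accepts the `check PAUSE` / `check RESUME` verbs, so verify that before relying on it.

```
# Sketch only -- assumes mdcmd supports PAUSE/RESUME on your Unraid version.
# Resume the in-progress parity check at 01:00 every night:
0 1 * * * /usr/local/sbin/mdcmd check RESUME
# Pause it again at 06:00 so daytime use is unaffected:
0 6 * * * /usr/local/sbin/mdcmd check PAUSE
```

The plugin does the same thing with more safeguards (temperature monitoring, handling of array stops), so in practice it is the better choice.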
gandalf15 Posted November 9, 2023 Author

Thank you very much for the information. I guess I will just let it run then. Obviously there is some "idle time", but the Storj node runs 24/7 and writes to the disks, the media center is up and someone in the family might be watching, and backups from other boxes already run at night, and so on. There isn't really a "from midnight to 6 am" idle window. So it might be smarter to just let it run, even if the sync takes a month, and after that do a check every 6 months or so.
Kilrah Posted November 9, 2023 (edited)

Storj is a workload that is extremely poorly suited to an Unraid array, and even worse with parity; I can hardly imagine anything worse. When I ran it, it was on a dedicated pool separate from the array.

Edited November 9, 2023 by Kilrah
gandalf15 Posted November 10, 2023 Author

Well, I have run that node for over 3 years already with a cache drive, and it is beginning to pay off now. The idea of a separate ZFS pool does make sense, though. I might keep that in mind for the future.