How long should a parity rebuild take when you upgrade to 14TB drives?



I upgraded: I swapped out my 12TB WD drives for all 14TB Seagates, plus two 14TB WD drives.

 

It's been 16 hours and it still says it needs another day and several hours... It's fluctuating from 50MB/s to about 150MB/s; it used to start off at 250MB/s.

Is this normal? A couple of the drives are pretty much full... As I remember, it used to take only about 23-25 hours to do a parity check.

 

Could it be a bad backplane? I use a SAS card and a cable that breaks out to 4 SATA connectors... Does anything in the diagnostics point to a problem?

 

And is it because I swapped out 32GB of 3200MHz desktop RAM for 32GB of 3200MHz ECC RAM?

 

Does the RAM cause parity to go slower? I figured once I upgraded all the drives to basically the same size and moved to ECC RAM things would run smoother, but it seems to go slower.

 

 

 

tardis-diagnostics-20230607-1012.zip

drives.JPG

drive2.JPG


Try pausing the rebuild and check whether any disk's read/write counters keep building up.

 

At the 50% position the reference throughput should be ~140MB/s, not the 79.7MB/s you are getting.
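If you prefer the command line, here is a minimal sketch of the same counter check using the kernel's /proc/diskstats (field positions assume the standard layout; the sdX names will differ on your box):

# with the rebuild paused, snapshot the per-disk counters, wait a minute,
# then snapshot again; any sdX whose numbers keep climbing is still being accessed
awk '$3 ~ /^sd[a-z]+$/ {print $3, "sectors_read=" $6, "sectors_written=" $10}' /proc/diskstats
sleep 60
awk '$3 ~ /^sd[a-z]+$/ {print $3, "sectors_read=" $6, "sectors_written=" $10}' /proc/diskstats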

 

The Cache Dirs plugin seems to be using too much CPU and disk resources.

 

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
12512 root      20   0   33476  31388   2048 R  76.5   0.1   0:01.77 find
12505 root      20   0    5128   3392   2032 R  35.3   0.0   0:00.75 find

 

root     12244  0.0  0.0   5588  3664 ?        S    Jun06   0:28 /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -c 1 -U 100000 -l off
root     12454  0.0  0.0   4564  2936 ?        S    10:12   0:00  \_ /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -c 1 -U 100000 -l off
root     12620  0.0  0.0   2584   872 ?        S    10:12   0:00  |   \_ /bin/timeout 30 find /mnt/disk1/Projects -noleaf
root     12621  2.0  0.0   3920  2364 ?        D    10:12   0:00  |       \_ find /mnt/disk1/Projects -noleaf
root     12455  0.0  0.0   4564  2936 ?        S    10:12   0:00  \_ /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -c 1 -U 100000 -l off
root     12824  0.0  0.0   2584   940 ?        S    10:12   0:00  |   \_ /bin/timeout 29 find /mnt/disk2/Music -noleaf
root     12825  0.0  0.0   3932  2476 ?        D    10:12   0:00  |       \_ find /mnt/disk2/Music -noleaf
root     12475  0.0  0.0   4564  2936 ?        S    10:12   0:00  \_ /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -c 1 -U 100000 -l off
root     12498  0.0  0.0   2584   828 ?        S    10:12   0:00  |   \_ /bin/timeout 30 find /mnt/lan_cache_pool/lancache -noleaf
root     12501 22.5  0.0   3856  1096 ?        D    10:12   0:00  |       \_ find /mnt/lan_cache_pool/lancache -noleaf
root     12485  0.0  0.0   4564  2936 ?        S    10:12   0:00  \_ /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -c 1 -U 100000 -l off
root     12500  0.0  0.0   2584   936 ?        S    10:12   0:00  |   \_ /bin/timeout 30 find /mnt/unraid_files/appdata -noleaf
root     12505 39.0  0.0   5128  3392 ?        R    10:12   0:00  |       \_ find /mnt/unraid_files/appdata -noleaf

root     12490  0.0  0.0   4564  2936 ?        S    10:12   0:00  \_ /bin/bash /usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -u -c 1 -U 100000 -l off
root     12510  0.0  0.0   2584   924 ?        S    10:12   0:00      \_ /bin/timeout 30 find /mnt/vms_pool/vms -noleaf
root     12512 91.5  0.0  33476 31388 ?        R    10:12   0:01          \_ find /mnt/vms_pool/vms -noleaf


How do I check the counter build-up? Not sure how to do any of that. I know how to pause, lol.

 

And the reason I was asking about the RAM: I wanted to know if the ECC RAM is making it run as low as 30MB/s, and whether with desktop RAM it would run at 250MB/s. That's basically what I'm asking.

 

But how do I do that test?

1 minute ago, comet424 said:

So how do I run the test now to see the throughput?

Please try uninstalling that plugin, confirm there's no more counter build-up, then resume the rebuild.

 

You can click the icon to toggle between bandwidth and counter mode.
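From a terminal you can also confirm the plugin's worker processes are really gone before resuming (a small sketch; pgrep is part of procps and should be present on Unraid):

# after uninstalling, these should return nothing
pgrep -af cache_dirs
pgrep -af "find /mnt"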

7 minutes ago, comet424 said:

Uninstall which plugin, the cache plugin??

Cache directory plugin.

 

7 minutes ago, comet424 said:

and here is me flipping it back; it's just reading randomly, I guess idling

Is the rebuild paused??

To get the best rebuild speed, minimize any other parallel disk read/write operations. Or provide a new diagnostics file so I can check.


I have no idea what another parallel disk read operation would be... I know I installed the cache directories plugin a few years ago, back when I had a cache drive. But then I found I was filling it up too much; 500GB wasn't big enough and 1TB didn't help, so I just wrote to the array instead.

 

It's doing better now, and here is the new diagnostics file... Do you use the cache directories plugin when you have a cache drive? And what about the cache on the disks themselves? These Seagates have 256MB but my WDs have 512MB; does Unraid utilize them?

drive8.JPG

tardis-diagnostics-20230607-2134.zip


189MB/s is really good now.

 

5 minutes ago, comet424 said:

Do you use the cache directories plugin when you have a cache drive? And what about the cache on the disks themselves?

I don't use the cache directories plugin or a cache pool. I prefer RAM caching, or running most stuff in RAM. (The 1st tier is SSD/NVMe; spinner disks are usually the 2nd-tier main storage.)


Yeah, is anything else slowing things down? At least it says it will be done in 7 hours now, not 22 more hours, like 2 days, like frig lol...

 

How do you do RAM caching, or run most of the stuff in RAM? Isn't that how Unraid already does it? And what do you mean by 1st tier is SSD/NVMe and 2nd for main storage?

 

And I appreciate you helping me so far, as I wouldn't know what was hogging things; as long as CPU usage isn't full red I figure I'm doing OK lol. I wanted FreeNAS speeds but love the way Unraid works, adding one drive at a time lol, so Unraid is for me 🙂


Here's a historical benchmark for you.  You'll notice that I replaced the two 18TB drives with two 14TB drives (hence the reason for the parity sync of the two 14TB drives on Sat and Sun in the middle).  Note that one of the parity checks took four days.  If the server is in use during a parity check then the check will take a lot longer.  These are all Exos drives.

 

[screenshot: parity check history]

37 minutes ago, comet424 said:

how do you do ram caching?

Usually I maximize the memory on the mobo, using only 32GB modules, so an SFF mobo ends up with 64GB. Then I tune the ratio in Tips And Tweaks. That lets the system use the RAM cache as much as possible, which helps during file transfers.

 

[screenshot: Tips And Tweaks disk cache ratio settings]
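As far as I know those Tips And Tweaks sliders just drive the standard Linux vm.dirty sysctls; a minimal sketch of checking and temporarily changing them from a terminal (the 40/80 values are only an illustration, not a recommendation):

# show the current write-back thresholds (percent of RAM allowed to hold dirty pages)
sysctl vm.dirty_background_ratio vm.dirty_ratio

# raise them for a big transfer; this change is lost on reboot
sysctl -w vm.dirty_background_ratio=40
sysctl -w vm.dirty_ratio=80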

 

37 minutes ago, comet424 said:

what do you mean by 1st tier is SSD/NVMe and 2nd for main storage?

Docker doesn't need much storage, in my case less than 6GB, so I run it all from /tmp (a RAM disk); only appdata is backed up to SSD and restored to /tmp when the system restarts. CCTV recording needs ~12GB for 3 days and is also stored in /tmp on a recycling basis, with just a daily backup to hard disk. VMs and other applications use the SSD.

 

Spinner disks are just main storage, a mix of the array and RAID0 pools (the RAID0 pools give high performance, which eliminates the need for a cache pool). This tier is seldom accessed because it holds cold data.
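For reference, I believe /tmp on Unraid already lives in RAM because the whole OS runs from a RAM disk, but a dedicated RAM-backed scratch area can be made the same way on any Linux box; a minimal sketch (mount point and size are just examples, and everything in it is lost at reboot or power-off):

# create a 16GB RAM-backed scratch area
mkdir -p /mnt/ramscratch
mount -t tmpfs -o size=16G tmpfs /mnt/ramscratch

# see how much of it is in use
df -h /mnt/ramscratch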

 

 

 

 

 

 


Ah ok, so my system was working away in the background. Even though I had everything turned off (I tried that before, stopping all Dockers and VMs), it still took 2 days. So it was the cache directories plugin? It used to work fine; maybe something buggered it up...

 

So you'd suggest I get another 32GB of this ECC RAM to make it 64GB, and then use the Tips and Tweaks plugin? And that lets you adjust more RAM caching? And what's an SFF motherboard?

 

So you use an array, but also a pool that's in RAID 0, so you eliminated a cache pool... Does that mean that instead of a single 1TB SSD that you fill up, you'd use say four 6TB drives in RAID 0, so you get more disk space and more speed, and that's your cache drive?

 

As I'd like to speed up my Unraid... I just didn't like that with FreeNAS you needed to buy drives in sets of three of the same size...

 

And at least the parity rebuild is done... CPU utilization dropped a lot too; I guess the cache plugin was just hogging the CPU.

6 hours ago, comet424 said:

it used to work fine, maybe something buggered it up...

Sometimes that's what happens, when something ends up in a race condition or the situation changes.


 

6 hours ago, comet424 said:

so you'd suggest I get another 32GB of this ECC RAM to make it 64GB, and then use the Tips and Tweaks plugin? and that lets you adjust more RAM caching?

It depends on the case. The RAM cache is like temporary storage. Say you have 32GB: if you transfer about 24GB of data (~80%), it can all be cached in RAM, the transfer runs as fast as the source even if the destination is slow, and it flushes to the destination in the background. But if you transfer more than 24GB, the rest queues as usual and the transfer is throttled.
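You can actually watch that background flush happen during a big copy; a minimal sketch (watch and /proc/meminfo are standard Linux):

# Dirty = data sitting in RAM waiting to be written out, Writeback = data being flushed right now
watch -n 1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'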


I run Docker from a RAM disk because of the high performance and because there's no need to worry about write endurance; of course you need to understand that the data is lost when the power goes off.


SFF means small form factor. I use a mini PC (roughly 1 litre in size) with two 32GB SODIMMs and a 256GB NVMe to host all the Docker containers, no spinner disks. (Morefine S500+)

 

6 hours ago, comet424 said:

you'd use say four 6TB drives in RAID 0, so you get more disk space and more speed, and that's your cache drive?

The array and the RAID0 pools are independent. A RAID0 pool is a large, high-performance pool; it can be formed from disks of different sizes, but if any disk fails you lose everything, so you need a backup. I formed two pools, one with 8 disks and one with 12.


Ah ok, so my Tips and Tweaks is set to 10 background and 20 dirty ratio... So should I set it to 80 and 81 to get the best speed?

And I've seen that term before; what is a race condition?

 

And what case do you mean?

And yeah, I have a single 32GB stick at the moment to run everything on my Unraid: VMs, Dockers, etc.

And when you say transfer 24GB of data, is that 24GB files? Or transferring 24GB at a time, like in a Windows copy where you select a bunch of files that add up to around 24GB?

 

And what is the issue with power-off and data loss? I do run the server on a UPS.

 

So you don't use spinners...

So for your setup, you've got 8 drives in one pool and 12 disks in a 2nd pool; what's the size of the array in disks? And your 2 pools are both RAID0 striping; is that for caching to the array? I'd like to get the best speed, but SSDs are too expensive if you want, say, a 4TB SSD, and then they wear out. How many spinners would you need to reach SSD speed?

 

Always learning... And should I get 64GB of RAM for my main server then?

 

And does the cache on the spinner drives do anything for Unraid, like help it, or does Unraid not see that cache?

 

The Exos drives I have are 256MB and the WDs are 512MB.

 

8 hours ago, comet424 said:

10 background and 20 dirty ratio... so should I set it to 80 and 81 to get the best speed?

10 and 20 are the default settings; they are good defaults for minimizing data loss from a sudden power cut, i.e. the situation where data is sitting in memory waiting to be written to disk. Since you have UPS protection, you can set them to higher values and still be safe. Please read the help text and do some tests for a better understanding.

 

[screenshot: Tips and Tweaks disk cache settings and help text]

 

8 hours ago, comet424 said:

or transferring 24GB at a time, like in a Windows copy where you select a bunch of files that add up to around 24GB?

Yes, a Windows copy. When you have 32GB and set it to 80, that doesn't mean a fixed 24GB is allocated for cache (the figure is just loose talk, not an exact value); it means up to roughly that much can be used for cache before flushing to disk, and the actual cache size depends on how much memory is free at the time.
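A quick way to see how much memory is actually free for that page cache at any given moment (free is standard procps, so this is just a convenience sketch):

# the "available" column is roughly what the kernel can still use for caching right now
free -h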

 

 

8 hours ago, comet424 said:

and I've seen that term before, what is a race condition?

 

and what case do you mean?

I mean the slow parity rebuild case: the parity rebuild and the cache directories plugin were racing each other for reads and writes to the array, which slowed everything down.

 

 

8 hours ago, comet424 said:

always learning... and should I get 64GB of RAM for my main server then?

There's almost no downside to adding memory, and extra memory has many different uses. Since you have a UPS for protection, the remaining question is whether it's useful in your use case: is that relatively small amount enough to let you skip an SSD cache? Please evaluate and test it yourself.

 

 

 

8 hours ago, comet424 said:

and does the cache on the spinner drives do anything for Unraid, like help it, or does Unraid not see that cache?

 

the Exos drives I have are 256MB and the WDs are 512MB

The disk cache is managed by the disk itself; a drive with 512MB isn't necessarily better than one with 256MB. That cache is used internally by the drive for different purposes, so you only need to care about the drive's overall performance or user reviews.
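If you're curious what a drive reports about its own cache, a small sketch (replace /dev/sdX with a real device; smartctl and hdparm are common on Unraid/Linux, but treat this as a sketch rather than gospel):

# drive identity: model, firmware, capacity, rotation rate
smartctl -i /dev/sdX

# some drives report their buffer size here; many modern ones just say "unknown"
hdparm -I /dev/sdX | grep -i 'buffer\|cache'

# check whether the drive's own write cache is currently enabled
hdparm -W /dev/sdX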

 

 

8 hours ago, comet424 said:

so you don't use spinners...

so for your setup, you've got 8 drives in one pool and 12 disks in a 2nd pool; what's the size of the array in disks? and your 2 pools are both RAID0 striping; is that for caching to the array? I'd like to get the best speed, but SSDs are too expensive if you want, say, a 4TB SSD, and then they wear out. how many spinners would you need to reach SSD speed?

 

The spinners are in another, bigger build: a pure disk/file-storage build that stays powered down. It has an array and two RAID0 pools, all independent. The performance of a RAID0 pool depends on how many disks are involved in the operation (because the disks are different sizes, not all 8 or 12 disks are always in play), but it usually provides from several hundred MB/s up to 2GB/s of throughput.

 

Array: 7x 8TB

Pool_1: 4x 10TB + 4x 18TB = 112TB

Pool_2: 6x 12TB + 6x 14TB = 156TB

 

For example, with the 8-disk pool writing to the 12-disk pool, I get ~900MB/s read and ~2GB/s write.

 

[screenshot: pool-to-pool transfer throughput]
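If you want to benchmark one of your own pools the same way, a minimal sketch of a raw sequential write test (the /mnt/pool_1 path is only an example; run it against a pool you don't mind writing 10GB of scratch data to, and note that zeros compress very well, so on a compressed filesystem the figure will be optimistic):

# sequential write test; oflag=direct bypasses the RAM cache so you measure the disks,
# not memory (if the filesystem rejects direct I/O, drop it and use conv=fdatasync instead)
dd if=/dev/zero of=/mnt/pool_1/ddtest.bin bs=1M count=10240 oflag=direct status=progress

# remove the scratch file afterwards
rm /mnt/pool_1/ddtest.bin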

 


So for the dirty and background RAM settings, when you test higher values, how do you test whether performance is actually getting better?

 

So far the 32GB fits my needs because I don't run all my VMs at once; I've just kept it down. But if I'm already using around 60% of the RAM for VMs and Dockers, and I want to increase the dirty/background ratios, I should get more RAM then, right?

 

With the SSD cache it was fast in the past when I had it, but after copying more than 1TB of data it slowed down and then you had to run the mover, so it fell out of my needs. That was when I was originally building my Unraid, copying over from my old Windows Home Server NAS... So I'm not sure if 1TB of cache is good enough now, since I'm not always copying to the array. But maybe, since I want to do Windows backups like the SpaceInvader video and such.

 

 

When you say the spinners are powered down, do you mean powered down in the array? I used to have that, but then they'd start up randomly or stay on.

 

And the reason I figured maybe a RAID0 cache pool, like two 6TB drives, is that it might be faster than a SATA SSD and bigger than 1TB, since 4TB SSDs are still pricey.

 

Sooo, both your pool 1 and 2 are RAID0 then, so that's fast copying... What do you copy?

And is your array powered down too? So you run 2 computers then?

 

 

What I want is for my array to stay spun down... and to do Windows backups, but keep it spun down; I guess those should go to a pool?

 

Also, I find my Plex is slow to load even on NVMe, and I find loading a video in Plex slow too. Do you recommend videos be on a RAID0 pool and not on the array?

 

I'm always looking to improve my setup, trying to speed it up, and always learning from other people's setups so I can tweak mine based on what I learn.

 

Also, should the file server be on a separate computer from the videos, or on one? So far I've got my data on my array, spread over two 14TB drives, and the rest is the Plex and misc videos.

And I appreciate the help so far; I always learn something new every day.

6 hours ago, comet424 said:

when you say the spinners are powered down, do you mean powered down in the array? I used to have that, but then they'd start up randomly or stay on

Power down means the server is completely powered down. As mentioned, I seldom access that cold data, so I only power it on when I need it. The array and pools can spin down correctly, but even then the server still draws 200W rather than 400W+. If someone needs access to recent or specific media, I copy it out to the small build for access; there's no way I'm paying that kind of electricity bill.

 

6 hours ago, comet424 said:

also I find my Plex is slow to load on NVMe, and I find loading a video in Plex slow too... do you recommend videos be on a RAID0 pool and not on the array?

Even a 12-disk RAID0 can't beat NVMe performance, so don't go down that road. Is Plex performing its media scan too frequently, or is the file count too high (lots of small media files)? Could you split the media and import it into separate Plex libraries to speed things up? I'm not sure, as I haven't used Plex. It's like my CCTV recording: if I keep 1 week of recordings it gets very slow, so keeping 3 days is the solution. (1 day is about a 43K file count.)
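If you want to check whether sheer file count is the problem, a quick sketch (the path is just an example; point it at your actual media share):

# count how many files a library scan has to walk through
find /mnt/user/Movies -type f | wc -l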

 

6 hours ago, comet424 said:

also should the file server be on a separate computer from the videos, or on one? so far I've got my data on my array, over two 14TB drives, and the rest is the Plex and misc videos

It really depends on how much data access and media access there is, and whether the two are always competing with each other.

 

6 hours ago, comet424 said:

sooo, both your pool 1 and 2 are RAID0 then, so that's fast copying... what do you copy?

and is your array powered down too? so you run 2 computers then?

The array is just a partial backup of the pools; the other build (the Morefine mini PC) is the 24/7 Docker/application host.

 

I may try moving one pool to ZFS, but there's no timeline, as ZFS is new to me.


Ok, so NVMe is better than spinners in a RAID0 cache pool, but it's still expensive if you want, say, three 4TB ones. Then again, I probably don't use that much; that was back when I was transferring from server to server during the changeover... But I don't have room for more NVMes anyway; my board only has 2 slots.

I use one for VMs and one for appdata, and then I was thinking of adding a SATA SSD just for Plex so it's separate... I dunno, I just find it slow.

 

And I find it slow loading up a video off the spinners...

 

And without the cache directories plugin I see the spinners have finally spun themselves down too; they were always running before, and it's basically just me accessing the server.

 

My Plex only scans when I move a new video into the directory, as I've ripped most of my DVDs/Blu-rays and shoved them in the basement; when I add a new one, Plex sees it and scans then...

 

I was thinking maybe one server for media and another server just for data files...

I have a CCTV spinner drive; I've got 6 hard drives so far, but I couldn't get Shinobi to work properly and the video was choppy when I set it up, not like my Reolink desktop app, so I shelved it. I still want to get it working so I'm not always looking, lol, but that's part of the server too.

I figured maybe CCTV needs its own computer? Or is it good enough on Unraid alongside everything else?

Then I looked at a video card to see if it would speed things up, but I haven't gotten around to it yet.

 

 

So the array is a partial backup of the pool... Does that mean, for example, that if your media is on the pool it runs faster than it would on the array?

I was thinking of something like that in my head: maybe media would connect faster, with less buffering at the start, if it was on a RAID0 pool, or if the media was on a 2nd server and the first server ran Plex and connected to the 2nd server for the media, like a FreeNAS, since I read it's faster than Unraid... I just try to think and Google, or look at other people's builds, to try to improve things.

 

You must have one of those 24-bay hot-swap cases, or something like a Backblaze pod, for all those hard drives... What kind of motherboard do you use too? I find my gaming motherboards never have enough PCI slots to add extra stuff.

 

And yeah, 400 down to 200 watts is a big difference... At least in the winter 400 watts can keep your room warm.

 

 

 

