Is this performance "typical"?


iomgui


core i3 8100, 16 GB ram

4 disks, including 1 parity disk (same model as all the other disks)

Disks fully tested under Windows 10, with read/write performance of 220 to 250 MB/s (average 240) on big files.

I tested Unraid first without a parity disk; transferring big video files over Ethernet used the full capacity of the 1000 Mbps link, a stable 110 MB/s.

But now with a parity disk, performance is down to around 70 to 80 MB/s (maximum), with global CPU usage stable at 35%, and nothing installed yet (not even the Community Applications plugin). Even though global CPU usage is stable, CPU0 to CPU3 usage varies quickly between 10 and 50% (RAM usage is low, at 8%).

 

I'm not complaining, I just wanted to know if this is to be expected or if I missed something in the configuration?

 

Edited by iomgui

Everything runs as slow as the slowest disk, so check that your disks are on separate buses and not sharing channels. For example, if your parity disk and the disk you are copying a file to share a channel with a maximum throughput of 150 MB/s, you will get roughly half that speed on each. You would need to provide a bit more info on how the disks are plugged in for a more specific answer.
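The shared-channel arithmetic above can be sketched like this (a toy model with illustrative numbers, not measurements from any real controller):

```python
# Sketch: why two disks sharing one channel each see roughly half its bandwidth.
# This is a simplification; real controllers interleave I/O, but the ceiling holds.

def effective_write_speed(channel_max_mb_s: float, disks_sharing: int) -> float:
    """Each disk on a shared channel gets roughly an equal slice of bandwidth."""
    return channel_max_mb_s / disks_sharing

# Parity disk and target data disk on the same hypothetical 150 MB/s channel:
print(effective_write_speed(150, 2))  # 75.0 MB/s per disk
```

A 75 MB/s ceiling per disk would indeed land in the 70-80 MB/s range reported above, which is why the connection layout matters for diagnosis.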


(All the same disks, all the same performance under Windows.) I also tested the bus under Windows by copying files to all the disks at the same time; as I had 4 it was easy to do. That confirmed they were all able to perform at 200 to 250 MB/s read/write simultaneously. For info, they are attached to an LSI 9211-8i in IT mode, in a PCIe x4 slot.

Also, I don't think that is the problem, because when the parity disk was built it took almost a day, but it ran at 200 to 250 MB/s write on the parity disk according to Unraid.

 

Thanks for the input though.

Is CPU load linear with write speed (for the same type of files)? In other words, if Unraid were able to write the file at 160 MB/s and the parity disk at the same time, would the CPU load be around 70% (knowing that at 80 MB/s the CPU load is at 35%)?

Edited by iomgui

It was on auto.

I changed it to reconstruct write and it now writes at the full Ethernet bandwidth, and CPU usage is also down, though not stable like before, varying between 6 and 15%.

 

Many thanks for the tip, it's quite an incredible jump in performance… I figure the previous mode was doing things sequentially, which would explain the halved speed? But I don't understand the big drop in CPU usage… not that I'm complaining… Does it mean the auto mode should be "reconstruct write" instead of "sequential read/modify/write", unless that only concerns a few people or configurations?
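For context, the difference between the two modes can be sketched with simplified single-parity XOR math (my own illustration using integers as stand-ins for disk blocks; the function names are made up and real Unraid operates on whole stripes):

```python
# Simplified single-parity (XOR) update, contrasting the two write strategies.

def read_modify_write(old_data: int, new_data: int, old_parity: int) -> int:
    """Read old data + old parity, XOR the old block out and the new one in.
    Touches only 2 disks, but each needs a read *and* a write (seek-heavy)."""
    return old_parity ^ old_data ^ new_data

def reconstruct_write(all_data: list[int]) -> int:
    """Read every *other* data disk and XOR with the new block, then write
    data + parity. All disks spin, but each does one sequential pass."""
    parity = 0
    for block in all_data:
        parity ^= block
    return parity

# Both strategies must produce the same parity for the same final data:
data = [0b1010, 0b0110, 0b0011]          # 3 data disks
parity = reconstruct_write(data)          # initial parity
new = 0b1111                              # new contents for disk 0
assert read_modify_write(data[0], new, parity) == \
       reconstruct_write([new, data[1], data[2]])
```

The XOR itself is trivially cheap for a CPU, which fits the observation that most of the apparent "CPU load" in read/modify/write mode was likely time spent waiting on the extra disk I/O rather than computation.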

 


Thanks a lot, that's very clear.

Yes, I'm not a native English speaker, nor do I live in an English-speaking country. I'll try to be clearer: in my case, reconstruct write had the following consequences:

1/ an increase in data transfer speed, up to the limit of what my Ethernet could manage (1000 Mbps);

2/ an extreme decrease in CPU load, from 35% (at 80 MB/s) down to 6 to 12% (at 110 MB/s).

That's why I was wondering how it manages a lower CPU load for better data transfer and write speed.

 

I also understand that when migrating a large amount of existing data to Unraid, reconstruct write should be the best mode.

For now I'm only trying to understand how to use Unraid, how it performs, and how I could do what I need with it. Though, when the time comes to transfer my data, I think I will consider building the array without the parity disk and launching the parity build at the end (my data being safe, as my older disks will wait for that last step before being blanked).

 

Thanks again for the help and explanations.

 

7 hours ago, jonathanm said:

Or, let them be your backup. Unraid can reconstruct a failed disk, but it can't help recover a deleted or corrupted file. You still need backups.

This is good advice.

Also, I note that your speeds might indicate the parity disk negotiating a SATA II (300 MB/s) link, as that is roughly the ballpark of the speeds you are seeing. Currently my server is limited in exactly this way due to using an HP port expander that supports SAS 6G but only SATA II.

Edited by Xaero