[SOLVED] Why's my unRAID server so slow?



I'm testing unRAID to see if it is a viable replacement for my existing Ubuntu server running RAID 6 using mdadm.  One key thing I'm looking for is a system that uses less power, and unRAID seems like a good solution.  Another thing I want is gigabit speed, and that I'm not getting. 

 

To this end I have purchased the following equipment:

Intel i3-4130T

ASRock Z97 Extreme4 LGA 1150 Intel Z97 MB

4 WD Green 4TB drives

8GB DDR3

 

My old Ubuntu server has an AMD Phenom II X4 810, a mishmash of old and new 2TB drives and an ancient 250GB drive for root, and 8GB of the same memory as the unRAID box. 

 

From Windows I can copy large files, 1GB - 4GB in size, to the Ubuntu box and sustain speeds of over 100MB/s.  Longer copies, those in the 20GB range, often slow down to about 70MB/s over time.  When I copy 4GB files to the unRAID server, it starts ok, but slows down to around 36MB/s pretty quickly and there it stays.  So pretty sucky performance overall. 

 

There's nothing about the older system that should be better than the newer one.  Benchmarks show the i3 is faster than the old Phenom, there's the same amount and type of memory, both use onboard SATA and ethernet, and so on.  The hard drives are different, but the ones in the older system are upwards of 5 years old, so I doubt these Green drives are really any slower, and certainly not so slow that they can't keep up with a gigabit network feeding them data.

 

So does anyone have any idea where the bottleneck might be?  Are there some settings I'm missing in unRAID?  I'm attempting to install Ubuntu on the fourth drive so I can do some testing with it on the same hardware and see how it compares, but that is proving surprisingly difficult for some reason. 

Link to comment

unRAID is never fast when writing data to the drives (it is optimised for acting as a media server, where reading is more important).  This is inherent in the way it maps data to drives and uses the parity drive.  The sort of speed you quote is not atypical for an unRAID system that has a parity drive and no cache drive.

Link to comment

I am guessing you are testing with a parity drive?

 

If it is write speed you want, like the numbers you are quoting, you have a few options:

 

1) do not use a parity drive (probably not a wise solution; drive failure will mean instant data loss)

2) use a parity drive and come to terms with the fact that write speed will be somewhere between 20 and 40 MB/s.

3) use a parity drive and a cache drive (that will give you maximum write speed because it delays writing files to the array; the downside is that failure of the cache drive itself will mean instant data loss for everything still on it; the upside is that when you use it you will not notice it is there: everything will look like it is on your array, but write speeds will be at maximum).

 

Option 3 is what most people use. With the new beta version there are also options to mirror your cache drive, so you can actually lose the downside and keep the upside.

Link to comment

Thanks for the feedback.  I'm using a 3 drive system with the free license, so two storage drives and a parity drive, no cache.  Only option for testing purposes that I can see.  This system will primarily be used as a media server, plus run crashplan and a few other minor functions. 

 

Well with regards to Helmonder's post:

1.  Obviously I want parity, without it unRAID seems kind of pointless.  It would be a lesser LVM/JBOD solution IMO. 

2.  This surprises me a bit.  After all using software RAID on linux there is no cache disk, and there's technically more work to do, especially with RAID 6.  So I certainly wasn't expecting a performance hit, especially with the decent level of hardware I threw at it.  But somehow software RAID is 2.5 times faster.  That is odd. 

3.  I have absolutely no qualms about using a cache drive.  I'd throw an ssd at it if it mattered, although I doubt it does.  For my use case the potential unreliability of a single point of failure in the cache drive isn't a major concern.  Anything I lose in one day's worth of transfers could be recovered.  I'd just like to verify that this will be as fast before I spend the money on it. 

 

So I think unRAID will be a good solution for me.  I just need to figure out how the return policy works for registration keys so I can purchase a pro license and give it a proper test. 

Link to comment

2.  This surprises me a bit.  After all using software RAID on linux there is no cache disk, and there's technically more work to do, especially with RAID 6.  So I certainly wasn't expecting a performance hit, especially with the decent level of hardware I threw at it.  But somehow software RAID is 2.5 times faster.  That is odd. 

It is actually a side-effect of the fact that unRAID maintains each disk as a single file system and only spins up those drives actually required.

 

To write a sector to a data disk the sequence is:

  • Read a sector from the data disk to get current contents
  • Read a sector from the parity drive to get current contents
  • Write the new sector to the data disk
  • Using the results from the previous three steps, work out what the new value for the parity disk sector should be and then write that sector to the parity drive

One of the performance options that has been examined is to force all drives to be spun up whenever data needs to be written; then, when writing a sector to a data disk, all the other data disks are read to calculate the new value for the corresponding sector on the parity disk.  This avoids the need to do the first two reads mentioned above, but at the expense of keeping all drives permanently spinning. 
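To make the difference concrete, here's a rough sketch of both strategies in Python, using single "sectors" as ints and XOR as the parity function.  This is just an illustration of the idea, not unRAID's actual code:

```python
# Rough sketch of the two parity-update strategies. Illustration only.

def read_modify_write(old_data, old_parity, new_data):
    # Read the old data sector and the old parity sector, then compute the
    # new parity: 2 reads + 2 writes, but only two drives need to be spinning.
    new_parity = old_parity ^ old_data ^ new_data
    return new_data, new_parity

def full_stripe_write(other_data_sectors, new_data):
    # "All drives spun up" variant: read the same sector from every other
    # data disk and recompute parity from scratch, skipping the two reads
    # above at the cost of keeping every drive spinning.
    new_parity = new_data
    for sector in other_data_sectors:
        new_parity ^= sector
    return new_data, new_parity

# Example: three data disks plus parity, updating disk 0's sector.
disks = [0b1010, 0b0110, 0b0001]
parity = disks[0] ^ disks[1] ^ disks[2]

_, p1 = read_modify_write(disks[0], parity, 0b1111)
_, p2 = full_stripe_write(disks[1:], 0b1111)
assert p1 == p2  # both strategies agree on the new parity value
```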

Link to comment

Thanks for the feedback.  I'm using a 3 drive system with the free license, so two storage drives and a parity drive, no cache.  Only option for testing purposes that I can see.  This system will primarily be used as a media server, plus run crashplan and a few other minor functions. 

 

Well with regards to Helmonder's post:

1.  Obviously I want parity, without it unRAID seems kind of pointless.  It would be a lesser LVM/JBOD solution IMO. 

2.  This surprises me a bit.  After all using software RAID on linux there is no cache disk, and there's technically more work to do, especially with RAID 6.  So I certainly wasn't expecting a performance hit, especially with the decent level of hardware I threw at it.  But somehow software RAID is 2.5 times faster.  That is odd. 

3.  I have absolutely no qualms about using a cache drive.  I'd throw an ssd at it if it mattered, although I doubt it does.  For my use case the potential unreliability of a single point of failure in the cache drive isn't a major concern.  Anything I lose in one day's worth of transfers could be recovered.  I'd just like to verify that this will be as fast before I spend the money on it. 

 

So I think unRAID will be a good solution for me.  I just need to figure out how the return policy works for registration keys so I can purchase a pro license and give it a proper test.

 

I use an SSD as a cache drive, not so much because it is a lot quicker, but mostly because it does not need to spin up...

 

Unraid is not RAID (hence the name).

 

RAID is faster than a regular write because it stripes a file across different disks (every disk holds part of the file, but no disk holds everything). Pro: quick. Con: a single drive is utterly useless on its own; you need the whole array up to access any data, and the whole array also needs to be spun up when accessing a single file. Also, when you lose 2 drives in a RAID 5 array all data is lost; the system can only cope with one drive failure (there are also RAID setups that can cope with 2 drives failing, but even in those cases, if a third drive fails you lose ALL data, including the data on the disks that were fine). Further con: you mostly need similar drives (same size/type).

 

Unraid writes complete files to one disk and then writes parity to the parity drive; this means an individual write involves roughly twice the writing, and then there is the overhead of calculating the parity.

Con: it is slower

Pro: You can pull out any drive, put it in your windows system, and access and use the files. The parity will cope with one drive failing and you will not lose any data, but say you have a 10 drive system and 9 drives fail: the data on disk 10 is still perfectly accessible. You can also mix and match any type and size of drive (only constraint: your biggest drive needs to be the cache drive).
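To illustrate the "one drive failing" point, here is a tiny Python sketch (the byte strings are made up and just stand in for whole drives) showing how XOR parity lets you rebuild a lost disk from the survivors:

```python
# Single-parity recovery: XOR the surviving data disks with the parity
# "disk" to rebuild the one that failed. Illustration only.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_disks = [b"films-on-dsk1", b"films-on-dsk2", b"films-on-dsk3"]
parity = data_disks[0]
for d in data_disks[1:]:
    parity = xor_bytes(parity, d)

# Say disk 2 (index 1) dies; rebuild it from the other disks plus parity.
rebuilt = parity
for i, d in enumerate(data_disks):
    if i != 1:
        rebuilt = xor_bytes(rebuilt, d)

assert rebuilt == data_disks[1]  # the lost disk's contents are recovered
```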

 

When using the cache drive, the cache is a seamless part of the system: you just copy files to your shares as you would without a cache drive, only unRAID does not write them to the array but to the cache drive (so no parity, no multiple writes, and maximum speed). Nightly, everything on the cache drive is moved to the array where it is safe again, and because this is done nightly (or weekly, or monthly, whatever you want) you do not notice the slower speed.
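For anyone curious what "moved to the array" means mechanically, here is a very rough Python sketch of the move-then-delete pattern described above. This is not unRAID's actual mover script, and the mount points are just example paths:

```python
# Sketch of a mover-like job: files land on the fast cache disk first,
# and a scheduled task later moves them onto the parity-protected array.
import shutil
from pathlib import Path

CACHE = Path("/mnt/cache")   # example cache mount point (assumption)
ARRAY = Path("/mnt/user0")   # example array mount point (assumption)

def run_mover():
    for src in list(CACHE.rglob("*")):          # snapshot the tree before moving
        if src.is_file():
            dest = ARRAY / src.relative_to(CACHE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))    # copy to array, remove from cache
```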

 

If you want to assess your speed with a cache drive, just disable parity in your test system; the speed you get then is the speed you will get with a cache drive (possibly a bit faster if you use a faster drive/SSD).

Link to comment

I've gone over a lot of the pros and cons of unRAID vs. RAID5/6.  I just wasn't expecting a speed hit like this, but I think the cache will fix that.  My concern with the speed is that some days I sit at my desktop and rip a number of blurays and then copy them to my server.  That takes a lot of time as it is, and I don't want to increase it by 2.5x, so I'm just being cautious before making a final decision. 

 

There's much about unRAID that I really appreciate.  I do like that it spins down the drives when not in use, and only activates the needed drives.  The structure, and the fact that drives can be removed and read by another linux system, is nice also.  Adding and swapping drives that are different sizes is a huge advantage to me.  That's what I want most.  Any time drive sizes increase and I want to add to my existing server, I have to decide whether to continue with the existing size I have, redo the entire array with larger drives, or set up two arrays and lose multiple drives in each to parity.  unRAID makes that a much easier transition. 

 

I'm moving from an Ubuntu system with 9 2TB drives in a RAID 6 array.  I have purchased 4 4TB drives for the new system.  I should be able to set up the array with 2 drives, parity and cache, and move most of the data over (about 8TB at the moment).  Then I can use one of those 2TB drives as the cache, and a few more of them in the array. 
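As a sanity check on how long that initial copy could take, here's a quick back-of-the-envelope calculation using the speeds mentioned earlier in the thread (8TB of data, ~100MB/s vs ~36MB/s):

```python
# Rough copy-time estimate; 8 TB is the data being migrated, and the two
# speeds are the figures quoted earlier in the thread.
def hours(terabytes, mb_per_s):
    return terabytes * 1_000_000 / mb_per_s / 3600

print(f"8 TB at 100 MB/s: ~{hours(8, 100):.0f} hours")  # ~22 hours
print(f"8 TB at  36 MB/s: ~{hours(8, 36):.0f} hours")   # ~62 hours
```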

 

Using an SSD for the cache is tempting.  The costs are reasonable these days, and they generate little heat and consume less power.  That's part of what I'm going for with this build.  I want less heat generation, and as an added benefit I'll get less power consumption.  The motherboard I bought has an M.2 slot. 

 

Thanks for all the info guys.  I think this tells me what I need to know.  I'll run some tests sans parity tonight just to make sure there's not some other issue with this new hardware before I make my final decision. 

Link to comment

... (only constraint: your biggest drive needs to be  the cache drive)...

Assume you meant parity drive here instead of cache drive. Actually, the constraint is that no single data drive can be bigger than the parity drive.

 

The cache drive can be any size, so an SSD would work. For caching, it only needs to be able to hold one day's worth of writes in the default setup.

Link to comment

I've gone over a lot of the pros and cons of unRAID vs. RAID5/6.  I just wasn't expecting a speed hit like this, but I think the cache will fix that.  My concern with the speed is that some days I sit at my desktop and rip a number of blurays and then copy them to my server.  That takes a lot of time as it is, and I don't want to increase it by 2.5x, so I'm just being cautious before making a final decision. 

 

There's much about unRAID that I really appreciate.  I do like that it spins down the drives when not in use, and only activates the needed drives.  The structure, and the fact that drives can be removed and read by another linux system, is nice also.  Adding and swapping drives that are different sizes is a huge advantage to me.  That's what I want most.  Any time drive sizes increase and I want to add to my existing server, I have to decide whether to continue with the existing size I have, redo the entire array with larger drives, or set up two arrays and lose multiple drives in each to parity.  unRAID makes that a much easier transition. 

 

I'm moving from an Ubuntu system with 9 2TB drives in a RAID 6 array.  I have purchased 4 4TB drives for the new system.  I should be able to set up the array with 2 drives, parity and cache, and move most of the data over (about 8TB at the moment).  Then I can use one of those 2TB drives as the cache, and a few more of them in the array. 

 

Using an SSD for the cache is tempting.  The costs are reasonable these days, and they generate little heat and consume less power.  That's part of what I'm going for with this build.  I want less heat generation, and as an added benefit I'll get less power consumption.  The motherboard I bought has an M.2 slot. 

 

Thanks for all the info guys.  I think this tells me what I need to know.  I'll run some tests sans parity tonight just to make sure there's not some other issue with this new hardware before I make my final decision.

 

I had similar performance to yours. When I added a cache drive I was averaging 70-100 MB/s writes to the array. So let us know your results, but I suspect you'll be pleased.

Link to comment

Remember the cache drive holds unprotected data, and if it dies the data dies with it. I initially used a cache drive, but despite the gain in speed I opted to go back to a non-cache setup, which works best for me knowing that I am not relying on the mover app to move data I thought was already copied to the array.

Just my 2 cents :)

Link to comment

Remember the cache drive holds unprotected data, and if it dies the data dies with it. I initially used a cache drive, but despite the gain in speed I opted to go back to a non-cache setup, which works best for me knowing that I am not relying on the mover app to move data I thought was already copied to the array.

Just my 2 cents :)

 

This costs more, but you *could* mirror 2 drives for cache if you really wanted. I've got a spare disk I keep in the box that I move in and out of the cache slot depending on how much writing I'm doing. If I'm adding a bunch of new media I'll throw it in. If I'm copying a movie or two I usually leave it out. I then manually hit the mover button if I'm leaving my machine for a while. If I were to mirror the cache drive I'd probably just leave it in permanently.

Link to comment

crowdx42, for my usage I don't think I have to worry about the single point of failure issue.  And I've acquired an M.2 SSD for the cache, so less concern there.  I can hold the data on my desktop until I know that the cache has synced. 

 

vanstinator, the way you use yours sounds similar to how I will probably use mine.  I'll probably leave the cache in most of the time, but it's good to know you can swap it in and out as needed.  It would probably be best to turn off the cache when doing my initial duplication from my old server to unRAID.  It'll be easier to do that and just let it run for hours on end than to worry about only copying 500GB at a time because of the cache drive size, plus it'll cut down on abuse of the cache drive during that time.  Copying 8TB is going to take a long time no matter what.  Might as well have that going on while I'm at work or in bed.  Also, glad to know there's a button to sync the cache at will. 

Link to comment

Cache drives make unRAID so much better in my opinion. Plus the option of using a RAID volume as a cache drive is even better (such as mirroring or striping two SSDs for use as a cache volume).

 

One thing I would also recommend is mapping your cache drive in Windows so that you can watch the usage as you perform writes. I find this useful both for a) knowing the mover is functioning as intended, and b) watching the usage of the cache drive on "big write" days when a few hundred GB might need to be moved.

 

Screenshot example attached. 


Link to comment
