
Performance question


mikester


My Unraid server uses an Asus P5PE-VM motherboard, which was all the rage 6 months ago when I built the system.  Originally I had a single stick of 512 MB DDR400 memory in it, and with 7 drives (6 SATA and 1 IDE) the reported speed during a parity sync averaged 25 MB/s.  About a month ago I came across 1 GB of DDR333 dual-channel memory, so I popped it in (in place of the single stick) and the parity sync speed dropped a little, to 22 MB/s.  I thought this speed was normal, and a little disappointing, because I can't write to the array while someone else is reading from it without causing movies to skip.

 

Well, now I am seeing posts from people with newer motherboards getting 55-60 MB/s parity sync rates.  I'm thinking that if I could get double the performance, that might fix the problem I'm having with reading and writing at the same time.  But first I'm trying to figure out exactly what the bottleneck is.

 

Here is how my system is set up: 2 SATA drives and 1 IDE drive on the onboard ports, and 4 SATA drives on a PCI card.  The processor is a Celeron D 2.66 GHz, which doesn't seem to be a problem based on other posts I've seen.  Originally I thought the memory might be the bottleneck, but now I'm thinking that maybe the PCI bus is (but with only 4 drives on the bus?).

 

I'm thinking of replacing the motherboard with an Asus P5B-E and 1 GB of DDR2 800 RAM.  This would give me 8 onboard SATA + 2 IDE, which should all be attached to the southbridge.  I also have a 2-port PCIe SATA controller I'm not using, so I can throw that in for a total of 12 ports, none of them using the PCI bus.

 

But before I spend the ~$200 that will cost, I want to make sure there is a reasonable chance that will increase my performance.  Also, are there any cheaper options that will still give me better performance?


The rates they are reporting are with two or three drives, I think.  Their performance will be faster with fewer drives to read when calculating parity.

 

Best to compare apples to apples, and at least compare performance with the same number of drives. 

 

Also, your parity performance is limited by your slowest drive (odds are, your IDE drive).  Arrays with all SATA drives will be faster.

 

As you said, there are lots of possibilities.  If possible, assign your fastest SATA drive to parity.  That has the most impact when writing to the array.

 

When reading, only the drive being read is involved... unless simultaneous writing is occurring.  In that case, what happens when you read a different drive than the one being written to?  Do you still have the bottleneck?


I just recently added the single IDE drive to the system and it didn't impact performance, i.e. the parity check speed was about the same as before. 

 

All of the drives are fast.  The parity drive is a Seagate 500GB 7200.10 SATA2 drive.


What version of unRAID are you running?  Early versions gave writes priority over reads; current versions give reads priority over writes.

 

Your network could easily be the bottleneck when doing simultaneous reads and writes, especially if it is running in half-duplex mode (even in full duplex, a 100 Mbit/s link tops out around 12 MB/s).

What are you using for your router/switch?


I just upgraded to 4.0 and it didn't make a difference.

 

Now that I think about it, the times I had problems reading and writing at the same time were probably dealing with the same drive, so that kind of makes sense.

 

One solution to that problem I have used in the past: always write to a single disk designated as your staging disk.  With unRAID, it can be much smaller than the rest.  Then batch-move the files to the correct disk after hours (which will be far faster than the network transfer), as in the sketch below.  That way your prime-time writes are always to a different disk than your reads.
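Here's a rough sketch of the kind of after-hours move I mean, runnable from cron.  The mount points are hypothetical examples - substitute your own staging and destination disks:

```python
#!/usr/bin/env python3
# Batch-move everything from the staging disk to a parity-protected
# disk; meant to run from cron after hours.  Paths are hypothetical.
import shutil
from pathlib import Path

STAGING = Path("/mnt/disk1")   # small, world-writable staging disk
TARGET = Path("/mnt/disk2")    # parity-protected destination disk

for src in sorted(STAGING.rglob("*")):
    if src.is_file():
        dst = TARGET / src.relative_to(STAGING)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy data and timestamps
        src.unlink()            # then remove the staged original
```

Copy-then-delete is deliberate: a plain rename can't cross filesystems, and if a copy fails the staged original is still intact.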

 

The same approach also solves a different problem - security.  If you can secure drives 2-n such that only you can change them, but anyone can write to the staging disk (disk 1), you are basically allowing folks to put files up on the server for review, and then you move them to production.

 

Can you guess that I manage an IT department and was a former computer auditor?  LOL!

 

 

Bill


computer auditor?!?! YIKES, GET HIM! hehehe

 

On a serious note, I really like this idea of the staging drive. As my system grows, I'll have to consider it. Of course, currently, the only one writing to the machine is me anyway and it's not in full production as a server so no issues :)


I would like to see a staging area that is on a drive in the unRAID box but is not parity protected.  This greatly improves the write throughput; then I can move the recordings over to the parity-protected drive at a later time.

 

That would be nice.  It is one of the reasons I previously asked for a JBOD section of the Unraid array - it was already on "ye old laundry list" of enhancements.  I think I listed redundant backups as my key reason for it, but it would be helpful in many ways.

 

 

Bill


Regarding sync performance - the fastest a parity sync can go is the speed of the slowest disk being read (or written, if parity is the slowest).  Virtually all hard drives you would use in unRAID these days are 7200 RPM, so that results in around 55-60 MB/sec, slowing to perhaps 35-40 MB/sec as the sync approaches the inner cylinders, where the data transfer rate drops off considerably.

 

With 2/3/4/+ drives in the system, you should get these rates.  As you start adding more drives, the next bottleneck becomes the shared PCI bus.  For example, with 12 drives it will probably max out around 11-13 MB/sec, because 8 of the 12 drives are sharing a single 133 MB/sec bus.
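A toy model of that, with illustrative numbers (assuming the 133 MB/sec PCI bus delivers only ~100 MB/sec of usable bandwidth after overhead):

```python
# Toy model: a parity sync runs at the slowest drive's rate, further
# capped by each shared bus's usable bandwidth split across the drives
# hanging off it.  All numbers here are illustrative assumptions.
def sync_rate(drive_rates, buses):
    """drive_rates: MB/sec per drive; buses: list of (usable MB/sec, drives on bus)."""
    slowest = min(drive_rates)
    bus_caps = [bw / n for bw, n in buses if n]
    return min([slowest] + bus_caps)

# 12 drives at ~60 MB/sec: 8 share one PCI bus (~100 MB/sec usable),
# 4 sit on the southbridge, which has bandwidth to spare.
print(sync_rate([60] * 12, [(100, 8), (1000, 4)]))  # -> 12.5
```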

 

If you use a motherboard with PCI-X or PCIe disk controllers, you would get a higher sync rate with more drives, but eventually the next bottleneck will be the southbridge/northbridge/memory bandwidth.  I'm not sure where things max out there, since we haven't tried to do this yet.  Finally, the last bottleneck would be the CPU doing parity calculations - but I don't think anything at 2 GHz or more would ever run into this bottleneck.

 

Regarding write performance - here the bottleneck is exclusively the "read/modify/write" penalty associated with RAID, combined with long rotational latency.  One day we'll try to post a note on the wiki that explains this - but in general, take the performance of a single drive and divide by a factor between 3 and 4 to get the maximum sustained write performance.  So for a typical 60 MB/sec drive, the fastest we would be able to sustain is going to be around 15 MB/sec or so.
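Until that wiki note exists, here's the back-of-the-envelope version: to write a chunk, we read the old data and old parity, compute the new parity, and then have to wait for the same sectors to rotate back under the heads before writing the new data and parity.  The chunk size and drive figures below are illustrative assumptions:

```python
# Back-of-the-envelope read/modify/write penalty.  Assumed numbers:
# a 7200 RPM drive streaming 60 MB/sec, updated in 256 KB chunks.
rpm = 7200
rev = 60.0 / rpm             # one revolution: ~8.3 ms
stream = 60.0                # MB/sec sustained transfer rate
chunk = 0.25                 # MB written per read/modify/write cycle

transfer = chunk / stream    # ~4.2 ms to read (or write) the chunk
cycle = transfer + rev + transfer  # read, wait a revolution, write back
print(f"{chunk / cycle:.0f} MB/sec")  # -> 15 MB/sec, i.e. stream / 4
```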

 

Regarding a "staging disk" - this is a solution we plan on implementing as part of the "user share" feature.  The way it would work is that you would be able to configure a second array, either as a 1-drive "array" or a 2-drive mirrored array.  All file creates would go to this array.  Later, a background process could be scheduled to start moving files off this "fast" array to the main unRAID array.  Once the files are moved, the User Share file pointer would point to the new files - from the host perspective it wouldn't notice that the files moved at all.


So it sounds like my best performance will come from spreading out the drives across different controllers on different buses?  I just ordered a P5B-E, and I have a 2 port PCI-e SATA controller and a 4 port PCI SATA controller.  So it seems like if I distribute the drives across all 3 controllers, I should get the best performance, correct?


Regarding a "staging disk" - this is a solution we plan on implementing as part of the "user share" feature.  The way it would work is that you would be able to configure a second array, either as a 1-drive "array" or a 2-drive mirrored array.  All file creates would go to this array.  Later, a background process could be scheduled to start moving files off this "fast" array to the main unRAID array.  Once the files are moved, the User Share file pointer would point to the new files - from the host perspective it wouldn't notice that the files moved at all.

 

 

 

Tom

 

That's great to hear.  Now that 4.0 can handle 14 drives and I only have 12, I know what I'll do with the extra 2 drives.

