
Does dual channel matter much in unRAID?


RockDawg


I'm having a hard time imagining how an unRAID server could be memory bound. It's disk-intensive, and disk I/O is a few orders of magnitude slower than memory. I'm guessing that a faster bus would help the most.

 

Unfortunately, we are not dealing with a simple performance problem.  Just as cars have 0-60, 1/4 mile, top speed, skidpad, braking, slalom, and now fuel efficiency as distinct measures of performance, we have the same thing with unRAID - "performance" is ill-defined: are we talking about write performance (the internal bus and CPU are tasked) or read performance (mostly the network)?  To take that further, are we looking for the fastest read transfer of a single file (probably not, since most of our transfers are media, so anything faster than we need is wasted - if unRAID were a network backup server we would have a different set of performance criteria), or are we looking for the largest number of same-sized transfers (i.e. X HD streams until it hiccups)?  For many of us, write performance is a non-issue since we can always disable parity, copy files onto the server, then let it calculate parity overnight.  For reads, I want to know that I can stream at least one HD and two SD streams - anything more than that is gravy.

 

Once I get all my parts pulled together and build the darn thing (decided on the board, case, and PS today!) I will conduct some experiments to figure out some of these oft-asked questions.

 

I personally find top read transfer an unsatisfying metric (like top speed in a car), so I will be proposing a new one: the number of simultaneous HD transfers until hiccup (assuming each file is on a different drive), the same for SD, and a combo metric (the number of simultaneous HD and SD streams of specific formats).  Thus we could brag, "I can stream 2/8/1+6 with no problem!!"
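For what it's worth, here's a rough sketch (in Python) of the kind of test that metric implies: read several files at a fixed bitrate in one-second ticks and count the ticks where the server couldn't keep up. The paths and bitrates are placeholders for illustration, not anything unRAID-specific.

```python
# A rough sketch of a "streams until hiccup" test: each thread reads its file
# at a target bitrate in one-second ticks and counts any tick where the read
# fell behind. The paths and bitrates below are illustrative placeholders.
import threading
import time

def stream(path: str, mb_per_s: float, seconds: int, results: list) -> None:
    chunk = int(mb_per_s * 1024 * 1024)      # bytes we must read each second
    hiccups = 0
    with open(path, "rb") as f:
        for _ in range(seconds):
            start = time.monotonic()
            data = f.read(chunk)
            elapsed = time.monotonic() - start
            if len(data) < chunk or elapsed > 1.0:   # short read or fell behind
                hiccups += 1
            else:
                time.sleep(1.0 - elapsed)            # pace to the target bitrate
    results.append((path, hiccups))

# Example: two HD streams (~4 MB/s each) and one SD stream (~1 MB/s),
# each pointed at a file on a different drive.
jobs = [("/mnt/disk1/hd1.mkv", 4.0), ("/mnt/disk2/hd2.mkv", 4.0),
        ("/mnt/disk3/sd1.avi", 1.0)]
results: list = []
threads = [threading.Thread(target=stream, args=(path, rate, 60, results))
           for path, rate in jobs]
for t in threads:
    t.start()
for t in threads:
    t.join()
for path, hiccups in results:
    print(f"{path}: {hiccups} hiccup(s) in 60 seconds")
```

Point it at files big enough to outlast the test window, then keep adding streams until the hiccup count goes nonzero - that's your number.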

 

Back to the original topic, I *suspect* that memory will become an issue as we have multiple simultaneous streams since buffering should become more critical, but that remains to be proven ...

 

 

Bill


"performance" is ill-defined: are we talking about write performance (the internal bus and cpu are tasked) or read performance (mostly the network)?

 

But, again, how can you possibly give the memory and CPU more data than they can handle? On writes you're pulling from multiple drives, so your I/O bus is going to be busy. Even once you get the bits in memory, doing a simple XOR on them takes no effort, and then the data is never looked at again. With DMA, reads don't even involve the CPU.
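Just to make the "simple XOR" point concrete, this is all the arithmetic parity amounts to (a toy sketch, not unRAID's actual code): the parity block is the byte-wise XOR of the same-offset block from every data drive, and any single missing block can be rebuilt by XOR-ing parity with the survivors.

```python
# Toy illustration of XOR parity (not unRAID's actual implementation):
# the parity block is the byte-wise XOR of the corresponding block on
# every data drive, so updating it is trivial work for the CPU.
def xor_parity(blocks: list[bytes]) -> bytes:
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three "drives", each contributing a 4-byte block.
drive_blocks = [bytes([0x0A, 0x03, 0xFF, 0x00]),
                bytes([0x06, 0x05, 0x0F, 0xAA]),
                bytes([0x01, 0x0F, 0xF0, 0x55])]
parity = xor_parity(drive_blocks)

# Losing any one block is recoverable by XOR-ing parity with the rest.
recovered = xor_parity([parity, drive_blocks[1], drive_blocks[2]])
assert recovered == drive_blocks[0]
```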

 

By the way, I agree with you that a more real-world metric would be useful. Unfortunately I've found that a million things factor into hiccups, not just speed.

 

Back to the original topic, I *suspect* that memory will become an issue as we have multiple simultaneous streams since buffering should become more critical, but that remains to be proven ...

 

But even single-channel memory is much, much faster than your network could ever be. Here are some numbers from the net:

 

DDR2-800 (single channel): 6400 MB/s

PCIe 1.x (per lane): 250 MB/s

SATA II: 300 MB/s

Gig-Ethernet: 125 MB/s

 

We would need to be doing something pretty heavy-duty like encryption to tax the main memory. Simple parity computations aren't going to do it. Like I said, I suspect that any buffering problems will be from the I/O bus and disks not being able to fill the buffer fast enough, not main memory being too slow.
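Just to put rough numbers on that (a back-of-the-envelope sketch using the figures above, with an assumed ~4 MB/s bitrate for an HD stream):

```python
# Back-of-the-envelope arithmetic using the figures quoted above, plus an
# assumed ~4 MB/s per HD stream. Theoretical peaks, not benchmark results.
MEMORY_MB_S    = 6400   # single-channel DDR2-800
GIGE_MB_S      = 125    # gigabit Ethernet
HD_STREAM_MB_S = 4      # assumed bitrate of one HD stream

print(f"Memory outruns the gigabit link by ~{MEMORY_MB_S // GIGE_MB_S}x.")
print(f"The network tops out around {GIGE_MB_S // HD_STREAM_MB_S} HD streams.")
print(f"Memory alone could feed ~{MEMORY_MB_S // HD_STREAM_MB_S} such streams.")
```

Even allowing generous overhead, the gigabit link and the disks saturate long before single-channel memory does.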

 

All that said, I'm looking forward to your experiments. I hope you prove me wrong. I'd much rather upgrade my memory than my motherboard and network. :)

