Advice on 1st unRAID build.



Hi Guys,

 

I've decided as a little summer project to consolidate all my USB/FW HDs.

 

Also note: OS X deals in base 10, so a 2TB drive shows as 2TB in OS X, just to save confusion later! :P

 

What it HAS to do

- ALL these drives MUST be included in the parity-protected array.

 

1. Time Machine

- I've been made painfully aware that Time Machine won't span disks, so one of the 2TB drives below will be dedicated to it; I will just keep the TM backup size below 2TB.

- Requires a password to gain access, i.e. not 'public'.

 

2. Backup of 2+TB of DSLR photographs/videos.

- Given that I can span disks using different backup software, this is the way forward here.

- I need the ability to just plug in another HD, add it to the user share which holds the photos, and have it sort itself out; no messing about rebuilding arrays.

- Essentially Drobo functionality. You plug another HD in and it dynamically adds it to the array increasing the available space. (See note 1)

- Requires a password to gain access, i.e. not 'public'.

 

3. Streaming of DVDs

- I stream my DVDs/Blu-rays to my PS3.

- This again will have to span drives; however, I'd like to set which drives it spans (see note 2 later).

- Public accessibility, so anyone on my network can access it.

- A sub-requirement of this is PS3 Media Server, which I see you can install, so that is fine. I just need opinions on the CPU/CPU speed required to encode 1080p OK; then again, I could just turn the Mac Pro on when streaming 1080p material.

 

4. Speed.

- Faster the better.

- MUST be a MINIMUM of 60MB/sec writes/reads; this is what FW800 offers me, and my gigabit network has managed 70MBytes/sec transfers.

- It will be connected to an Airport Extreme, if this makes any difference.

 

5. Reliability

- Obviously I'm storing important data, therefore redundancy is needed.

- Will the possibility of dual drive failure redundancy be coming at some point?

 

 

Note 1:

Say we have the following setup:

 

1. 2TB - Dedicated to Time Machine

2. 2TB + 2TB - DSLR Photo/video library - User share span (so looks like one continuous drive to OS X).

3. 1.5TB - DVD collection library - User share span (so looks like one continuous drive to OS X).

 

Now, if I fill up either of 2 or 3 and I wanted to add another 2TB drive to it, can I do that without data loss?

Would it simply be a case of: plug it in, format it in unRAID, add it to that user share, and now I have 6TB available?

 

Note 2:

Can I force specific disks to be used for a user share? I *think* this is possible from what I have read, but want to be sure.

Disk 1: Time Machine only

Disk 2&3: Photos only

Disk 4&5: Videos only

 

So time machine files don't get spread about, or photos don't get put on the videos HD, etc?

 

Possible?

 

 

The hardware

Case: Coolermaster Elite 335 - £29

- 7 x 3.5"

- 4x 5.25" bays, for 5 more HDDs later on with an adapter

 

Mobo: Asus P5B-VM DO - £50

As recommended, also has onboard GPU.

 

PSU: Suggestions?

- As high efficiency as possible!

 

Mem: 2x1GB DDR2

CPU: Suggestions?

- Something that can encode 1080p H264 for playing on a PS3, thinking a Wolfdale or Conroe of some kind?

- Although, I assume CPU speed matters for the parity calculations, so faster is better?

 

 

That's the base hardware, as for the hard disks:

 

Parity: 2TB WD Black 7200RPM (need to buy) - Does a 7200RPM disk really speed things up?

Cache: 1TB Hitachi 7200RPM (already own) - I have a 320GB 2.5" 5400RPM disk floating about, what would the speed difference be between the two?

Backup disks: 3x2TB WD20EARS + 1.5TB Samsung  (already own)

 

So, as I understand it, during the day things will be copied to the cache drive, and at certain points it will then spin up the other HDs, copy the data across and update parity? Can I 'use' the drive during this time? E.g. Time Machine backs up every hour; would setting the cache to be emptied every hour just slow it down or bugger it up?

 

Finally, how does the parity drive work when, say, there is 5x2TB = 10TB? Is it really that good that it can restore data from a 10TB array when 2TB goes missing!?!?!

 

I think that's everything!

 

Apologies for the number of questions, but I need to be sure before spending a few hundred on this project and then finding it doesn't do what I want!

 

Cheers! :)

Link to comment

4. Speed.

- Faster the better.

- MUST be a MINIMUM of 60MB/sec writes/reads; this is what FW800 offers me, and my gigabit network has managed 70MBytes/sec transfers.

 

So a cache drive is a must then.  You'll want a nice fast one, such as a 7200 rpm drive or possibly even an SSD.

 

- Will the possibility of dual drive failure redundancy be coming at some point?

 

It is on the unRAID roadmap, but no telling when it will actually be a reality.  The current solution is to just copy your most critical data onto two separate drives within the server.

 

Now, if I fill up either of 2 or 3 and I wanted to add another 2TB drive to it, can I do that without data loss?

Would it simply be a case of: plug it in, format it in unRAID, add it to that user share, and now I have 6TB available?

 

It basically is that easy, yes.  Most of us would suggest that you preclear any new drive with at least 2 passes before trusting it with any data, but that is up to you.
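
For reference, a typical run of the preclear script looks something like this (the script name and the -c cycle-count option are from memory, so double-check the preclear thread for the current syntax):

preclear_disk.sh -c 2 /dev/sdX    # two full preclear cycles on the new drive, run from the unRAID console

Substitute the actual device name of the new drive, and triple-check it is the right one, since the script wipes the disk.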

 

Note 2:

Can I force specific disks to be used for a user share? I *think* this is possible from what I have read, but want to be sure.

Disk 1: Time Machine only

Disk 2&3: Photos only

Disk 4&5: Videos only

 

So time machine files don't get spread about, or photos don't get put on the videos HD, etc?

 

Possible?

 

Quite possible, and quite easy.  You would use the included/excluded disks option for each share to accomplish this.  In your example, your Time Machine share would have:

 

included disks: disk1

excluded disks: disk2, disk3, disk4, disk5

 

Technically you can leave the excluded disks blank, but I always like to fill them in just to be sure.
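
If you are curious what that looks like under the hood, the per-share settings end up in a small config file on the flash drive; roughly along these lines (the path and key names here are from memory and may differ between versions, so treat it as illustrative and use the web GUI to actually set them):

# /boot/config/shares/TimeMachine.cfg (illustrative only)
shareInclude="disk1"
shareExclude="disk2,disk3,disk4,disk5"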

 

PSU: Suggestions?

- As high efficiency as possible!

 

There are tons of options here.  Start here.  The SeaSonic X Series X650 Gold is probably some serious overkill for your build, but it is top of the line in terms of efficiency and features.

 

Mem: 2x1GB DDR2

 

I would suggest a single 2 GB stick, as it will allow for future expansion.  If you already have two 1 GB sticks lying around, then definitely use them.  But if you are buying new, then a single 2 GB stick is better.

 

CPU: Suggestions?

- Something that can encode 1080p H264 for playing on a PS3, thinking a Wolfdale or Conroe of some kind?

- Although, I assume CPU speed matters for the parity calculations, so faster is better?

 

You'll definitely want a dual core CPU, but past that pretty much anything should work.  You might want to look through the PS3MS thread to see what other people are using.  CPU speed does not affect parity calc speed as any modern CPU is more than fast enough to handle the minuscule load added by a parity calc.  Many pocket calculators could probably handle it  ;)

 

Parity: 2TB WD Black 7200RPM (need to buy) - Does a 7200RPM disk really speed things up?

 

In most cases no, and since you are using a cache drive, definitely no.  Stick with a green parity drive and save money, heat, and power.

 

Cache: 1TB Hitachi 7200RPM (already own) - I have a 320GB 2.5" 5400RPM disk floating about, what would the speed difference be between the two?

 

Either would be a good choice.  I can't give you any exact figures, but the 7200 rpm drive will be something like 5-10 MB/s faster.  I would suggest trying the slower drive first, then upgrading if you find it to be too slow.

 

So, as I understand it, during the day things will be copied to the cache drive, and at certain points it will then spin up the other HDs, copy the data across and update parity? Can I 'use' the drive during this time? E.g. Time Machine backs up every hour; would setting the cache to be emptied every hour just slow it down or bugger it up?

 

That is correct.  The mover script runs (by default) at 3:40 am every day and transfers all closed files (meaning files not currently in use) into the parity-protected array.  You can change that time to whenever you want, and you can also edit the frequency of the transfer (you can have it run every hour if you wish).

 

You can still use all the data both on the cache drive and in the rest of the array during these transfers.  You might notice slightly slower performance, but nothing too significant.

 

Also, the cache drive and data drives will only spin up if there's actually new data to transfer.  So you won't be wasting power if you set the mover script to run every hour but there's only new data every few hours.
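
Under the hood the mover is just a shell script fired by cron, so changing the schedule amounts to changing one cron line; roughly like this (the script path and default time are from memory, so verify on your own box):

# default schedule: 3:40 am every day
40 3 * * * /usr/local/sbin/mover

# hourly instead, if you want the cache flushed more often
# 0 * * * * /usr/local/sbin/mover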

 

Good luck, and have fun with your build!

Link to comment

4. Speed.

- Faster the better.

- MUST be a MINIMUM of 60MB/sec writes/reads; this is what FW800 offers me, and my gigabit network has managed 70MBytes/sec transfers.

 

So a cache drive is a must then.  You'll want a nice fast one, such as a 7200 rpm drive or possibly even an SSD.

 

Well, I guess that 1TB I have will be drafted in as the cache drive then! Not quite flush enough to go SSD on a test project yet ;) The MacBook Pro has to go SSD first!

 

Mem: 2x1GB DDR2

 

I would suggest a single 2 GB stick, as it will allow for future expansion.  If you already have two 1 GB sticks lying around, then definitely use them.  But if you are buying new, then a single 2 GB stick is better.

 

What would the server need more than 2GB for?

 

Either way, the mobo has four slots, and as it's not doing anything stupidly memory-intensive, four sticks would be fine at the 333MHz speed the Wolfdales run at.

 

CPU: Suggestions?

- Something that can encode 1080p H264 for playing on a PS3, thinking a Wolfdale or Conroe of some kind?

- Although, I assume CPU speed matters for the parity calculations, so faster is better?

 

You'll definitely want a dual core CPU, but past that pretty much anything should work.  You might want to look through the PS3MS thread to see what other people are using.  CPU speed does not affect parity calc speed as any modern CPU is more than fast enough to handle the minuscule load added by a parity calc.  Many pocket calculators could probably handle it  ;)

This Parity thing seems quite magical haha.

 

How on earth does it 'back up', say, 10TB (5x2TB) using just 2TB!?!?! I'd imagine the maths behind it is quite impressive.

 

 

Parity: 2TB WD Black 7200RPM (need to buy) - Does a 7200RPM disk really speed things up?

 

In most cases no, and since you are using a cache drive, definitely no.  Stick with a green parity drive and save money, heat, and power.

 

Lovely, that saves me a few quid too!

 

 

Cache: 1TB Hitachi 7200RPM (already own) - I have a 320GB 2.5" 5400RPM disk floating about, what would the speed difference be between the two?

 

Either would be a good choice.  I can't give you any exact figures, but the 7200 rpm drive will be something like 5-10 MB/s faster.  I would suggest trying the slower drive first, then upgrading if you find it to be too slow.

 

Since the 7200RPM disk isn't doing anything, cache it is!

 

 

 

Thanks for your replies, settled my mind quite a bit.

 

Will post back over the summer as I test it!

 

 

Link to comment

What would the server need more than 2GB for?

If you run torrent software or virtual machines on your system, the extra RAM can come in VERY handy.  I will soon be putting a total of 10GB in my machine in preparation for an upgrade so I can run my virtual machines on it.

 

This Parity thing seems quite magical haha.

 

How on earth does it 'back up', say, 10TB (5x2TB) using just 2TB!?!?! I'd imagine the maths behind it is quite impressive.

Actually, the math behind it is quite simple.  Look through the unRAID wiki if you get a chance to find out how parity actually works.  It is an XOR of each bit across every drive, with the result then written to the parity drive.

Link to comment

This Parity thing seems quite magical haha.

 

How on earth does it 'back up', say, 10TB (5x2TB) using just 2TB!?!?! I'd imagine the maths behind it is quite impressive.

 

Think of it this way.  If A + B + C = D,  then as long as I know any 3 of those 4,  I can determine what the missing one is.

 

If 2 + 3 + 4 + 5 = 14  --- then if I say...  2 + what + 4 + 5 = 14  you can figure out "what".  And if I say 2 + 3 + 4 + what = 14 -- you can also figure out "what"

 

Parity just adds the extra "thing" (D in the equation above) needed to allow for the calculation of "what" -- no matter "what" drive fails.

 

And that's why parity just needs to be as big as your biggest drive.  Now the math uses XORs instead of adding up, but it's the same concept.
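
If it helps to see it concretely, here's a tiny Python sketch of the same idea (purely illustrative, nothing unRAID-specific): parity is the XOR of all the drives, and XORing the survivors together with parity regenerates whichever single drive is missing.

from functools import reduce

# three tiny "drives", each holding a few bytes of data
drives = [bytes([1, 0, 0]), bytes([0, 1, 0]), bytes([1, 1, 1])]

def xor_blocks(blocks):
    # byte-wise XOR down each column of bytes
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

parity = xor_blocks(drives)          # this plays the role of the parity drive

# pretend drive 1 dies: XOR the survivors together with parity to rebuild it
survivors = [d for i, d in enumerate(drives) if i != 1]
rebuilt = xor_blocks(survivors + [parity])

assert rebuilt == drives[1]
print("rebuilt drive 1:", list(rebuilt))

The parity drive only ever holds one XOR result per position, which is why it only needs to be as big as the largest data drive, no matter how many data drives there are.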

Link to comment

What would the server need more than 2GB for?

If you run torrent software or virtual machines on your system, the extra RAM can come in VERY handy.  I will soon be putting a total of 10GB in my machine in preparation for an upgrade so I can run my virtual machines on it.

 

Have absolutely no need for that whatsoever! The Mac Pro has 12GB in it for virtualising Windows 7 x64 and Windows XP.

 

The box will be dedicated solely to backing up and streaming media, therefore I should be fine with 2GB.

 

As for the Parity:

 

Wow, that simple.

 

So in theory, couldn't you just make it A + B + C = D + E?

 

Although I suppose B + C could be a number of different combinations so maybe:

 

A + B + C = D as with 1 parity but then for a second simultaneous equation:

 

E + F + G = H.

 

How you generate the second equation however is probably the issue!  :D

Link to comment

If only it were that simple.  Chuck's explanation was a good algebraic analogy, but parity actually works at the level of 0s and 1s.  This means that there are only two possible states (a binary system).  There are two types of parity, even parity and odd parity.  I believe unRAID uses even parity (thanks Joe L!), but it doesn't really matter.  On the data disks, 0s and 1s represent real data.  On the parity disk, 0 represents an even number and 1 represents an odd number.  Here's what an extremely small array in which every disk can only hold a single bit might look like:

 

Disk 1   Disk 2   Disk 3   Parity
  1        0        0        1

 

The XOR calculation is:

 

1 + 0 + 0 = an odd number, so a 1 is written to the parity drive

 

Now say disk 1 dies:

 

Disk 1   Disk 2   Disk 3   Parity
  ?        0        0        1

 

We know that parity is odd, so we have:

 

? + 0 + 0 = an odd number

 

? must equal 1.

 

Now let's try adding a second parity disk:

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  1        0        0         1          1

 

Essentially, the XOR calculation just writes its output (0 for even, 1 for odd) to two disks instead of one.  Again, disk 1 dies:

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  ?        0        0         1          1

 

Based on Parity 1, we already know that the missing disk's data must be 1.  Parity 2 didn't help us at all.  But what if both disks 1 and 2 die?

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  ?        ?        0         1          1

 

XOR calculation is:

 

? + ? + 0 = an odd number

 

We can't solve this; there isn't enough information.  Again, Parity 2 did not help us at all.

 

There are a few schemes in which dual parity is possible.  The one on the unRAID roadmap is called P+Q parity.  I don't claim to understand it completely, but I understand it to work something like this:

 

First of all, it won't work on extremely small disks like above.  We have now upgraded to huge disks that can hold a whopping 3 bits each!  Here's our healthy array:

Disk 1   Disk 2   Disk 3   Parity 1
  1        0        0         1
  0        1        0         1
  0        0        0         0

 

In unRAID the parity drive can be the same size as a data drive, and a single parity configuration (called P parity) works.  However, what if the parity drive were required to be some percentage larger than the largest data drive?  We could employ a little trick to get a second parity calculation (called Q parity):

 

Disk 1   Disk 2   Disk 3   Parity
  1        0        0        1
  0        1        0        1
  0        0        0        0
                                   0

 

The numbers along the diagonal (Disk 1 in the first row, Disk 2 in the second row, Disk 3 in the third row) are added up to get another XOR calculation, which in this case is 0 (even).  Now say this new Q parity result is written to a second parity disk instead:

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  1        0        0         1
  0        1        0         1
  0        0        0         0
                                         0

 

Why stop there?  We can actually run Q parity calculations on each of the results from the original P parity:

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  1        0        0         1
  0        1        0         1
  0        0        0         0          0
                                         0

 

If a single drive dies, then P parity works as before and the Q parity is not needed.  But if two drives die (disks 1 and 2), we can recover both:

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  ?        ?        0         1
  ?        ?        0         1
  ?        ?        0         0          0
                                         0

 

The recovery equations look like this:

 

P Parity

a. ? + ? + 0 = an odd number

b. ? + ? + 0 = an odd number

c. ? + ? + 0 = an even number

 

Q Parity

d. ? + 0 + 0 = an even number

e. ? + ? + 0 = an even number

 

In equation d. we can easily figure out that ? must equal 0.  So let's plug that back into equation a.:

 

P Parity

a. ? + 0 + 0 = an odd number

 

We now can figure out that ? must equal 1.  So a. and d. are complete, and we have achieved at least partial data recovery:

 

P Parity

a. 1 + 0 + 0 = 1 (data recovered!)

b. ? + ? + 0 = an odd number

c. ? + ? + 0 = an even number

 

Q Parity

d. 0 + 0 + 0 = 0 (data recovered!)

e. 1 + ? + 0 = an even number

 

Equation e. now becomes easy to solve:

 

e. 1 + ? + 0 = an even number

 

? must equal 1

 

Which then allows us to solve equation b.:

 

b. ? + 1 + 0 = an odd number

 

? must equal 0.

 

Check us out, we have achieved almost complete data recovery!

 

P Parity

a. 1 + 0 + 0 = 1 (data recovered!)

b. 0 + 1 + 0 = 1 (data recovered!)

c. ? + ? + 0 = an even number

 

Q Parity

d. 0 + 0 + 0 = 0 (data recovered!)

e. 1 + 1 + 0 = 0 (data recovered!)

 

In this example, we will never be able to solve equation c. because neither P nor Q parity has given us enough to work with.  That's where my understanding of P+Q parity ends.  How do you solve for the diagonal rows that have fewer than three digits?  The only possible solution that I can think of would be to disallow writes to those rows (thereby losing some disk space), though I'm sure there's some more elegant answer.

 

Well, I hope I got that all correct, and I hope that helps clarify why simply adding a second parity disk using the same P Parity scheme won't help anything.  P+Q Parity is on the horizon, and we are all excited to see it.

 

To all who are more experienced than I or actually know what they are talking about, kindly correct any mistakes I've made and explain to me how equation c. could be solved.  :)

Link to comment

I understand that, thanks for the explanation.

 

 

I have another Q tho.

 

I have 6GB of ECC DDR3 which was pulled from my Mac Pro; given that stability is key, ECC memory would actually be beneficial.

 

On the Intel side only the Xeons support ECC and they are ridiculously expensive still so that idea is out.

 

What about AMD, what AMD chips/motherboards support ECC DDR3?

 

Given that I'd need to spend money on memory anyway, I could use the memory that is lying around for this...

Link to comment

 

On the Intel side only the Xeons support ECC and they are ridiculously expensive still so that idea is out.

 

LoL... sorry, that made me snicker coming from a Mac user. I always find Macs to be ridiculously overpriced. But I will admit, it is some very nice, stable hardware.

 

On the serious side, I do not know the specs on your RAM. If you have x8 chips on that RAM,

this board http://www.newegg.com/Product/Product.aspx?Item=N82E16819117226 and

this Xeon http://www.newegg.com/Product/Product.aspx?Item=N82E16819117226

would be a rock solid server for under $400 shipped.

 

If your RAM is not compatible, you could go the i3 route with UDIMMs for about the same price, if not less.

Link to comment

 

On the Intel side only the Xeons support ECC and they are ridiculously expensive still so that idea is out.

 

LoL... sorry, that made me snicker coming from a Mac user. I always find Macs to be ridiculously overpriced. But I will admit, it is some very nice, stable hardware.

 

On the serious side, I do not know the specs on your RAM. If you have x8 chips on that RAM,

this board http://www.newegg.com/Product/Product.aspx?Item=N82E16819117226 and

this Xeon http://www.newegg.com/Product/Product.aspx?Item=N82E16819117226

would be a rock solid server for under $400 shipped.

 

If your RAM is not compatible, you could go the i3 route with UDIMMs for about the same price, if not less.

 

Xeons are always expensive, PC or not  ;)

 

The idea is to either save some money or not spend much more than I would otherwise.

 

Right, I've decided on this lot:

 

Asus M4A78LT-M 760G (Socket AM3) DDR3 Motherboard: £38.00

BeQuiet Pure Power L7 530W Power Supply: £35.82 (Not *quite* Bronze rated, but comes VERY close to the cheapest Bronze PSU I could find; it has 'poor' full-load efficiency, which is why it isn't Bronze!)

Coolermaster Elite 334 Midi Case - Black: £25.82

AMD Athlon II X2 Dual Core 245 2.90GHz (Socket AM3) - OEM: £20.82

Total (inc VAT): £144.55

 

All I need is a CPU cooler (B-grade CPU!) and I should be sorted.

 

Memory I have already (6GB of ECC DDR3!)

Link to comment

In this example, we will never be able to solve equation c. because neither P nor Q parity has given us enough to work with.  That's where my understanding of P+Q parity ends.  How do you solve for the diagonal rows that have fewer than three digits?  The only possible solution that I can think of would be to disallow writes to those rows (thereby losing some disk space), though I'm sure there's some more elegant answer.

 

I was having trouble with the same thing, but I think I finally wrapped my head around it.

 

See:

http://lime-technology.com/forum/index.php?topic=7874.msg81542#msg81542

Link to comment

Right, ordered the above plus:

 

Arctic Cooling Freezer 7 Pro Rev 2 CPU Cooler: £13.32

Total inc VAT: £160.54

 

As well as a SanDisk 2GB Cruzer Micro U3 Smart USB 2.0 Flash Drive for unRAID to reside on.

 

Memory is the 6GB I pulled out of my Mac Pro; I will use two of the three sticks in the server, for 4GB.

 

Plan is:

 

Undervolt the CPU as much as possible - testing with Prime95 in Windows XP.

Once that has managed two days of stability testing, I will then boot unRAID and start testing that, initially with just the free version.

 

Once I'm happy it's running properly, I will buy the Plus version and populate the array!

 

Good times :)

Link to comment

May I suggest running memtest86 for a stint as well as Prime95? (It's on the USB stick when you build it.)    I think it'll work the memory interfaces in the processor a bit harder - Prime95 is a good shakedown for general CPU stuff though.

Link to comment

May I suggest running memtest86 for a stint as well as Prime95? (It's on the USB stick when you build it.)    I think it'll work the memory interfaces in the processor a bit harder - Prime95 is a good shakedown for general CPU stuff though.

 

It's ECC memory, so it *should* be fine; whether ECC is actually enabled, however, is the question. It should be, though.

 

I will test anyway, just to be sure.

Link to comment

There are two types of parity, even parity and odd parity.  I believe unRAID uses odd parity, but it doesn't really matter. 

You are right, it doesn't really matter for the great explanation.

But for the record, unRAID uses even parity.

Link to comment

In this example, we will never be able to solve equation c. because neither P nor Q parity has given us enough to work with.  That's where my understanding of P+Q parity ends.  How do you solve for the diagonal rows that have fewer than three digits?  The only possible solution that I can think of would be to disallow writes to those rows (thereby losing some disk space), though I'm sure there's some more elegant answer.

 

I was having trouble with the same thing, but I think I finally wrapped my head around it.

 

See:

http://lime-technology.com/forum/index.php?topic=7874.msg81542#msg81542

 

Thanks.  I'm not quite sure I get that example, since parity doesn't use numbers like 2, 3, and 4, only 0 and 1, but I guess I can see the point there.  I talked to a few of my math genius friends last week and they explained it to me.  Basically, each incomplete diagonal line wraps around and starts at the top again, like this:

 

Disk 1   Disk 2   Disk 3   Parity 1   Parity 2
  1        0        0         1
  0        1        0         1
  0        0        0         0          0
  1        1        0         0

 

Or, to put it graphically, you are mapping a Cartesian coordinate system onto a torus!

Link to comment

The example is a bit misleading, since it's not long enough to show where each DP starts and ends, nor does it even denote where each DP starts and ends -- however the example isn't that important.  The meat of it is in the text:

 

"However, since each diagonal misses one disk, and all diagonals miss a different disk, then there are two diagonal parity sets that are only missing one block."

 

I should make a better example with different colors for each DP showing where they start and end -- but I think you'll have it figured out before I get around to it.

Link to comment

brainbone - I did figure it out (or rather had it explained to me), see my example in the post above yours.

 

As far as I can see, "wrapping to the top" won't solve the issue (except when you reach the last sectors of the disk) -- it's no different than continuing down a few more sectors.  

 

What matters is what disks are included in the Diagonal Parity (DP).  Each DP excludes a different disk from its XOR, unlike Row Parity (RP) -- sometimes RP is excluded from DP, sometimes one of the data disks, you just exclude as you roll through.  This simple step of excluding a different drive from each successive DP should allow you to always find a usable DP (you may need to recursively use the DP and RP of other rows to recover a given DP) to rebuild enough row data to be able to finally use RP to finish reconstruction in any double disk failure.  Or, as the original author wrote much more eloquently than I; each diagonal misses one disk, and all diagonals miss a different disk.

 

In your example above, if, say, data disks 1 and 2 fail, you won't be able to rebuild from DP for a sector on disk 1 or disk 2, because both drives are included in the XOR.  But, if either one of them were excluded from DP, you would be able to rebuild using DP for the one that wasn't.  Then the one that was excluded could be rebuilt using RP.

 

Read this document, starting on page 6, "4 Row-Diagonal Parity Algorithm".  You should be able to stop at "5 Proof of Correctness".  Read the last 3 paragraphs in section 4 over until it sinks in -- took me a few times.
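
For anyone who wants to play with the idea, here is a small Python sketch of that row-diagonal scheme (my own toy illustration under the assumptions above, not how unRAID or any shipping product actually lays data out): row parity plus a set of diagonals that each skip a different disk, recovered by repeatedly solving whichever row or stored diagonal is missing only one block.

import random
from functools import reduce

p = 5                        # a prime: p-1 data disks, 1 row-parity disk, p-1 rows
DATA = list(range(p - 1))    # disks 0..3 hold data
ROWP = p - 1                 # disk 4 holds row parity
ROWS = p - 1

def xor(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

random.seed(1)
disk = {d: [random.randint(0, 1) for _ in range(ROWS)] for d in DATA}
disk[ROWP] = [xor(disk[d][r] for d in DATA) for r in range(ROWS)]    # row parity (RP)

# block (row r, disk i) sits on diagonal (r + i) % p; diagonal d misses disk
# (d + 1) % p entirely, and diagonal p-1 is simply never stored.
def diag_members(d):
    return [((d - i) % p, i) for i in range(p) if (d - i) % p != p - 1]

diag_parity = [xor(disk[i][r] for r, i in diag_members(d)) for d in range(p - 1)]

# simulate a double failure, then "peel": any row or stored diagonal with exactly
# one missing block can be solved from its parity, which unlocks the next one.
lost = {1, 2}
known = {(r, i): disk[i][r] for i in range(p) for r in range(ROWS) if i not in lost}

progress = True
while progress:
    progress = False
    for r in range(ROWS):                                  # rows against row parity
        missing = [(r, i) for i in range(p) if (r, i) not in known]
        if len(missing) == 1:
            known[missing[0]] = xor(known[(r, i)] for i in range(p) if (r, i) in known)
            progress = True
    for d in range(p - 1):                                 # stored diagonals
        missing = [m for m in diag_members(d) if m not in known]
        if len(missing) == 1:
            known[missing[0]] = diag_parity[d] ^ xor(known[m] for m in diag_members(d) if m in known)
            progress = True

for i in sorted(lost):
    rebuilt = [known[(r, i)] for r in range(ROWS)]
    assert rebuilt == disk[i]
    print(f"disk {i} rebuilt:", rebuilt)

The property doing all the work is the one quoted above: each diagonal misses a different disk, so after a double failure there is always some diagonal with only a single block missing, and solving it cascades through the rows and remaining diagonals until both disks are rebuilt.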

 

(Edited to correct some glaring errors in my explanation)

 

Link to comment

Right, most of the kit arrived yesterday and for the last 14 hours it's been pre-clearing the first two WD drives, one 1TB & one 2TB EARS.

 

So I guess a build log is in order, although I'm still waiting for the USB stick I'm going to use to arrive, as well as SATA cables; the mobo only came with two, should have checked this really!

 

Anyway

 

I can confirm that the 'Asus M4A78LT-M' works.

 

Network, graphics, etc. are fine. The only thing I haven't tested yet is WOL (Wake-on-LAN).

 

I will try and run it to level two support, and also confirm all the level three requirements, but without the 13+ drives thing (basically run the parity check at the start, middle and end of two months).

 

However, that will have to wait until September, because I move out in June and the device will have to be unplugged :P

 

 

Edit 1: Network performance: I've seen peaks of 90MB/sec and sustained rates of about 65MB/sec to my Time Machine disk. Good times!

Link to comment

Edit 1: Network performance: I've seen peaks of 90MB/sec and sustained rates of about 65MB/sec to my Time Machine disk. Good times!

 

That's with no parity disk installed, right?

 

Correct.

 

I need to wait for some SATA cables before I can get the thing running properly.

 

Time Machine is set up, as well as my photo library backup.

 

This thing is *quality*.

 

 

Edit 1:

 

Wake-on-LAN works; that means this motherboard works 100%, amazing.

 

Managed 80MB/sec sustained transfer of big files and 40-50MB/sec of my photo library files (5-30MB photos).

 

Edit 2:

 

Scratch that, setting two transfers going has given 80MB+/sec sustained and peaks of over 100MB/sec! That is very nearly saturating gigabit!

Link to comment

Well, I think my performance was short lived.

 

Transferring 2MB files results in... 10MB/sec with peaks of 30MB/sec.

 

It *might* be because I have a disk preclear in progress... we shall see.

 

Also, is my memory usage OK?

 

             total       used       free     shared    buffers     cached
Mem:       3627168    3593144      34024          0     635796    2800800
-/+ buffers/cache:     156548    3470620
Swap:            0          0          0

 

 

My system isn't fully operational yet; I have been fighting Time Machine to get it all to run correctly :/

 

Still need to transfer all my DVDs to it and let the parity disk preclear. In addition to that, I need some help making sure the server is running correctly and fixing any errors [if there are any], then I'll turn the parity disk on.

 

Once all that is done, I will do a build log.

 

Edit: Anybody know WTF is going on here: http://lime-technology.com/forum/index.php?topic=12927

 

Link to comment
