ATLAS: My Virtualized unRAID Server



Thanks for the replies. Any suggestions for a low-cost/low-power CPU to replace the i3-540? Would the Intel Core i5-650 work? http://www.newegg.com/Product/Product.aspx?Item=N82E16819115220 It doesn't specify VT-d on the Newegg page, but it's listed as a feature here: http://ark.intel.com/products/43546/Intel-Core-i5-650-Processor-(4M-Cache-3_20-GHz). Are that motherboard and 4GB of RAM enough for unRAID and WHS 2011? I'm looking to do many of the things you mentioned in your other post (ripping, burning, HandBrake, iTunes, MediaMonkey, SickBeard). I used to run Windows 7 as my media server, and while I love the parity and simplicity of unRAID, I really miss the flexibility and some features of Windows.

 

Thanks


Good catch. So from this list http://ark.intel.com/search/advanced/?s=t&VTD=true&Sockets=1156 it looks like the lowest cost options are

 

 

Not sure I want to spend the money and power to go quad core. Seems like overkill for my purposes. Thanks again for the help.


My Intel board only lists the i3 CPUs (and Xeons) as supported, but I'm running an i5-650 in it without problems.

 

Just means the onboard GFX doesn't work on mine.

 

Everything else is good.

 

I can verify that the X9SCM will not POST with an i5-2500 in it.

Perhaps it might with a BIOS hack, but I would not try it.

 

In general it is not a good idea to use a CPU or memory type that is not "certified" for the board, especially for a server. You might end up with instability, overheating, or even undervolting.


John,

Quick question. How does your CPU show up in unRAID? Mine shows up as a Pentium Pro:

 

Screen%20Shot%202012-01-04%20at%201.31.57%20PM.png

 

Wonder if the apps I'm running can't take advantage of whatever the Xeon has to offer because the virtualization isn't passing it the right info.  Just my $0.02 and something I'm worried about.

 

BTW, when I was running on bare metal, it WAS definitely showing up correctly.


The web GUI shows "unknown"

 

unMENU has it correct

 

so does the syslog on boot.

 

System Info

CPU Info (from /proc/cpuinfo)

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 42
model name	: Intel(R) Xeon(R) CPU E31240 @ 3.30GHz
stepping	: 7
cpu MHz		: 3292.520
cache size	: 8192 KB
fdiv_bug	: no
hlt_bug		: no
f00f_bug	: no
coma_bug	: no
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss nx rdtscp lm constant_tsc up arch_perfmon pebs bts xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm ida arat epb pln pts dts
bogomips	: 6585.04
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:


Sensor info (from /usr/bin/sensors)


Johnm - which Beta have you found to work well/recommend for the following controller configs?

 

1. M1015 and  AOC-SASLP-MV8

2. 2x AOC-SASLP-MV8

3. M1015 and 2x AOC-SASLP-MV8

 

Been reading thru the various Beta threads but they have all become a blur as to which Betas work with which controller configs.

 

 


it is a blur...

 

12 is "possibly" bugged with SASLP-MV8s (BLK errors).

13 & 14 are definitely bugged with all LSI cards: suspending and waking them will put them offline (redball) every time.

 

For 1 & 3, I would probably try 11.

For 2, anything but 12/12a; maybe 14?

 

I am running 12 (I do have an MV8 in mine, though).

 

I am still having issues with 3TB drives timing out and redballing.

It looks like it could be a 3TB/unRAID issue: if all of my drives are asleep and I start writing, a drive might redball because it is not yet awake. Others have mentioned this as well. I have now had every single drive redball in the last month, one at a time, since going to 14 and then back to 12. I have lost no data, but I would not call it stable; having to rebuild parity once or twice a week is not fun. I thought it was just loose drive bays, but that is not the case.

 

I also just got my SAS expander for Atlas, so... rebuild time.

 

I plan to migrate my virtual server to bare metal over the weekend, possibly roll back to 11 myself, and see if this is resolved.

I am sure it is not because I am virtualized; I do have a spare Norco and X9SCM to test this.


The web GUI shows "unknown"

 

unMENU has it correct

 

so does the syslog on boot.

 


 

Thx dude, shows up correctly in unMENU as well for me. 


Johnm - which Beta have you found to work well/recommend for the following controller configs?

Been reading thru the various Beta threads but they have all become a blur as to which Betas work with which controller configs.

I was running one MV8 on 5beta11 beautifully. Later I moved the MV8 to another box and migrated the array onto my new virtual ESXi box (passing through the M1015 with the expander, thanks to John's tutorial).

Kept unRAID at beta11, and it works great. I hope the LSI issues are resolved for beta15+. :-\

 


Kept unRAID at beta11, and it works great. I hope the LSI issues are resolved for beta15+. :-\

 

 

That's my plan for the weekend.


Should I be getting temperature readings from my 3 drives with 12a? I'm doing RDM with the vmkfstools -z switch on a Supermicro MBD-X9SCM and Xeon E3-1230.

 

From reading the other RDM VMWare thread, it sounds like it should be working on 12a. Maybe it's because I'm not using a controller card like the IBM M1015.
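For reference, this is roughly how a physical RDM mapping gets created from the ESXi shell with `vmkfstools -z`. The disk identifier and datastore path below are placeholders, not the poster's actual values:

```shell
# Run on the ESXi host shell. -z creates a physical-compatibility RDM
# pointer file for a local disk; ID and paths here are example placeholders.
ls /vmfs/devices/disks/              # find your disk's t10./naa. identifier
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk
# Then attach disk1-rdm.vmdk to the unRAID VM as an existing disk.
```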


If you are using RDM, I am pretty sure you cannot get drive temps or use spindown.

 

As you mentioned, if you do passthrough with a card like a SASLP-MV8 or an LSI card, you get both (provided the unRAID version supports it).
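A quick way to check what the guest can actually see is to query a drive directly from the unRAID console. The device name below is an example; substitute your own:

```shell
# /dev/sdb is an example device; adjust for your system.
hdparm -C /dev/sdb                    # spin state: "active/idle" or "standby"
smartctl -A /dev/sdb | grep -i temp   # temperature attribute, if exposed
```

If both commands come back empty or error out, the drive is hidden behind a virtualization layer (as with RDM) rather than passed through.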

 

The drives do spin down, but no temps. I'm going to go ahead and pull the trigger on an M1015. 6 on eBay for $80 each right now. Thanks for the help.

 

Is this the cable I'll need?

 

http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8186&seq=1&format=2



Is this the cable I'll need?

 

http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10254&cs_id=1025406&p_id=8186&seq=1&format=2

 

Yes, that's the correct one.

I went with the .75m one myself to have some extra play for better cable management.

 

-PCRx


So, I took Atlas apart today for a little cleaning and rebuilding.

 

1.

I plugged in another drive, bringing 9x 3TB drives online for a total of 24TB of live storage.

I still have 4 more 3TB drives that are precleared and on standby; that is a total of 36TB so far.

My plan was to have it filled by now, but with the cost of drives, I'll wait the shortage out.


 

2.

I added and tested my port expander: an Intel RES2SV240 24-port SAS expander.

To install it, I just used the PCI bracket to bolt it into dead space, then powered it with a Molex plug.

 

I installed and tested it both ways to compare the speed.

 

Method #1: link aggregation, for four faster ports.

Plug both ports from the HBA into the expander for 4 fast SAS ports (16 drives total).

 

Sorry for the low-light, poor cellphone images.

G1uMJl.jpg

You can also see my unique backplane setup: 5 SAS and 1 SATA.

Notice I still have the factory fans on the back unplugged; I have had no heat issues yet with the fanwall doing all of the work for the whole server.

I do have new fans to install... when I get to it...

 

tuv9pl.jpg

Cable management was pointless at this point, since I was about to change out all of the wires.

 

Method #2: using a single channel.

Use a single port from the HBA into the expander to split that port into 5 ports (20 drives).

Then use the other HBA channel for 4 full-speed SATA III ports for the parity and cache drives.

This gives you a total of 20 slower drives and 4 fast drives.

 

 

For the test, I put all 9 drives on the single port, including the parity.

 

w5yjkl.jpg

As you can see, the spare M1015 and SASLP-MV8 are just sitting in there now.

 

 


The test method.

Run a parity check... that was it.

File copies would not be affected, so why test them?

I know, not very scientific, but the biggest load on this server is the parity check...
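For anyone repeating this, the check can also be kicked off from the console rather than the web GUI. The mdcmd path and arguments here are what I would expect on the 5.0 betas and may differ by version:

```shell
# Start a parity check from the unRAID console (path/args vary by version).
/root/mdcmd check
# Poll progress; the resync position counts up as the check runs.
grep -i resync /proc/mdcmd
```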

 

The test results.

With 9x 3TB drives on the expander, there was zero performance hit going from 3 HBAs (2x M1015s and 1 SASLP-MV8) to Method #1 (link aggregation) to Method #2 (a single channel).

 

I had about the same performance: 115MB/s and roughly 8 hours to complete a parity check in all 3 configurations.

 

The bottom line is, 9 drives are not enough to saturate a single channel on an M1015 through an Intel RES2SV240. I'll have to re-test this once I get more drives plugged in.
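The back-of-the-envelope math supports that. Assuming (my rough figures, not measured) a single x4 wide port at 6Gb/s per lane gives on the order of 2200 MB/s usable after protocol overhead:

```shell
# Rough saturation check; the ~2200 MB/s usable link figure is an estimate.
drives=9
per_drive=115                      # MB/s observed per drive during the check
total=$((drives * per_drive))      # aggregate demand on the single port
echo "aggregate: ${total} MB/s"    # 1035 MB/s, well under ~2200 MB/s
```

So roughly double the current drive count would be needed before the single uplink becomes the bottleneck.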

 

Once I get more drives on this beast, I'll do a much more scientific test.

Perhaps at a later date I can put the expander and an M1015 into Goliath and run the test there. I do have enough 1TB, 1.5TB, and 2TB drives to completely fill it.

 

 

I'll leave it running in method #2 for now. I'll report back any issues.

 


Server issues...

I am still having the occasional drive timeout issue. Some forum members suggest the drives are taking too long to wake up from sleep before unRAID wants to use them. I am not the only 3TB user experiencing this at the moment.

 

I need to figure out how to have a cron job spin my drives up before the mover kicks off.

It should be simple; I just don't have time to research it.
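Something like this is what I have in mind (an untested sketch; the script path, schedule, and device glob are all placeholders): a cron entry a few minutes before the mover that reads one sector from each array disk to force it awake.

```shell
#!/bin/sh
# /boot/custom/spinup.sh (hypothetical path): wake every array disk by
# reading one sector from a non-cached offset. Device glob is an example.
for d in /dev/sd[b-j]; do
    dd if="$d" of=/dev/null bs=512 count=1 skip=5000 2>/dev/null &
done
wait

# Crontab entry (assuming the mover runs at 3:40 AM):
# 35 3 * * * /boot/custom/spinup.sh
```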

 

One last server porn shot for the road: back with its cover on and in the rack.

5msMvl.jpg

Yes, that is a sock shoved into the drive bay. I do that to keep the cooling even until I put more drives into the caddies.

There are two shoved into the fanwall as well.


lol, it was just for a day.

 

I pulled one of my spares while I was testing something with the drive in another PC. I could also have shoved a spare cage from my extra Norco into the bay.

 

But yeah, you have got to do what you have got to do to get the job done.

 

