ATLAS: My Virtualized unRAID Server



In another thread someone asked about the performance of the new Seagate ST3000DM001 3TB drives. I decided to run a speed report on the unRAID VM.

 

Since it is relevant to this thread as well and others might be interested, I copied the results here.

The script missed my RAIDed cache drive since I configured it as IDE in ESXi. It should be around 400 MB/sec.

 

I ran a speed report for Goliath (a virtual server on Atlas).

It is a mix of Hitachi LPs and Seagate ST3000DM001s.

 

The ST3000DM001 reports as a 9YN166.

 

sdb = 9.72 MB/sec usb-Lexar_JD_FireFly

sdc = 186.09 MB/sec 9YN166

sdd = 119.20 MB/sec Hitachi_HDS5C3030ALA630

sde = 166.84 MB/sec 9YN166

sdf = 114.72 MB/sec Hitachi_HDS5C3030ALA630

sdg = 175.19 MB/sec 9YN166

sdh = 112.25 MB/sec Hitachi_HDS5C3030ALA630

sdi = 122.82 MB/sec Hitachi_HDS5C3030ALA630

sdj = 125.28 MB/sec Hitachi_HDS5C3030ALA630

sdk = 104.36 MB/sec Hitachi_HDS5C3030ALA630

sdl = 118.08 MB/sec Hitachi_HDS5C3030ALA630

sdm = 114.54 MB/sec Hitachi_HDS5C3030ALA630

sdn = 111.43 MB/sec Hitachi_HDS5C3030ALA630

sdo = 112.96 MB/sec Hitachi_HDS5C3030ALA630

sdp = 118.56 MB/sec Hitachi_HDS5C3030ALA630

sdq = 119.82 MB/sec Hitachi_HDS5C3030ALA630

sdr = 166.07 MB/sec 9YN166

sds = 169.03 MB/sec 9YN166 (Parity)

sdt = 152.24 MB/sec 9YN166

 

As you can see, there is a big performance boost with the Seagate over the Hitachi: about a 50 MB/sec difference.

Then again, the Hitachi is a 5400 RPM, 5-platter beast and the Seagate is a 7200 RPM, 3-platter high-density monster, so it is hard to compare them fairly.

 

Keep in mind, this is raw disk performance on the live server with other things using it. This is not what you will get over the wire, and it is without parity overhead...
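For anyone curious how numbers like these are gathered: the report above comes from a speed-test script run on the unRAID VM, and a minimal sketch of the same idea, assuming hdparm is available and it is run as root, could look something like this (the device range and output format are illustrative only, not the actual script):

#!/bin/bash
# Rough per-drive read-speed report (a sketch, not the actual script used above).
# hdparm -t measures buffered sequential reads, so results vary with server load.
for dev in /dev/sd[b-t]; do
    model=$(hdparm -I "$dev" 2>/dev/null | awk -F: '/Model Number/ {gsub(/^[ \t]+/, "", $2); print $2}')
    speed=$(hdparm -t "$dev" 2>/dev/null | awk -F= '/Timing buffered/ {print $2}')
    echo "$dev =$speed ($model)"
done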


Hey Johnm,

 

Thanks for the great write-up; it was very helpful in getting my server off the ground. A couple of questions:

 

1.  I ran into an issue where one of my MV8s appeared to have an interrupt conflict with something (the NICs would be my best bet). I had the card in the second PCIe x8 slot (second closest to the CPU). I could have resolved it either by disabling HPET or by simply moving the card. I chose to move the card, but both methods were tested and working. Did you run into this issue, or have you seen it before?

 

2.  I put my unRAID image onto the SSD. I was just wondering if there was any way to hide it from showing up in unRAID's disk selection?

 

Thanks again!

 

EDIT - I'm also seeing "Aug  5 17:53:28 Athena vmsvc[1792]: [ warning] [vmsvc] Error in the RPC receive loop: RpcIn: Unable to send.  (Errors)", which coincides with losing connection to a share. Any thoughts?

 

EDIT #2 - I think I've tracked it down to the modded driver for the 82579LM causing issues... trying a fresh install and we'll see how it goes.

 

EDIT #3 - Yup, looks like that was the issue. So long 82579LM, maybe one day we'll have a stable driver for you :)


 

 

In the advanced tab: PCIe/PCI/PnP Configuration

Set PCI ROM Priority to "EFI Compatible ROM"

(NOTE: for the Ver 2.0a BIOS this is replaced with "Disable OPROM for slots 7&6"; set them to "Disabled")

[Screenshot: PCIe/PCI/PnP Configuration BIOS screen]

 

 

After upgrading to BIOS 2.0a, this screen looks like the attached screenshot on my system.

 

I don't see "Disable OPROM for slots 7&6". Or do you mean the PCI-E Slot 4, 6 or 7 OPROM lines? They only show up when there actually is a card in the respective slot. Slots 6 and 7 hold an M1015 and slot 4 an ARC1200 in my case. I think 6 and 7 are PCI-E x8 slots, am I right?

 

What are the recommended settings on this screen?

 

Can you give me some pointers on how to install the ARC1200 driver in ESXi 5? Can I just drop the .vib on a datastore and use esxcli to install it?

[Attachment: bios.jpg - BIOS screenshot]


I have been meaning to remove that entire line from the how-to.

 

That was added before I upgraded to 2.0a per a suggestion.

It seems all that does is bypass the card BIOS while booting.

As far as I can tell, you can boot four HBA cards with or without the OPROM settings disabled (can someone else confirm?).

 

Slots 4 & 5 are x4 and slots 6 & 7 (closest to the CPU) are x8, if I recall correctly.

 

For the Areca controller, check and see if driver rollout 2 for ESXi 5 has the driver you need.
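If it does ship as a .vib, then yes, dropping it on a datastore and installing it with esxcli from the ESXi shell should do the trick. A rough sketch with placeholder datastore and file names (--no-sig-check is only needed for unsigned community packages):

# From the ESXi shell (SSH / Tech Support Mode), after copying the .vib to a datastore:
esxcli software vib install -v /vmfs/volumes/datastore1/arc1200-driver.vib --no-sig-check
# Reboot the host afterwards so the new module gets loaded.
reboot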



I don't believe I disabled mine, but I'm not able to check right now.

 

I wish you could VIEW the BIOS settings from IPMI while an OS is running. I don't need to change anything, but it sure would be nice to be able to view them without rebooting.


Hi all,

 

I installed ESXi and am trying to install unRAID with passthrough on my three Supermicro SAS2 cards.

I've been able to set the three SAS cards to passthrough in Configuration > Advanced Settings.

 

Now the problem is that when I try to create a VM I can't see any of the cards; the PCI device row in VM Edit is not enabled.

On the configuration page the SAS cards are listed as "Unknown RAID" and it keeps saying that a restart of the host is needed. I did that, but it keeps showing the same message: this device needs a host reboot to start running in passthrough mode.

 

Do you guys have any suggestions?

 

Cheers

Max



I don't know that anyone has passed through a SAS2LP-MV8 yet? The instructions here are for the SAS1 cards.

 

Unfortunately, I do not have one of those to test, sorry.

Hopefully someone else will have an answer for you...



 

Power cycle, not reboot.


Hi Johnm,

 

As you are aware, in my build I'm planning to use an M1015 and a RES2SV240.

 

Since my M1015 seems damaged and I'm replacing it, can I use the RES2SV240 by itself for now?

 

I connected the top five rows of the Norco 4224 to the RES2SV240. I'm planning to connect the 6th row directly to the M1015 (once I get a new one).

 

 


Unfortunately not. The expander is essentially a splitter for the SAS port from the M1015; without a SAS controller, there is nothing to split.

 

A shame the M1015 is bad. I fried one of my own not too long ago.

 

It really sucks; I even installed ESXi 5 already.

 

Is there any alternative to the M1015, or should I just buy another one?

 

An update:

 

I found a recommendation on this forum about the Dell PERC H200.

 

http://forums.overclockers.com.au/showthread.php?t=1045376

 

Would this work, especially with my RES2SV240?


Would it be possible to pass through one or two of the on-board SATA300 ports of the X9SCM and use the two SATA600 ports as a datastore in ESXi? I heard there was a hack, but I can't find it.

 

The only hack, really a workaround, would be to use RDM for those devices, but that is not pass-through.
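For anyone who wants to try the RDM route anyway, a rough sketch from the ESXi shell (the device ID, datastore and file names below are placeholders): create a raw device mapping file for the local disk, then attach that .vmdk to the VM as an existing disk.

# List the local disks to find the device identifier (t10.... / naa....)
ls /vmfs/devices/disks/
# Create a physical-compatibility RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____PLACEHOLDER_DISK_ID /vmfs/volumes/datastore1/unRAID/sata-rdm.vmdk
# Then add sata-rdm.vmdk to the VM as an existing hard disk in the vSphere Client.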


Yo,

 

I have a Norco 4224 and tried unRAID about a year ago, but I was unhappy with its copy speed because of the parity overhead.

I have been running OpenIndiana with napp-it this past year and I must say it is fast, very fast!

But I liked the add-ons that unRAID supplies, and I was wondering if it would be possible to virtualize it and use ZFS as a cache for an unRAID server. It would be easy to mirror several SSDs, and that would make the write cache very fast!
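On the ZFS side, the mirrored SSD pool itself would be trivial; an illustrative OpenIndiana sketch with placeholder disk IDs (how unRAID would actually consume it, e.g. over NFS or iSCSI, is the open question):

# Create a mirrored pool out of two SSDs (c3t0d0 / c3t1d0 are placeholder device IDs)
zpool create ssdpool mirror c3t0d0 c3t1d0
# Carve out a dataset to act as the fast write-cache landing zone
zfs create ssdpool/unraid-cache
# Verify the mirror
zpool status ssdpool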

I already own all of the needed hardware: an SM X9SCM-F, a Xeon E3-1230, 16 GB of RAM, and three flashed IBM M1015s.

 

Gr33tz


I took the risk and ordered a Dell PERC H200; got it at a good deal.

 

Will give it a try.

 


