ATLAS My Virtualized unRAID server



Thank you BetaQuasi and PCRx for the understanding and VMDK sample file!  I now have an unRAID VM booting properly in ESXi 5.0u2.

 

(Side note:  shame the AOC-SAS2LP-MV8 doesn't have an ESXi hack (yet) for passthrough... fortunately I have an AOC-SASLP-MV8 card.)

Does the SAS2LP-MV8 cause pink screens?  If it does, you can try the .msiEnabled = "FALSE" part of the hack the same way.  Editing the passthru.map (sorry, going from memory on the name) is a little more difficult, but ESXi will give you all the information you need for the edit.  I don't have one of those cards and I'm at work currently, but I could post how to look up the entries when I get home.

Basically, you go to advanced configuration like you do when you set up the card for passthrough and select the card.  From there you get the device ID and vendor ID from the ESXi display, add two other entries for d3d0(?) and false under the appropriate columns, and it is done.  If you don't get a pink screen without the edits then you probably don't need the ESXi hack anyway - assuming you can pass it through and it works.

My Highpoint RocketRAID 622A and 1742 cards both work in passthrough to a Windows VM, but I will have to check tonight to see whether I did the steps above to get them working.
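For reference, the two pieces of that hack look roughly like this - a sketch from memory, and the vendor/device IDs below are placeholders (read the real ones from the host's Advanced Settings passthrough page):

```
# 1) In the unRAID VM's .vmx file, disable MSI for the passed-through device:
pciPassthru0.msiEnabled = "FALSE"

# 2) In /etc/vmware/passthru.map on the host, add a line for the card using the
#    d3d0 reset method (columns: vendor-id  device-id  resetMethod  fptShareable):
1b4b  9480  d3d0  false
```

A reboot of the host is needed before the passthru.map change takes effect.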
Link to post

Browsing through the thread, I noticed most people are not on ESXi 5.1.  Any reason for that?  Or is the old saying "if it ain't broke, don't fix it" ringing true?

 

That, mixed with a vague memory of reading something about 5.1 breaking M1015 passthrough?  I might be wrong.

Link to post

Browsing through the thread, I noticed most people are not on ESXi 5.1.  Any reason for that?  Or is the old saying "if it ain't broke, don't fix it" ringing true?

 

That, mixed with a vague memory of reading something about 5.1 breaking M1015 passthrough?  I might be wrong.

 

I'm running 5.1 and I'm passing through three M1015s.  It's been working well for me, FYI.

 

Sent from my DROID RAZR using Tapatalk 2

 

 

Link to post

Browsing through the thread, I noticed most people are not on ESXi 5.1.  Any reason for that?  Or is the old saying "if it ain't broke, don't fix it" ringing true?

 

--Sideband Samurai

I'll admit I haven't checked on this recently, but I don't believe PCI passthrough works - PCIe does, but plain PCI didn't when 5.1 was originally released.  So I am staying on 5.0.
Link to post

Browsing through the thread, I noticed most people are not on ESXi 5.1.  Any reason for that?  Or is the old saying "if it ain't broke, don't fix it" ringing true?

 

--Sideband Samurai

ESXi 5.1 has had a couple of serious problems.

 

First, passing through PCI devices would cause ESXi to crash.  There was a patch released in December that fixes this.

 

Second, you can't pass through USB controllers.  When you select a USB controller for passthrough and reboot the host to apply the settings, the controller won't show up as a passthrough device.  I haven't seen VMware release a fix for this yet.

Link to post

Browsing through the thread, I noticed most people are not on ESXi 5.1.  Any reason for that?  Or is the old saying "if it ain't broke, don't fix it" ringing true?

 

--Sideband Samurai

 

The thread predates 5.1 and 5.0 by years?

Link to post

The thread predates 5.1 and 5.0 by years?

 

True, but the install is almost exactly the same.  No need to rewrite the setup.

I can confirm this.  I had never used ESXi before and followed this thread to install 5.1.

 

Sent from my GT-P7500 using Tapatalk 2

 

 

Link to post

The thread predates 5.1 and 5.0 by years?

 

True, but the install is almost exactly the same.  No need to rewrite the setup.

 

Not suggesting a rewrite, just explaining why 5.1 was not used for the first 50 pages...

Link to post

First attempt ever at running ESXi. 

 

Installed the latest version, 5.1.  Got everything installed and running except for unRAID.

 

I have Supermicro X9SCM-F-O.

2 AOC-SASLP-MV8

2 Adaptec 1430SA

 

My VM of Windows 2008 Server runs great.

 

When I set up passthrough, the system will boot up.  I can even see all the drives and the array.  But as soon as I start doing anything I get a PINK screen.

 

From what I have read here, some users report needing to do the MV8 hack while others say they did not need it.  I have tried it both ways.  With the hack, I get the PINK screen as soon as I try to start the VM.

 

Maybe the issue is with the Adaptec cards?

 

What am I missing?  Getting pretty frustrated and would love to be pointed in the right direction.

 

I appreciate any help....heck I will even pay for help.

 

One thing of note: with my old motherboard I had one MV8 board, and I could see it in the boot process as it identified the drives.  On this new Supermicro board I never see the MV8 or the Adaptec cards go through the boot process.  I was not able to disable INT 13h on the new MV8 card that I got with this new motherboard.

 

Any ideas?

 

Just remembered one more strange issue.  I don't seem to be able to assign more than 2GB of RAM to the VM.  Whenever I try 4, 6, or 8GB of RAM, I get an error and the VM won't even start.

Link to post

First attempt ever at running ESXi. 

 

Installed the latest version, 5.1.  Got everything installed and running except for unRAID.

 

I have Supermicro X9SCM-F-O.

2 AOC-SASLP-MV8

2 Adaptec 1430SA

 

My VM of Windows 2008 Server runs great.

 

When I set up passthrough, the system will boot up.  I can even see all the drives and the array.  But as soon as I start doing anything I get a PINK screen.

 

Have you installed the ESXi510-201212001 patch?  If not, install it and see if that fixes your pink screen problem.

Link to post

I am going to give this a try when I get back home.  Can anyone confirm whether I need to do the MV8 hack with ESXi 5.1?  And will my Adaptec 1430 cards be OK?

 

Anyone know why I don't see the SATA cards load the drives during boot up?  I can't hit Ctrl-M to turn off INT-13.

Link to post

Never realized how difficult it could be to install a patch for VMware.  Five hours later and I am still working at it.  Trying to get Update Manager installed is a nightmare - it wants Active Directory, SQL Server, and all kinds of stuff.

 

Anyways - I am just ranting.

 

I figured out my memory issue with the unRAID VM - you have to reserve the full amount of memory assigned to the VM.
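For anyone hitting the same error: the reservation can be set on the VM's Resources tab, or directly in the .vmx file.  A minimal sketch (the 4096 value is just an example - match it to the RAM assigned to the VM):

```
# .vmx fragment - reserve/pin all guest memory; VMs with passthrough
# devices require this because their memory cannot be swapped
sched.mem.min = "4096"
sched.mem.pin = "TRUE"
```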

Link to post

5 hours (and counting) to install the patch?  Wow.

 

Read this VMware blog post that explains the quickest way to patch ESXi from the command line.  The blog post talks about using scp/WinSCP to upload the patch to the host, but an easier method is to upload it to the datastore using the vSphere client.  Then start the SSH service, log in to the ESXi host via SSH, and run the esxcli command to install the patch.  Nice and easy.
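The command-line route boils down to two steps once the SSH service is running - a hedged sketch (the datastore path and bundle name are examples based on this thread's patch; substitute your own):

```
# Upload the patch zip to a datastore via the vSphere client, then over SSH:
esxcli software vib update -d /vmfs/volumes/datastore1/ESXi510-201212001.zip

# Reboot the host once the install reports success:
reboot
```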

Link to post

I appreciate all the quick tips. 

 

Got the patch installed.  Actually got unRAID to boot up with all 4 controllers passed through (2 Adaptec 1430SA & 2 MV8).  (Did not pass through the 2nd onboard NIC yet.)

 

The unRAID web GUI was super slow, and even though the system booted, several drives were missing.  After a few minutes of this semi-working boot, the system crashed.  Pink screen, purple screen.

 

I have taken the MV8's and made sure INT13 was disabled. 

Installed the latest patch

 

What CPU and memory combination are people finding most effective for unRAID, for systems with 20+ drives?

 

UPDATE - I added back the MV8 hack, and so far so good!  Looks like everything is working... the web GUI is very responsive.  However, I started a parity check and I am getting less than 1MB per second.  It may take 2-3 months to do a parity check.  Something is still not right, but this is as far as I have been able to get.

 

UPDATE - Since getting unRAID to work in ESXi, here are the issues I face:

1. Parity check is horrible - less than 1MB per second. (Tested read speeds on all drives - they range from 20MB/s to 70MB/s, which seems normal to me. Tested write speeds - they range from 14MB/s to 33MB/s. Again, seems normal, though I wish it were faster.) The parity check is my problem.

2. If all the drives load up, it seems pretty stable. But upon a reboot, several drives from the MV8s do not load up. It takes 2-3 reboots to get all the drives to show up.

3. I have the 2nd NIC (Intel 82579LM) working in ESXi (it at least sees it), but when I try to pass the NIC through to unRAID, the unRAID VM does not see it. So far I am unable to get the 2nd NIC working for unRAID.
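As a rough sanity check on those numbers: parity-check time is approximately the parity disk's capacity divided by the sustained check speed.  A quick back-of-the-envelope (assuming a 2TB parity disk, based on the drive sizes mentioned in this thread):

```shell
# days = bytes / (MB/s * 1e6 bytes/s) / 86400 s/day
# At 1 MB/s, a 2TB parity disk takes over three weeks:
awk 'BEGIN { printf "%.1f days\n", 2e12 / (1 * 1e6) / 86400 }'
# At a healthy ~70 MB/s, the same check finishes in a fraction of a day:
awk 'BEGIN { printf "%.2f days\n", 2e12 / (70 * 1e6) / 86400 }'
```

So "2-3 months" at under 1MB/s is entirely plausible - the speed, not the disk sizes, is the problem.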

 

 

Link to post
UPDATE - Since getting unRAID to work in ESXi, here are the issues I face:

1. Parity check is horrible - less than 1MB per second. (Tested read speeds on all drives - they range from 20MB/s to 70MB/s, which seems normal to me. Tested write speeds - they range from 14MB/s to 33MB/s. Again, seems normal, though I wish it were faster.) The parity check is my problem.

2. If all the drives load up, it seems pretty stable. But upon a reboot, several drives from the MV8s do not load up. It takes 2-3 reboots to get all the drives to show up.

3. I have the 2nd NIC (Intel 82579LM) working in ESXi (it at least sees it), but when I try to pass the NIC through to unRAID, the unRAID VM does not see it. So far I am unable to get the 2nd NIC working for unRAID.

 

1. What are the specs for the unRAID VM and what CPU are you using? How long have you let it run? I.e., does it run at that speed for 10 minutes and then speed up, or does it run at that speed the whole way? It might be a good idea to run a SMART check on your HDDs.

 

2. Is this rebooting the ESXi server or the VM? Either way, do all the drives show correctly when the ESXi server boots and the MV8 does its BIOS load?

 

3. Maybe create unRAID its own 'switch' in ESXi, give that access to the 2nd network card, and then create a VMware network card for unRAID. I think the adapter is called VMXNET3; unRAID has drivers for it (I am using it).
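That suggestion maps onto a few esxcli commands on the host - a hedged sketch (vSwitch1, vmnic1, and the "unRAID" port group name are examples for this host, not fixed values):

```
# Create a dedicated standard vSwitch, uplink the 2nd physical NIC to it,
# and add a port group that the unRAID VM's virtual NIC can attach to:
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p unRAID
```

Then edit the unRAID VM to add a network adapter on the new port group.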

Link to post

UPDATE - Since getting unRAID to work in ESXi, here are the issues I face:

1. Parity check is horrible - less than 1MB per second. (Tested read speeds on all drives - they range from 20MB/s to 70MB/s, which seems normal to me. Tested write speeds - they range from 14MB/s to 33MB/s. Again, seems normal, though I wish it were faster.) The parity check is my problem.

2. If all the drives load up, it seems pretty stable. But upon a reboot, several drives from the MV8s do not load up. It takes 2-3 reboots to get all the drives to show up.

3. I have the 2nd NIC (Intel 82579LM) working in ESXi (it at least sees it), but when I try to pass the NIC through to unRAID, the unRAID VM does not see it. So far I am unable to get the 2nd NIC working for unRAID.

 

1. What are the specs for the unRAID VM and what CPU are you using? How long have you let it run? I.e., does it run at that speed for 10 minutes and then speed up, or does it run at that speed the whole way? It might be a good idea to run a SMART check on your HDDs.

 

ESXi Specs:

Supermicro 9SCM-F

8GB RAM (waiting for memory to show up - will upgrade to 32GB)

Intel Xeon Processor E3-1230V2

2 - 128GB SSDs (I had wanted to mirror these, but it appears ESXi does not support RAID/mirroring)

2 - 1TB drives for ESXi storage space (I had wanted to mirror these but can't figure out a way to mirror them)

2 - Supermicro MV8  (Dedicated to unraid)

2 - Adaptec 1430SA (Dedicated to unraid)

An array of 23 drives (1.5TB, 2TB, & 3TB)  (The 3TB have not yet been added to unraid configuration)

 

Unraid VM

CPUs: 4 (2 virtual sockets, 2 cores per socket) - I increased this hoping it would speed up the parity check

5GB Ram (most I can give it right now)

2 MV8s passed through

2 Adaptec 1430SA Passed Through

Using shared nic.

 

I had only let the parity check run for 4-5 minutes before stopping it.  While writing this reply, I have let it run the whole time, and it seems the web GUI is no longer responding.  But the good news is that unRAID is still running and I can access it via SSH.

 

Before starting this project, I ran a full parity check and SMART check.  Everything checked out.

 

2. Is this rebooting the ESXi server or the VM? Either way, do all the drives show correctly when the ESXi server boots and the MV8 does its BIOS load?

When I reboot ESXi and start the unRAID VM, it's 50/50 whether all the drives show up.  It seems the ones that tend to be missing are the MV8 drives (but not all of them).  The MV8 BIOS does not load - I have disabled INT-13.  In fact, before passing them through, I don't think I ever saw them in ESXi.

 

(Side note - I am not 100% sure I have the BIOS configured right.  I have never been able to see any of the controllers detect drives during boot up - hence I was never able to hit Ctrl-M or Ctrl-A to turn off INT-13 or disable the BIOS (Adaptec).  I had to take the cards and put them in my old system.  So maybe this has something to do with my BIOS settings?  Running 2.0B.)

 

 

3. Maybe create unRAID its own 'switch' in ESXi, give that access to the 2nd network card, and then create a VMware network card for unRAID. I think the adapter is called VMXNET3; unRAID has drivers for it (I am using it).

 

Good idea - I will give this a try.

 

Link to post

CPUs: 4 (2 virtual sockets, 2 cores per socket) - I increased this hoping it would speed up the parity check

5GB RAM (the most I can give it right now)


OMG! Make the unRAID VM single-CPU with 2GB RAM. By making it 4-core, you are delaying context switches. Even with lots of plugins, 2 cores and 4GB will do.

 

Post syslog

Link to post

Ok - I changed back to one CPU with 4GB memory.

 

During my attempt to let the parity check run longer than 10 minutes, the system became unresponsive in the web GUI.  From SSH I stopped the parity check, and eventually had to reboot the system as the web admin never came back.

 

Unfortunately, during that time I seem to have lost disk 10, which is now marked by a red ball.  I don't think it is really bad... but now I am starting to get worried, because if I recall correctly, if I remove the drive and add it back, it is going to want to rebuild the drive, and right now I don't think any of those features are stable.

 

I don't know if there is a way to add it back and force unraid to "think" it is good.

 

(Side note - NIC2 is now working after removing the passthrough and setting it up in ESXi.)

 

Here is my syslog.

syslog.txt

Link to post

Thanks so much for this thread.  It inspired me to build my own ESXi box.  ;D

 

I have some of the components already, and started my own build post here.  I am about to purchase the rest of the components and would like to get a sanity check that my hardware choices make sense.  Any feedback or suggestions are warmly welcomed.  :)  Thanks!

 

Proposed components yet to be purchased:

CPU: Intel Xeon E3-1220 Sandy Bridge $199 

Motherboard: Supermicro X9SCM-IIF-O $193

RAM: 32GB - 4x Super Talent DDR3-1333 8GB ECC Micron $208

Controllers: 3x IBM M1015 ~$80 each from ebay?

Cables: 6x 1m Forward Breakout Cable $60

 

Anything different that you would recommend?

 

Link to post
