UnRAID on VMWare ESXi with Raw Device Mapping



Okay - so I got it all working, and am able to create and export an NFS-based datastore that's backed by unRAID. +10!
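(For anyone else setting this up: the unRAID NFS export can also be mounted as an ESXi datastore from the console instead of the vSphere Client. A rough sketch only - the IP, export path and datastore label below are placeholders:)

# Mount the unRAID NFS export as an ESXi datastore (placeholder host/share/label)
esxcfg-nas -a -o 192.168.1.50 -s /mnt/user/vmstore unraid-nfs

# List mounted NFS datastores to confirm
esxcfg-nas -l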

 

There are other shares that will be on unRAID, so I created a thick VMDK (right-click in the datastore browser and choose "Inflate"). This way unRAID doesn't fill in the space without ESXi knowing.
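(If you'd rather do it from the console, something along these lines should work - a sketch only, with an example size and datastore path; note that thick provisioning behaves differently on NFS datastores than on VMFS:)

# Create a thick (eager-zeroed) VMDK up front so ESXi accounts for the space
vmkfstools -c 500G -d eagerzeroedthick /vmfs/volumes/datastore1/unraid-shares/shares.vmdk

# Or inflate an existing thin disk in place
vmkfstools -j /vmfs/volumes/datastore1/unraid-shares/shares.vmdk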

 

So in a scenario like this, if the VMDK changes, will UnRaid automatically calculate parity "on-the-fly" or am I going to have to run a parity-sync?

 

I guess this would apply to any scenario where a large file (say, a video file) gets updated instead of recreated. Does unRAID monitor the changes and automatically update parity, or is parity calculated on file create and then again at the next parity sync, whenever that runs?

 

Thanks!

 

I believe I have reached storage/server consolidation nirvana now!

Link to comment

Pretty easy - easier than RDM, in fact, if you have the right supported cards and motherboard.

 

RDM is at the drive level; VMDirectPath is at the card level. Once you assign that card to your VM, all drives attached to that card go along for the ride, plus all of this can be done through the vSphere Client - no Telnet required.

 

Another cool thing about this is that I don't have to keep mapping those drives in my client VM, as the drives come along with the VMDirectPath setup. I can easily upgrade the drive storage simply by stopping the array, pulling the 1 TB drive, adding the 3 TB drive (ESXi itself doesn't support 3 TB), and telling unRAID to refresh - the drive is there. This is all done in unRAID; vSphere isn't needed for it.

 

I personally think it's easier to manage, and because of the support for the BR10i in unRAID (thanks, Tom!!) this is really a nice and easier way to go if you're considering upgrading or switching drives in the client VM at some point, with no hassle.

 

Just remember that once you map that card via VMDirectPath, all of its drives are allocated to that VM, so keep a couple of drives connected to your motherboard SATA ports and use them as the datastore for the client VM OSes you want/need.

 

Pros

- Easier to configure and set up via the GUI (vSphere Client)
- All drives are mapped; no need to manually map each drive
- unRAID supports the card features (so you get to see temps/spin-down)

Cons

- All drives associated with that card are mapped to a single VM

 

Link to comment


 

 

Yeah - that "adding drives" bit is what has me worried. Two months from now, I'll have to re-learn it all... if I keep RDM.

 

Can I just switch from RDM to VMDirectPath?

 

Also - does the BR10i support 3 TB drives, or do you have to flash it with the LSI firmware?

 

The drives being given to unRAID don't worry me in the least. I have an NFS datastore there, so ESXi can still use that storage.

Link to comment


 

 

Yeah, you can switch from RDM to VMDirectPath fairly easily.

 

Don't shoot me here... I'm doing this from memory, but if I remember correctly you have to remove the drives from the VM client (Edit Settings -> Hardware tab -> remove all the RDM drives from the configuration), then unlink all your RDM mapping files (I did an rm of the mapping file, but you might want to check whether that's the proper way of doing it - see the sketch below).
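(For what it's worth, a sketch of what I believe is the cleaner way to drop an RDM mapping file from the console - the path is just an example:)

# Remove the RDM mapping descriptor cleanly instead of a bare rm
vmkfstools -U /vmfs/volumes/datastore1/unRAID/unRAID_rdm1.vmdk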

 

After that, just add your PCI card:

- In the vSphere Client, go to your ESXi server's Configuration -> Advanced Settings -> in the right panel hit "Edit" -> add the PCI card (a reboot of the ESXi host is required to set up VMDirectPath).

- Then go to your unRAID VM (Edit Settings -> Hardware tab -> add a PCI Device -> select your SATA/RAID controller, which should now appear there), apply the change and you're done.

 

Just a note worth mentioning here: once you've configured the PCI device in your VM client, unRAID (5.08D) will see it and use the drives associated with it, matching them to the config you already have - no need to redo that.

 

Regarding 3 TB drives and the BR10i: depending on what firmware version the card shipped with, you may need to update the firmware. (If it's from eBay, just do the firmware upgrade, because they sell them with old firmware.)

 

I used this link (1068E) to get me going with the upgrade.

 

http://lime-technology.com/forum/index.php?topic=12767.0

 

 

Hope it helps with your decision.

Link to comment

Finally got my setup completed, running beta 8c on my ESXi box with VMDirectPath on a BR10i and the Plop boot VM. It's running great - spin-down and temperature are flawless, as mentioned before me, and it's been running like a dream.

 

Thank you all in the thread for your input, comments, and for being the innovators among the innovators who tested the hardware first!  8)

 

I was curious whether anyone has their setup running on a UPS, and how they did it.

 

 

I've been looking, and so far it seems the most common implementation is an APC UPS with PowerChute, having it call a batch file or script that sends the shutdown signal to the VMs and then to ESXi.

 

Anyone else running something like this?

Link to comment

I'm doing exactly that using this tool: http://communities.vmware.com/docs/DOC-11623; I have a CyberPower UPS, not APC, but any should work as long as it supports executing a script when there is a power loss. In my setup, I pass the UPS USB connection through to a Windows 7 VM, which executes the above script to initiate a shutdown of ESXi. At that point ESXi will shut down or suspend all VMs, depending on how you've configured those options. Make sure you've installed VMware Tools on unRAID so that it can cleanly unmount the array and shut down, and watch out for your startup/shutdown order and shutdown timeout values (especially if unRAID is the last VM to shut down).
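For anyone who would rather roll their own than use the linked tool, the general shape of such a script is simple. A rough sketch run on the ESXi host itself - the sleep value and the final poweroff step are assumptions, so adjust for your own VMs:

# Ask every registered VM to shut down via VMware Tools, then power off the host
# (assumes vmware-tools is installed in each guest; VM IDs come from getallvms)
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
    vim-cmd vmsvc/power.shutdown "$vmid"
done

# Give the guests (unRAID especially) time to stop the array and power off
sleep 300

# Finally power off the ESXi host itself
poweroff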

Link to comment

Regarding 3 TB drives and the BR10i: depending on what firmware version the card shipped with, you may need to update the firmware. (If it's from eBay, just do the firmware upgrade, because they sell them with old firmware.)

 

I used this link (1068E) to get me going with the upgrade.

 

http://lime-technology.com/forum/index.php?topic=12767.0

Are you saying 3 TB drives are working with the BR10i in IT mode? This would be great if true. I haven't found anyone able to confirm this here or elsewhere. If you can confirm that this works, you might want to post in the thread you linked so that the first post can be updated with this info.

Link to comment

Good evening! I just added a cache drive that's an RDM through the motherboard controller. As such, temps & spin-down don't work, and I'm okay with that. The question is, can I disable hdparm for that drive? My log is filling up fast with errors like:

Jul 26 22:37:21 REBEL-DATA emhttp: shcmd (1460): /usr/sbin/hdparm -y /dev/sdf >/dev/null (Drive related)

Jul 26 22:37:37 REBEL-DATA emhttp: _shcmd: shcmd (1460): exit status: 52 (Other emhttp)

 

hdparm works for the external array I'm passing through via IOMMU, so I'd rather not kill hdparm entirely - just for /dev/sdf.

 

Any ideas?

 

Thanks!

Link to comment


 

You should be able to disable spin-down for that drive alone, and that might stop the hdparm requests going to it.
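(A quick way to confirm it's only the RDM disk that can't answer ATA commands - assuming /dev/sdf is still the RDM device:)

# Query the drive's power state; the passed-through disks should answer,
# while the RDM cache disk will typically fail just like emhttp's spin-down call
hdparm -C /dev/sdf
echo "exit status: $?"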

Link to comment


Are you saying 3 TB drives are working with the BR10i in IT mode? This would be great if true. I haven't found anyone able to confirm this here or elsewhere. If you can confirm that this works, you might want to post in the thread you linked so that the first post can be updated with this info.

 

Just in case, for those who don't know: the post has been updated with that fact, and the proof is on the last page of that thread.

 

 

http://lime-technology.com/forum/index.php?topic=12767.0

 

 

Link to comment

Well, with a Supermicro X9SCM-F and a Xeon E3-1230 here, I tried to pass through an AOC-SASLP-MV8 to an unRAID VM, but like others, without any luck - so ESXi will have to wait for the two BR10i cards I bought on eBay. This corroborates the statements that not every PCIe peripheral can be successfully passed through to a VM.

 

 

 

Well, I successfully made the AOC-SASLP-MV8 work with ESXi 4.1 and VMDirectPath.

 

With "Remote Tech Support" enabled, use WinSCP to connect to ESXi and add these two lines to the /etc/vmware/passthru.map file:

 

# Marvell Technologies, Inc. MV64460/64461/64462 System Controller, Revision B

11ab  6485  d3d0     false

 

Now open your VM's .vmx file and change this:

pciPassthru0.present = "TRUE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

to this:

 

pciPassthru0.present = "TRUE"

pciPassthru0.msiEnabled = "FALSE"

pciPassthru0.deviceId = "6485"

pciPassthru0.vendorId = "11ab"

pciPassthru0.systemId = "4dfc27f9-93be-d5c1-9198-00259027d9d8"

pciPassthru0.id = "01:00.0"

 

The catch is to force the use of IOAPIC mode with the pciPassthru0.msiEnabled = "FALSE" setting.

 

Reboot the hypervisor and start your unRAID VM!

 

Good luck.

 

I'm attempting to replicate what you have done here.

After making all of the configuration changes, rebooting, etc., the unRAID VM is running fine with the card attached, but I'm not able to find any of the drives... any tricks, things I might be missing, or something I should try?

 

Thanks.

Link to comment


 

I just wanted to validate the passthru.map fix above.

 

I just got around to installing an AOC-SASLP-MV8 in a Supermicro X9SCM with an E3-1240 running ESXi 4.1 U1.

 

ESXi saw the card and passed it through without the edits. I tested it in a Windows VM first; it would install the driver and then crash the host.

 

After the above edits, I removed the card and drivers from Windows and reinstalled the card/driver.

 

It looks to be running like a champ.

 

The next test is inside a virtual unRAID itself.

 

Thank you for the find on this.

+2 thumbs up!

 


Link to comment

I can also now confirm success in this area. I think I posted a previous message saying that I had issues, but I determined that those issues stemmed from using the prebuilt unRAID VM ISO that was posted much earlier in this topic thread. It worked pretty well at the time for raw device passthrough, but I am now back to using the standard unRAID image with the Plop ISO to boot from USB.

 

So far things are working pretty well. I've got 7 total drives attached to the AOC-SASLP-MV8 right now (including the parity and cache drives, which I plan on relocating to see if I can get better speed from raw device mapping), but I'm still working on getting data migrated and completing the upgrade to the 5.0 beta. However, after making the change, the VM has definitely been stable. THANK YOU for the discovery.

 

 

Link to comment

Add me to the list of successful ESXi converts!

 

My goal was to take my existing unRAID setup and move it over to ESXi with minimal out-of-pocket expense. I had needed better SATA/SAS cards for a while, so this presented a good opportunity. I already had all of the other parts, so in my eyes the only real expense was a new motherboard.

 

My Hardware/unRAID config is below as reference for anyone looking to make the move with a similar parts list:

 

OS:  unRAID Server Pro 5.0-beta9 - I am not confident in beta10 yet due to the Kernel change

CPU: Intel Core2Quad Q9400 2.66 GHz (2 vCPUs allocated to unRAID)

Motherboard: Intel S3210SHLC (LGA 775 board with 2 PCIe x8 and 1 PCIe x4 slot)

RAM: 6 GB (1 GB allocated to unRAID) - max of 8 GB with this board, unfortunately

Drives: 14 WD Green drives, ranging from 1-2 TB

SAS Expansion Card(s): 2x IBM M1015 flashed to standard LSI 9211-8i 6 Gb/s with IT firmware (passed through to unRAID via VMDirectPath)

 

Open Issues/Observations:

1. I am not able to get USB passthrough to work with VMDirectPath.  I have identified the controller needing to be passed through, but the VM freezes when unRAID starts loading.  This is not a show stopper, but it would be nice to have it working properly.  Was anyone able to use VMDirectPath to pass through the unRAID USB drive successfully?

2. I ran a parity check with my hardware before moving it into ESXi - Parity speeds averaged around 75MB/s.  Parity checks inside of ESXi are around 60-65 MB/s, so there is definitely a slight performance hit.

3. I am still new to the unRAID 5.x series, but there are some issues/bugs to be worked out before it goes gold.  I will post my comments/thoughts in the main beta thread.

4. IPMI - I cannot figure out how to get IPMI working on this board yet.  From what I have read it is not as easy/well done as the SuperMicro boards.  If anyone has IPMI experience on an Intel server board, I'd be happy to hear the steps required to set it up.

 

Thank you to everyone who contributed in this thread!  It was extremely helpful in getting things set up.

 

 

Link to comment


 

Greetings, ftp222!

 

IPMI on Intel boards needs an optional module - I believe the model is the RMM3.

Link to comment

Just thought I'd add what I posted in the ATLAS thread...

 

I, too, followed the lead of bryanr in his http://lime-technology.com/forum/index.php?topic=7914.0 thread, mapping each drive via command-line manipulations, and with 16 drives it was a bit of a pain in the arse. Not only that, but any time a disk is moved or swapped, it must be done again!! Gotta be an easier way.
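(For reference, the per-drive mapping in question looks roughly like this for each disk - a sketch with placeholder device and datastore names, not the exact commands from that thread:)

# Find the device identifiers for the physical disks
ls -l /vmfs/devices/disks/

# Create one RDM mapping file per drive (virtual compatibility mode shown;
# use -z instead of -r for a physical-mode RDM)
vmkfstools -r /vmfs/devices/disks/vml.0100000000XXXXXXXX /vmfs/volumes/datastore1/unRAID/disk1_rdm.vmdk

# ...then repeat for every drive and attach each .vmdk to the unRAID VM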

 

I have 16 hot-swap bays in my tower with 16 Samsung 2 TB drives. The first 6 are on the Intel controller, the next 8 on an LSI 2008 SAS controller, and the last two on a Marvell 4-port SATA card (which ESXi has no drivers for). I also have an LSI 4-port RAID controller with three 1 TB Seagates in a RAID 5 for my ESXi datastore.

 

While poking around in the GUI for ESXi (vSphere Client), I found a page where I could assign an entire controller to a VM (Configuration -> Advanced Settings). I created a new VM for unRAID and, instead of going through all that command-line stuff, I assigned the 3 PCI-bus controllers as passthrough, then selected them in the unRAID VM settings. Voila... the VM runs just as if it were (and it is) running on bare metal. The drives came right up, and they are not virtually mapped, so I'm free to swap them around and replace them just by rebooting the VM.  :o ;D

 

Link to comment

Passthrough is actually discussed somewhere throughout this thread... it's just buried, lol. I think most of us are using passthrough at this point since, yes, RDM is a pain in the butt, haha. I currently have a BR10i and an M1015 passed through to my unRAID VM and they are running just swimmingly!

Link to comment

I am using passthrough via VMDirectPath with MV8s and a dedicated NIC (a cheap Intel CT PCIe card).

 

I also have two M1015s on order. I am not sure now if I will use them in this box or leave it as is.

My plan is to eventually use just one M1015 and an Intel SAS expander; that will free up a PCIe slot for something else.

 

Perhaps I'll move my second unRAID to this ESXi box, then use the second M1015 plus an expander and run it to a second RPC-4224 as a DAS. That would free up an entire server (less the case)...

 

unRAID is quite happy.

Parity checks start at about 120 MB/s and slow down to 88 MB/s toward the end of the disks.

The initial boot is a little slower, but once booted, it is very snappy.

 

There seems to be little to no penalty for being on a hypervisor. Also, my other VMs do not seem to be affected by the parity check.

 

 

Link to comment

Stoopid noob question alert.

 

With ESXi - how would you go about accessing/using a VM from another room?

i.e. if I eventually wanted to turn my wife's PC (which will be in a different room than the ESXi server) into a VM, how would I go about having a monitor/keyboard/mouse in her office space that accesses the VM on the server in another room? What other equipment would I need (if it's doable/cost-effective)?

 

 

Link to comment
