
ESXi x2 and unRAID


orinoco

Recommended Posts

Currently my setup consists of 2x i7 ESXi servers connected to a Nexenta box for the datastores. I came across unRAID by chance yesterday and have spent the last 24 hours completely sucked in (hence very little sleep) as I start preparing to build my own unRAID server.

 

My current setup allows me to move virtual hosts on the fly, while they are running, because they run from shared storage. I see from nearly all of the builds here that the recommendation is to run the VMs from a local drive, but that would prevent me from moving machines on the fly when I need to.

 

So my question is: will I be able to run a virtualised unRAID, set my datastore on part of the unRAID array, and make it shared to achieve the same result as Nexenta currently provides?

 

 

Thanks

Link to comment

unRAID needs to see physical disks, either via Raw Device Mapping or by passing through an HBA controller to the VM. While you could have the boot .vmdk on any valid datastore, you must have physical connections to the disks that are present in the unRAID array.
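For example (a rough sketch only - the disk identifier and datastore paths below are placeholders you would substitute from your own host), a physical-mode RDM can be created from the ESXi shell with vmkfstools and then attached to the unRAID guest:

  # list the physical disks to find the right device identifier
  ls -l /vmfs/devices/disks/

  # create a physical compatibility mode (pass-through) RDM pointer on a datastore
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

Passing through an entire HBA via VT-d is the cleaner option where the hardware supports it, since unRAID then sees the disks, their serial numbers and SMART data directly.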

 

While you could conceivably place a datastore on the unRAID array, it isn't really designed with that in mind, and your performance would be ordinary, to say the least.

Link to comment

Thanks. What I should have said is that I want all of my existing VMs (20-ish in total) to remain on a shared datastore to enable live migration / vMotion. My thought is that the unRAID VM will have direct passthrough to the disks, ESXi will boot from USB with unRAID housed on its own SSD datastore, and the existing VMs will then be housed on the unRAID array and able to run from either ESXi machine. Not sure if that makes sense? Will unRAID be able to be presented to both ESXi machines as a shared datastore?

 

 

Thanks

Link to comment

How do you share your datastores at the moment?

 

You could conceivably do this by exporting a share from unraid over NFS and then creating a datastore from that NFS export.
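As a sketch (the IP address, export path and datastore label here are made up for illustration; unraid user shares appear under /mnt/user once NFS export is enabled on a share), mounting the export from the ESXi shell would look something like:

  # mount the unraid NFS export as a datastore
  esxcli storage nfs add --host=192.168.1.50 --share=/mnt/user/vmstore --volume-name=unraid-nfs

Run the same command on both hosts and they see the same datastore, which is the shared-storage prerequisite for vMotion.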

 

However, to echo the above, unraid isn't designed for this I/O pattern.

 

I would go one further, though, and suggest your performance would be terrible (especially across 20 VMs), as opposed to ordinary.

 

unraid and random I/O really don't mix (for writes). Single-stream sequential writes are tolerable, but random concurrent writes to the protected array (and that is what you'll be doing with your VMDKs) will demolish your performance. I don't mean slight degradation - I mean drop off a very tall cliff.

Link to comment

Currently the datastores are running on Nexenta. The drives are connected to a pair of Dell PERC 6/i RAID controllers with BBU and onboard cache, configured as RAID 5 on the cards, and the datastores are presented to ESXi as iSCSI targets via four bonded gigabit NICs dedicated to iSCSI traffic.

 

I thought this might have been a solution to reducing the number of machines throughout my ever-shrinking house: a central store that could run all of my VMs and serve all of my media via Plex to the various clients.

 

 

 

Link to comment


 

 

Odd that you're doing RAID on the card and not passing the LUNs through directly to Nexenta to let ZFS handle them. But that's neither here nor there.

 

What you've just described is several levels of performance beyond unraid. If you don't mind losing that disk performance, then by all means give it a whirl.

 

The first thing you'll be giving up in moving to unraid is the performance benefit of a striped array.

 

I can't remember exactly - there are very good posts in the forums explaining it properly - but off the top of my head I think unraid needs to do four disk operations for every write due to the nature of how it handles parity. You can immediately see the performance penalty that imposes.
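To put rough numbers on it (a back-of-envelope sketch, not a benchmark): the four operations are read old data, read old parity, write new data, write new parity. A 7200rpm disk manages something like 100-150 random IOPS, and since every write in the array has to touch the single parity disk twice (one read, one write), the whole array tops out at roughly half of one disk's IOPS for random writes - call it 50-75 - no matter how many data disks you add. Share that across 20 VMs and you can see where the cliff comes from.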

Link to comment

Thanks for the replies. When I was originally building the Nexenta machine I was not running the drives as RAID 5 on the card, but I was seeing some performance issues, which I now believe were down to a faulty stick of RAM. I was advised to run the cards in RAID 5, and due to the time it took to build the arrays I never went back.

 

I guess I will still be building an unRAID box, which will have the opposite effect - instead of reducing my machines I'll now be increasing them - but I have had an ESR-424 sat in my shed for 12 months waiting to be used...

 

 

Link to comment
  • 1 month later...

You could easily put an unRAID guest on one of your two hosts (assuming they support VT-d) while using your existing datastore.

 

Unfortunately, you would not be able to dynamically migrate unRAID from one host to the other. Whenever a guest has mapped hardware like a thumb drive or a passed-through hard disk, it can no longer migrate easily.

 

If you had unRAID on server 1 and the motherboard died, you could move the array to server 2 physically (assuming it all fits).

 

If you put your unRAID disks into a DAS box, you could migrate it from server 1 to server 2 by simply moving the controller card (or just the data cable, if both boxes had an external HBA you could pass to unRAID) and the flash drive, and pointing the guest at the datastore (again, I'm assuming both boxes support VT-d). This is obviously not as automated and simple as using vMotion to migrate.
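As a rough sketch of that manual failover (the paths below are examples only): after moving the card or cable and the flash drive to server 2, you would register the existing guest from its .vmx on the datastore, power it on from the ESXi shell, and re-map the passthrough device in the guest's settings, since PCI addresses differ between hosts.

  # register the unRAID guest on the second host; this prints its vmid
  vim-cmd solo/registervm /vmfs/volumes/datastore1/unraid/unraid.vmx

  # power it on using the vmid returned above
  vim-cmd vmsvc/power.on <vmid>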

 

 

Sent from my iPhone using Tapatalk

Link to comment

Archived

This topic is now archived and is closed to further replies.
