ESXi: Sharing a datastore with the unRAID cache


kaiguy


Been lurking this thread for a while now. Finally decided to pull the trigger and ordered up a new mobo, CPU, and RAM to convert my current server into ESXi. It's going to be a nerdy weekend for me! Thanks for all the great info!

 

A few questions that I still have after re-reading this entire thread, if someone wouldn't mind taking some time to answer:

 

1. I currently use a 1 TB WD Black cache drive for unRAID. I don't think I need all that space anymore and don't want to go SSD quite yet. Can I turn the drive into a data store, reserving half of it for use as the unRAID cache and the rest for other guest VMs?

 

2. I currently have 1 parity, 7 data, and 1 cache. Conventional wisdom has all of my mobo SATA ports in use (as they're fastest), with the rest on my MV8. Assuming I can do #1, the proper way for this configuration in ESXi would be to pass through the MV8 and use it for my 8 drives (minus cache), with the onboard ports going for non-unRAID-exclusive use, right? And when I want more drives I'm going to need to get a new SAS card?

 

Thanks! Super excited to join the ranks with you all.

Link to comment

Good morning.

 

1. Yes and no. What you can do is this: make the 1TB into your datastore. You could then create a 100-500GB (whatever size you feel you need) virtual drive in ESXi and assign it to unRAID as a drive. In unRAID you would then set it as the cache drive.
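For reference, a minimal sketch of carving that virtual drive out from the ESXi shell (the datastore, folder, and size here are just examples; you can also do the same thing from the vSphere Client when adding a hard disk to the VM):

# Make a folder for the cache disk on the shared datastore
mkdir /vmfs/volumes/datastore1/unraid-cache

# Create a 250GB thin-provisioned virtual disk to hand to unRAID as cache
vmkfstools -c 250G -d thin /vmfs/volumes/datastore1/unraid-cache/cache.vmdk

Then add cache.vmdk to the unRAID VM as an existing hard disk and assign it as the cache drive inside unRAID.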

 

The downside is that it might not be such a great idea if you have heavy disk I/O guests running on the datastore. If you have a Usenet or torrent downloader going, the cache would be fighting for the drive, slowing everything down.

 

On the other hand, if you just have a Linux guest or two doing nothing, it would be fine.

 

You could test it and look at your performance. If it is not acceptable, upgrade when you can or forgo the cache for now.
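If you want numbers rather than a feel, esxtop from the ESXi shell will show whether the guests and the cache are fighting over the spindle:

# Watch disk latency while the cache and the guests share the drive
esxtop
# press d for the disk adapter view, or u for per-device stats;
# DAVG/cmd (device latency) staying high while both are busy is a
# good sign the shared spindle is the bottleneck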

 

2. Yes.

That is the best way to do it. Once you get a second card, I would split the drives between the two controllers. You might feel a slight performance hit when running parity checks with all 8 drives on a single MV8. You have quite possibly reached the saturation point with 8 drives on the x4 controller. If so, it won't be a very large hit.
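Rough math, assuming the MV8 here is the AOC-SASLP-MV8 on a PCIe x4 gen 1 link: about 250MB/s per lane gives the slot roughly 1000MB/s, and 8 modern drives reading their fast outer tracks at 100-130MB/s each want 800-1040MB/s, so a parity check (which reads every drive at once) can sit right at the limit. Normal access to one or two drives at a time is nowhere near it.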

 

Day-to-day server use should still be the same speed.

 

 

Sent from my iPhone using Tapatalk

Link to comment


1. Yes, you can use the drive for a datastore and then some of the datastore as cache to unRAID. However, I am going to say that is not "best practice". Best practice is to pass through controller(s) to the unRAID VM and connect all the drives to those ports. This does prevent using those drives as a datastore, though.
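For anyone who hasn't done passthrough before, it goes roughly like this (ESXi 4.1/5.x with VT-d enabled in the BIOS; the grep pattern is just an example):

# Confirm ESXi sees the MV8 from the shell
lspci | grep -i marvell

Then in the vSphere Client: Configuration > Advanced Settings (under Hardware) > Configure Passthrough, tick the controller, reboot the host, and add it to the unRAID VM as a PCI device.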

 

2. Yes, but as described by John above, the onboard ports can also be used by unRAID (datastore vmdk or RDM); it's just not very pretty and not what you are looking for. A second controller is advised (MV8 or LSI).
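For completeness, the RDM route looks something like this from the ESXi shell (the disk identifier below is made up; list /vmfs/devices/disks to find your drive's real name):

# Find the raw device name of a drive on the onboard ports
ls /vmfs/devices/disks/

# Create a physical (pass-through) RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

Then attach disk1-rdm.vmdk to the unRAID VM as an existing hard disk.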

Link to comment


1. You will run the risk of bringing every guest on your ESXi server that is on that datastore to a halt every time you write to your unRAID box.

Link to comment

Got it.  Thanks everyone!  So I really should probably have a separate physical drive for every guest that will get regular use, it seems?  So I'll keep the black 1TB for unRAID cache only, use a new 2 TB green drive for vm backups, and get a couple 7200rpm drives for Win7 and another flavor of Linux?  Sound about right?  Guess spending more money was an inevitability!

Link to comment


An SSD (even a smallish one) for your main VM will go a long way.  I only have a 60GB SSD in my ESXi build, but I put Windows XP on it and it works a treat.  It is 10 times faster running from the SSD than it ever was when running XP on my MacBook Pro.

Link to comment


I agree, having XP or another environment running on the SSD makes it much more responsive.

Considering the speed, you can probably get away with a few VMs that do not have a lot of I/O on the SSD.

 

If you need a lot of data space in the Windows environment, I would put the main OS as a virtual disk on the SSD and attach a magnetic drive to the virtual machine (or use unRAID over the network).
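That split is just two virtual disks (the names and sizes below are examples):

# OS disk on the SSD datastore
vmkfstools -c 40G -d thin /vmfs/volumes/ssd-datastore/win7/win7-os.vmdk

# Bulk data disk on a magnetic datastore
vmkfstools -c 500G -d thin /vmfs/volumes/green2tb/win7/win7-data.vmdk

Attach both to the VM; inside Windows the data disk just shows up as a second drive to format.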

Link to comment


You will start to notice a performance slowdown after 3-4 average guests on a 7200rpm drive, or 1-2 guests if one is slamming the drive.  On an SSD you will probably be able to run as many guests as you have room for.

Link to comment

I didn't do SSD, though I considered it. I absolutely love my Vertex 3 for my desktop machine, but something about putting them in servers still makes me uncomfortable... never claimed it was rational :)

 

I have an LSI 9690 RAID card, so I put 4 Western Digital Black 10k RPM drives in a RAID 5 array and use that as my primary ESXi datastore. All of the drives used by the unRAID VM (3x2TB and 1x250GB cache) are direct-mapped via RDM. So far, I've been very happy with the performance.

 

-A

Link to comment
