
RAID or other options for storing VMware machine files


NAS

Recommended Posts

I have a PC that runs VMware. It is not a super-critical machine, so it is just a higher-end PC, not a noisy server.

 

It has about 300GB of VMware virtual machines on it, of which 3-6 run full time.

 

Disk speed is the bottleneck.

 

An SSD isn't an option, as it costs too much for the 300-500GB of space I need.

 

I am pondering using the onboard RAID or a cheap PCIe RAID card. I guess the issue I have is not throughput but disk access and seek times. What RAID level would people recommend, or what other ideas are there?

 

I have several spare WD SATA2 500GB disks sitting on a shelf.

 

I have done some research and have drawn some conclusions, but I don't want to salt the idea pool just yet :)

Link to comment


 

Incredibly difficult to gauge, as it depends on the number of disks, your read/write balance, the raw amount of storage you require, price, expandability, etc.

 

Finger in the air, I would say RAID 10 for IOPS, at the expense of raw storage.
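
A rough back-of-the-envelope illustration, assuming four of the spare 500GB drives go into the array:

4 x 500GB = 2,000GB raw  ->  ~1,000GB usable in RAID 10 (half lost to mirroring)
4 x 500GB = 2,000GB raw  ->  ~1,500GB usable in RAID 5 (one disk's worth lost to parity)

RAID 10 writes only touch the two disks in a mirror pair, and reads can be served by either half of each pair, so random IOPS scale with spindle count instead of being throttled by parity read-modify-write.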

 

I've always found VMware awful for disk I/O, even when sat on top of decent SANs with lots of spindles. People do seem to get good performance out of it, but I have no idea how!

Link to comment

About 80% of the time, the performance of running 3 VMs from a single SATA2 drive is adequate for me. I don't need blistering performance. Then, seemingly at random, about 10% of the time I hit a tipping point where the HDD thrashes and performance becomes poor.

 

If I do lots of disk I/O with a couple of machines (lots of file moves, say), the remaining 10% becomes bloody horrible.

 

I really don't need much, other than to take the bad 20% of the time and make it adequate.

 

What is adequate... hmm... slow would do, just not a dead stop. The problem is I don't have the budget for a good RAID card that would support RAID 10... or at least I can't seem to find one at less than silly cost.

Link to comment
What is adequate... hmm... slow would do, just not a dead stop. The problem is I don't have the budget for a good RAID card that would support RAID 10... or at least I can't seem to find one at less than silly cost.

 

I bet Linux software RAID 10 would suffice.
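
A minimal sketch of what that could look like with mdadm, assuming four spare drives that show up as /dev/sdb through /dev/sde (device names and mount point are illustrative only):

# Create a 4-disk Linux software RAID 10 array
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Watch the initial sync, then put a filesystem on it for the VM images
cat /proc/mdstat
mkfs.ext3 /dev/md0
mount /dev/md0 /vmstore

# Record the array so it assembles on boot (config file path varies by distro)
mdadm --detail --scan >> /etc/mdadm.conf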

 

To really boost performance you will need a "decent" RAID card that has caching.

A 4-port AMCC/3ware card with battery backup will do really well.

 

My friend's hosting company sells virtual machines with the images stored on RAID 10, cached with battery backup.

He gets exemplary performance.

 

Also, is there swapping going on?

If so, how much? I use a Gigabyte i-RAM for my swap space.

 

From what I've read, there are some new RAM-based disk devices coming out soon which could also help with temporary storage.

 

For my most-used virtual machines, I plan to move those VMware images to RAID 0 SSDs or one of the larger RAM-based disk devices, then use rsync to back them up to the platter-based drives.
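
For the rsync part, a one-liner along these lines would do; the paths are made up for illustration, and the VMs should be shut down (or snapshotted) first so the copies are consistent:

# Weekly copy of the VM images from the fast storage to a platter-based backup drive
rsync -av --sparse --delete /vmstore/ /mnt/backup/vmstore/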

Link to comment

All of these solutions seem to be dumping/avoiding unRAID and adding yet more complexity. Surely write performance is the real problem, especially if two images on different disks are being written at the same time: parity contention. You might try putting both VM images on the same drive so that you don't end up with two drives waiting on the parity drive.

 

The easiest solution, which would give up parity but speed things up, would be to use a disk outside the array.

 

For a 20% problem, it seems like a very expensive/complex solution.

 

 

Link to comment

I did not see where this setup was being used on unRAID.

 

Also, it was posted: "About 80% of the time, the performance of running 3 VMs from a single SATA2 drive is adequate for me."

 

The issue is, only so much I/O can occur to one disk. In addition, it is virtualized.

 

RAID 1 may improve reads (even Linux software RAID 1 improves read speeds).

RAID 0 improves reads and writes, at the expense of redundancy.
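
If you want a quick-and-dirty sanity check of the difference before committing, hdparm's buffered read test on a single disk versus the md array gives a rough idea (sequential reads only; random I/O is what actually hurts the VMs, so treat the numbers as indicative, and device names are examples):

# Buffered sequential read timing on one plain disk and on the software RAID device
hdparm -t /dev/sdb
hdparm -t /dev/md0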

Link to comment

Excellent tips there.

 

RAID 10 seems the way forward; I will look into costing it. I think I will also experiment with using more than one disk for the virtual machines, without any RAID at all. I back up the machines weekly with rsync anyway, and that will suffice backup-wise.

 

Just FYI, this has nothing to do with unRAID in the slightest; it doesn't use it at all, other than dumping files there occasionally. :) I only posted here because I know the community knows about disks and VMware.

 

 

Link to comment

What about swapping? Have you looked into that possibility?

 

I know that when my machine starts swapping, I feel it. It's even worse with swapping inside the virtual machine.

For example, with the XP machine, when it starts to use the pagefile the environment really drags.

 

I would put in as much RAM as is cost-effectively possible and adjust the allocation for VMware and each virtual machine.

If you must swap, then put the swap on a RAID 0 array.

 

If your VMware host is on Linux, you can divide up the new drives, putting a RAID 0 for swap across multiple drives and a RAID 10 across multiple drives.
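
A minimal sketch of that split on Linux, assuming a small partition is set aside on each new drive for swap and the rest goes to the RAID 10 (partition names are illustrative):

# Stripe one small partition from each drive into a RAID 0 device and use it as swap
mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkswap /dev/md1
swapon /dev/md1

# The remaining partitions (sdb2 ... sde2) can then form the RAID 10 for the VM images
mdadm --create /dev/md2 --level=10 --raid-devices=4 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2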

Link to comment

The host runs XP 64 and has 8GB of RAM.

 

So, Windows then.

 

In that case, your problem is Windows swapping.

 

See, you can't really disable Windows swapping.

You could go to the XP Control Panel and "disable" swapping there,

but if you do that, then Windows starts swapping behind your back. Fact.

 

The only way to deal with that is to keep Windows swapping enabled, but point it at some kind of RAM disk.

(Yes, I know how ridiculous the idea sounds at first. :) )

 

One way to do it is what WeeboTech has done:

I use a Gigabyte i-RAM for my swap space.

Brilliant. Although it requires spending some money on the Gigabyte i-RAM.

 

Another way is what I have done: my XP (32-bit) swaps on a 1GB RAM disk which I have created outside the 3GB visible to my 32-bit XP.

It makes a BIG difference. My virtual machines are flying.

 

Purko

 

 

Link to comment

Another thing to consider...

 

Some virtualization software gives you the choice between sparse virtual disks and fixed-size virtual disks.

There is some (though small) speed penalty for using sparse disks.

So if speed is your top priority, create only fixed-size virtual disks.
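
For VMware specifically, the bundled vmware-vdiskmanager tool can convert an existing growable (sparse) image to a preallocated one, or create new disks preallocated from the start. A sketch, assuming the disk type numbers (2 = preallocated single file) are the same in your version, with filenames made up for illustration:

# Convert a growable disk to a preallocated single-file disk (VM must be powered off)
vmware-vdiskmanager -r sparse-disk.vmdk -t 2 preallocated-disk.vmdk

# Or create a new 40GB preallocated disk from the start
vmware-vdiskmanager -c -s 40GB -a lsilogic -t 2 newdisk.vmdk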

 

 

Link to comment

Another way is what I have done: my XP (32-bit) swaps on a 1GB RAM disk which I have created outside the 3GB visible to my 32-bit XP.

It makes a BIG difference. My virtual machines are flying.

 

I've read about this possibility; however, I've not seen how it is done. Do you have links?

Link to comment


I've read about this possibility; however, I've not seen how it is done. Do you have links?

 

I use SuperSpeed's RamDisk Plus for that.

 

Now, one thing to keep in mind (that's not in the manuals) if you go that route:

If you use hibernation, then make sure you set up your RAM disk within the RAM that's visible to Windows!

You can guess why.

 

I've relocated to my RAM disk not just the Windows swap file, but also all kinds of Windows temp crap (via regedit).

It's sweet!

 

Purko

 

 

Link to comment

