Multiple VMs off one SSD



Hey, just curious how people are setting up their VMs with regard to the image files.

 

I currently have two SSDs outside my array, mounted with SNAP, and I have one VM per SSD. Is anyone running multiple VMs off a single SSD? How is the performance? Is anyone running VMs off their array? I would expect a major performance hit from having to maintain parity and, obviously, from working off an HDD, since SSDs aren't supported in the array yet.


Sure. My signature below shows which VMs are on each drive. I only use my VMs for boring daily stuff, but performance on the SSD is just fine. I still get around 100 MB/s transfer speed to my unRAID server, and my normal use of the Win 8 VM is very fast even with both pfSense and Ubuntu running at the same time. My other two VMs are on an HDD, and I usually have only one running at a time; when both are on at once it is noticeably slower, but then again the HDD is an old, slow one...


Hey, just curious how people are setting up their VMs with regard to the image files.

 

I currently have two SSDs outside my array, mounted with SNAP, and I have one VM per SSD. Is anyone running multiple VMs off a single SSD? How is the performance? Is anyone running VMs off their array? I would expect a major performance hit from having to maintain parity and, obviously, from working off an HDD, since SSDs aren't supported in the array yet.

 

Running vdisks off the array isn't actually too bad, except during OS installation. If done during installation, the install will take a LONG time, as it is bottlenecked by write performance. Post-install, VM performance is still not great, but it really depends on what you're doing with the VM. Localized virtual desktops (where you pass through a graphics device and attach a monitor, mouse, and keyboard to the VM) will benefit greatly from their vdisks living in an SSD-backed cache pool.


Cool. I'm looking to get rid of my main HTPC VM and go with a straight-up Nvidia Shield Android TV front end running Kodi. Then I'll put my Windows 7 WMC server VM on my gaming rig's VM SSD and point Recorded TV at a separate SSD (previously occupied by the HTPC VM), as that's where the heavy writes will happen. I don't think putting the tiny 25GB Win7 vdisk with my gaming rig will affect gaming or TV performance, as long as the Recorded TV virtual disk is on a separate SSD. Thanks for the input, guys!

 

Any recommendations on qcow2 vs raw for the Recorded TV virtual disk?
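
For reference, the two options I'm comparing would be created like this with qemu-img (the path and size are just placeholders). From what I've read, raw carries the least overhead, which suits sustained recording writes, while qcow2 grows on demand and supports snapshots at some write-performance cost:

    # Raw image: least overhead; created sparse, so it only uses space as the VM writes
    qemu-img create -f raw /mnt/disks/recordings/vdisk1.img 200G

    # qcow2 image: allocates on demand and supports snapshots, at some write cost
    qemu-img create -f qcow2 /mnt/disks/recordings/vdisk1.qcow2 200G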


I'm using a 3x SSD BTRFS cache pool for my VM storage.  No issues at all.

 

You're using my setup. What kind of SSDs do you have? I have 2 x SanDisk 512GB and 1 x Corsair 256GB.

 

3x 128GB Kingston SSDNow

So usable capacity of 256GB?

 

Correct.


I was concerned about creating a cache pool, but you guys are easing my mind on it.  I have a 500GB Samsung PRO Evo 4, a 120GB Corsair, and a 120GB Silicon Power.  Would this limit me to a cache pool of only 240GB?

I found this btrfs disk usage calculator.

 

Set the preset RAID level to RAID-1, set the number and size of your devices, and it tells you the total space available for files and how much space would be lost.

[Screenshot: btrfs disk usage calculator]
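
The RAID-1 math also explains the 240GB: btrfs mirrors every chunk on two devices, so the 500GB drive can only pair against the combined 240GB of the two smaller ones. For a sanity check from the command line, btrfs reports the same numbers on an existing pool (assuming the stock unRAID cache mount point):

    # Per-profile allocation plus an estimate of remaining usable space
    btrfs filesystem usage /mnt/cache

    # Per-device totals for the pool
    btrfs filesystem show /mnt/cache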

  • 3 years later...
On 6/2/2015 at 12:53 PM, reluctantflux said:

Hey, just curious how people are setting up their VMs with regard to the image files.

 

I currently have two SSDs outside my array, mounted with SNAP, and I have one VM per SSD. Is anyone running multiple VMs off a single SSD? How is the performance? Is anyone running VMs off their array? I would expect a major performance hit from having to maintain parity and, obviously, from working off an HDD, since SSDs aren't supported in the array yet.

I'm pulling my hair out trying to figure out how to host multiple VMs on a single unassigned SSD. Each time I try to create a VM, it wants to use the entire disk, leaving nothing available for other VMs. Can anyone point me in the right direction?

8 minutes ago, ImBadAtThis said:

I'm pulling my hair out trying to figure out how to host multiple VMs on a single unassigned SSD. Each time I try to create a VM, it wants to use the entire disk, leaving nothing available for other VMs. Can anyone point me in the right direction?

The simplest method for you is probably to format the drive using the destructive mode of UD (Unassigned Devices) and use the space for as many vdisk files as you wish.

4 minutes ago, jonathanm said:

The simplest method for you is probably to format the drive using the destructive mode of UD (Unassigned Devices) and use the space for as many vdisk files as you wish.

That's what I've done; however, when I manually set my vdisks to be on my unassigned device, the vdisk size option goes away, so the vdisk just occupies the entire unassigned device instead of a smaller partition.

 

I guess my question is whether space is actually dynamically allocated to a vdisk in this situation, or whether a single vdisk will take the whole 250GB to itself and prevent any other vdisks from being assigned to that drive?

2 hours ago, ImBadAtThis said:

That's what I've done; however, when I manually set my vdisks to be on my unassigned device, the vdisk size option goes away, so the vdisk just occupies the entire unassigned device instead of a smaller partition.

 

I guess my question is whether space is actually dynamically allocated to a vdisk in this situation, or whether a single vdisk will take the whole 250GB to itself and prevent any other vdisks from being assigned to that drive?

No, what you are doing is giving the entire unpartitioned drive to the VM. You need to format the disk and mount it so it has a path in /mnt/disks, then you can assign whatever size vdisk you want and point it at /mnt/disks/mountpoint/VM/vdisk1.img. Assuming you use a format type for the device that supports sparse files, the actual space occupied by the vdisk will only be what the VM actually allocates, although it will appear to be whatever size you told it. You could tell it to put four 200GB vdisks in 250GB of space, and it will work until some combination of VMs tries to use more than 250GB in total, at which point one or more vdisk files will become corrupt.
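
To see the sparse behavior for yourself, a quick sketch (the mount point and size are placeholders):

    # Create a sparse 200GB raw vdisk on the mounted device
    qemu-img create -f raw /mnt/disks/ssd1/VM/vdisk1.img 200G

    # Apparent size: the full 200GB the VM will see
    ls -lh /mnt/disks/ssd1/VM/vdisk1.img

    # Space actually allocated on the SSD: near zero until the VM writes data
    du -h /mnt/disks/ssd1/VM/vdisk1.img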

1 hour ago, jonathanm said:

No, what you are doing is giving the entire unpartitioned drive to the VM. You need to format the disk and mount it so it has a path in /mnt/disks, then you can assign whatever size vdisk you want and point it at /mnt/disks/mountpoint/VM/vdisk1.img. Assuming you use a format type for the device that supports sparse files, the actual space occupied by the vdisk will only be what the VM actually allocates, although it will appear to be whatever size you told it. You could tell it to put four 200GB vdisks in 250GB of space, and it will work until some combination of VMs tries to use more than 250GB in total, at which point one or more vdisk files will become corrupt.

OK. So I guess what I need to do is figure out how to partition the drive. The disk is formatted as XFS, but I'm still not given the option to choose a vdisk size. This is the manual location I'm typing into the vdisk location option:

 

/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSBF882120V

16 minutes ago, ImBadAtThis said:

This is the manual location I'm typing into the vdisk location option:

 

/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSBF882120V

 

1 hour ago, jonathanm said:

mount it so it has a path in /mnt/disks, then you can assign whatever size vdisk you want and point it at /mnt/disks/mountpoint/VM/vdisk1.img.

 


I have 2x 512GB 960 EVOs in a RAID-0 cache pool for unRAID, and I run the VMs with a raw image, the SCSI driver, and discard set to unmap... This keeps the files as small as possible, which in turn lets you keep quite a few images on the same drive; it only becomes an issue when multiple VMs are reading/writing their images at the same time... mostly during VM boot-up... Windows also has to have the virtio SCSI drivers installed during install...

    <disk type='file' device='disk'>
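      <!-- raw format + discard='unmap': guest TRIM requests punch holes in the sparse image so it stays small on disk -->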
      <driver name='qemu' type='raw' cache='writethrough' discard='unmap'/>
      <source file='MainDrvWin10.SCSI.raw.img'/>
      <target dev='hdc' bus='scsi'/>
    </disk>
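
One way to confirm the unmap path is actually reclaiming space (a sketch; the image name comes from the config above, and the trim command assumes a Linux guest, since Windows does the same via Optimize Drives):

    # Inside the guest: release unused blocks back to the host
    fstrim -av

    # On the unRAID host: apparent size vs. space actually allocated
    ls -lh MainDrvWin10.SCSI.raw.img
    du -h MainDrvWin10.SCSI.raw.img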

 

  • 2 years later...

Hi community,

I have a couple of questions for you.

I'm running the latest "next" version of Unraid (6.9.0-rc2), and I took the opportunity to create a second pool, into which I put an NVMe disk (1TB).

The idea was to put my VMs on it, and I created the first one, per the attached edit screenshot. It works.

If I understand correctly, if I leave the Primary vDisk Location on AUTO, it's going to put the vdisk in the domains share, which should be on the cache (/mnt/user/domains). Is that correct?

My idea, instead, was to leave the cache free for its classic purpose (moving files to the array), so instead of AUTO I chose manual and entered a different location: /mnt/vm/Windows 10/vdisk1.img (where vm is the name of the second pool).

Now, when I want to create a second (or any further) VM, I keep the same path but write vdisk2.img so as not to overwrite the first one. When I try to save it, I get the attached error and have no way to create the second VM.

Where am I going wrong? What is the right way to have multiple VMs on this NVMe disk?
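
For reference, this is what I expect to be able to do by hand from the console (just a sketch; the 100G size is arbitrary, and I'm assuming the second pool is mounted at /mnt/vm as above):

    # The first vdisk already lives here
    ls "/mnt/vm/Windows 10/"

    # Create the second vdisk alongside it without overwriting the first
    qemu-img create -f raw "/mnt/vm/Windows 10/vdisk2.img" 100G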

 

Apologies for these basic questions, but I'm a beginner with Unraid.

 

Thanks to whoever wants to help me understand this better and find the solution.

 

BR

 

 

 

[Attachment: VM edit.jpg]

[Attachment: screenshot.1.jpg]

  • 4 months later...
On 6/2/2015 at 9:56 PM, dlandon said:

I have three VMs running off one 512GB SSD.  They are noted in my signature.

 

I don't see any performance issues with any of them.

 

How did you do that? I tried to install Windows 10 and Ubuntu Linux on the same SSD on different partitions with /dev/disk/by-id/ata-NameOfSSD, but that didn't work. My plan now is to split my 1TB SSD into four partitions of the same size, one partition per VM, but I don't know how to do this.
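
This is roughly how I was thinking of doing the split (just a sketch; /dev/sdX is a placeholder, not my real device), although from the earlier replies it sounds like one filesystem holding several vdisk files may be simpler than separate partitions:

    # Create a GPT label and four equal partitions (this destroys existing data!)
    parted --script /dev/sdX mklabel gpt \
      mkpart vm1 0% 25% \
      mkpart vm2 25% 50% \
      mkpart vm3 50% 75% \
      mkpart vm4 75% 100%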

 

Hopefully you can help me. Thanks!!

