Vdisks for VMs



Hello.

 

Could anyone please advise on using SSDs for setting up VMs? Specifically, is it necessary to use a separate SSD for each VM? I'm sure I've read somewhere that it's advisable, but that starts to get expensive by the time you've set up 5 or 6 VMs. :$ Is there a problem with, for example, putting four 50GB VMs onto a 250GB SSD?

 

Finally, are there any advantages to the placement of the SSD within Unraid? Once again, I'm sure I've read that VMs perform best on an SSD set up as a cache drive. The problem I see is that I already have two conventional HDs set up as cache. If I add an SSD to the cache pool, how do I assign that particular cache drive to the VM? Can it even be done? I am assuming that VMs running outside of the array (cache or Unassigned Devices) run faster because no parity is involved when writing data to the vdisk. Is that correct?

I notice that quite a few people are using their SSDs for VMs, but outside of the array using Unassigned Devices. Is there any specific reason for that? Doesn't that mean there is no parity protection?


You can run multiple disk images for separate VMs on a single SSD. Or you can pass through the entire SSD to a single VM. I believe you can also partition an SSD and pass through a partition to a VM.
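
If you go the multiple-images route, creating the vdisks is straightforward with qemu-img. A minimal sketch, assuming an example path on the SSD (adjust to wherever yours is mounted):

# create a 50GB qcow2 vdisk for one VM (example path)
qemu-img create -f qcow2 /mnt/disks/ssd/domains/win10/vdisk1.qcow2 50G

Worth noting that qcow2 images are thin-provisioned, so they only consume space as the guest actually writes data.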

 

You'll never fit 5 x 50GB images on a 250GB SSD, since it's not really 250GB. But you can put on as many as the space allows.

 

SSDs should not be part of the array in unRaid. You can use them as a cache disk and run VMs from there, or you can mount them via Unassigned Devices and run the VMs from there too. There isn't much performance gain one way or the other without tweaking. If you run two SSDs of the same size as cache, you get a mirrored (RAID 1) pool, so if one fails you don't lose the images. But if you are running dockers that use the cache, then you are sharing disk I/O with them. If you choose to run VMs on mounted unassigned devices, they don't share cache I/O, but they are not backed up, so if the disk holding them fails, the image is gone unless you back it up on your own.
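
You can check which profile your pool is actually using, assuming a btrfs-formatted cache pool at the standard /mnt/cache mount point:

btrfs filesystem df /mnt/cache
# look for "Data, RAID1" and "Metadata, RAID1" in the output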

 

I currently run all my VM images via Unassigned Devices so I don't share cache I/O with large file transfers and docker usage. I back my VMs up manually. But in the future I'll buy a few more drives, set up a RAID 10 cache configuration, and put everything back on that. Better speed and redundancy.
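
The manual backup doesn't need to be anything fancy. Something like this, run while the VMs are shut down (paths are just examples):

rsync -a --progress /mnt/disks/ssd_vms/ /mnt/user/backups/vms/
# -a preserves permissions/timestamps, --progress shows transfer status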

 

 

 

 


Thanks for that, makes sense, except 5 x 50GB VMs on a 250GB SSD, I did say 4. :D

 

I wish I had the luxury of an SSD cache. I use 2 x 1TB Samsung conventional HDs mirrored for redundancy. The cost of SSDs at those sizes is extortionate. As much as I like my gadgets, I can't justify that. I've got 4 x 250GB Evo 850s lying about. Is it possible to stripe them in pairs and then mirror the two pairs to give 500GB? I know you can choose your RAID config in cache, but I think that's only all drives together. Not sure if you can mix and match RAID within the cache pool.

 

Yes, with the vdisk for my VMs on an SSD in my array, as long as the VM is running, the parity drives keep spinning too. Right, so I need to move the disk out of the array, since I cannot place it in the cache pool and specify that particular disk? I'm guessing I back up my vdisk image, unassign the drive from the array, let the parity check complete, then mount the now-unassigned drive and alter the pointers in the VM template?
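
In other words, something like this for the backup step first (example paths):

# copy the vdisk somewhere safe before touching the array
cp /mnt/user/domains/Win10VM/vdisk1.qcow2 /mnt/user/backups/vdisk1.qcow2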

 

Is that correct?

 

Thanks.

1 hour ago, Jetjockey said:

Thanks for that, makes sense, except 5 x 50GB VMs on a 250GB SSD, I did say 4. :D

 

eh, close enough....

 

1 hour ago, Jetjockey said:

I wish I had the luxury of an SSD cache. I use 2 x 1TB Samsung conventional HDs mirrored for redundancy. The cost of SSDs at those sizes is extortionate. As much as I like my gadgets, I can't justify that. I've got 4 x 250GB Evo 850s lying about. Is it possible to stripe them in pairs and then mirror the two pairs to give 500GB? I know you can choose your RAID config in cache, but I think that's only all drives together. Not sure if you can mix and match RAID within the cache pool.

 

From what I recall, these are your options (someone else can correct me if I'm wrong): leave them as a normal pool and get 500GB of mirrored drives (not striped), a RAID 0 of 1TB, or a RAID 10, giving you 500GB of redundant striped storage. If you search around a bit, you can find the minor modifications to the cache pool that need to be made to make the magic happen.
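
If I remember right, the "magic" is just a btrfs balance run against the pool. A sketch, assuming a btrfs cache pool at /mnt/cache with all four SSDs assigned to it:

btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
# converts data (-dconvert) and metadata (-mconvert) to RAID 10 across the pool members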

 

1 hour ago, Jetjockey said:

Yes, with the vdisk for my VMs on an SSD in my array, as long as the VM is running, the parity drives keep spinning too. Right, so I need to move the disk out of the array, since I cannot place it in the cache pool and specify that particular disk? I'm guessing I back up my vdisk image, unassign the drive from the array, let the parity check complete, then mount the now-unassigned drive and alter the pointers in the VM template?

 

There are procedures for removing a disk and shrinking an array; otherwise unRaid will tell you the array configuration is not right. You can search around on the forum for that procedure too.

 


Right, shrunk the array as per the unRaid procedure and let the parity rebuild. Excellent!

Mounted the previously unassigned SSD using Unassigned Devices and altered the Win 10 VM template to point to the new location of the vdisk image.

No luck! When I restart the VM, it wants to reinstall Win 10. It either cannot see, or doesn't like, the new vdisk location.

Anybody any ideas?

 

Thanks.


Sounds like you have done everything correctly.

Only piping up as I too have my VMs' disks sitting on an SSD outside the array, using the Unassigned Devices plugin, with no issues.

 

When you have modified the VM's XML, it should look similar to this:

 

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/disks/SSD/VMs/Win10VM/vdisk1.qcow2'/>
  <backingStore/>
  <target dev='hdc' bus='virtio'/>
  <boot order='1'/>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>

 

So the source file starts with /mnt/disks (this is the mount path that the Unassigned Devices plugin uses).

Also ensure the boot order is 1, or, if you still have an ISO attached, remove it.

Lastly, you could also try deleting the address type line (the last line inside the disk element above); it will be automatically re-entered the next time you start the VM, as KVM allocates it. I only mention this because I've had occasions where I've edited properties of the VM and needed to refresh it, when realistically, if you're not changing the VM's hardware details, you shouldn't have to.
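
You can also sanity-check from the command line that the VM is actually pointed at an image that exists (the VM name below is just an example):

virsh domblklist Win10VM
# lists each disk target and the source file it points to
qemu-img info /mnt/disks/SSD/VMs/Win10VM/vdisk1.qcow2
# confirms the image is present and readable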

Hope you get it running ok.


Hello.

 

Thanks for that advice, you're probably correct that your solution above would have worked. I thought I had already tried that without success, but I probably missed something. Anyway, I reinstalled Win 10 and it all works fine on the SSD in Unassigned Devices. Now the parity disks no longer spin up when I fire up the Win 10 VM. :D

 

I'm having mixed success with the whole VM thing. My Win 10 VM works extremely well, very quick and responsive. I'm going to try adding another sound card to get audio other than through the HDMI socket.

BTW, my Linux Mint VM is a different story: it's much slower, with lag on the mouse. I installed the latest Nvidia drivers for the graphics card and that completely crashed the Linux VM, requiring a fresh install of the OS. I'm a bit disappointed, as it sort of defeats the object of using unRaid to run VMs. I don't mind some loss of speed compared to a standalone machine, but not a 20% loss of performance.

13 hours ago, 1812 said:

Without knowing the exact details of your server, resource assignments, etc., it is hard to magically diagnose your Linux VM problems. You should also search the forums for other people who have had performance issues using Linux VMs and see what they did.

 

 

I agree. I should perhaps have added that I've allocated exactly the same resources to my Linux Mint VM as I have to my Win10 VM.

Win10 runs very nicely whilst Linux Mint is not so good. Obviously I don't run both VMs at the same time.

 

