fluisterben

Using cache to access VM images?


Somewhere in the forums I read that I 'should' use

/mnt/cache/domains/debian/vdisk1.img

rather than

/mnt/user/domains/debian/vdisk1.img

but why? The total cache storage is filling up. If I set it to read from the cache, what exactly does it do differently? Whenever something gets written to its disk from within the VM, doesn't it go through the cache either way?
 

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/debian/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
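For comparison, pointing the same disk definition directly at the cache device only changes the source path (a sketch based on the XML above; on Unraid, `/mnt/user/...` goes through the shfs/FUSE user-share layer, while `/mnt/cache/...` addresses the cache pool directly):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <!-- direct cache-pool path instead of the /mnt/user FUSE path -->
  <source file='/mnt/cache/domains/debian/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```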

 


bastl

"/mnt/user/domains/vmname/vdisk1.img" is the default and I don't know why you should change that. What is your main problem? A full cache drive? How big is the cache? A VM with a vdisk bigger than the cache will sooner or later run into problems and probably data loss. You know that, right?

 

For performance reasons it's preferred to have the VM vdisks sit on the cache drive, if it's an SSD/NVMe, or on an unassigned drive. By default the "domain" share sits on the cache drive, so far so good. Beyond that it's up to the user and the specific configuration how the VM is set up.

If you have a small cache SSD, let's say 128GB, and you use it for caching your shares, you should know how much data you put on the cache before the mover kicks in to transfer data to the array, or whether the share is cache-only. You can overprovision a vdisk so that to the VM it looks like a 128GB disk while it actually uses maybe only 30GB of space. As you fill up the vdisk from inside the VM, you use more and more space on the cache drive, up to the point where the space used by shares on the cache + Docker + the vdisk reaches the maximum possible, even if Windows thinks 50GB is still free inside the VM, and your VM will pause or crash. You should be aware of this.
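The overprovisioning described above can be reproduced with a sparse file on any Linux box (a minimal sketch; the file name is made up):

```shell
# create a sparse 128G "vdisk": the guest would see 128G,
# but the host allocates almost nothing yet
truncate -s 128G vdisk1.img

ls -lh vdisk1.img   # apparent size: 128G
du -h vdisk1.img    # actual on-disk usage: ~0 -- only written blocks consume cache space
```

The gap between the apparent size (`ls`) and the allocated size (`du`) is exactly the space the VM can still claim on the cache drive without Unraid warning you.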

fluisterben

7 hours ago, bastl said:

By default the "domain" share is sitting on the cache drive.

What do you mean by that? The domain share is set to "Prefer cache", and I'm not sure what that means for a VM image of, say, 32 GB. Will it then always reside on the cache SSD/NVMe? And again: isn't each write within the VM routed to the cache either way? Why would it perform better for the entire image to be stored there? Wouldn't that only save you a second or so at start/boot and at shutdown?

bastl

15 hours ago, fluisterben said:

What do you mean by that?

With "sitting on the cache drive" I mean that, depending on how the share is set up, the vdisk file is stored on either the cache, the array or an unassigned device. It can only sit on one underlying source, not mixed/split like the files of a share; it's a single file.

With "Prefer cache", as soon as the cache gets full or reaches a certain threshold, new writes go directly to the array. BUT this is only true for single files written to the share, not for data written inside a vdisk file. Let's say your cache has 10GB free and you transfer a 20GB file to a share: it will be written directly to the array. If instead you write that file inside the VM to its vdisk, the vdisk grows until the cache drive is full, and the copy process will either pause or abort, or the VM will crash completely. There is no way for Unraid to move part of the vdisk to the array to keep a couple of GB free on the cache. Unraid sees the vdisk as a single file and can't split it like "oh, those are some files on that vdisk the user doesn't use often, let's put them on the array and keep the rest on the fast cache". That's not how it works.
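The one-way growth described above is easy to demonstrate with a small sparse file (a sketch; sizes and names are arbitrary):

```shell
# a 1G sparse "vdisk" -- occupies ~0 bytes on the host at first
truncate -s 1G demo.img
du -h demo.img

# simulate the guest writing 100MB inside its disk
# (notrunc keeps the 1G apparent size)
dd if=/dev/zero of=demo.img bs=1M count=100 conv=notrunc
du -h demo.img   # the file now occupies ~100M on the host
```

The apparent size never changed, but the allocated size only ever grows as the guest writes; the host cannot hand part of the file to another device.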

 

15 hours ago, fluisterben said:

Isn't each write within the VM routed to cache either way?

No, it's not. If the vdisk is on an unassigned device or on the array, writes within that file go directly to the underlying storage, where you basically see the raw, or close to raw, performance of that storage device.

 

15 hours ago, fluisterben said:

Why would it perform better for the entire image to be stored there? Only at start/boot and at shutdown that would save you a second or something, not?

Depending on the use case of a VM, in most scenarios you should store the vdisk on the fastest storage possible. It's not only starting up the VM that benefits from it; starting programs and working with them heavily also sees a positive effect from the underlying storage.

 

Set up a VM with a vdisk on the array or a spinning drive and compare the performance with a VM vdisk on an SSD/NVMe device, and you will see huge performance differences in starting, using and working within that VM.
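A quick way to see that difference from the host is to time a flushed sequential write in each location (a crude sketch; on Unraid you would run it once under `/mnt/cache/...` and once under `/mnt/disk1/...` and compare; fio would give more meaningful numbers):

```shell
# write 256 MiB and force it to disk before dd reports; compare the
# elapsed time between the cache drive and an array disk
time dd if=/dev/zero of=testfile bs=1M count=256 conv=fsync
rm testfile
```

`conv=fsync` matters here: without it, dd only measures how fast the page cache absorbs the data, not the storage device itself.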

