Cache filling up and mover not pushing to drives



I have 8 drives, of which 2 are parity, plus another 500 GB SSD that is used only for caching. I recently created 2 VMs using a total of 3 TB of space, which I had set up to use a single drive from the array. As the VMs spun up, my cache drive filled to around 450 GB and the 'mover' won't free up the space.

 

I tried the Mover Tuning plugin and set lower trigger thresholds, manually clicked the Move button, and so on, but this issue keeps cropping up. Would love to hear from you on how to diagnose and solve this.

 

Thanks

 

Link to comment

The diagnostics show that you have 2 shares, with names of the form C——S and B——-p, which have files on both the cache and the array. However, they have a Use Cache=No setting, and if you read the help built into the GUI you will see that with the 'No' setting, files that get created on the cache for any reason but logically belong on the array will stay on the cache. You need a 'Yes' setting for mover to take any action on them.
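
If you want to check the current setting from the command line, each share's configuration is stored on the flash drive; something along these lines shows it for all shares (the shareUseCache key is the relevant one, though the exact file layout may differ between Unraid versions):

grep shareUseCache /boot/config/shares/*.cfg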

Link to comment

Used the terminal.

Browsed to /mnt/cache and found the following taking up the space:

 

371G    ./domains/Ubuntu-Umbrel
11G     ./domains/UbuntuDappnode
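
For reference, a du invocation along these lines (just one way to do it; adjust the depth to taste) produces that kind of per-directory summary:

cd /mnt/cache && du -h --max-depth=2 . | sort -hr | head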

 

Is there a way to make sure these pending files get moved to the array, or did I bungle something when setting this up?

 

Link to comment

The 2 shares I mentioned no longer exist on the cache, which is good. What share contains files that you want moved to the array where that does not appear to be happening? You could try turning on mover logging under Settings -> Scheduler to get more information, although you do not want to leave that option permanently enabled as it can be verbose and fill up your syslog.

 

If a file is open or already exists on the array it will not be moved.
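
If lsof is available on your system, you can check whether anything still holds a vdisk open before invoking mover, and with mover logging enabled its activity ends up in the syslog (the paths here are the ones from your post; the exact log format may vary by release):

lsof /mnt/cache/domains/Ubuntu-Umbrel/vdisk1.img
grep -i mover /var/log/syslog | tail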

Link to comment
1 minute ago, maverhick said:

Used the terminal.

Browsed to /mnt/cache and found the following taking up the space:

371G    ./domains/Ubuntu-Umbrel
11G     ./domains/UbuntuDappnode

Is there a way to make sure these pending files get moved to the array, or did I bungle something when setting this up?

 

These look like files for a VM? The VM must not be running if you want the files to be moved. There is also a plugin specifically targeting backing up VMs; it might be of use to you.

 

 

 

Link to comment
6 minutes ago, itimpi said:

 

These look like files for a VM? The VM must not be running if you want the files to be moved. There is also a plugin specifically targeting backing up VMs; it might be of use to you.

Yes, I had configured them to use only Disk 6.

 

These are the configured vdisk paths for the VMs:

 

/mnt/user/domains/UbuntuDappnode/vdisk1.img
/mnt/user/domains/Umbrel/vdisk1.img

Based on your post, since the VMs have to be stopped: can I stop the VMs and hit Move? Will the files then be moved to the array?

Secondly, I seem to have misconfigured (?) my VMs. I only wanted them to use Disk 6 as their allocated space. How do I make sure that the VMs are only using Disk 6 and not the cache? Is there a way to configure it?
 

Link to comment

The domains share is not restricted to being on disk6; in fact, at the moment it exists on disk2 and the cache. If you want to restrict it then you need to set the Include option for the share to only have disk6 listed. Note that changing the setting now will not automatically move the existing files, as the setting only applies to where new files are placed.
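
You can confirm for yourself which drives currently hold the share by listing the per-disk mount points (disk numbers as on your system):

ls -ld /mnt/disk*/domains /mnt/cache/domains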

 

Those are large files, so it might be faster to move them manually to disk6 rather than relying on mover to do it?
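
One possible way to do that manually, assuming the paths from earlier in the thread and with the VMs stopped first; the -S flag keeps sparse vdisk files sparse, and verify the copy before deleting anything:

rsync -avhS /mnt/cache/domains/ /mnt/disk6/domains/
rm -r /mnt/cache/domains      # only after checking the copy arrived intact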
 

Most people WANT their VM vdisk files to be on cache as performance when they are on the array is much lower due to the overheads of keeping parity updated.

Link to comment

So this is where I get confused about how Unraid works. It looks like I misconfigured the drives, but say the share is currently set to Auto and Unraid decides it is to be on Disk 2: why does it let the VM fill up the entire cache and not do anything about it?

 

My rationale for using a single disk for these 2 VMs is that I was planning to test out some crypto nodes, ETH and BTC, totalling over 1 TB. If the SSD is only 500 GB, wouldn't it make sense to just write to a single disk? I was OK with that because I wasn't looking for performance. But it looks like a better idea would be to move these nodes off the array entirely, since they don't need to be backed up.
 

Link to comment
2 hours ago, maverhick said:

why does it let the VM fill up the entire cache and not do anything about it

There are a number of factors you have to consider:

  • Unraid never automatically moves files between array drives.
  • Any individual file must fit on a single drive; it cannot be split between drives.
  • A vdisk is a single file as far as Unraid is concerned. It does not know that the guest OS inside the VM has stored multiple files inside the vdisk.
  • If a file already exists then it is used in its current position. If it is growing in size, this can cause out-of-space errors on the drive holding the file.
  • The mover application ignores files that are open, so if a VM is running its vdisk files cannot be moved.
Link to comment
2 hours ago, maverhick said:

Thanks @itimpi

  • Since I created a 2 TB vdisk image and the cache SSD is only 500 GB, how was Unraid going to make this work?

 

 


Unraid is not going to 'handle' that, and it is later going to cause you problems if you leave it on that SSD.
 

A vdisk image is created as a Linux 'sparse' file, which means the physical space used is only what has actually been written to the file, but it can grow up to the 2 TB you created it as while the VM continues to write to it. The VM will stop working correctly once the free space on the SSD is exhausted. To avoid such issues, the vdisk file needs to be moved to a disk which does have 2 TB free.
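
You can see the difference between the apparent size and the space actually allocated; for example, using one of the paths from your post:

ls -lh /mnt/user/domains/UbuntuDappnode/vdisk1.img   # apparent (sparse) size
du -h  /mnt/user/domains/UbuntuDappnode/vdisk1.img   # space actually allocated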

Link to comment
1 hour ago, trurl said:

Do you really need a vdisk that large? Your VMs can access Unraid storage.

 

Gosh, I thought the vdisk I set up was going to be the maximum accessible space for the VMs. I didn't know there was a way to let the VMs just use Unraid storage and grow as needed. What should the size of the vdisk be, and should it be on the cache (if I am running crypto nodes or thereabouts)?
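
If it helps anyone following along, I gather that mounting an Unraid share from inside an Ubuntu guest over SMB would look something like this (server name, share name and user are placeholders):

sudo apt install cifs-utils
sudo mount -t cifs //TOWER/data /mnt/unraid -o username=youruser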

 

Link to comment
