Mover not running - even when invoked


I am having an issue where my cache pool is 90% full. I do not have mover tuning plugin installed. Even when I manually invoke the mover, it does not run. What could be the issue here? I've restarted the server a couple of times but that didn't help at all. 

 

I had no issues with the mover in prior Unraid OS versions.

 

Does anyone else have this issue? 

 

I'm running Unraid Version: 6.12.0-rc3

Link to comment
8 minutes ago, Frank1940 said:

In a new post, attach your diagnostics file and the name(s) of the share(s) that are being moved. 

Shares being moved:
appdata
domains
isos
MovieMedia
Shows
system
temptranscodes
 

Just FYI, I changed some shares that were using cache to not use cache a couple of days ago.

unraidbeast-diagnostics-20230424-1935.zip

Edited by UnBeastRaid
Link to comment

I went through your shares and selected only those that you are using the cache for.   That list is below:
 

appdata                           shareUseCache="only"    # Share exists on cache, disk1, disk2, disk3
domains                           shareUseCache="yes"     # Share exists on cache, disk1, disk2, disk3, disk4
isos                              shareUseCache="yes"     # Share exists on cache, disk1, disk2, disk3, disk5
system                            shareUseCache="yes"     # Share exists on cache, disk1, disk3
t------------s                    shareUseCache="prefer"  # Share exists on cache

 

The remaining shares are currently set up not to use the cache drive.

 

Two of these shares will never be moved off of the cache drive--  appdata  and t------------s.  

 

"prefer"    indicates that all new files and subdirectories should be written to the Cache disk/pool, provided enough free space exists on the Cache disk/pool. If there is insufficient space on the Cache disk/pool, then new files and directories are created on the array. When the mover is invoked, files and subdirectories are transferred off the array and onto the Cache disk/pool.     

 

"only"    indicates that all new files and subdirectories must be written to the Cache disk/pool. If there is insufficient free space on the Cache disk/pool, create operations will fail with out of space status. Mover will take no action so any existing files for this share that are on the array are left there.

 

The remaining shares are all set to "no", and one of the conditions for that setting is this:  "Mover will take no action so any existing files for this share that are on the cache are left there."  This means that if you changed the Use cache pool setting from "Yes" to "No" without running Mover first, the files will remain on the cache pool forever.  (You can fix this by changing the setting back to "Yes" and running mover.  This assumes that there is enough space available on the array to store the files.  Double check that you have not limited any of the shares to using only certain disks in the array that might be full.)
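
To confirm there is room on the array before doing that, a quick look from a terminal at the free space on each data disk will tell you (standard Linux, nothing Unraid-specific):

df -h /mnt/disk*      # free space on each array data disk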

 

You can see the shares/files on the Pool Device by clicking the icon at the end of the cache pool entry on the Main tab. 
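
If you are more comfortable in a terminal, roughly the same information is available with this (it assumes the pool is named "cache"; adjust the path if yours is named something else):

du -sh /mnt/cache/*      # space each share is still using on the pool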

Link to comment

One thing I forgot.  Mover will never move a file from the cache drive if that operation would overwrite a file on the array.  This condition does not occur normally, but Dockers (and, perhaps, VMs) can write directly to the cache drive in a Disk Share type of operation.  (You can also do it by writing to the cache Disk Share rather than writing to an Array Share!)  Mover, in this case, will follow LimeTech's advice and never do a perceived Disk Share to Array Share operation.
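
If you suspect this is what is blocking a particular share, one rough way to spot files that exist on both the pool and an array disk is to compare the two file lists. A sketch (MovieMedia and disk1 are only examples, substitute your own share and disks):

# files present in the share on both cache and disk1 (the duplicates mover will skip)
comm -12 <(cd /mnt/cache/MovieMedia && find . -type f | sort) \
         <(cd /mnt/disk1/MovieMedia && find . -type f | sort)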

Link to comment

Ideally, the appdata, domains, and system shares should all be on a fast pool (cache) and configured to stay there, so Docker/VM performance isn't impacted by slower parity and so array disks can spin down, since these files are always open.

 

Before getting those all moved to cache where they belong, it's best to get other things moved off cache to make room for them.

 

Currently, appdata is cache:only, and that will be fine for now, but mover ignores cache:only shares. Later we can set it to prefer so its files can all be moved to cache.

 

domains and system shares are both set cache:yes. Set these shares also to cache:only for now since you don't want these moved.

 

t------------s                    shareUseCache="prefer"  # Share exists on cache

What is the purpose of this cache:prefer share? Prefer means you prefer for these files to remain on cache and any that overflow to the array will be moved to cache. This share (along with your too large docker.img) is probably taking up most of your cache. Unless you can give a good reason why you want this share to stay on cache, you should set it to cache:yes instead so it will be moved to the array.

 

Why do you have 100G docker.img? Have you had problems filling it? 20G is often more than enough, and making it larger won't fix problems filling it. docker.img usage should not be growing. The usual cause of filling docker.img is an application writing to a path that isn't mapped to host storage.
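
If you want to see which container is responsible before recreating the image, Docker's own accounting shows the size of each container's writable layer (this is plain Docker, nothing Unraid-specific; run it from a console while Docker is still enabled):

# per-container disk usage; a container with a large SIZE here is writing
# inside docker.img instead of to a mapped host path
docker system df -v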

 

 

 

 

Link to comment
3 hours ago, trurl said:

What is the purpose of this cache:prefer share?

Why do you have 100G docker.img? Have you had problems filling it?

Thanks for your help. The share in question is temptranscodes for tdarr & tdarr nodes. Should I change it to not use cache? 

 

In regards to the docker image, I've tried many things to change it back to 20GB but I have not had any success. I agree that it is way too large. Do you have any suggestions on how to shrink it?

Link to comment
12 hours ago, Frank1940 said:

This means that if you changed the Use cache pool setting from "Yes" to "No" without running Mover first, the files will remain on the cache pool forever.  (You can fix this by changing the setting back to "Yes" and running mover.)

Thanks for the comprehensive explanation. I had MovieMedia and Shows set to use cache, then I changed those shares to not use cache and attempted to run the mover. It sounds like I have to change these shares back to use cache and then attempt mover again? 

Link to comment
4 minutes ago, UnBeastRaid said:

In regards to the docker image, I've tried many things to change it back to 20GB but I have not had any success.

You did not give any indication of how it failed.  The standard process for recreating the docker image file and reloading applications is covered here in the online documentation, accessible via the 'Manual' link at the bottom of the GUI or the DOCS link at the top of each forum page.

Link to comment
8 minutes ago, UnBeastRaid said:

It sounds like I have to change these shares back to use cache and then attempt mover? 

"Use cache" has 4 possible settings and you don't mention what you will set it to. Click on that setting in the webUI for an explanation of what these different settings do.

 

Some of those shares belong on cache as I mentioned. Also, nothing can move open files.

 

Did you understand what I told you above?

Link to comment
6 minutes ago, trurl said:

nothing can move open files

Disable Docker and VM Manager in Settings until you have everything where it belongs.

 

Set appdata, domains, system shares to cache:only for now. Mover ignores cache:only shares, and we don't want these moved from cache. Later we will move them to cache where they belong.

 

t------------s                    shareUseCache="prefer"  # Share exists on cache

This is the only other user share you have with any files on cache. Set it to cache:yes then run mover.
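
If the Move button in the webUI appears to do nothing again, mover can also be started from a terminal and watched in the syslog (the path below is where Unraid ships the script; enable mover logging in Settings if you want per-file detail):

/usr/local/sbin/mover start              # "start" on recent releases; plain "mover" on older ones
tail -f /var/log/syslog | grep -i move   # watch its progress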

 

After mover completes, post new diagnostics.

Link to comment
55 minutes ago, trurl said:

Disable Docker and VM Manager in Settings until you have everything where it belongs.

After mover completes, post new diagnostics.

Will report back

Link to comment
1 hour ago, itimpi said:

You did not give any indication of how it failed.  The standard process for recreating the docker image file and reloading applications is covered here in the online documentation, accessible via the 'Manual' link at the bottom of the GUI or the DOCS link at the top of each forum page.

Apologies for being vague but it has been over a year since I attempted to shrink the size of the docker container, and I don't remember what it is that I did exactly. I will follow the instructions you've provided in the link.

Link to comment
6 minutes ago, Frank1940 said:

Is this correct?

[screenshot showing the share sizes on the cache pool]

 

2.15TB on the cache drive for domains...

Embarrassingly so. I had not realized this. Stupid question...does this share store VM images? If not, what does it do? What is the recommended size for this share? Honestly, this share hasn't been touched since I built this server a few years ago.

Link to comment
2 hours ago, trurl said:

Cache isn't that large? 

 

I didn't check when I initially saw that size, but after your comment I went back to the diagnostics file.  There are four ~500GB drives assigned as cache drives.  I think that share (domains) contains VM information.  I also looked at the libvirt.txt file in the logs folder and found this:

2023-04-25 01:54:04.420+0000: 7909: info : libvirt version: 8.7.0
2023-04-25 01:54:04.420+0000: 7909: info : hostname: UnRaidBeast
2023-04-25 01:54:04.420+0000: 7909: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
2023-04-25 02:04:05.658+0000: 7909: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
2023-04-25 02:14:06.718+0000: 7909: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
2023-04-25 02:24:07.781+0000: 7909: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error
2023-04-25 02:34:08.845+0000: 7909: error : virNetSocketReadWire:1791 : End of file while reading data: Input/output error

 

That does look like some process or data is not working properly.   🙄   I don't have any VM experience so I can be of no real assistance about how to go about fixing it. 

 

@UnBeastRaid, get a screenshot of the  Pool Devices section of the MAIN tab...

Link to comment
3 hours ago, Frank1940 said:

four ~500GB drives assigned as cache drives

Only 2 cache drives, nvme0 and nvme1. For some reason the smart folder in diagnostics often shows duplicates for these devices.

 

These are btrfs raid1, so only 500G of capacity. Don't know why or how more than 2TB shows up in the domains share on that pool. Maybe something to do with sparse files?
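
Sparse files would do it: a sparse vdisk reports its full provisioned size even though far fewer blocks are actually allocated on the pool. Comparing apparent size against allocated size for the vdisks would show the difference (the path below is the usual default layout; adjust if the VMs live elsewhere):

du -h --apparent-size /mnt/cache/domains/*/*.img   # size the VM was provisioned with
du -h                 /mnt/cache/domains/*/*.img   # blocks actually allocated on the pool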

 

Link to comment
7 hours ago, trurl said:

Disable Docker and VM Manager in Settings until you have everything where it belongs.

After mover completes, post new diagnostics.

Here are the new diagnostics post mover completion. Cache usage is at 34GB. The docker image hasn't occupied the cache drive...or has it?

unraidbeast_diagnostics_post_mover.zip

Link to comment
3 hours ago, Frank1940 said:

 

There are four ~500GB drives assigned as cache drives.

@UnBeastRaid, get a screenshot of the Pool Devices section of the MAIN tab...

Only 500GB. 2 nvme in raid 1

[screenshot of the Pool Devices section of the Main tab]

Link to comment
19 minutes ago, UnBeastRaid said:

post mover completion

Looks like you skipped some of my instructions that you quoted. Specifically

8 hours ago, trurl said:

Disable Docker and VM Manager in Settings until you have everything where it belongs.

 

Set appdata, domains, system shares to cache:only for now. Mover ignores cache:only shares, and we don't want these moved from cache. Later we will move them to cache where they belong.

But, since you now have plenty of room on cache, let's proceed to get those shares that belong on cache back there.

 

Nothing can move open files. So, you must Disable Docker and VM Manager in Settings.

 

Set appdata, domains, and system shares to cache:prefer then Run mover

 

Then post new diagnostics.

Link to comment
16 minutes ago, trurl said:

Nothing can move open files. So, you must Disable Docker and VM Manager in Settings.

Set appdata, domains, and system shares to cache:prefer then Run mover

Then post new diagnostics.

Just to be clear, post diagnostics without turning VM Manager & Docker back on...correct?

Link to comment
