Unraid unable to run Mover after a while. Button doesn't do anything.



Attached are the diagnostics.

 

The Mover button does nothing. I have been using the Mover button daily, and it works for a few days until it doesn't.
Jan 30 17:49:12 NAS1 emhttpd: shcmd (1772): /usr/local/sbin/mover &> /dev/null &
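For reference, the same script the button invokes can be run by hand from a console to see whether it starts at all (a rough check only; the exact output varies between Unraid versions):

# Run mover in the foreground instead of letting the GUI background it and
# discard its output, so anything it prints is visible in the terminal and the syslog.
# (Path taken from the syslog line above.)
/usr/local/sbin/mover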

 

This keeps happening; this is the third time. Shutting down and rebooting the system fixes it temporarily, but it occurs again a day or two later. The log folder is not full; it is only 3% allocated.

 

I would like not to have to constantly monitor and reboot this server.

 

Just before this, the syslog shows a bunch of allocation errors:

Jan 30 09:13:33 NAS1 nginx: 2024/01/30 09:13:33 [crit] 10435#10435: ngx_slab_alloc() failed: no memory
Jan 30 09:13:33 NAS1 nginx: 2024/01/30 09:13:33 [error] 10435#10435: shpool alloc failed
Jan 30 09:13:33 NAS1 nginx: 2024/01/30 09:13:33 [error] 10435#10435: nchan: Out of shared memory while allocating message of size 23278. Increase nchan_max_reserved_memory.
Jan 30 09:13:33 NAS1 nginx: 2024/01/30 09:13:33 [error] 10435#10435: *1035124 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
Jan 30 09:13:33 NAS1 nginx: 2024/01/30 09:13:33 [error] 10435#10435: MEMSTORE:01: can't create shared message for channel /devices
Jan 30 17:49:12 NAS1 emhttpd: shcmd (1772): /usr/local/sbin/mover &> /dev/null &
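Those nchan "out of shared memory" errors come from the GUI's nginx. A commonly suggested stopgap, sketched below on the assumption that the usual Slackware-style init script is present on this release, is to restart just that nginx instance; it resets the exhausted shared-memory pool without a full reboot, but it does not fix whatever is filling it up:

# Restart only the Unraid web GUI's nginx to clear nchan's shared memory.
# (The log suggests raising nchan_max_reserved_memory, but Unraid's nginx
# config lives on the RAM-backed root filesystem, so edits would not persist.)
/etc/rc.d/rc.nginx restart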

 

 

nas1-diagnostics-20240130-1913.zip


Thanks, I turned on logging, and it shows the error "No space left on device" for all files that need to be moved.

That is despite one drive having more than 7 TB free and another more than 3.33 TB. All drives are included in the share, and files have been written to them successfully before (there is already data on them). Every file on the cache is less than 100 GB.

 

Example errors:

Jan 30 22:47:12 NAS1 move: create_parent: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill error: No space left on device
Jan 30 22:47:12 NAS1 move: create_parent: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill error: No space left on device
Jan 30 22:47:12 NAS1 move: file: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill/landscape.jpg
Jan 30 22:47:12 NAS1 move: create_parent: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill error: No space left on device
Jan 30 22:47:12 NAS1 move: file: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill/backdrop.jpg
Jan 30 22:47:12 NAS1 move: create_parent: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill error: No space left on device
Jan 30 22:47:12 NAS1 move: file: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill/logo.png
Jan 30 22:47:12 NAS1 move: create_parent: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill error: No space left on device
Jan 30 22:47:13 NAS1 move: file: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill/banner.jpg
Jan 30 22:47:13 NAS1 move: create_parent: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill error: No space left on device
Jan 30 22:47:13 NAS1 move: move_object: /mnt/cache/Data/media/Anime/Campfire Cooking in Another World with My Absurd Skill: No space left on device
Jan 30 22:47:13 NAS1 move: mover: finished

 

Here is my df output:

root@NAS1:~# df
Filesystem        1K-blocks       Used   Available Use% Mounted on
rootfs             65779588     301460    65478128   1% /
tmpfs                 32768        680       32088   3% /run
/dev/sda1          30703952     503104    30200848   2% /boot
overlay            65779588     301460    65478128   1% /lib
overlay            65779588     301460    65478128   1% /usr
devtmpfs               8192          0        8192   0% /dev
tmpfs              65793348          0    65793348   0% /dev/shm
tmpfs                131072       4764      126308   4% /var/log
tmpfs                  1024          0        1024   0% /mnt/disks
tmpfs                  1024          0        1024   0% /mnt/remotes
tmpfs                  1024          0        1024   0% /mnt/addons
tmpfs                  1024          0        1024   0% /mnt/rootshare
disk1             200735488        128   200735360   1% /mnt/disk1
disk2             208476928        128   208476800   1% /mnt/disk2
disk3             238575616        128   238575488   1% /mnt/disk3
disk4             126694656        128   126694528   1% /mnt/disk4
disk5             185811968        128   185811840   1% /mnt/disk5
disk6             142112384        128   142112256   1% /mnt/disk6
disk7             176027008        128   176026880   1% /mnt/disk7
disk8            3248902912        128  3248902784   1% /mnt/disk8
disk9            6835221632        128  6835221504   1% /mnt/disk9
disk10            162943360        128   162943232   1% /mnt/disk10
disk11            120648576        128   120648448   1% /mnt/disk11
cache             245097856        256   245097600   1% /mnt/cache
disk11/Data      9630113664 9509465216   120648448  99% /mnt/disk11/Data
disk8/Data       9630108672 6381205888  3248902784  67% /mnt/disk8/Data
disk10/Data      3779061376 3616118144   162943232  96% /mnt/disk10/Data
disk6/Data       9630113792 9488001536   142112256  99% /mnt/disk6/Data
disk1/isos        206711552    5976192   200735360   3% /mnt/disk1/isos
disk2/Data       7667180032 7458703232   208476800  98% /mnt/disk2/Data
disk7/Data       9630113152 9454086272   176026880  99% /mnt/disk7/Data
disk5/Data       7667180928 7481369088   185811840  98% /mnt/disk5/Data
disk3/Data       7667179648 7428604160   238575488  97% /mnt/disk3/Data
disk1/SSDs        200735488        128   200735360   1% /mnt/disk1/SSDs
disk4/Data       7665404416 7538709888   126694528  99% /mnt/disk4/Data
disk4/system      128459648    1765120   126694528   2% /mnt/disk4/system
disk1/Data       7661180160 7460444800   200735360  98% /mnt/disk1/Data
disk9/Data       9630107776 2794886272  6835221504  30% /mnt/disk9/Data
shfs            11646150528       1408 11646149120   1% /mnt/user0
shfs            11646150528       1408 11646149120   1% /mnt/user
/dev/loop2         20971520    2796756    17762940  14% /var/lib/docker
/dev/loop3          1048576       4568      925768   1% /etc/libvirt
ssdpool          1849142912        128  1849142784   1% /mnt/ssdpool
ssdpool/appdata  1849596416     453632  1849142784   1% /mnt/ssdpool/appdata
ssdpool/domains  1883264128   34121344  1849142784   2% /mnt/ssdpool/domains
/dev/sdf1        9766434812 6909136872  2857297940  71% /mnt/disks/WD10TB-A
cache/Data       1405619968 1160522368   245097600  83% /mnt/cache/Data

That df output shows something strange: there appear to be lots of mount points of the form /mnt/diskxx/Data. That is something I would only expect to see for an Exclusive share, and an Exclusive share can only exist on one drive or pool. Have you done anything to create these? The error message can be explained if Unraid is picking the first one in the list (/mnt/disk11/Data), as that has less free space than the Minimum Free Space setting for the Data share.
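To put a number on that (a rough conversion from the df output above; the actual Minimum Free Space setting is not shown in this thread):

# disk11 reports 120648448 1K-blocks available in the df output
echo $(( 120648448 / 1024 / 1024 ))   # prints 115, i.e. roughly 115 GiB free

So if the Data share's Minimum Free Space is set to anything above roughly 115 GB, a write directed at disk11 would be refused even though other drives have terabytes free, which would line up with the "No space left on device" errors above.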

 

The other anomaly is that none of the physical drives (/mnt/diskxx) appear to have any space used, so it is not clear where the data is actually being stored; it may be in RAM, in which case it will not survive a reboot.
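If the array disks really are ZFS-formatted (which the per-disk /mnt/diskxx/Data dataset mounts suggest), listing the datasets from a console would show where the data actually lives and how much each dataset holds; a quick sanity check rather than a definitive answer:

# List every ZFS dataset with its used/available space and mountpoint
zfs list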

5 minutes ago, JorgeB said:

Yes, that is normal, a zfs dataset is created for every share (when they are created using the GUI), and those appear as mountpoints.

Yes - but there are multiple mount points for the Data share - is that normal as well?

 

I do not use ZFS in the main array, so I have no experience of this, and I am not currently inclined to try it, bearing in mind that there is a known performance problem when you do.

29 minutes ago, JorgeB said:

Yes, there will be one for each disk where the share exists.

I wonder if there is a problem related to this, then? The OP's problem could be explained by mover only trying to use the first one listed. I do not currently have a suitable setup to test this.

  • 2 weeks later...

OK, trying to narrow the problem down, I upgraded to the latest version. I still get some errors when moving files. It does not seem to be related to ZFS or the file system; I think it has something to do with the out-of-memory error that occurs. Once that error occurs, Unraid starts behaving badly.

In this case, the memory error popped up when I tried to move a few thousand files from one folder to another through the Unraid file manager.

Once this happens, the move job stops working (with no error shown in the GUI), and you can no longer do any file management, as it keeps saying "a job is running".

And rebooting the system seems to be the only way to fix it.
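One thing that might be worth checking from a console before rebooting, on the unverified assumption that the file manager hands the job off to a helper process such as rsync or mover, is whether anything is actually still running behind the stuck job:

# Look for a leftover mover/copy helper that could be holding the "a job is running" state
ps aux | grep -E 'mover|rsync' | grep -v grep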

 

There seems to be some sort of bug here, or some error handling is needed so the file manager does not lock up when this occurs.

 

[screenshot attached]

 

