mover is super slow


Solved by itimpi


So I updated Unraid from 6.11.5 to 6.12, and I'm trying to move everything from my pools to the array so I can format the pools to ZFS.

The problem is the mover is REALLY slow. I changed the write method to reconstruct write, but it didn't make any difference.

As you can see in the picture, the speed is 19.4 MB/s, but it only stays there for a second, drops to zero for a couple of seconds, comes back at somewhere between 2 MB/s and 22 MB/s for a second, and then goes back to zero again.

Is this normal?

I recall that moving 400 gigs from the download SSD didn't take too long, maybe an hour or two max. Now I've been running mover for almost 3 hours and it has only moved around 50-70 gigs.

I disabled Docker and the VMs so the mover doesn't run into problems when moving the appdata share.

 

(screenshot attached)

tower-diagnostics-20230620-0702.zip

Edited by sohailoo
  • Solution

If you are moving lots of small files (as is often the case for the appdata share) then mover will be slow, as there is a significant per-file overhead for the checks that mover does before it moves each file. It can be faster to use something like Dynamix File Manager to move the shares from the cache to a specific disk.

  • 3 months later...

Maybe it is time to improve the mover then, instead of staying with that legacy code for ages?

 

... and make it use something like rclone sync / copy, which is multi-threaded, unlike rsync, which is single-threaded ...

 

I have a lot of small files in the cache and the share ... this makes the mover do only ~700 KB/s - for a few hundred GiB this will take ages ...

 

Using rclone copy -> 130 MB/s over the whole set with hardly any drops in speed ... and that's with an 8TB SMR drive (Seagate Archive) as the target here!
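For reference, a multi-threaded rclone invocation along the lines of what's described above might look like this (the paths and transfer count are illustrative assumptions, not from the post):

```shell
# Sketch: rclone copies several files in parallel (--transfers, default 4),
# unlike rsync's single-file pipeline. Paths here are examples only -
# substitute your own cache share and target disk.
rclone copy /mnt/cache/media /mnt/disk2/media --transfers 8 --progress
```

Parallel transfers help most with many small files, where per-file latency rather than disk bandwidth is the bottleneck.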

  • 1 month later...

I'm having the same problem with the mover being too slow. This is my first time setting up Unraid as a server and I'm in the middle of copying files over. With the SSD as a buffer to the array, I'm getting maybe 1 to 5 MB/s in throughput. The system is currently just running mover. The SSD cache (`cache-storage`) only has files around 400 MB in size - I'm transferring a bunch of videos at the moment, so these aren't small by any means.

 

unraid-diagnostics-20231106-1330.zip


For the record: I'm storing my Frigate clips (around 4 MB each as I remember) on my cache and then moving them to the array. The mover was horribly slow (2 MB every 2-5 sec) and the cache disk filled up almost entirely. The solution for me was a complete reboot of the system; now it's moving at normal speeds (80-150 MB/s). I don't know what the problem was, but if anyone can help me figure it out I would really appreciate it!

  • 4 months later...
On 11/7/2023 at 12:31 PM, stewe93 said:

For the record: I'm storing my Frigate clips (around 4 MB each as I remember) on my cache and then moving them to the array. The mover was horribly slow (2 MB every 2-5 sec) and the cache disk filled up almost entirely. The solution for me was a complete reboot of the system; now it's moving at normal speeds (80-150 MB/s). I don't know what the problem was, but if anyone can help me figure it out I would really appreciate it!

Running into the same problem - my mover has now been running for over 18 hours, yet my drives are barely moving, all while my cache drive is filling up...

I also have Frigate, and the last 3 days' worth of footage hasn't been touched by mover...


As mentioned above, Most Free is terrible for cache-enabled shares once multiple drives are basically equivalent in free space.

 

This is mainly because of how Linux caches the writes. Since RAM is used as the buffer, and writes to drive 1 won't interfere with drive 2, you'll wind up with multiple drives being written to simultaneously, and with parity that takes a huge hit, whether or not reconstruct write is enabled in disk settings.

 

High water is the ideal setting to use for all cache-enabled shares.


Same issue here. I'm trying to move appdata, domains and system to the array in order to convert the SSD to ZFS. All shares are set to High water, but I experience the same slow throughput. Also, there is no constant transfer: most of the time READS and WRITES show 0, with only occasional low Mbit values.

A reboot did help a little, but not much in my case (it improved from 100 KB/s to low single-digit MB/s values).

