sohailoo Posted June 20, 2023 (edited)

I updated Unraid from 6.11.5 to 6.12, and I'm trying to move everything from my pools to the array so I can format the pools as ZFS. The problem is that the mover is REALLY slow. I changed the write method to reconstruct write, but it made no difference. As you can see in the picture, the speed is 19.4 MB/s, but it only stays there for a second, then drops to zero for a couple of seconds, hovers somewhere between 2 MB/s and 22 MB/s for a second, and goes back to zero again. Is this normal? I recall that moving 400 GB from the download SSD didn't take long, maybe an hour or two at most. Now the mover has been running for almost 3 hours and has only moved around 50-70 GB.

I disabled Docker and the VMs so the mover doesn't run into problems when moving appdata.

tower-diagnostics-20230620-0702.zip
itimpi Posted June 20, 2023 (Solution)

If you are moving lots of small files (as is often the case for the appdata share), the mover will be slow, because there is significant per-file overhead from the checks the mover performs before moving each file. It can be faster to use something like Dynamix File Manager to move the shares from the cache to a specific disk.
jit-010101 Posted September 29, 2023

Maybe it's time to improve the mover instead of sticking with that legacy code for ages, and make it use something like rclone sync/copy, which works multi-threaded, unlike the single-threaded rsync. I had a lot of small files in the cache and the share, and that left the mover transferring at only ~700 KB/s; for a few hundred GiB, that will take ages. Using rclone copy, I got 130 MB/s over the whole set with hardly any drops in speed, and that's with an 8 TB SMR drive (Seagate Archive) as the target!
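For reference, the rclone invocation would be along the lines of `rclone copy /mnt/cache/share /mnt/disk1/share --transfers 8 --progress` (paths illustrative; `--transfers` sets how many files are copied in parallel). The parallel-transfer idea itself can be sketched with nothing more than find, `xargs -P`, and cp. This is a toy illustration of multi-threaded copying, not a replacement for rclone, and the Unraid paths in the usage comment are assumptions:

```shell
# parallel_copy SRC DST [JOBS]: copy every regular file under SRC to the
# same relative path under DST, running up to JOBS cp processes at once.
# This mimics what rclone's --transfers flag does for local copies.
parallel_copy() {
  src=$1; dst=$2; jobs=${3:-8}
  ( cd "$src" && find . -type f -print0 ) |
    xargs -0 -P "$jobs" -I{} sh -c '
      mkdir -p "$1/$(dirname "$2")" && cp -p "$0/$2" "$1/$2"
    ' "$src" "$dst" {}
}

# Usage (illustrative paths):
#   parallel_copy /mnt/cache/media /mnt/disk1/media 8
```

On spinning disks, parallelism mostly helps by hiding per-file latency on many small files; a single large sequential copy is already disk-bound.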
lamchakchan Posted November 6, 2023

I'm having the same problem with the mover being too slow. This is my first time setting up Unraid as a server, and I'm in the middle of copying files over. With the SSD as a buffer to the array, I'm getting maybe 1 to 5 MB/s in throughput. The system is currently running nothing but the mover. The SSD cache (`cache-storage`) only holds files around 400 MB each; I'm transferring a bunch of videos at the moment, so these aren't small by any means.

unraid-diagnostics-20231106-1330.zip
JorgeB Posted November 7, 2023

13 hours ago, lamchakchan said: "So these aren't small by any means."

It looks like you are using the "most free" allocation method, which is much slower than the other ones, since parity writes will overlap.
stewe93 Posted November 7, 2023

For the record: I'm storing my Frigate clips (around 4 MB each, as I remember) on my cache and then moving them to the array. The mover was horribly slow (2 MB every 2-5 seconds) and the cache disk filled up almost entirely. The solution for me was a complete reboot of the system; now it's moving at normal speeds (80-150 MB/s). I don't know what the problem was, but if anyone can help me figure it out I would really appreciate it!
valiente Posted March 9

On 11/7/2023 at 12:31 PM, stewe93 said: "... The solution for me was a complete reboot of the system; now it's moving at normal speeds (80-150 MB/s). ..."

Running into the same problem. My mover has now been running for over 18 hours, yet my drives are barely moving, all while my cache drive is filling up. I also have Frigate, and the last 3 days' worth of footage hasn't been touched by the mover.
Squid Posted March 9

As mentioned above, "most free" is terrible for cache-enabled shares once multiple drives are roughly equivalent in free space. This is mainly because of how Linux caches writes: RAM is used as the buffer, and writes to drive 1 won't interfere with writes to drive 2, so you wind up with multiple drives being written to simultaneously. With parity, that takes a huge hit whether or not reconstruct write is enabled in disk settings. High water is the ideal setting for all cache-enabled shares.
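A toy model makes the ping-pong behavior of "most free" visible: with two near-equal disks, each file lands on the other disk than the previous one, so writes keep alternating between drives. This is only an illustration of the selection rule, with made-up numbers, not Unraid's actual allocator:

```shell
# most_free FREE1 FREE2: echo which of two disks the "most free" rule
# would pick (1 or 2), i.e. the one with more free space.
most_free() {
  if [ "$1" -ge "$2" ]; then echo 1; else echo 2; fi
}

# Simulate four 10 GB writes to two disks that start with equal free space:
d1=1000; d2=1000
for size in 10 10 10 10; do
  target=$(most_free "$d1" "$d2")
  echo "write -> disk$target"
  if [ "$target" = 1 ]; then d1=$((d1 - size)); else d2=$((d2 - size)); fi
done
# The writes alternate disk1, disk2, disk1, disk2 - exactly the drive
# switching that hurts parity performance.
```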
Joopy Posted March 12

Same issue here. I'm trying to move appdata, domains and system to the array in order to convert the SSD to ZFS. All shares are set to high water, but I experience the same slow throughput. There is also no constant transfer: most of the time READS and WRITES show 0, with only occasional low MB/s values. A reboot did help a little in my case, but not much (it improved from 100 KB/s to low single-digit MB/s).
JorgeB Posted March 12

The mover will be extra slow when moving many small files, like appdata, since for every file it checks whether the file is in use before moving it.
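Conceptually, that per-file check looks something like the sketch below. This is an illustration of the idea, not the actual mover script; `fuser -s` is used here as one common way to test whether any process has a file open, and `move_if_unused` is a hypothetical helper name.

```shell
# move_if_unused FILE DEST: skip the file if some process has it open,
# otherwise move it. Running a probe like this once per file is the
# overhead that makes the mover crawl through many small files.
move_if_unused() {
  if fuser -s "$1" 2>/dev/null; then
    echo "skip (in use): $1"
  else
    mv "$1" "$2"
  fi
}
```

With hundreds of thousands of small appdata files, the per-file probe dominates the total time even though each individual check is cheap, which is why stopping Docker and copying in bulk (rsync, rclone, Dynamix File Manager) is so much faster.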