DizRD Posted June 17, 2022

I have a Docker container that runs rclone to pull files down from my gdrive and put them on my Unraid server. My array has 13.8 TB free, and the 'unprotectedcache' pool for the share has 1.79 TB free. I have the /mnt/user/video share mounted in the container as /cdata. Inside the container, df -h returns:

Filesystem         Size    Used    Available  Use%  Mounted on
/dev/mapper/sdt1   1.6T    230.6G  1.4T       14%   /
tmpfs              64.0M   0       64.0M      0%    /dev
tmpfs              125.9G  0       125.9G     0%    /sys/fs/cgroup
shm                64.0M   0       64.0M      0%    /dev/shm
shfs               86.4T   73.8T   12.6T      85%   /cdata
shfs               1.6T    230.6G  1.4T       14%   /config
shfs               1.1P    60.7T   1.0P       6%    /data
/dev/mapper/sdt1   1.6T    230.6G  1.4T       14%   /etc/resolv.conf
/dev/mapper/sdt1   1.6T    230.6G  1.4T       14%   /etc/hostname
/dev/mapper/sdt1   1.6T    230.6G  1.4T       14%   /etc/hosts
tmpfs              125.9G  0       125.9G     0%    /proc/acpi
tmpfs              64.0M   0       64.0M      0%    /proc/kcore
tmpfs              64.0M   0       64.0M      0%    /proc/keys
tmpfs              64.0M   0       64.0M      0%    /proc/timer_list
tmpfs              125.9G  0       125.9G     0%    /sys/firmware
pdrive2:           1.1P    60.7T   1.0P       6%    /data

The rclone move command pulling down from my gdrive returns this error for every file it tries to transfer (filename changed manually by me in this log clip):

2022/06/17 14:54:13 ERROR : FILE.mkv: Not deleting source as copy failed: multipart copy: write failed: write /cdata/video/FILE.mkv: no space left on device
2022/06/17 14:54:13 DEBUG : FILE.mkv: Failed to pre-allocate: no space left on device

The share is configured with a minimum of 160 GB free:

Actual disk utilizations:

Any thoughts or suggestions?
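For anyone debugging a similar mismatch between apparent and actual free space, one quick sanity check is to compare what df reports for the destination inside the container against the share's Minimum Free Space setting before starting a large transfer. A minimal sketch (the /cdata path and 160 GB threshold come from this thread; the function name is mine):

```shell
#!/bin/sh
# check_free DIR MIN_KB -> succeeds if DIR has at least MIN_KB 1K-blocks free.
check_free() {
    # df -P gives one portable-format line per filesystem; field 4 is "Available".
    avail_kb=$(df -P "$1" | awk 'NR==2 {print $4}')
    [ "$avail_kb" -ge "$2" ]
}

# Example: refuse to start the move unless /cdata (the container-side mount
# of /mnt/user/video) has at least ~160 GB free, matching the share's
# Minimum Free Space setting:
# check_free /cdata $((160 * 1024 * 1024)) || { echo "not enough space"; exit 1; }
```

If this check passes inside the container while rclone still reports "no space left on device", the bottleneck is likely a specific disk or pool behind the shfs mount rather than the share as a whole.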
DizRD (Author) Posted June 28, 2022

Anyone have suggestions?
JorgeB Posted June 28, 2022

The share appears to be correctly configured. I assume a transfer to the share via SMB or locally works correctly? If so, it looks like a Docker issue; you might try the Docker support thread, if there is one.
DizRD (Author) Posted July 14, 2022

@JorgeB Thanks for the response! I bought another 8 TB drive to add to the array. It's all added and fine. I started the copy process again, and it put about 400 GB on the new drive before coming back with the "no space left on device" error. I decided to eliminate Docker from the equation to see if that made a difference and ran rclone natively, pointed at my /mnt/user/video share. The "no space left on device" error is still returned even though I have 17 TB available on my array. Any thoughts on what to check next?
itimpi Posted July 14, 2022

What have you got set for the Minimum Free Space for the pools/cache? I see you have it set for the share, but I could not check your pools since no diagnostics were provided. If it is not set correctly, Unraid may not cleanly start bypassing the cache/pool as it gets near full.
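As an illustration of why that setting matters (a toy model only, not Unraid's actual allocator): with a Minimum Free Space threshold, a target whose free space has dropped below the limit is skipped in favour of the next one; with no threshold, a nearly full target can still be chosen and a large file then fails mid-write.

```shell
#!/bin/sh
# Toy model of a Minimum Free Space check (illustrative, not Unraid code).
# pick_target MIN_KB name:free_kb [name:free_kb ...]
#   -> echoes the first target with at least MIN_KB free, else fails.
pick_target() {
    min=$1; shift
    for entry in "$@"; do
        name=${entry%%:*}   # part before the colon
        free=${entry##*:}   # part after the colon
        if [ "$free" -ge "$min" ]; then
            echo "$name"
            return 0
        fi
    done
    return 1
}

# pick_target 100 cache:50 disk1:500  -> skips the low cache, picks disk1
# pick_target 0   cache:50 disk1:500  -> no threshold, still picks cache
```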
DizRD (Author) Posted July 15, 2022

I have no minimum free space set for the cache drive, and it has 1.7 TB free when I try to perform the copy operation. I tried changing the allocation strategy on the share to "Most-free", but that didn't seem to make a difference either. Attaching diagnostics: deathstar-diagnostics-20220714-1833.zip
JorgeB Posted July 15, 2022

I don't see anything wrong with the configuration. If you try to copy something over SMB to the video share, does it work? You should also upgrade to v6.10.3; there were some changes regarding pool space management.
DizRD (Author) Posted July 15, 2022

So I think I figured out how to solve the problem, even if I don't know what caused it. It seems that somehow zero-byte files were created on one of the nearly full disks. It may have been from a power loss a couple of weeks ago, or maybe from when I first tried the copy without minimum free space limits set. It seems like the system was locked into putting the files on that disk because of those zero-byte files, but because of the minimum free space limit the writes would fail. That's my theory anyway. Removing the zero-byte files and starting the copy again seems to be working normally now.
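For reference, zero-byte leftovers like the ones described here can be located per disk with find. A sketch (the disk path is an example; review the list before deleting anything):

```shell
#!/bin/sh
# List zero-byte files under a directory tree (dry run before deleting).
list_empty_files() {
    find "$1" -type f -size 0
}

# On the array this would be run against the individual disk, e.g.:
# list_empty_files /mnt/disk9                  # review the list first
# list_empty_files /mnt/disk9 | xargs -r rm    # then remove them
```

Running it against /mnt/diskN rather than /mnt/user pinpoints which physical disk holds the stale entries.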
JorgeB Posted July 15, 2022 (Solution)

1 minute ago, DizRD said: "It seems like the system was locked into putting the files on that disk because of those zero-byte files, but because of the minimum free space limit the writes would fail. That's my theory anyway."

If files copied to a share already exist, Unraid will overwrite them in place on the same disk, so it was probably that.
trurl Posted July 15, 2022

3 minutes ago, DizRD said: "somehow size 0 files were created"

This is a common symptom of mixing disk paths and user share paths when moving or copying files. Did you do that? Linux doesn't know that disks and user shares are just different views of the same files, so it can end up overwriting the file it is trying to read if you mix the two when moving or copying.
DizRD (Author) Posted July 21, 2022

@trurl Thanks for the extra insight; I appreciate the context in case that's a problem in the future. In my scenario, it seems the zero-byte files were created when the disk filled up before I had limits set on the shares. Removing the zero-byte files from the disk and restarting the copy to the share is working as expected. Thanks!