[Plugin] rclone



I'm having no problem whatsoever unmounting using the script provided in the plugin and running it through User Scripts.
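
For anyone doing it by hand: the unmount presumably boils down to a fusermount call, since rclone mount is FUSE-based. A minimal sketch, assuming the /mnt/disks/acd mount point mentioned later in this thread:

# Hypothetical manual unmount of a FUSE-based rclone mount
fusermount -u /mnt/disks/acd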

 

Regarding the GUI jumping to the top when you click Apply, I'll see if I can change that :)

 

So, without stopping the mount script, do I just run the unmount script? Or stop the first, then run unmount? I've gotten errors terminating one and then running the other, so I try not to do it. It's a new feature in rclone with cautions around it, so I'm not too worried about it. I've mounted the ACD from another machine to monitor things rather than risking the NAS :) Have hit about a TB, another 30+ to go lol.

Link to comment

Seeing some errors from rclone; from the looks of it, files with long paths trip up ACD? Going by the error below, the limit appears to be 280 characters per name? I can't tell whether rclone makes paths longer when it encrypts, but I think that might be the case. I'm going to wait until my entire job is done before investigating, but so far I appear to have run into this about 75+ times. Anyone else seeing similar?

 

Failed to copy: HTTP code 400: "400 Bad Request": response body: "{\"logref\":\"8051344f-b3fa-11e6-9046-fd50d35e6b20\",\"message\":\"1 validation error detected: Value \\u0027te0v5i0g8ro7ft8uqhor5072ft4ei4bi7543loomvfpc5v76s0uaeoo563ev87chlhua9f0q5fo5d53vp6p5l5u91athf57motjfm01upet75li1vhc67v93ahli3p95jfrgvevon6lhabn4jcuukb7ti78hd7qrvj5cjb00bjd6h88pad79rdmmjtltnj2tv236sttpdjkfntmdk1piubcud9qd290dn9bvmuf4mf2gkv0dmh249qon08nm2v8t9gq5ojjls5r0urmau7j5os9p2s1r0ffklonjerih6hjve1cr5ir0\\u0027 at \\u0027name\\u0027 failed to satisfy constraint: Member must have length less than or equal to 280\",\"code\":\"\"}"

Link to comment

I understand the character limit is an Amazon limit; however, I *think* my file names aren't that long to begin with. Is it possible that rclone changes the filename length? I'll try to track down the files when the run is done, but I'm up to over 95 files now, which seems a bit extreme. What's the filename limit in unRAID? ext4 tops out at 255 and Amazon is claiming 280, hence my question...
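
For reference, here's a back-of-the-envelope sketch of how much an encrypted name could grow, assuming (my assumption, unverified against the plugin) that crypt pads names to 16-byte blocks and base32-encodes them:

#!/bin/bash
# Rough sketch: rclone's standard crypt name encryption pads each name
# to a 16-byte block (PKCS#7) and base32-encodes it, i.e. 8 output
# characters per 5 input bytes.
enc_len() {
    local padded=$(( ($1 / 16 + 1) * 16 ))  # PKCS#7 always adds padding
    echo $(( (padded * 8 + 4) / 5 ))        # base32 length, rounded up
}
# Longest plaintext name whose encrypted form fits Amazon's 280-char limit:
for n in $(seq 255 -1 1); do
    [ "$(enc_len "$n")" -le 280 ] && { echo "$n"; break; }
done    # prints 159

If that model is right, any plaintext name much past ~155-160 characters blows through Amazon's 280-character limit even though it fits comfortably inside ext4's 255.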

Link to comment

When I create a copy script and then abort it, the script still seems to run. Is there a way to stop all running rclone scripts, or some way to stop copying if the script isn't running in the background?

 

I'm assuming you mean you're using the User Scripts plugin and aborting that. If that doesn't work, just kill the process.
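
From a terminal, something like this; the stop scripts later in this thread do the same thing (the plugin wraps the real binary, hence the extra rcloneorig process):

# List any running rclone processes, then terminate them
pgrep -l rclone
killall -v rclone rcloneorig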

Link to comment

Bobbitb, for your request, couldn't you pull the amount transferred out of the log file? Maybe not ideal, but something to work with in the meantime?

 

I guess the downside is that you would have to know which log file it was being written to, and only have 1 instance running at a time?

 

grep 'Transferred:' rclone_output.log | tail -2 | head -1
Transferred:   329.954 GBytes (2.197 MBytes/s)

Link to comment

Bobbitb, for your request, couldn't you pull the amount transferred out of the log file? Maybe not ideal, but something to work with in the meantime?

 

I guess the downside is that you would have to know which log file it was being written to, and only have 1 instance running at a time?

 

grep 'Transferred:' rclone_output.log | tail -2 | head -1
Transferred:   329.954 GBytes (2.197 MBytes/s)

 

Yes and no. It would kind of be a pain for a number of reasons, like the ones you mentioned: there is no native, intuitive way to get at the information that is available. It's a "we have to work with what we've got" sort of situation, and what we've got is not sufficient.

Rclone does not calculate overall progress. It provides the progress of the current working file, but not of the entire job. One could probably estimate it by first capturing the dry-run list of files, comparing it with what has been completed, and adding in the progress of the current file, but that is not accurate: the ratio of files transferred to files remaining is not truly representative of percentage complete. Say I am transferring 100 files, 99 of which are 1 megabyte and 1 of which is 20 gigabytes. If it transfers the small files first, it has transferred 99 out of 100 files, so it would show 99% complete, which would not be even close to accurate. We would need to know the sizes of all the files to be transferred to calculate that. I'm not able to verify at the moment, but if I recall, a dry run does not supply that information, and I don't think rclone supplies it for the user to calculate themselves.

I was just kind of hoping he would add a few more outputs to make the progress a little more transparent to the user so someone else could take on the rest, but it seems he's willing to be a little more comprehensive than that. The big thing I want at this point is just a progress bar of some sort on the plugin page of unRAID.

 

edit: come to think of it, though, if the dry run lists the local files to be transferred but doesn't give size information, there's no reason that can't be gleaned from the OS itself. Duh. But I don't know if that applies to remotes as well. If the job were syncing from a remote to local instead, that might pose a problem if rclone doesn't supply the file size information, as there wouldn't really be any other way to obtain it.
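
A rough workaround along those lines, assuming the log path and format used elsewhere in this thread, and that rclone reports the running total in GBytes as in the sample above:

# Hypothetical percent-complete estimate: bytes transferred (scraped from
# the log) versus the total size of the local source directory.
SRC='/mnt/user/Movies/Processed Movies'
LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
total_bytes=$(du -sb "$SRC" | awk '{print $1}')
done_gb=$(grep 'Transferred:' "$LOGFILE" | tail -1 | awk '{print $2}')
echo "scale=1; $done_gb * 1024^3 * 100 / $total_bytes" | bc

For the remote-to-local case, rclone's size subcommand (rclone size remote:path) totals up a remote path, so the same comparison should be possible in either direction.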

Link to comment

I track progress by knowing roughly how much I need to back up and visiting the ACD web page to see how much has been pushed up (6 TB of 32). To monitor what it's up to, I output to a log and then SSH in to tail it. Doing this I've noticed errors with long file names, and issues when the case of a file name changes: it's pointed out but not uploaded. I've occasionally had issues killing the process, but pulling up htop I can kill it once I identify the main process.

 

Oh, and if you keep the log file inside the path it's backing up, it'll error :P

Link to comment

There's a new version out but I've not been able to upgrade yet; anything significant? I've got a couple of 50 GB files that get as far as 70% complete, and then something like my router falls over and breaks the connection, or I need to stop things to put in a new drive. Picked up a ton of drives on BF and am slowly putting them in; at this rate these files are going to take forever lol

Link to comment

An observation: Amazon supposedly has a 50 GB per-file limit. Some of my backups were far larger, so I set them to split at 49 GB and rclone to stop at 50 GB. I watched 3 of these files proceed to 100%, hang, and then start over again at 0%! I've now reduced the rclone max to 40 GB and will try lowering the split size further, to say 10 GB, as the higher sizes take ages and failures kill throughput :(
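
(Presumably that cap is rclone's --max-size filter; something like the following, with the paths made up for illustration:)

# Skip any file at or above the size where uploads kept failing
rclone sync --max-size 40G '/mnt/user/Backups' secret:'Data/Backups'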

Link to comment

Thought this might be helpful in here.

I have my crontab entries set up to run the script at night and stop around 8am. However, I didn't like killing the script when it was almost done with an upload, i.e. not efficient. So I modified the kill script to check whether there is under an hour left before the "last" file is estimated to complete. If so, it waits and checks every minute to see if it's done. If it's done, or 70 minutes have passed, it goes ahead and kills the rclone processes.

 

I have also modified my rclone invocation to send files up one at a time, i.e. 1 transfer, and enabled bandwidth restrictions.

 

Please review and let me know your thoughts, or if you find any errors.

 

Crontab Entries

#Rclone Start, Nightly at midnight + 45 minutes
45 0 * * * /boot/rclone/scripts/rc_start.sh > /dev/null 2>&1

#Rclone Enhanced Stop 8am
0 8 * * * /boot/rclone/scripts/rc_enhanced_stop.sh > /dev/null 2>&1

#Rclone Stop, 9:15am
15 9 * * * /boot/rclone/scripts/rc_stop.sh > /dev/null 2>&1

 

Start scripts taken from the other rclone thread. (stignz https://lime-technology.com/forum/index.php?topic=46663.msg445897#msg445897)

Start

#!/bin/bash
#description=RClone start.  Usually start this at night when no one is using bandwidth.

LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1
echo rclone log $(date) $'\r'$'\r' >> $LOGFILE 2>&1
echo "Starting rclone copy" $'\r'>> $LOGFILE 2>&1
rclone sync --transfers=1 --bwlimit 1.6M '/mnt/user/Movies/Processed Movies' secret:'Data/Movies/Processed Movies/' >> $LOGFILE 2>&1
echo "Completed rclone copy" $'\r'>> $LOGFILE 2>&1
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1

 

Enhanced Stop

#!/bin/bash
#description=Stop any active RClone process to save bandwidth when needed.  Usually during the day.
#This enhanced version will wait about 70 minutes for the last file to change.  Once it has changed, it will kill rclone.

function killrclonepids {
        ###echo "Killing pids here"
        PROCESSID=$(pgrep -d "," rclone)
        ###echo "ProcessID: $PROCESSID"
        if [ -n "$PROCESSID" ]; then    # quotes matter: [ -n ] with no argument is always true
                echo "Attempting to terminate rclone" $'\r'>> $LOGFILE 2>&1
                echo "processid of rclone: $PROCESSID" $'\r'  >> $LOGFILE 2>&1
                killall -v rclone rcloneorig
                echo "rclone job termination requested by cron at "$(date "+%T") $'\r'>> $LOGFILE 2>&1
                echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1
                exit
        else
                echo "No PIDS found"
                exit
        fi

}



LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
###echo "LOGFILE: $LOGFILE"


#Get last filename
lastFile="$(grep 'done. avg:' "$LOGFILE" | tail -n1 | awk -F '[:]' '{print $1}')"

#Get time left
timeLeft="$(grep 'done. avg:' "$LOGFILE" | tail -n1 | awk -F '[/:]' '{print $7}' | sed -e 's/^[ \t]*//' | tr '[hm]' ':' | tr --delete s)"

###echo "LastFile: $lastFile"
###echo "timeLeft: $timeLeft"

# Determine if there is more than 1 hour to go
numberOfOccur=$(grep -o ":" <<<"$timeLeft" | wc -l)
###echo "numberOfOccur: $numberOfOccur"

if [ "$numberOfOccur" -gt 1 ]; then
        echo "Kill all the pids, more than an hour to wait: $timeLeft" >> $LOGFILE 2>&1
        killrclonepids
else
        echo "Check every minute to see if the file is done yet!" >> $LOGFILE 2>&1
        COUNTER=1
        until [ $COUNTER -gt 70 ]; do
                sleep 1m

                (( COUNTER++ ))

                currentFile="$(grep 'done. avg:' "$LOGFILE" | tail -n1 | awk -F '[:]' '{print $1}')"
                ###echo "currentFile: $currentFile"

                if [ "$lastFile" != "$currentFile" ]; then
                        echo "The file has changed so kill rclone" >> $LOGFILE 2>&1
                        #sleep for 1 minute to make sure the file is saved
                        sleep 1m
                        killrclonepids
                fi


        done

        #we waited 70 minutes for it to finish.  Kill it anyway.
        echo "Waited 70 minutes kill all pids" >> $LOGFILE 2>&1
        killrclonepids


fi

#clean up any remaining rclone processes we might have missed
killrclonepids >> $LOGFILE 2>&1

 

Original Stop script

#!/bin/bash
#description=Stop any active RClone process to save bandwidth when needed.  Usually during the day.
LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
PROCESSID=$(pgrep -d "," rclone)
if [ -n "$PROCESSID" ]; then    # quotes matter: [ -n ] with no argument is always true
        echo "Attempting to terminate rclone" $'\r'>> $LOGFILE 2>&1
        echo "processid of rclone: $PROCESSID" $'\r'  >> $LOGFILE 2>&1
        killall -v rclone rcloneorig
        echo "rclone job termination requested by cron at "$(date "+%T") $'\r'>> $LOGFILE 2>&1
        echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1
fi

Link to comment

Your stop script looks better than what I'm doing. User Scripts terminates the top-level script but not the rclone processes underneath it, which I have to kill by hand. I expect I'll be using yours soon lol. I'm still doing my initial upload and am at 10 TB, but when that's complete I'd like to schedule it too. I like what you've done!

Link to comment

Your stop script looks better than what I'm doing. User Scripts terminates the top-level script but not the rclone processes underneath it, which I have to kill by hand. I expect I'll be using yours soon lol. I'm still doing my initial upload and am at 10 TB, but when that's complete I'd like to schedule it too. I like what you've done!

 

Just a word of warning: I have only tested with 1 file in transfer. 1 file at a time seems to work better for me; that way, when rclone is killed, I only stop the transfer of 1 file instead of the default 4. This may not work as well for those who have very fast connections or who transfer a lot of small files.

Link to comment

Yeah, I can see where that could be an issue I hadn't thought of. I currently transfer 9 files concurrently and have 15 threads checking files, as I understand it. I do have lots of files now, but once the initial upload is done I think it'll be mostly large files from media and backups. It'll be 20 or 30 days before I get there, but I'll try to report any bizarre behavior when I do! :o
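
(That concurrency presumably maps to rclone's --transfers and --checkers flags; illustrative invocation only:)

rclone sync --transfers=9 --checkers=15 '/mnt/user/Media' secret:'Data/Media'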

Link to comment
  • 3 weeks later...

sorry, noob here. i have the plugin installed and i added the below in the settings for the rclone custom script, but how do i actually make it start?

 

LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1
echo rclone log $(date) $'\r'$'\r' >> $LOGFILE 2>&1
echo "Starting rclone copy" $'\r'>> $LOGFILE 2>&1
rclone sync '/mnt/user/DIR' secret:'DIR' >> $LOGFILE 2>&1
echo "Completed rclone copy" $'\r'>> $LOGFILE 2>&1
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1

 

i can start it manually from the cmd line via ssh, but i'm trying to figure out how to do it from the unraid os gui

Link to comment

Hi, I have rclone set up and working with the Plex docker. Everything works fine on the server itself, but there is one problem when trying to access the files over SMB in Windows 10.

 

I have the following in my Samba extra configuration:

 

   [ACD]
   path = /mnt/disks/acd
   read only = yes
   guest ok = yes 

 

I can see the files, but I'm getting very slow speeds when trying to watch or copy them, only about 2-5Mb. Looking at iftop, I see a bunch of Amazon IPs connecting and being dropped.

 

ec2-54-xxx-xxx-xx.compute-1.amazona

 

Anyone know why that might be?

Link to comment

sorry, noob here. i have the plugin installed and i added the below in the settings for the rclone custom script, but how do i actually make it start?

 

LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1
echo rclone log $(date) $'\r'$'\r' >> $LOGFILE 2>&1
echo "Starting rclone copy" $'\r'>> $LOGFILE 2>&1
rclone sync '/mnt/user/DIR' secret:'DIR' >> $LOGFILE 2>&1
echo "Completed rclone copy" $'\r'>> $LOGFILE 2>&1
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1

 

i can start it manually from the cmd line via ssh, but i'm trying to figure out how to do it from the unraid os gui

 

I don't think you can do it from the GUI.  I have mine in crontab.  If your box is up all the time, you can make a temporary entry by doing a crontab -e:

 

crontab -e

Add the following, assuming you have the same script name/location and time that I have.

#Rclone Start, Nightly at midnight + 45 minutes
45 0 * * * /boot/rclone/scripts/rc_start.sh > /dev/null 2>&1

save with "ESC+: wq"  (Or the same as "vi" editor)

 

 

Link to comment

sorry, noob here. i have the plugin installed and i added the below in the settings for the rclone custom script, but how do i actually make it start?

 

LOGFILE=/mnt/user/Public/rclone_logs/rclone-$(date "+%Y%m%d").log
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1
echo rclone log $(date) $'\r'$'\r' >> $LOGFILE 2>&1
echo "Starting rclone copy" $'\r'>> $LOGFILE 2>&1
rclone sync '/mnt/user/DIR' secret:'DIR' >> $LOGFILE 2>&1
echo "Completed rclone copy" $'\r'>> $LOGFILE 2>&1
echo "-------------------------------------------------------------------------------------------------" $'\r'>> $LOGFILE 2>&1

 

i can start it manually from the cmd line via ssh, but i'm trying to figure out how to do it from the unraid os gui

 

User scripts plugin.

Link to comment
