Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

5 hours ago, sauso said:

Whilst Space Invader's video and @DZMM's guide are great, there is an element of know-how required. I would not consider this a beginner-level thing. There is a very real possibility of deleting everything on your array with a rogue command. Even I managed to delete 4TB of local data the other day with a missing /.

 

 

I don't think anything about unRAID is for beginners, so I disagree and say any unRAID user should be able to complete these guides and follow Space Invader's videos (particularly as they can see and copy what he does).

 

@kagzz is right and maybe a video guide would be helpful just to see the steps in action - I'm not in the mood to do a YouTube video, but I might record my screen doing a setup from scratch one day over Xmas if I get bored.

17 minutes ago, DZMM said:

I don't think anything about unRAID is for beginners, so I disagree and say any unRAID user should be able to complete these guides and follow Space Invader's videos (particularly as they can see and copy what he does).

 


Bit hard to say that when you can write shell scripts. The vast majority of unRAID users wouldn't know the first thing to do in a terminal. On the surface, absolutely, a beginner shouldn't have an issue. But if you peek under the covers, you at least need to know a bit about Linux to ensure you don't stuff anything up.

53 minutes ago, sauso said:

Bit hard to say that when you can write shell scripts. The vast majority of unRAID users wouldn't know the first thing to do in a terminal. On the surface, absolutely, a beginner shouldn't have an issue. But if you peek under the covers, you at least need to know a bit about Linux to ensure you don't stuff anything up.

That's why I did the hard work with the scripts (and I can't write scripts - all my scripts are cobbled together from others, Google, and trial and error!) and annotated them and wrote a guide, so that anyone who can install unRAID can copy them and edit them quickly, or at least do so safely until they've got it working correctly.

 

Plus this thread is added help for those who can't manage that!

 

Nobody has to write a shell script to get this set up - just install a few plugins, copy and paste, and have a text editor handy if they want to go off-piste. If you don't know how to start up a shell session as a first-time unRAID user, it's something you'll have to learn within a few weeks anyway, as I can't see how you could avoid it for long.

3 hours ago, sauso said:

Maybe post the step you are having issues with. Include screenshots of what you are doing.

I commend your effort. The total lack of research or effort put in by a lot of newcomers in this topic is totally killing my interest in helping them.

 

I've helped a few in here to totally set it up through a remote session, but all of them put in a lot of work themselves to try to get it to work.

 

What I mostly see happening now is people coming in saying it doesn't work and expecting us to magically understand where the fix is. This whole solution is not mainstream and it never will be. Even when DZMM and others here like me have it running without huge issues, there are still some problems we might forget because solving them has become second nature to us. Think about API bans, using separate mounts for different dockers, RAM filling up, etc. Something like a video guide will not give the whole picture of how and why it functions the way it does, and thus it will just move the stress to troubleshooting once it's running.
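
For example, the rclone flags that map onto those pitfalls (a rough sketch only - the remote name and mount point are borrowed from the scripts later in this thread, and the values are illustrative): --tpslimit caps API transactions per second to help stay clear of API bans, and --buffer-size sets the per-open-file RAM buffer, the first thing to turn down if RAM keeps filling up.

rclone mount --allow-other --dir-cache-time 72h --buffer-size 64M --tpslimit 3 --vfs-read-chunk-size 128M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

A separate mount for a different docker would simply repeat this with its own remote and its own mount point.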

 

So my advice: don't waste more time on the people who want it on a silver platter. Maybe I come across as harsh, but I'm used to trying to find solutions myself and then presenting what I have tried and where I ran into issues.

 

2 hours ago, Kaizac said:

I commend your effort. The total lack of research or effort put in by a lot of newcomers in this topic is totally killing my interest in helping them.

 


 

Couldn't agree more. I am genuinely interested in helping people out because I think this is a fantastic solution. But people need to help themselves as well.


Thank you for making this guide and supplying the user scripts. 

 

Setup was straightforward and I had it working for 2 hours while I tested individual files; everything was encrypted and uploading to my drive.

I must have done something wrong though.

I decided that I should start bulk uploading the content that I wanted stored on my GD account. First I started by testing and creating the directory folders within mount_unionfs/google_vfs. The upload script ran and deleted all the directories that I had just created in that folder, and nothing showed up on my drive either (a little confusing, as this is the folder Sonarr is meant to create/move and rename files in).

I did the same but this time in rclone_upload/google_vfs. They uploaded to my drive encrypted.

Now any time I try to run the mount script I get the below error in the log, with nothing else to go off. I have searched, and someone posted the same issue a couple of pages ago but with no reply. I also tried deleting the remaining files, as I read this can stop it mounting?

30.12.2019 19:38:50 INFO: mounting rclone vfs.
30.12.2019 19:38:55 CRITICAL: rclone gdrive vfs mount failed - please check for problems.
Script Finished Mon, 30 Dec 2019 19:38:55 +1300

1 hour ago, KeyBoardDabbler said:

Thank you for making this guide and supplying the user scripts. 

 


Post your rclone config and your mount script, please. Also, is your rclone mount folder empty before mounting?
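
A quick way to check (a sketch, not part of the guide's scripts - the path is the one from the mount script, and fusermount is the standard FUSE unmount tool):

ls -la /mnt/user/mount_rclone/google_vfs

If a previous mount died and left the folder in a broken state, lazily unmount it before running the mount script again:

fusermount -uz /mnt/user/mount_rclone/google_vfs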

32 minutes ago, DZMM said:

Post your rclone config and your mount script, please. Also, is your rclone mount folder empty before mounting?

My mount script is taken from the GitHub. Also, yes, it is completely empty; I deleted it to make sure and had the script re-create the dir.

 

[gdrive]
type = drive
client_id = ***
client_secret = ***
scope = drive
token = {"access_token":""}
root_folder_id = ***

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = ***
password2 = ***
#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End Check if script already running  ##########

#######  Start rclone gdrive mount  ##########

# check if gdrive mount already created

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

    # create directories for rclone mount and unionfs mount
    mkdir -p /mnt/user/appdata/other/rclone
    mkdir -p /mnt/user/mount_rclone/google_vfs
    mkdir -p /mnt/user/mount_unionfs/google_vfs
    mkdir -p /mnt/user/rclone_upload/google_vfs

    rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

    # check if mount successful - slight pause to give mount time to finalise
    sleep 5

    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End rclone gdrive mount  ##########

#######  Start unionfs mount   ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
else
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End Mount unionfs   ##########

############### starting dockers that need unionfs mount ######################

# only start dockers once

if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
else
    touch /mnt/user/appdata/other/rclone/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."

    docker start plex
    docker start ombi
    docker start tautulli
    docker start radarr
    docker start sonarr
fi

############### end dockers that need unionfs mount ######################

exit

 

25 minutes ago, KeyBoardDabbler said:

My mount script is taken from the GitHub. Also, yes, it is completely empty; I deleted it to make sure and had the script re-create the dir.

Did you follow step 2 in the first post when you did the initial setup and created the mountcheck file?

4 minutes ago, DZMM said:

Did you follow step 2 in the first post when you did the initial setup and created the mountcheck file?

I did do that step the first time around, until it stopped working. I have just repeated step 2 and now it has mounted and is working again - maybe I deleted it?

Just to confirm, which folder should I create sub-folders in and manually drop media files into to start uploading to GD? Is it mount_rclone? I did it in mount_unionfs the first time around and the upload script deleted them all.

Thank you for your help, and all the best going into the New Year.

8 minutes ago, KeyBoardDabbler said:

I did do that step the first time around, until it stopped working. I have just repeated step 2 and now it has mounted and is working again - maybe I deleted it?

Just to confirm, which folder should I create sub-folders in and manually drop media files into to start uploading to GD? Is it mount_rclone? I did it in mount_unionfs the first time around and the upload script deleted them all.

Thank you for your help, and all the best going into the New Year.

You must have deleted the mountcheck file.
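
The mount script only tests for a file called mountcheck inside the mount, so recreating it just means putting a dummy file with that name back on the root of the remote - something along these lines, using the gdrive_media_vfs: remote from your config (the /tmp path is only an example):

touch /tmp/mountcheck
rclone copy /tmp/mountcheck gdrive_media_vfs: -vv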

 

To upload content, add media to the mount_unionfs folders (this is where your dockers should be mapped to) or directly to rclone_upload, but NEVER directly to mount_rclone unless you know what you are doing.
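
A rough illustration of why, using the paths from the mount script - mount_unionfs is just a merged view whose read-write branch is rclone_upload, so anything written through it actually lands in the local upload folder and gets picked up by the upload script later:

mkdir -p /mnt/user/mount_unionfs/google_vfs/movies
touch /mnt/user/mount_unionfs/google_vfs/movies/test.file
ls /mnt/user/rclone_upload/google_vfs/movies
# shows test.file - the write went to the local RW branch, not straight to Google

mount_rclone, by contrast, is the live view of the remote, so anything you do there happens directly on Google Drive.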

15 minutes ago, DZMM said:

To upload content, add media to the mount_unionfs folders (this is where your dockers should be mapped to) or directly to rclone_upload, but NEVER directly to mount_rclone unless you know what you are doing.

That is what I thought. This is the second time I have tried, and both times it clears the main dir.

In mount_unionfs/google_vfs/ I created some subfolders, waited 5 min and ran the upload script. It seems to just delete them? Should I not see the folders in my GD, ready to move files into?

 

Script Starting Mon, 30 Dec 2019 23:05:22 +1300

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rClone_Upload/log.txt

30.12.2019 23:05:22 INFO: rclone installed successfully - proceeding with upload.
2019/12/30 23:05:22 DEBUG : --min-age 5m0s to 2019-12-30 23:00:22.231062207 +1300 NZDT m=-299.989311656
2019/12/30 23:05:22 DEBUG : rclone: Version "v1.50.2-094-g207474ab-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "move" "/mnt/user/rclone_upload/google_vfs/" "gdrive_media_vfs:" "-vv" "--drive-chunk-size" "512M" "--checkers" "3" "--fast-list" "--transfers" "2" "--exclude" ".unionfs/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--fast-list" "--bwlimit" "9500k" "--tpslimit" "3" "--min-age" "5m"]
2019/12/30 23:05:22 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
2019/12/30 23:05:22 INFO : Starting bandwidth limiter at 9.277MBytes/s
2019/12/30 23:05:22 INFO : Starting HTTP transaction limiter: max 3 transactions/s with burst 1
2019/12/30 23:05:24 INFO : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2019/12/30 23:05:24 INFO : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2019/12/30 23:05:24 DEBUG : GD - TV/TV - Comic Book: Removing directory
2019/12/30 23:05:24 DEBUG : GD - TV: Removing directory
2019/12/30 23:05:24 DEBUG : GD - Music: Removing directory
2019/12/30 23:05:24 DEBUG : GD - Movies: Removing directory
2019/12/30 23:05:24 DEBUG : Local file system at /mnt/user/rclone_upload/google_vfs/: deleted 4 directories
2019/12/30 23:05:24 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0s

2019/12/30 23:05:24 DEBUG : 8 go routines active
2019/12/30 23:05:24 DEBUG : rclone: Version "v1.50.2-094-g207474ab-beta" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "move" "/mnt/user/rclone_upload/google_vfs/" "gdrive_media_vfs:" "-vv" "--drive-chunk-size" "512M" "--checkers" "3" "--fast-list" "--transfers" "2" "--exclude" ".unionfs/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--fast-list" "--bwlimit" "9500k" "--tpslimit" "3" "--min-age" "5m"]
Script Finished Mon, 30 Dec 2019 23:05:24 +1300

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rClone_Upload/log.txt

 

6 minutes ago, KeyBoardDabbler said:

That is what I thought. This is the second time I have tried, and both times it clears the main dir.

In mount_unionfs/google_vfs/ I created some subfolders, waited 5 min and ran the upload script. It seems to just delete them? Should I not see the folders in my GD, ready to move files into?

 



Post your upload script. Remember there's sometimes a few minutes' lag between files being uploaded (i.e. deleted from the local drive) and showing up in the mount via the unionfs folder.
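
If you want to confirm an upload actually reached Google without waiting those few minutes for the mount to catch up, listing the remote directly is the quickest check - standard rclone commands, nothing specific to these scripts:

rclone lsd gdrive_media_vfs:
rclone ls gdrive_media_vfs: --max-depth 2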

25 minutes ago, DZMM said:

Post your upload script. Remember there's sometimes a few minutes' lag between files being uploaded (i.e. deleted from the local drive) and showing up in the mount via the unionfs folder.

#!/bin/bash

#######  Check if script already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_upload" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_upload
fi

#######  End Check if script already running  ##########

#######  check if rclone installed  ##########

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
fi

#######  end check if rclone installed  ##########

# move files

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 9500k --tpslimit 3 --min-age 5m

# remove dummy file

rm /mnt/user/appdata/other/rclone/rclone_upload

exit

Just the default script for now.

Nothing has shown up in any of my mounted drives or in GD, and there is no trace of the deleted folders.

I just tried in rclone_upload and the same thing happened:

Script Starting Mon, 30 Dec 2019 23:31:10 +1300

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rClone_Upload/log.txt

30.12.2019 23:31:10 INFO: rclone installed successfully - proceeding with upload.
2019/12/30 23:31:10 DEBUG : --min-age 1m0s to 2019-12-30 23:30:10.886125095 +1300 NZDT m=-59.989107918
2019/12/30 23:31:10 DEBUG : rclone: Version "v1.50.2-094-g207474ab-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "move" "/mnt/user/rclone_upload/google_vfs/" "gdrive_media_vfs:" "-vv" "--drive-chunk-size" "512M" "--checkers" "3" "--fast-list" "--transfers" "2" "--exclude" ".unionfs/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--fast-list" "--bwlimit" "9500k" "--tpslimit" "3" "--min-age" "1m"]
2019/12/30 23:31:10 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
2019/12/30 23:31:10 INFO : Starting bandwidth limiter at 9.277MBytes/s
2019/12/30 23:31:10 INFO : Starting HTTP transaction limiter: max 3 transactions/s with burst 1
2019/12/30 23:31:13 INFO : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2019/12/30 23:31:13 INFO : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2019/12/30 23:31:13 DEBUG : GD - TV: Removing directory
2019/12/30 23:31:13 DEBUG : GD - Music: Removing directory
2019/12/30 23:31:13 DEBUG : GD - Movies: Removing directory
2019/12/30 23:31:13 DEBUG : Local file system at /mnt/user/rclone_upload/google_vfs/: deleted 3 directories
2019/12/30 23:31:13 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0s

2019/12/30 23:31:13 DEBUG : 9 go routines active
2019/12/30 23:31:13 DEBUG : rclone: Version "v1.50.2-094-g207474ab-beta" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "move" "/mnt/user/rclone_upload/google_vfs/" "gdrive_media_vfs:" "-vv" "--drive-chunk-size" "512M" "--checkers" "3" "--fast-list" "--transfers" "2" "--exclude" ".unionfs/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--fast-list" "--bwlimit" "9500k" "--tpslimit" "3" "--min-age" "1m"]
Script Finished Mon, 30 Dec 2019 23:31:13 +1300

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rClone_Upload/log.txt

 

 

Will it be because of the below line in the upload script?

--delete-empty-src-dirs

 

 

EDIT

Seems to be working now - it is because I had empty folders. Again, thanks for all your help.
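
That fits how the script is put together: rclone move only transfers files (and, with --min-age, only ones older than the cut-off), and --delete-empty-src-dirs then removes the now-empty local folders, so brand-new empty directories never reach Google Drive. If you want the folder structure to exist on GD up front, one option (not something the guide does) is to create it on the remote directly, e.g.:

rclone mkdir "gdrive_media_vfs:GD - Movies"
rclone mkdir "gdrive_media_vfs:GD - TV"

The folders then show up through the mount and the unionfs view as well.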

47 minutes ago, KeyBoardDabbler said:

 

Will it be because of the below line in the upload script?

--delete-empty-src-dirs

 

 

EDIT

Seems to be working now - it is because I had empty folders. Again, thanks for all your help.

Glad you got it working. I find --delete-empty-src-dirs a bit confusing as well, so I don't use it personally; I do this instead, without --delete-empty-src-dirs:

 

rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_fast1_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9M --tpslimit 8 --min-age 10m --fast-list

and then I remove any empty folders myself and recreate the main media folders when the upload completes, to make it easier to add any manual uploads:

find /mnt/user/rclone_upload/tdrive_vfs -empty -type d -delete

mkdir -p /mnt/user/rclone_upload/tdrive_vfs/{documentaries/kids,documentaries/adults,movies_adults_gd,movies_kids_gd,movies_uhd_gd/adults,movies_uhd_gd/kids,tv_adults_gd,tv_kids_gd,tv_uhd/adults,tv_uhd/kids,unfiled/documentaries,unfiled/movies,unfiled/movies_uhd}

 

 


Looks like mergerfs support is available!

 

1 hour ago, Stupifier said:

For those looking for mergerfs for Unraid.

I got super lucky on Reddit & the Mergerfs developer saw our plea for Unraid mergerfs.
https://www.reddit.com/r/unRAID/comments/eiaq99/alternative_to_unionfs_to_merge_folders_on_unraid/

 

Result:

 

He made a docker image for us... once you run it, it will build a static mergerfs binary for us to use!
https://hub.docker.com/repository/docker/trapexit/mergerfs-static-build

 


docker run -v /mnt/user/some_unraid_share/mergerfs:/build --rm -it trapexit/mergerfs-static-build

Change /mnt/user/some_unraid_share/mergerfs in the command above to an actual location on your unRAID OS. After the docker run completes and mergerfs is built, you would then likely move/copy the binary to /bin for active use!
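
Something along these lines, assuming the build drops a mergerfs binary into the folder you mapped above (the same pattern DZMM uses in the test snippet further down):

mv /mnt/user/some_unraid_share/mergerfs/mergerfs /bin
ls -l /bin/mergerfs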

 

 

On 1/1/2020 at 8:00 AM, DZMM said:

Looks like mergerfs support is available!

 

 

Can a few people help me test mergerfs, please? Benefits:

 

1. No need for the unionfs/mount cleanup script, as mergerfs doesn't create temp files and new writes always go to the first folder - much cleaner

2. Supports hardlinking for torrents (see the sketch at the end of this post).

 

To test, replace in the mount script:

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/tdrive_vfs=RW:/mnt/user/mount_rclone/tdrive_vfs=RO /mnt/user/mount_unionfs/google_vfs

with:

# removing old binary as a precaution
rm /bin/mergerfs

docker run -v /mnt/user/appdata/other/mergerfs:/build --rm -it trapexit/mergerfs-static-build
mv /mnt/user/appdata/other/mergerfs/mergerfs /bin

mergerfs /mnt/user/rclone_upload/tdrive_vfs:/mnt/user/mount_rclone/tdrive_vfs /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

It's working for me, but I will play with it for a few days.
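
The hardlinking from benefit 2 in practice (hypothetical paths - the key is that both sit inside the merged mount and resolve to the local rclone_upload branch, where hardlinks are possible):

mkdir -p "/mnt/user/mount_unionfs/google_vfs/tv/Some Show/Season 01"
ln "/mnt/user/mount_unionfs/google_vfs/downloads/Some.Show.S01E01.mkv" "/mnt/user/mount_unionfs/google_vfs/tv/Some Show/Season 01/"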


Just discovered a major benefit! With unionfs, if you wanted to move files that existed in the rclone mount, it would first download them to the relevant folder in rclone_upload, then upload them, and finally delete the original file in the mount - a lot of transfer. The only way to do it quickly was to work directly on the rclone mount.

 

mergerfs just moves the file within the mount without downloading and re-uploading! This should solve some of the transfer problems people have been having, as the move acts like a normal move within a drive, without the lockups that you'd sometimes get with unionfs.
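
In other words, a plain mv inside the merged mount now behaves like a rename on whichever branch holds the file - for files already on the rclone mount, rclone can pass that to Google as a server-side move instead of a download and re-upload. A made-up example (exact behaviour depends on the mergerfs policies in the mount command above):

mkdir -p "/mnt/user/mount_unionfs/google_vfs/movies_archive"
mv "/mnt/user/mount_unionfs/google_vfs/movies/Some Film (1999)" "/mnt/user/mount_unionfs/google_vfs/movies_archive/"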

 

Much better handling of the merged folders 🙂

