Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

Hello, coming over from plexguide. Can I use my same config file, and if so, how would I use my Blitz service accounts? I'm pretty new to Unraid, so I'm starting from scratch. I was able to take my config file, set it up on a Windows machine, and map to the pgunion folder, but I can't get it going on Unraid, and I haven't tried the service accounts on Windows for uploading.

Link to comment
On 3/12/2020 at 7:51 PM, watchmeexplode5 said:

@DZMM, @jrdnlc

Decided to play around with the discord notifications.

Got something working and put in a pull request.

 

Had to rewrite and clean up a bit of it before adding it to your script, but it appears to be working as intended (reports transfer count, rate, amount, etc.). I'm sure there is room for improvement; displaying transferred file names would be awesome!

 

Couldn't get error reporting working, but I didn't go digging too deep. Just scrapped that bit rather than pulling my hair out over trivial things.

 

Credit to no5tyle or SenPaiBox or whoever wrote it. 

[screenshot: example Discord upload notification]

I had the chance to give this a try today, but it looks like the script log no longer shows the progress? Now it just says "Uploading using upload remote gdrive". Before, I used to get the percentage of each file, the file name, and the estimated time left.

 

The upload.log in the appdata folder just says "Waiting for checks to finish", "Waiting for transfers to finish", and "Scheduled bandwidth change. Bandwidth limits disabled", and generates nothing else.

 

Right now the upload script is running, but I have no idea what exactly it's uploading.

 

It would also be great if the Discord notification covered each file upload / progress / files left; that way you don't have to go digging in the logs.

 

 

Edited by jrdnlc
Link to comment

@jrdnlc

 

Hmmm, mine's been running for 2 days now with notifications without issue :/ I don't know why you would have a log with nothing but a hanging upload output. Nothing was changed in the actual rclone move command besides adding logging, changing -vv ---> -vP, and capturing the command's output into a variable. Those changes shouldn't result in anything hanging. A couple of questions:

 

  • Beta or stable rclone plugin? (I'm running stable, rclone v1.51.0. I didn't test beta, so that could be it; you can check with the command below.)
  • Does the script ever finish? (Does the script log have something like --> INFO: Script complete?)
  • Are there files needing to be uploaded in your local (to be uploaded) folder or is it empty?
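A quick way to confirm which build the plugin installed is to run the standard version command from a terminal:

# Prints the installed rclone version, e.g. "rclone v1.51.0"
rclone version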

 

The code was adapted from SenpaiBox's and no5tyle's (GitHub) work, capturing the rclone move output into a variable. The notification depends on a specific format in rclone's output. When testing with "-vv" it would not function, so I used "-vP" in the rclone move command. This produces the output needed for the variables to be extracted (and also cleans up the logs significantly, imo). That is why you are no longer seeing the rclone move progress every few seconds.
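For anyone who wants to wire up something similar by hand, the capture-and-post idea boils down to a few lines of bash. This is only a rough sketch, not the script verbatim: the webhook URL, paths, and remote name are placeholders, and it assumes jq is installed for building the JSON payload.

#!/bin/bash
WEBHOOK_URL="https://discord.com/api/webhooks/XXXXXXXXXX"   # placeholder webhook

# -P prints a final stats summary; capture everything rclone writes
RCLONE_OUTPUT=$(rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: -vP 2>&1)

# Pull the summary lines (Transferred / Checks / Elapsed time) out of the output
SUMMARY=$(echo "$RCLONE_OUTPUT" | grep -E 'Transferred:|Checks:|Elapsed time:')

# Post the summary to Discord as a JSON payload
curl -s -H "Content-Type: application/json" -X POST \
     -d "$(jq -n --arg msg "$SUMMARY" '{content: $msg}')" "$WEBHOOK_URL"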

 

----------------------

I agree that the notifications would be better with more info (upload speed / progress / files / etc.).

That being said, I think progress and the like would cause a massive flood of Discord notifications; it wouldn't really work well in a static push-notification system. That would be better displayed as dynamic stats in something like a web GUI (like the rclone beta GUI).

----------------------

 

Here is the output of my logs that rclone generates for reference: 

2020/03/14 21:00:45 INFO  : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/03/14 21:00:47 INFO  : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2020/03/14 21:00:47 INFO  : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2020/03/14 21:00:49 INFO  : testdir/test.pptx: Copied (new)
2020/03/14 21:00:49 INFO  : testdir/test.pptx: Deleted
2020/03/14 21:00:49 INFO  : 
Transferred:   	    1.451M / 1.451 MBytes, 100%, 861.632 kBytes/s, ETA 0s
Checks:                 2 / 2, 100%
Deleted:                1
Transferred:            1 / 1, 100%
Elapsed time:         1.7s

 

Has anybody else tried the recent discord notifications and experienced similar issues?

 

Edited by watchmeexplode5
Link to comment

@JohnJay829

I used to use plexguide on a different box a long, long time ago. Make sure to back up your config file prior to modifying it. You should be able to adapt it to work with these scripts fairly easily. If plexguide hasn't changed (which I don't believe it has), it adds all the service accounts as separate remotes ([GDSA01], [GDSA02], etc.). This is NOT necessary with these scripts.
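From memory, each of those plexguide-generated remotes looks roughly like this (illustrative only; the exact fields and key file path may differ on your install):

[GDSA01]
type = drive
scope = drive
service_account_file = /opt/appdata/plexguide/.blitzkeys/GDSA01.json
team_drive = XXXXXXXXXX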

 

You simply need 1 or 2 remotes, depending on whether you use encryption (and solely use the tdrive for storage).

 

So to edit your config, simply copy and paste the values from your [tdrive] and [tcrypt] into the "XXXXXX" fields in the following config (assuming encryption):

 

[tdrive] values get copied here:

[gdrive]
type = drive
scope = drive
server_side_across_configs = true
client_id = XXXXXXXXXX
client_secret = XXXXXXXXXX
token = XXXXXXXXXX
team_drive = XXXXXXXXXX

 

[tcrypt] values get copied here:

[gdrive_vfs]
type = crypt
remote = gdrive:/encrypt
filename_encryption = standard
directory_name_encryption = true
password = XXXXXXXXXX
password2 = XXXXXXXXXX

If you don't have encryption then you simply need [gdrive].

 

Now configure your mount script (self-explanatory with the comments in the mount script). Run the mount script and you should see the files from the tdrive that plexguide was using.
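As a quick sanity check, you can also list the remote directly from a terminal; if the config above is right, this should show your decrypted top-level folders:

# Lists the top-level directories on the crypt remote defined above
rclone lsd gdrive_vfs: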

 

---------------

Service account setup. 

 

If you still have plexguide running / on a hard drive, locate the service account files (.json). If memory serves me correctly, they are stored in "/opt/appdata/plexguide/.blitzkeys/" and are named GDSA01, GDSA02, etc. You only need about 4-15, but you can grab as many as you want; I think plexguide defaults to something like 6. You might have to rename them and remove the "0" in front of 1, 2, 3, etc. for the script to work properly with their names. You can mass rename (see the readme on GitHub), but for a handful I'd just rename manually, or use a quick loop like the one below.
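A minimal rename sketch, assuming the files are named GDSA01.json, GDSA02.json, and so on:

cd /mnt/user/appdata/other/rclone/service_accounts/
# GDSA01.json -> GDSA1.json, GDSA02.json -> GDSA2.json, ...
for f in GDSA0*.json; do
    mv "$f" "GDSA${f#GDSA0}"
done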


Copy those to Unraid at "/mnt/user/appdata/other/rclone/service_accounts/" so that you have GDSA1.json, GDSA2.json, etc.

Rclone upload script should be: RcloneRemoteName="gdrive_vfs", RcloneUploadRemoteName="gdrive_vfs", ServiceAccountFile="GDSA".

Edit the rest of the script for your setup: the script explains each variable.

 

No need to re-add the service accounts into your team drive authentication... plexguide should already have listed them as members and given them access.

 

---------------

 

That should be it. Simple transition, much cleaner config file. Unraid is about 100x more stable and less frustrating than plexguide. Everything can be changed and you get control over what's going on, as opposed to plexguide using a million scripts that work in only one way with no ability to modify functionality.

 

Feel free to chime back in or pm me if you need help. I think everything should be as listed above but again, I'm recalling the plexguide configs and service accounts from memory :/ 

 

 

  • Like 1
  • Thanks 1
Link to comment
19 hours ago, Kaizac said:

Regarding the SA rotation for uploading, does it now rotate automatically when 750GB is maxed out, or does it just move up to the next SA when a new upload is started because of timing? I.e., is it only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?

Service accounts are rotated every time the script runs at X interval, as @DZMM stated.

 

IMO you should always use SAs to avoid any issues. I've seen zero issues using 100 SAs rotating every 15 mins on a single tdrive (that's an excessive number; 5-25 SAs is plenty). Google's servers will simply see a new 1/100 user uploading every 15 min. I saturate a gigabit line on a single service account, but speeds will still depend on you and your distance to Google's servers. Rotating through that many accounts, you physically can't hit the 750GB/day max on a single account and rotate all the way back to that account again: with 100 accounts at 15-min intervals, a full rotation takes 100 x 15 min = 25 hours, which is longer than a day.
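For illustration, the rotation amounts to a counter file that picks the next service account on each run. A rough sketch, not the script verbatim (the counter location, account total, and paths are assumptions):

COUNTER_FILE="/mnt/user/appdata/other/rclone/counter"   # assumed location
COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 1)

# Use this run's service account, e.g. GDSA1.json ... GDSA15.json
SA_FILE="/mnt/user/appdata/other/rclone/service_accounts/GDSA${COUNT}.json"
rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: --drive-service-account-file="$SA_FILE"

# Advance the counter, wrapping after 15 accounts (use your own total)
echo $(( COUNT % 15 + 1 )) > "$COUNTER_FILE"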

 

The addition of the --drive-stop-on-upload-limit flag should prevent a single SA from attempting to upload your 750+GB in one script run (i.e. using a single SA), and the upload would resume where it left off when the script runs again with a new SA. I haven't tested this though; I've never stored that much pending upload, so I've never run into this issue.
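For anyone adding it by hand, the flag just gets appended to the move command, e.g. (path and remote are placeholders):

rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: --drive-stop-on-upload-limit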

Edited by watchmeexplode5
Link to comment
48 minutes ago, watchmeexplode5 said:

The addition of the --drive-stop-on-upload-limit flag should prevent a single SA from attempting to upload your 750+GB in one script run (i.e. using a single SA), and the upload would resume where it left off when the script runs again with a new SA. I haven't tested this though; I've never stored that much pending upload, so I've never run into this issue.

It works.  I have an asymmetric 360/180 connection (moved before xmas and lost my 1G symmetric connection 😞 ), so I tend to have more than 750GB pending upload - also because I use bwlimits to make sure I've got some spare upload left, even though I use traffic shaping on my pfsense VM. 

 

It stops once any transfers that started before the 750GB limit was hit have finished, and then resumes on the next run with a new SA.

Edited by DZMM
  • Thanks 1
Link to comment

@watchmeexplode5 It's good to see someone else state that we don't need all the extra mount points. I also used plexguide for a while; Plex left my Unraid system for about a year.

 

I'm not having much luck with the unmount script on array stop; I'm having to manually run the fusermount -uz command each time. I've let people start using Plex again, so I don't plan to stop the array again just yet.

Link to comment
11 minutes ago, Tuftuf said:

I'm not having much luck with the unmount script on array stop; I'm having to manually run the fusermount -uz command each time. I've let people start using Plex again, so I don't plan to stop the array again just yet.

The unmount script doesn't have any fusermount commands, as the new script structure makes this difficult (mount locations are variable).  The script is intended to be a cleanup script to be run at array start.

 

Do you need it to run at array stop?  If so, just add your own fusermount commands to the script.
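Something like this would do it at array stop (mount paths are examples; match them to your own mount locations):

# Lazy-unmount the rclone and mergerfs mounts
fusermount -uz /mnt/user/mount_rclone/gdrive_vfs
fusermount -uz /mnt/user/mount_mergerfs/gdrive_vfs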

Link to comment
On 3/16/2020 at 9:30 AM, DZMM said:

The unmount script doesn't have any fusermount commands, as the new script structure makes this difficult (mount locations are variable).  The script is intended to be a cleanup script to be run at array start.

 

Do you need it to run at array stop?  If so, just add your own fusermount commands to the script.

 

My array was not stopping, and I blamed this script when I couldn't quite work out where the fusermount command was. I'll have to see if something else is causing it not to stop, as it looks to be unrelated. I don't plan on stopping it just yet; it's serving its purpose. My main focus is getting things ready to back it all up.

Link to comment

Big thank you for all the hard work put into this container and the scripts. Posts here have helped me a lot in understanding and resolving issues I previously had with mount_unionfs and mount_mergerfs.

 

Last night I got mount_mergerfs up and running, and 5 folders/files uploaded successfully to mount_rclone. There is a couple hundred GB waiting in the local mount. A further 5 folders uploaded, but empty, and I keep receiving this in the upload log.

 

Quote

18.03.2020 21:47:01 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
18.03.2020 21:47:01 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 21:47:01 INFO: Exiting as script already running.
Script Finished Wed, 18 Mar 2020 21:47:01 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

 

I did a couple of shutdowns last night, and I'm not sure if this error is the result of an unclean shutdown. I have a stock upload script besides changing RcloneUploadRemoteName="gdrive_vfs" to match the RcloneRemoteName.

 

What should I be doing to fix it? Thanks

Link to comment
10 minutes ago, faulksy said:

Big thank you for all the hard work put into this container and the scripts. Posts here have helped me a lot in understanding and resolving issues I previously had with mount_unionfs and mount_mergerfs.

 

Last night I got mount_mergerfs up and running, and 5 folders/files uploaded successfully to mount_rclone. There is a couple hundred GB waiting in the local mount. A further 5 folders uploaded, but empty, and I keep receiving this in the upload log.

 

 

I did a couple of shutdowns last night, and I'm not sure if this error is the result of an unclean shutdown. I have a stock upload script besides changing RcloneUploadRemoteName="gdrive_vfs" to match the RcloneRemoteName.

 

What should I be doing to fix it? Thanks

Delete the checker files in the appdata/other/rclone folder; something like "upload_running".
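From a terminal, something like this clears it (the exact file name and location depend on your script version, so check the folder first):

# Remove any leftover upload checker files (path and name are assumptions)
find /mnt/user/appdata/other/rclone -name "upload_running*" -delete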

Link to comment
18 minutes ago, Kaizac said:

Yeah, upload_running is the checker file for uploads. Delete it and you should be able to upload.

It seemed to get going again, but the same error is in the log again. I have the script set to run hourly, which is the 1st and 3rd events. 4 new empty folders were added to mount_rclone, no files.

 

Quote

18.03.2020 21:25:15 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 21:25:15 INFO: Exiting as script already running.
Script Finished Wed, 18 Mar 2020 21:25:15 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

Script Starting Wed, 18 Mar 2020 21:47:01 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

18.03.2020 21:47:01 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
18.03.2020 21:47:01 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 21:47:01 INFO: Exiting as script already running.
Script Finished Wed, 18 Mar 2020 21:47:01 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

Script Starting Wed, 18 Mar 2020 22:41:04 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

18.03.2020 22:41:04 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
18.03.2020 22:41:04 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 22:41:04 INFO: Script not running - proceeding.
18.03.2020 22:41:04 INFO: Checking if rclone installed successfully.
18.03.2020 22:41:04 INFO: rclone installed successfully - proceeding with upload.
18.03.2020 22:41:04 INFO: Uploading using upload remote gdrive_vfs
18.03.2020 22:41:04 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
====== RCLONE DEBUG ======
Script Starting Wed, 18 Mar 2020 22:47:01 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

18.03.2020 22:47:01 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
18.03.2020 22:47:01 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 22:47:01 INFO: Exiting as script already running.
Script Finished Wed, 18 Mar 2020 22:47:01 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

 

Link to comment
7 minutes ago, faulksy said:

It seemed to get going again, but the same error is in the log again. I have the script set to run hourly, which is the 1st and 3rd events. 4 new empty folders were added to mount_rclone, no files.

 

 

I think you made a spelling error somewhere. In your earlier posts you wrote gdrive_vsf instead of vfs

Link to comment
7 minutes ago, faulksy said:

It seemed to get going again, but the same error is in the log again. I have the script set to run hourly, which is the 1st and 3rd events. 4 new empty folders were added to mount_rclone, no files.

What are you trying to upload? Has your prior run managed to finish its upload before you start the 2nd one?

Link to comment
3 minutes ago, testdasi said:

What are you trying to upload? Has your prior run managed to finish its upload before you start the 2nd one?

Video files ranging from 2-8GB in size. After Kaizac helped me delete the checker file, I manually ran the script. The log was:

Quote

18.03.2020 22:41:04 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
18.03.2020 22:41:04 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 22:41:04 INFO: Script not running - proceeding.
18.03.2020 22:41:04 INFO: Checking if rclone installed successfully.
18.03.2020 22:41:04 INFO: rclone installed successfully - proceeding with upload.
18.03.2020 22:41:04 INFO: Uploading using upload remote gdrive_vfs
18.03.2020 22:41:04 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
====== RCLONE DEBUG ======

 

The scheduled hourly script ran and hit the same issue:

Quote

18.03.2020 22:47:01 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
18.03.2020 22:47:01 INFO: *** Starting rclone_upload script for gdrive_vfs ***
18.03.2020 22:47:01 INFO: Exiting as script already running.
Script Finished Wed, 18 Mar 2020 22:47:01 +1030

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_custom_plugin/log.txt

 

No files were uploaded at all in that time. 4 media folders were created in mount_rclone, but no files. The storage used on my gdrive hasn't changed since last night.

Link to comment
13 minutes ago, Kaizac said:

I think you made a spelling error somewhere. In your earlier posts you wrote gdrive_vsf instead of vfs

Just a typo here; my upload script is OK. I'm not knowledgeable enough, so I have to keep things simple.

Quote

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

 

Link to comment
9 minutes ago, Kaizac said:

Go to that upload.log file; it should show what is happening. It's in appdata/other/rclone.

It goes to add things and then deletes them:

Quote

2020/03/17 21:46:48 INFO  : Starting bandwidth limiter at 12MBytes/s
2020/03/17 21:46:48 INFO  : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/03/17 21:46:51 INFO  : Encrypted drive 'gdrive_vfs:': Waiting for checks to finish
2020/03/17 21:46:51 INFO  : Encrypted drive 'gdrive_vfs:': Waiting for transfers to finish
2020/03/17 21:47:10 INFO  : movies/HD/Back To The Sea (2012)/Back To The Sea (2012)-fanart.jpg: Copied (new)
2020/03/17 21:47:10 INFO  : movies/HD/Back To The Sea (2012)/Back To The Sea (2012)-fanart.jpg: Deleted
2020/03/17 21:47:11 INFO  : movies/HD/Back To The Sea (2012)/Back To The Sea (2012).nfo: Copied (new)
2020/03/17 21:47:11 INFO  : movies/HD/Back To The Sea (2012)/Back To The Sea (2012).nfo: Deleted
2020/03/17 21:47:48 NOTICE: Scheduled bandwidth change. Limit set to 12MBytes/s
2020/03/17 22:09:08 INFO  : movies/HD/Back To The Sea (2012)/Back To The Sea (2012).avi: Copied (new)
2020/03/17 22:09:08 INFO  : movies/HD/Back To The Sea (2012)/Back To The Sea (2012).avi: Deleted
2020/03/17 22:22:46 INFO  : movies/HD/Bee Movie (2007)/Bee Movie (2007).avi: Copied (new)
2020/03/17 22:22:46 INFO  : movies/HD/Bee Movie (2007)/Bee Movie (2007).avi: Deleted
2020/03/17 22:29:02 INFO  : movies/HD/A Cinderella Story Once Upon a Song (2011)/A Cinderella Story Once Upon a Song (2011).avi: Copied (new)
2020/03/17 22:29:02 INFO  : movies/HD/A Cinderella Story Once Upon a Song (2011)/A Cinderella Story Once Upon a Song (2011).avi: Deleted
2020/03/17 22:57:46 INFO  : movies/HD/Big (1988)/Big (1988).mkv: Copied (new)
2020/03/17 22:57:46 INFO  : movies/HD/Big (1988)/Big (1988).mkv: Deleted
2020/03/17 23:02:06 INFO  : movies/HD/Fantastic Four (2005)/Fantastic Four (2005).mkv: Copied (new)
2020/03/17 23:02:07 INFO  : movies/HD/Fantastic Four (2005)/Fantastic Four (2005).mkv: Deleted
2020/03/17 23:02:29 INFO  : movies/HD/Animal Kingdom (2010)/Animal Kingdom (2010).mkv: Copied (new)
2020/03/17 23:02:29 INFO  : movies/HD/Animal Kingdom (2010)/Animal Kingdom (2010).mkv: Deleted
2020/03/18 22:41:04 INFO  : Starting bandwidth limiter at 12MBytes/s
2020/03/18 22:41:04 INFO  : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/03/18 22:41:19 INFO  : Encrypted drive 'gdrive_vfs:': Waiting for checks to finish
2020/03/18 22:41:19 INFO  : Encrypted drive 'gdrive_vfs:': Waiting for transfers to finish
2020/03/18 22:42:04 NOTICE: Scheduled bandwidth change. Limit set to 12MBytes/s
 

 

Link to comment
1 minute ago, faulksy said:

It goes to add things and then deletes them

 

Then the "error" is expected.

One of the uploads is still running (as Kaizac said, 12MB/s is rather slow; at that rate a couple hundred GB takes over four hours), so naturally the next run would stop.

The whole upload control file exists exactly for this scenario, i.e. to avoid running multiple uploads of the same files to the same source.
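For context, the control file is just a simple lock-file pattern, roughly like this (names and paths are illustrative, not the script verbatim):

LOCKFILE="/mnt/user/appdata/other/rclone/upload_running"   # illustrative name
if [[ -f "$LOCKFILE" ]]; then
    echo "INFO: Exiting as script already running."
    exit 0
fi
touch "$LOCKFILE"
rclone move /mnt/user/local/gdrive_vfs gdrive_vfs:   # the actual upload
rm "$LOCKFILE"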

 

You shouldn't be running the upload script on an hourly schedule with such a slow connection, to be honest.

At least don't run it on a schedule until everything has been uploaded.

Link to comment
