Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

2 minutes ago, Kaizac said:

So how does rclone know to use the service accounts when streaming media, then?

ok, I see where your confusion is coming from

 

Or, like this if using service accounts:

[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json
team_drive = TEAM DRIVE ID
server_side_across_configs = true

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2
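
To sanity-check that the remote is picking up the SA, something like this should list the drive without any OAuth prompt (a quick check, assuming the config above):

rclone lsd gdrive:
rclone lsd gdrive_media_vfs: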

Fixed the readme - glad someone is reading it!

Link to comment
36 minutes ago, DZMM said:

ok, I see where your confusion is coming from

 


Or, like this if using service accounts: [service account config as above]

Fixed the readme - glad someone is reading it!

Ok, so you set up the remote with one of the SAs you create - number 1 of 100, for example. And then for uploading you rotate within the service accounts folder between the 100 SAs? Am I understanding it correctly then?

 

And if I want to have another remote to separate my bazarr traffic, do I then create a new project or do I just use a different SA? I'm not sure at what level the API ban is registered.

Link to comment
10 minutes ago, Kaizac said:

Ok, so you set up the remote with one of the SAs you create - number 1 of 100, for example. And then for uploading you rotate within the service accounts folder between the 100 SAs? Am I understanding it correctly then?

 

And if I want to have another remote to separate my bazarr traffic, do I then create a new project or do I just use a different SA? I'm not sure at what level the API ban is registered.

No need for a new project if the SA group has been added to the respective team drives - think of SAs as normal accounts that don't need credentials/client_ids set up, i.e. bans work the same - on the offending SA.

 

They're good for efficiently handling multiple accounts for rotation etc. once they are set up

 

 

Link to comment
1 hour ago, DZMM said:

No need for a new project if the SA group has been added to the respective team drives - think of SAs as normal accounts that don't need credentials/client_ids set up, i.e. bans work the same - on the offending SA.

 

They're good for efficiently handling multiple accounts for rotation etc. once they are set up

 

 

Did you configure the path and file to the json through rclone config, or did you just add the line to the rclone config file after setting it up? When I try it via rclone config over SSH it says:

 

Failed to configure team drive: config team drive failed to create oauth client: error opening service account credentials file: open sa_tdrive.json: no such file or directory

 

Link to comment
15 minutes ago, Kaizac said:

Did you configure the path and file to the json through rclone config, or did you just add the line to the rclone config file after setting it up? When I try it via rclone config over SSH it says:

 


Failed to configure team drive: config team drive failed to create oauth client: error opening service account credentials file: open sa_tdrive.json: no such file or directory

 

I just add it to the rclone config file - since I learnt that you can copy passwords and somehow the encryption still works, I do almost all my config stuff directly in the file.
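
The "no such file or directory" error above usually just means rclone was handed a bare filename and looked for it in the current directory - giving service_account_file the absolute path directly in the config file avoids it. A sketch using the paths from earlier in the thread:

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json
team_drive = TEAM DRIVE ID
server_side_across_configs = true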

Link to comment

I'm getting the following error when mounting my remotes:

 

INFO : Google drive root 'Archief': Failed to get StartPageToken: Get "https://www.googleapis.com/drive/v3/changes/startPageToken?alt=json&prettyPrint=false&supportsAllDrives=true": oauth2: cannot fetch token: 401 Unauthorized
Response: {
"error": "deleted_client",
"error_description": "The OAuth client was deleted."
}

Do you also get that?

 

And is there an easy way to use your mount script for multiple remotes?

Link to comment

Yeah, the SAs are created, as is the new project. The SAs are added to a group, which is added as a member to the team drive. When going into the Google dev console I don't see an OAuth module though; not sure if it's needed.

 

My rclone config looks like this:

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json
team_drive = XX
server_side_across_configs = true

[tdrive_crypt]
type = crypt
remote = tdrive:Archief
filename_encryption = standard
directory_name_encryption = true
password = XX
password2 = XX

It really starts to annoy me that it's so complicated.

Link to comment

@DZMM, for the AutoRclone part, did you let the script create a new project? And did you change anything in your G Suite developer/admin console to make it work?

 

I read this on the rclone page but that seems to be too much work for 100 SA's.

 

1. Create a service account for example.com

    To create a service account and obtain its credentials, go to the Google Developer Console.
    You must have a project - create one if you don’t.
    Then go to “IAM & admin” -> “Service Accounts”.
    Use the “Create Credentials” button. Fill in “Service account name” with something that identifies your client. “Role” can be empty.
    Tick “Furnish a new private key” - select “Key type JSON”.
    Tick “Enable G Suite Domain-wide Delegation”. This option makes “impersonation” possible, as documented here: Delegating domain-wide authority to the service account
    These credentials are what rclone will use for authentication. If you ever need to remove access, press the “Delete service account key” button.

2. Allowing API access to example.com Google Drive

    Go to example.com’s admin console
    Go into “Security” (or use the search bar)
    Select “Show more” and then “Advanced settings”
    Select “Manage API client access” in the “Authentication” section
    In the “Client Name” field enter the service account’s “Client ID” - this can be found in the Developer Console under “IAM & Admin” -> “Service Accounts”, then “View Client ID” for the newly created service account. It is a ~21 character numerical string.
    In the next field, “One or More API Scopes”, enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
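
For what it's worth, the per-account steps above can also be scripted. A rough sketch with the gcloud CLI (the account name and project are placeholders, untested here - AutoRclone automates the same thing in bulk):

# create a service account in an existing project
gcloud iam service-accounts create rclone-sa-1 --display-name="rclone-sa-1"
# download its JSON key for rclone to use
gcloud iam service-accounts keys create sa_gdrive.json \
  --iam-account=rclone-sa-1@YOUR-PROJECT.iam.gserviceaccount.com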

 

Link to comment
2 minutes ago, DZMM said:

It's been a while but I didn't do anything clever - I just followed these instructions: https://github.com/xyou365/AutoRclone/blob/master/Readme.md. Somehow I ended up with 500, not 100, though

Probably because you had 5 projects. I had 28, so I got 2800 SAs hahaha.

 

Anyway, I discovered it was a remote to my Gdrive (not team drive) that was giving the errors. Everything has mounted fine now.

I'll use my own mount script since I have 10 remotes, and running 10 copies of your script seems excessive. Maybe I can find a way to convert your script into a multi-remote script - something like the sketch below.
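
A minimal sketch of that idea - one mount block looped over several remotes (remote names and mount points are examples, not the actual script):

#!/bin/bash
# example crypt remotes - replace with your own
remotes="tdrive_crypt tdrive2_crypt tdrive3_crypt"
for remote in $remotes; do
  mkdir -p /mnt/user/mount_rclone/$remote
  # only mount if not already mounted
  if ! mountpoint -q /mnt/user/mount_rclone/$remote; then
    rclone mount --allow-other --dir-cache-time 720h \
      $remote: /mnt/user/mount_rclone/$remote &
  fi
done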

Link to comment
15 minutes ago, Kaizac said:

@DZMM in your mounting script you have the following:

 


Remember to disable AUTOSTART in docker settings page

Are you talking about disabling autostart for the specific dockers or for the whole docker module?

For the dockers that are started by the script
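
i.e. the line at the end of the mount script that looks something like this (container names are examples):

docker start plex sonarr radarr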

Link to comment
2 hours ago, DZMM said:

For the dockers that are started by the script

Ok, maybe you should word that differently then. I assumed correctly, but what I was reading was that I had to disable autostart of the Docker daemon on the Docker settings page. You mean the Docker overview page and the autostart for those specific dockers, not the daemon.

 

Regarding the SA rotation for uploading: does it rotate automatically when the 750GB is maxed out, or does it just move on to the next SA when a new upload run starts, because of the timing? I.e., is it only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?

Link to comment

Hello, coming over from plexguide. Can I use my same config file, and if so, how would I use my blitz service accounts? I'm pretty new to unraid, so I'm starting from scratch. I was able to take my config file, set it up on a Windows machine, and map it to the pgunion folder, but I can't get it going on unraid, and I haven't tried the service accounts on Windows for uploading.

Link to comment
On 3/12/2020 at 7:51 PM, watchmeexplode5 said:

@DZMM, @jrdnlc

Decided to play around with the discord notifications.

Got something working and put in a pull request.

 

Had to rewrite and clean up a bit of it before adding it to your script, but it appears to be working as intended (reports transfer number, rate, amount, etc.). I'm sure there's room for improvement - displaying transferred file names would be awesome!

 

Couldn't get error reporting working, but I didn't go digging too deep - just scrapped that bit rather than pulling my hair out over trivial things.

 

Credit to no5tyle or SenPaiBox or whoever wrote it. 


I had the chance to give this a try today, but it looks like the script log file no longer shows the progress? Now it just says "Uploading using upload remote gdrive". Before, I used to get the percentage of each file, the file name, and the estimated time left.

 

The upload.log in the appdata folder just says "Waiting for checks to finish", "Waiting for transfers to finish", "Scheduled bandwidth change. Bandwidth limits disabled" and generates nothing else. 

 

Right now the upload script is running, but I have no idea what exactly it's uploading.

 

It would also be great if the discord notification covered each file's upload/progress/files left - that way you don't have to go digging in the logs.

 

 

Edited by jrdnlc
Link to comment

@jrdnlc

 

Hmmm, mine's been running for 2 days now with notifications without issue :/ I don't know why you would have a log with nothing but a hanging upload output. Nothing was changed in the actual rclone move command besides adding logging, changing -vv ---> -vP, and placing the command output into a variable. Those changes shouldn't result in anything hanging. A couple of questions:

 

  • Beta or stable Rclone plugin? (I'm running stable, rclone v1.51.0. Didn't test beta so that could be it)
  • Does the script ever finish? (the script log should have something like --> INFO: Script complete)
  • Are there files needing to be uploaded in your local (to be uploaded) folder or is it empty?

 

Code was adapted from SenpaiBox's and no5tyle's (GitHub) work, setting the rclone move output into a variable. The notification depends on a specific format in rclone's output. When testing with "-vv" it would not function, so I used "-vP" in the rclone move command. This produces the output needed for the variables to be extracted (and also cleans up the logs significantly, imo). That is why you are no longer seeing the rclone move progress every few seconds.
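
Something like this illustrates the mechanism - capture rclone's output, pull out the summary line, push it to a webhook (the webhook URL, paths, and parsing here are placeholders, not the exact script):

# capture the rclone move output (paths are examples; -P stats go to stderr, hence 2>&1)
output=$(rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: -vP 2>&1)
# grab the byte-count summary line and squeeze the whitespace
summary=$(echo "$output" | grep "Transferred:" | head -1 | tr -s '[:blank:]' ' ')
# post it to a Discord webhook (URL is a placeholder)
curl -s -H "Content-Type: application/json" \
  -d "{\"content\": \"Upload finished: $summary\"}" \
  "https://discord.com/api/webhooks/XXXX/XXXX"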

 

----------------------

I agree that the notifications would be better with more info (upload speed / progress / files / ect). 

That being said, I think progress and such would cause a massive flood of discord notifications; it wouldn't really work well in a static push notification system. That would be better displayed as dynamic stats in something like a web GUI (like the rclone beta GUI).

----------------------

 

Here is the output of my logs that rclone generates for reference: 

2020/03/14 21:00:45 INFO  : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/03/14 21:00:47 INFO  : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2020/03/14 21:00:47 INFO  : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2020/03/14 21:00:49 INFO  : testdir/test.pptx: Copied (new)
2020/03/14 21:00:49 INFO  : testdir/test.pptx: Deleted
2020/03/14 21:00:49 INFO  : 
Transferred:   	    1.451M / 1.451 MBytes, 100%, 861.632 kBytes/s, ETA 0s
Checks:                 2 / 2, 100%
Deleted:                1
Transferred:            1 / 1, 100%
Elapsed time:         1.7s

 

Has anybody else tried the recent discord notifications and experienced similar issues?

 

Edited by watchmeexplode5
Link to comment

@JohnJay829

I used to use plexguide on a different box a long, long time ago. Make sure to back up your config file prior to modifying it. You should be able to adapt it to work with the scripts fairly easily. If plexguide hasn't changed (which I don't believe it has), it adds all the service accounts as separate remotes ([GDSA01], [GDSA02], etc.). This is NOT necessary with these scripts.

 

You simply need 1 or 2 mounts, depending on whether you are encrypted (and solely using the tdrive for storage).

 

So to edit your config, simply copy and paste the values from your [tdrive] and [tcrypt] into the "XXXXXX" fields of the following config (assuming encryption):

 

[tdrive] values get copied here:

[gdrive]
type = drive
scope = drive
server_side_across_configs = true
client_id = XXXXXXXXXX
client_secret = XXXXXXXXXX
token = XXXXXXXXXX
team_drive = XXXXXXXXXX

 

[tcrypt] values get copied here:

[gdrive_vfs]
type = crypt
remote = gdrive:/encrypt
filename_encryption = standard
directory_name_encryption = true
password = XXXXXXXXXX
password2 = XXXXXXXXXX

If you don't have encryption then you simply need [gdrive].

 

Now configure your mount script (self-explanatory with the comments in the mount script). Run the mount script and you should see the files from the tdrive that plexguide was using.

 

---------------

Service account setup. 

 

If you still have plexguide running / on a hard drive, locate the service account files (.json). If memory serves me correctly, they are stored in "/opt/appdata/plexguide/.blitzkeys/" and are named GDSA01, GDSA02, and so on. You only need about 4-15, but you can grab as many as you want; I think plexguide defaults to something like 6. You may have to rename them, removing the "0" in front of 1, 2, 3, etc., so the script works properly with their names. You can mass rename (see the readme on GitHub, or the sketch below), but for a handful I'd just rename them manually.
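
A one-liner sketch for that rename (run it in the folder with the keys, and check the filenames first):

# strip the leading zero: GDSA01.json -> GDSA1.json, etc.
for f in GDSA0*.json; do mv "$f" "${f/GDSA0/GDSA}"; done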


Copy those to unraid's "/mnt/user/appdata/other/rclone/service_accounts/" so that you have GDSA1.json, GDSA2.json, etc.

Rclone upload script should be: RcloneRemoteName="gdrive_vfs", RcloneUploadRemoteName="gdrive_vfs", ServiceAccountFile="GDSA".

Edit the rest of the script for your setup: the script explains each variable.

 

No need to re-add the service accounts into your team drive authentication... plexguide should already have listed them as members and given them access.

 

---------------

 

That should be it. A simple transition, and a much cleaner config file. Unraid is about 100x more stable and less frustrating than plexguide. Everything can be changed and you get control over what's going on, as opposed to plexguide using a million scripts that work in only one way with no ability to modify functionality.

 

Feel free to chime back in or pm me if you need help. I think everything should be as listed above but again, I'm recalling the plexguide configs and service accounts from memory :/ 

 

 

Link to comment
19 hours ago, Kaizac said:

Regarding the SA rotation for uploading: does it rotate automatically when the 750GB is maxed out, or does it just move on to the next SA when a new upload run starts, because of the timing? I.e., is it only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?

Service accounts are rotated every time the script runs at its set interval, as @DZMM stated.

 

IMO you should always use SAs to avoid any issues. I've seen zero issues using 100 SAs rotating every 15 mins on a single tdrive (that's an excessive number; 5-25 SAs is plenty). Google's servers will simply see a new 1-of-100 user uploading every 15 min. I saturate a gigabit line on a single service account, but speeds will still depend on you and your distance to Google's servers. Rotating through that many accounts, you physically can't hit the 750GB/day max on a single account and rotate all the way back to that account again - it simply takes longer than a day to rotate through all the accounts (with 100 x 15-min intervals).

 

The addition of the --drive-stop-on-upload-limit flag should prevent a single SA from attempting to upload your 750+GB in one script run (i.e. using a single SA), and the upload would resume where it left off when the script runs again with a new SA. I haven't tested this though - I've never had that much pending upload, so I've never run into this issue.
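
The rotation itself is basically just a counter. A simplified sketch of the mechanism (file names and paths are examples; the real upload script does more):

# read the counter that tracks which SA to use (default to 1)
counter_file=/mnt/user/appdata/other/rclone/counter
count=$(cat "$counter_file" 2>/dev/null || echo 1)
sa_file=/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_$count.json

# upload with this run's SA; stop cleanly if the 750GB/day cap is hit
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
  --drive-service-account-file="$sa_file" \
  --drive-stop-on-upload-limit

# rotate: 1..100, then wrap back to 1 for the next run
count=$((count % 100 + 1))
echo $count > "$counter_file"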

Edited by watchmeexplode5
Link to comment
48 minutes ago, watchmeexplode5 said:

The addition of the --drive-stop-on-upload-limit flag should prevent a single SA from attempting to upload your 750+GB in one script run (i.e. using a single SA), and the upload would resume where it left off when the script runs again with a new SA. I haven't tested this though - I've never had that much pending upload, so I've never run into this issue.

It works. I have an asymmetric 360/180 connection (I moved before Christmas and lost my 1G symmetric connection 😞), so I tend to have more than 750GB pending upload - also because I use bwlimits to make sure I've got some spare upload capacity left, even though I use traffic shaping on my pfSense VM.

 

It stops once any transfers that started before the 750GB limit was hit have finished, and then resumes on the next run with a new SA.

Edited by DZMM
Link to comment

@watchmeexplode5 It's good to see someone else confirm we didn't need all the extra mount points. I also used plexguide for a while; Plex left my Unraid system for about a year.

 

I'm not having much luck with the unmount script on array stop - I have to run the fusermount -uz command manually each time. I've let people start using Plex again, so I don't plan to stop the array again just yet.
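
For reference, the manual unmount per remote is just (the mount point path is an example):

fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs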

Link to comment
