Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

28 minutes ago, testdasi said:

You shouldn't be running the upload script on an hourly schedule with such a slow connection, to be honest.

At least don't run it on a schedule until everything has been uploaded.

Actually, any schedule is fine - that's why the checker file is there: it stops additional upload jobs from starting if an instance is already running. As long as you don't make user script changes while an instance is running, the logs should keep updating to show you what's happening.
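For anyone wondering what the checker file does, it's essentially a lock file. A minimal sketch of the pattern (the exact filename is illustrative - the real script keeps its checker under /mnt/user/appdata/other/rclone/remotes/<remote>/, as you can see in the upload logs):

#!/bin/bash
# Illustrative lock-file pattern - path is an example, not the script's exact one.
CHECKER="/mnt/user/appdata/other/rclone/remotes/gdrive/upload_running"

if [[ -f "$CHECKER" ]]; then
    echo "INFO: Upload already in progress - exiting."
    exit 0
fi

touch "$CHECKER"
# ... the rclone move/copy job runs here ...
rm "$CHECKER"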

Link to comment
20 minutes ago, DZMM said:

Actually, any schedule is fine - that's why the checker file is there: it stops additional upload jobs from starting if an instance is already running. As long as you don't make user script changes while an instance is running, the logs should keep updating to show you what's happening.

I could have explained it more clearly.

The rclone upload job can be terminated silently (out of memory is the most frequent problem I have seen), in which case the upload script aborting because of a seemingly ongoing upload is actually an indication that something is amiss.

Users would miss this if the script is run on a regular schedule during the initial massive upload. Most users don't check the upload log (or network stats etc.) and just assume things are ticking along in the background.

Link to comment

Thanks for putting this together - it saves me days of uploads. I have tried to access my encrypted files from a Windows PC using a copy of the rclone config file, but am unable to find the media folders or files. I was originally using PlexGuide, and I used the same SAs in my upload script. I am using copy instead of move, as I would like to have a copy on both Unraid and my gdrive.

Link to comment

I use VLANs at home, and this caused all the traffic to leave via the management address even after binding rclone to an interface within the upload script. The fix was to add a second routing table with routes and rules for the IP I assigned to it.

 

The subnet is 192.168.100.0/24

The gateway is 192.168.100.1

The IP assigned to the rclone upload is 192.168.100.90

 

echo "1 rt2" >> /etc/iproute2/rt_tables
ip route add 192.168.100.0/24 dev br0.100 src 192.168.100.90 table rt2
ip route add default via 192.168.100.1 dev br0.100 table rt2
ip rule add from 192.168.100.90/32 table rt2
ip rule add to 192.168.100.90/32 table rt2
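To confirm the rules are working, ip route get with the source address should show traffic leaving via br0.100 rather than the management interface:

# Should report the route going via 192.168.100.1 dev br0.100
ip route get 8.8.8.8 from 192.168.100.90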

 

Edited by Tuftuf
Link to comment
On 3/19/2020 at 11:45 AM, JohnJay829 said:

Thanks for putting this together - it saves me days of uploads. I have tried to access my encrypted files from a Windows PC using a copy of the rclone config file, but am unable to find the media folders or files. I was originally using PlexGuide, and I used the same SAs in my upload script. I am using copy instead of move, as I would like to have a copy on both Unraid and my gdrive.

I was able to get this going by making a new config file.

Now I am able to see my encrypted files in Rclone Browser on Windows. Although I can see the files, I am only able to upload without the SAs - when I try using them I get an error:

26.03.2020 12:40:22 INFO: *** Rclone move selected. Files will be moved from /mnt/user/MeJoMediaServer/googledrive_encrypted for googledrive_encrypted ***
26.03.2020 12:40:22 INFO: Checking if rclone installed successfully.
26.03.2020 12:40:22 INFO: rclone installed successfully - proceeding with upload.
26.03.2020 12:40:22 INFO: Counter file found for googledrive_encrypted.
26.03.2020 12:40:22 INFO: Adjusted service_account_file for upload remote googledrive_encrypted to GDSA5.json based on counter 5.
26.03.2020 12:40:22 INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload.
====== RCLONE DEBUG ======
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 2 (retrying may help)
Elapsed time: 3.7s
==========================
26.03.2020 12:40:28 INFO: Created counter_6 for next upload run.
26.03.2020 12:40:28 INFO: Log files scrubbed
rm: cannot remove '/mnt/user/appdata/other/rclone/remotes/googledrive_encrypted/upload_running_daily_upload': No such file or directory
26.03.2020 12:40:28 INFO: Script complete
Script Finished Thu, 26 Mar 2020 12:40:28 -0400

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Upload/log.txt

 

That's where I do see the actual errors.

Link to comment
10 hours ago, veritas2884 said:

@DZMM Just wanted to drop a thank you note. Finally finished my 26TB upload. 100% cloud based now. Had 12 simultaneous streams last night and no issues. You're a legend.

You're welcome. What are you going to do with your empty HDDs? I sold mine, even my parity drive, as I don't need it now.

 

12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.

 

 

Link to comment
13 hours ago, DZMM said:

You're welcome. What are you going to do with your empty HDDs? I sold mine, even my parity drive, as I don't need it now.

 

12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.

 

 

I didn't think about selling them. We are currently on a state-wide lockdown, so I don't think I'll get to a FedEx any time soon. I might also scrap the parity drive to speed up writes.

 

I do have a new issue that popped up. I wanted to start adding 4K TV content, so I created a 4KTV folder in my local media folder. This folder then showed up inside my mergerFS mount, as expected. However, Sonarr and Radarr cannot add the location: I can select it, but it just doesn't show up as a location after hitting Add. It seems like any new folder I create isn't addressable by my Docker containers, even though it is inside the master media folder that is mapped into their containers with read/write access. The odd thing is that I had a folder of my daughter's dance recitals that I'd shared on Plex for family to see a while ago; I was able to move the files out of there and get Sonarr to add the 4K TV content to that folder.

 

So, TL;DR: it seems like some kind of permissions issue with new folders created inside the mergerFS structure. Is there a recommended way to create new folder locations to make them addressable by containers?

 

 

**UPDATE** I was able to chmod 777 the folder inside the mergerFS mount via the terminal, and now I can add it to Sonarr. Is this expected behavior, or am I creating new folders for new content incorrectly?
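For reference, this is roughly all it took (paths are from my setup - and since nobody:users is Unraid's usual share owner, chown plus 775 is probably a cleaner fix than a blanket 777):

# Create the new content folder in the local branch; mergerfs exposes it in the union.
mkdir -p /mnt/user/local/google_vfs/4KTV
# Give it Unraid's usual share ownership and group-writable permissions.
chown nobody:users /mnt/user/local/google_vfs/4KTV
chmod 775 /mnt/user/local/google_vfs/4KTV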

Edited by veritas2884
Link to comment
5 hours ago, veritas2884 said:

**UPDATE** I was able to chmod 777 the folder inside the mergerFS mount via the terminal, and now I can add it to Sonarr. Is this expected behavior, or am I creating new folders for new content incorrectly?

Something's wrong, as folders created either in /local or /mount_mergerfs should behave like normal folders, i.e. radarr/sonarr etc. adding/upgrading/removing as they want.

 

Some apps like Krusader and Plex need to be started after the mount, but that's the only problem I'm aware of. What are your docker mappings and rclone mount options?
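By mappings I mean the container's volume mappings - e.g. something like this (image and host paths are just an illustration, adjust to your setup):

# Map the whole mergerfs mount into the container via a single host path.
docker run -d --name=sonarr \
    -p 8989:8989 \
    -v /mnt/user/appdata/sonarr:/config \
    -v /mnt/user/mount_mergerfs/google_vfs:/media \
    linuxserver/sonarr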

Link to comment
On 4/1/2020 at 10:59 AM, remedy said:

The upload script doesn't show any progress output for me until the transfer is complete; it just sits at "====== RCLONE DEBUG ======".

Any ideas? I'd like to be able to see the live progress.

The reason for this is the Discord notifications. If you don't care about Discord notifications, you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".
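So the upload command ends up looking something like this (remote and path are illustrative):

# -P draws live per-transfer progress in the terminal; without
# --stats 9999m, rclone prints stats at its default interval again.
rclone move /mnt/user/local/google_vfs gdrive_media_vfs: -P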

  • Like 1
Link to comment
4 hours ago, senpaibox said:

The reason for this is the Discord notifications. If you don't care about Discord notifications, you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".

Hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P" - same result.

Is there no way to get it to output progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead, and that didn't work either.

Link to comment

Just thought I would share this little script. It could probably be integrated with DZMM's scripts, but I'm not using all of them.

 

When a mount drops, the mount script should automatically pick it up again, but when that's not possible the dockers will just continue to fill the merger/union folder, making the remount impossible (you get an error that the mount point is not empty). To make sure all dockers using the union are stopped, I made the following script - just run it every minute as well. When the mount is back, your mount script should start the dockers again.

Just make sure you change the folder paths to match your situation and put in your own dockers.

#!/bin/bash

# If the mountcheck file is readable, the rclone mount is healthy;
# otherwise stop the dockers that write to the union so the remount
# doesn't fail with "mount is not empty".
if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
else
    touch /mnt/user/appdata/other/rclone/mount_disconnected
    echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
    docker stop plex nzbget
    # Remove the flag so the mount script knows to restart the dockers.
    rm /mnt/user/appdata/other/rclone/dockers_started
fi
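For reference, with the User Scripts plugin you can set the schedule to Custom with a cron entry of * * * * * to run it every minute, the same as the mount script.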

 

Link to comment
23 hours ago, remedy said:

Hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P" - same result.

Is there no way to get it to output progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead, and that didn't work either.

Hmm, it works for me, but I am using the scripts on a VPS rather than Unraid. I know User Scripts recently got an update, but I'm not sure if that is what's stopping scripts from displaying live progress.

Link to comment

Hi, I'm completely new to this, so sorry for my questions - I just want to set it up correctly. I've purchased a G Suite 10€ unlimited account and now want to set up Google Drive and rclone.

What I have done so far:

- Set up G Suite with my own domain.

- Set up my own API key.

- No teams.

- No Google Drive folders yet.

 

What I want is to have two folders:

1. One encrypted folder for Plex.

2. One ordinary folder for backing up pictures (not the main priority here).

 

I have SSH'd into Unraid and typed rclone config.

- Typed in my API key

- Blank root folder

- Blank service account

- No advanced config

- No auto config, and authorized through the link

- I'm at teams now. Should I skip that, or what should I do here? Again, I want one encrypted folder for Plex.

 

I'm sorry for the newbie questions, but I really want it to work the right way the first time.

Hope you can help.

 

Link to comment

So you just need to do the next step, which is to create an encrypted remote. I would also recommend setting up Service Accounts if you plan on exceeding 750GB/day. Here is an example of what your rclone config should look like if you want an encrypted Plex folder:

[googledrive]
type = drive
client_id = **********
client_secret = **********
scope = drive
token = {"access_token":"**********"}
server_side_across_configs = true

[googledrive_encrypted]
type = crypt
remote = googledrive:encrypted_plex_folder
filename_encryption = standard
directory_name_encryption = true
password = **********
password2 = **********
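If you'd rather not step through the interactive rclone config wizard, the same crypt remote can be created from the command line (rclone stores passwords obscured, hence rclone obscure first; the remote and folder names match the example above):

# Obscure both passwords - the config file stores them in obscured form.
PASS=$(rclone obscure "your-password")
SALT=$(rclone obscure "your-salt")

# Create the crypt remote on top of the existing googledrive remote.
rclone config create googledrive_encrypted crypt \
    remote googledrive:encrypted_plex_folder \
    filename_encryption standard \
    directory_name_encryption true \
    password "$PASS" password2 "$SALT"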

 

Link to comment
On 4/4/2020 at 2:06 PM, senpaibox said:

Hmm, it works for me, but I am using the scripts on a VPS rather than Unraid. I know User Scripts recently got an update, but I'm not sure if that is what's stopping scripts from displaying live progress.

It worked for me: I removed "--stats 9999m" and used "-P"; "-vvv" did not work for me. Thank you.

Link to comment
1 hour ago, senpaibox said:

So you just need to do the next step, which is to create an encrypted remote. I would also recommend setting up Service Accounts if you plan on exceeding 750GB/day. Here is an example of what your rclone config should look like if you want an encrypted Plex folder:


[googledrive]
type = drive
client_id = **********
client_secret = **********
scope = drive
token = {"access_token":"**********"}
server_side_across_configs = true

[googledrive_encrypted]
type = crypt
remote = googledrive:encrypted_plex_folder
filename_encryption = standard
directory_name_encryption = true
password = **********
password2 = **********

 

Thanks for the answer. I have seen some mentions of team drives - should I create one or not?

Also, how do I make sure the folder itself is encrypted and not only the rclone transfers?

Thanks for helping.

Link to comment
12 minutes ago, Bjur said:

Thanks for the answer. I have seen some mentions of team drives - should I create one or not?

Also, how do I make sure the folder itself is encrypted and not only the rclone transfers?

Thanks for helping.

Yes, you should. As you start to use the gdrive for things beyond media storage (which you will, once you discover how convenient this whole gdrive thing is), you will start to want to separate data for easier organisation, and/or approach the limits of gdrive (yes, even unlimited has limits), and/or control access. That's where team drives come in.

It's better to start using a tdrive now than to wait until you need it and then have to wait for things to be moved about.

 

You can double-check the encryption just by trying to access a file from the gdrive website. The file names should be jumbled up, and if you download a file, you should not be able to make any sense of it.
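You can see the same thing from the command line (remote names as in senpaibox's example config above) - the raw remote lists scrambled names, while the crypt remote decodes them:

# Raw remote: file and folder names come back scrambled.
rclone ls googledrive:encrypted_plex_folder

# Crypt remote: the same files with readable names.
rclone ls googledrive_encrypted: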

Link to comment

So I have now created a team account in the admin module, but it doesn't have any users or folders in its settings.

Should I just go into Google Drive, start creating folders and somehow select the team drive from there?

I can't find any good guides for getting started.

Thanks for your assistance.

Link to comment
