Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


Thanks, I've seen that now, but I'm still getting stuck at almost the first step.

 

I tried a known-working client ID/key to test, and also created a new one.

I completed the remote auth and provided the response.

I've selected the correct team drive once it was listed.

 

But verifying the mount fails.

 

root@Firefly:~# rclone lsd tdrive
2020/03/06 22:14:06 ERROR : : error listing: directory not found
2020/03/06 22:14:06 Failed to lsd with 2 errors: last error was: directory not found

Link to comment
4 minutes ago, Tuftuf said:

Thanks, I've seen that now, but I'm still getting stuck at almost the first step.

 

I tried a known-working client ID/key to test, and also created a new one.

I completed the remote auth and provided the response.

I've selected the correct team drive once it was listed.

 

But verifying the mount fails.

 

root@Firefly:~# rclone lsd tdrive
2020/03/06 22:14:06 ERROR : : error listing: directory not found
2020/03/06 22:14:06 Failed to lsd with 2 errors: last error was: directory not found

Did you forget the colons?

rclone lsd tdrive:
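Without the colon, rclone treats the name as a local path rather than a remote, which is why lsd reports "directory not found". For example:

rclone lsd tdrive    # looks for a local directory called "tdrive" - fails
rclone lsd tdrive:   # lists the top-level directories on the tdrive remote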

Edited by senpaibox
  • Thanks 1
Link to comment
1 hour ago, Tuftuf said:

I have another system that's mounting this tdrive, so I can look at its rclone.conf

You can just copy the rclone.conf from that system to /boot/config/plugins/rclone - assuming you are using the Unraid rclone plugin.
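For example (the source path is just a placeholder - the plugin reads its config from /boot/config/plugins/rclone/.rclone.conf, as the log output later in this thread shows):

# copy the working config from the other system into the Unraid rclone plugin location
cp /path/to/other-system/rclone.conf /boot/config/plugins/rclone/.rclone.conf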

Link to comment
26 minutes ago, senpaibox said:

Did you forget the colons?

rclone lsd tdrive:

 

Thank you :) That's a really good start.

 

root@Firefly:~# rclone lsd tcrypt:
          -1 2019-04-09 20:41:53        -1 movies

 

4 minutes ago, DZMM said:

You can just copy the rclone.conf from that system to /boot/config/plugins/rclone - assuming you are using the Unraid rclone plugin.

I was copying the encrypted part from it, but it has many service accounts defined as well, so starting fresh seemed like a good idea - I'm finally trying to understand this.

 

Yes, I'm using Unraid and the rclone plugin. I guess I can look into the next bit now.

Link to comment

Hello,

 

I just replaced the old scripts with the new ones.

 

It seems to be working now, except the upload script gives an error:

 

08.03.2020 11:48:50 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_media_vfs for gdrive_upload_vfs ***
08.03.2020 11:48:50 INFO: *** Starting rclone_upload script for gdrive_upload_vfs ***
08.03.2020 11:48:50 INFO: Script not running - proceeding.
08.03.2020 11:48:50 INFO: Checking if rclone installed successfully.
08.03.2020 11:48:50 INFO: rclone installed successfully - proceeding with upload.
08.03.2020 11:48:50 INFO: Uploading using upload remote gdrive_upload_vfs
08.03.2020 11:48:50 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2020/03/08 11:48:50 DEBUG : --min-age 15m0s to 2020-03-08 11:33:50.704339109 +0100 CET m=-899.989103786
2020/03/08 11:48:50 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_media_vfs" "gdrive_upload_vfs:" "--user-agent=gdrive_upload_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2020/03/08 11:48:50 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/03/08 11:48:50 INFO : Starting bandwidth limiter at 15MBytes/s
2020/03/08 11:48:50 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/03/08 11:48:50 Failed to create file system for "gdrive_upload_vfs:": didn't find section in config file
08.03.2020 11:48:50 INFO: Not utilising service accounts.
08.03.2020 11:48:50 INFO: Script complete
Script Finished Sun, 08 Mar 2020 11:48:50 +0100
 

2020/03/08 11:48:50 Failed to create file system for "gdrive_upload_vfs:": didn't find section in config file

 

 

 

 

Edited by neow
Link to comment

@DZMM Do you mount the mergerfs within /mnt/user? I had read some recommendations to place it in /mnt/disks and then use the RW,Slave option for dockers; however, I'm not certain if that was old information or not.

 

Previously I had used service accounts, but I've not set that up here. Is the 750GB limit an upload-only limit, or does it include streaming (or is that just the API limit)? I don't expect to be uploading more than 750GB per day.

 

I have some concerns that my array may not keep up with downloads, extraction, etc. I thought about putting the 'local' mount point on an NVMe/SSD. Have you or anyone else done such a configuration?

 

I have almost everything working (Plex is misbehaving). I added the mergerfs mount to /user within Docker and Plex scanned all the movies and TV overnight. However, I now can't access the Plex UI locally, although accessing movies and files is fine, and accessing from Plex.tv is fine.

Accessing the Plex UI directly gives 'connection closed' or a timeout.

 

 

 

@neow I only just started using this whole process on Unraid - I began with the original scripts and then moved on to the new ones. I only had to change the settings near the top of each new script to match my requirements, using the same paths and names across the different scripts, and then it worked. I believe the error is referring to the name of your remote within rclone.conf.
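For example, if your upload script has RcloneUploadRemoteName="gdrive_upload_vfs", your rclone.conf needs a section with exactly that name - a rough sketch, the actual settings will be whatever you created for the remote:

[gdrive_upload_vfs]
type = drive
# ...the rest of the remote's settings

Otherwise rclone fails with "didn't find section in config file".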

 

 

 

Edited by Tuftuf
Link to comment
1 hour ago, Tuftuf said:

Do you mount the mergerfs within /mnt/user? I had read some recommendations to place it in /mnt/disks and then use the RW,Slave option for dockers; however, I'm not certain if that was old information or not.

I had problems back in the day with /mnt/disks - I use /mnt/user/mount_rclone and /mnt/user/mount_mergerfs.

 

1 hour ago, Tuftuf said:

Previously I had used service accounts, but I've not set that up here. Is the 750GB limit an upload-only limit, or does it include streaming (or is that just the API limit)? I don't expect to be uploading more than 750GB per day.

750GB/user/day for uploads - downloads are 10TB/day, I think.

 

1 hour ago, Tuftuf said:

I have some concerns that my array may not keep up with downloads, extraction, etc. I thought about putting the 'local' mount point on an NVMe/SSD. Have you or anyone else done such a configuration?

I think you're overthinking things - rclone does not add any extra considerations to your local setup, other than bandwidth and enough storage for local files that are pending upload.  For my setup I have made the following choices:

 

1. Plex appdata on an unassigned NVMe - probably overkill, but I want my library browsing to be as fast as possible and the drive was on sale.

2. A mergerfs union of two old unassigned 1TB SSDs in a pool and /mnt/user/local - if the SSD pool is full then new nzbget/qbittorrent files get added to the array instead, i.e. like a 2nd cache pool.  I do this to keep these writes off my new NVMe cache drive, to try and avoid 'noisy' writes to a HDD, and because I need an SSD to keep up with my download speed.

 

1 hour ago, Tuftuf said:

However, I now can't access the Plex UI locally, although accessing movies and files is fine, and accessing from Plex.tv is fine.

Accessing the Plex UI directly gives 'connection closed' or a timeout.

Not sure what's going on there.

Link to comment
2 hours ago, DZMM said:

I think you're overthinking things - rclone does not add any extra considerations to your local setup, other than bandwidth and enough storage for local files that are pending upload.  For my setup I have made the following choices:

 

It's time to think it through. I previously moved my whole Plex and related setup to a hosted dedicated server (1Gb/1Gb), as with Gdrive my home upload (400/35) is not good enough to keep up. Cost-wise it would now work out around the same for me to upgrade to a business connection, which gives me options of 400/200 or 750/375.

 

I recently built a 2-in-1 gaming PC on a 7700, and since I've had an Intel CPU begging me to use Quick Sync, I've been looking at options to bring my Plex setup back home. Right now it only has one SSD and one NVMe drive, but that will change soon.

2 hours ago, DZMM said:

1. Plex appdata on an unassigned NVMe - probably overkill, but I want my library browsing to be as fast as possible and the drive was on sale.

2. A mergerfs union of two old unassigned 1TB SSDs in a pool and /mnt/user/local - if the SSD pool is full then new nzbget/qbittorrent files get added to the array instead, i.e. like a 2nd cache pool.  I do this to keep these writes off my new NVMe cache drive, to try and avoid 'noisy' writes to a HDD, and because I need an SSD to keep up with my download speed.

 

Did you set your pool as LocalFilesShare="/mnt/disks/NVMEpool"

and the array as LocalFilesShare2="/mnt/user/local"?

 

I'm checking whether it's just a case of placing the shares in the order you want them to be used, or whether there was more to it.

Link to comment

  

3 hours ago, Tuftuf said:

Did you set your pool as LocalFilesShare="/mnt/disks/NVMEpool"

and the array as LocalFilesShare2="/mnt/user/local"?

 

I'm checking whether it's just a case of placing the shares in the order you want them to be used, or whether there was more to it.

 

1. Created a 2TB UD pool /mnt/disks/ud_mx500 of 2x1TB SSDs

2. Before my rclone and mergerfs mounts etc. are built, I create an extra mergerfs mount combining #1 and my array-only share /mnt/user/local:

mergerfs /mnt/disks/ud_mx500/local:/mnt/user/local /mnt/user/mount_mergerfs/local -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=lus,cache.files=partial,dropcacheonclose=true,moveonenospc=true,minfreespace=150G

The combination of category.create=lus (least used space - the SSD always wins as it's smaller than the array/drive - not sure which is used in the calculation, but it works) and minfreespace=150G (don't store on the SSD if less than 150G is free) seems to work the way I want, with new files going onto the SSD and only onto the array if the SSD is full.  The SSD sometimes dips to around 70-80GB free, but never lower.  It keeps files off my faster NVMe cache drive, as the SSDs are fast enough to max out my line speed both up and down.
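If you want to sanity-check where new files are landing, watching free space on each branch is the simplest way - the paths below are just the ones from this setup:

df -h /mnt/disks/ud_mx500 /mnt/user/local /mnt/user/mount_mergerfs/local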

 

3. Then I add /mnt/user/mount_mergerfs/local as my local location to my scripts:

 

# REQUIRED SETTINGS
RcloneRemoteName="tdrive_vfs"
LocalFilesShare="/mnt/user/mount_mergerfs/local"
RcloneMountShare="/mnt/user/mount_rclone"
MergerfsMountShare="/mnt/user/mount_mergerfs"
DockerStart="duplicati nzbget qbittorrentvpn lazylibrarian radarr radarr-uhd radarr-collections sonarr sonarr-uhd plex ombi tautulli LDAPforPlex letsencrypt organizrv2"
MountFolders=\{"downloads/complete,downloads/seeds,documentaries/kids,documentaries/adults,movies_adults_gd,movies_kids_gd,tv_adults_gd,tv_kids_gd,uhd/tv_adults_gd,uhd/tv_kids_gd,uhd/documentaries/kids,uhd/documentaries/adults"\}

# OPTIONAL SETTINGS

LocalFilesShare2="/mnt/user/mount_rclone/gdrive_media_vfs"
LocalFilesShare3=""
LocalFilesShare4=""

/mnt/user/mount_rclone/gdrive_media_vfs is my rclone mount for my music and photos.  I don't add these to the tdrive I use for plex media, as combined it pushes me over the 400k object limit.

Edited by DZMM
Link to comment
4 hours ago, Tuftuf said:

 

 

I checked the tutorial and I'm not sure I understand this:

 

 

"To get the best performance out of mergerfs, map dockers to /user --> /mnt/user

Then within the docker webui navigate to the relevant folder within the mergerfs share e.g. /user/mount_unionfs/downloads or /user/mount_unionfs/movies. These are the folders to map to plex, radarr, sonarr,nzbget etc

DO NOT MAP any folders from local or the rclone mount

DO NOT create mappings like /downloads or /media for your dockers. Only use /user --> /mnt/user if you want to ensure the best performance from mergerfs when moving and editing files within the mount"
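If I understand correctly, that means one volume mapping per container, with the host path first - something like this (not taken from the guide verbatim, just how I read it):

# Unraid docker template: Host Path /mnt/user  -->  Container Path /user
# docker run equivalent:
-v /mnt/user:/user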

 

Am I right to do this in Plex? https://d.pr/i/16qg8V

 

Since I changed the scripts to the new ones, I have errors in Plex like this: https://d.pr/i/HTC0dc https://d.pr/i/Wtn2ml

 

In Krusader I have new files: https://d.pr/i/uWHUmZ

The mount_unionfs folder is empty: https://d.pr/i/lzg2kj

The mount_mergerfs folder has two folders: https://d.pr/i/EaEId6 - one (gdrive_media_vfs) has the files in the cloud, and the other (gdrive_vfs) is empty.

 

Thanks in advance.

 

 

 

 

Edited by neow
Link to comment

So far everything is working great, but for some reason after the upload the folders are not being deleted from the local folder - only the files inside them.

Is it also possible to add Discord notifications for when the upload starts, when it finishes a file upload, and when it's done uploading completely? It would be great to have this as a variable option. Thanks again for the scripts! @DZMM

Link to comment
2 hours ago, jrdnlc said:

So far everything is working great, but for some reason after the upload the folders are not being deleted from the local folder - only the files inside them.

Can you give an example and post your script options, as it should be deleting source folders.
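For reference, the upload log earlier in the thread shows the script adding --delete-empty-src-dirs to the move command, which is what removes empty source folders after an upload - roughly like this (paths here are placeholders):

rclone move /mnt/user/local/yourremote yourremote: --delete-empty-src-dirs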

Link to comment
5 minutes ago, DZMM said:

Can you give an example and post your script options, as it should be deleting source folders.

I'm using the latest script from GitHub.
 

When I add media to /mount_mergefs/gdrive it shows up in /local, which then gets uploaded. When the media is uploaded, the files get deleted from the folders in /local but not the folders themselves.

Edited by jrdnlc
Link to comment
10 hours ago, DZMM said:

@jrdnlc can you post your script options please

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/cache/mount_upload" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/cache/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="off"
BWLimit1="off"
BWLimit2Time="off"
BWLimit2="15M"
BWLimit3Time="off"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/cache/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

 

Link to comment
1 hour ago, jrdnlc said:

Yes, I'm aware. I changed the directory name. All 3 scripts were updated with the new name.

What do you have set for MountFolders= in the mount script and do you have it running on a cron job?  If so, it'll recreate those folders.

 

Otherwise, I'm out of ideas.
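To clarify - MountFolders is just a list of folders the mount script creates each time it runs, roughly equivalent to the mkdir below (the base path depends on your LocalFilesShare and remote name), so anything listed there will keep coming back on every cron run:

mkdir -p /mnt/user/local/gdrive/{downloads/complete,downloads/intermediate,downloads/seeds,movies,tv}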

Link to comment
50 minutes ago, DZMM said:

What do you have set for MountFolders= in the mount script and do you have it running on a cron job?  If so, it'll recreate those folders.

 

Otherwise, I'm out of ideas.

The default, so MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount - and yeah, it's on a cron job that runs every 10 minutes.

 

The GitHub readme recommends running it on a cron job every 10 minutes to make sure the mount stays mounted.

Link to comment
6 minutes ago, jrdnlc said:

Runs every 10min

Are those the folders you want to see deleted?  They will get re-created every 10 minutes.  If not, then something else is recreating your folders, or your folders aren't set up correctly, as rclone deletes source folders after uploading.

Link to comment
9 minutes ago, DZMM said:

Are those the folders you want to see deleted?  They will get re-created every 10 minutes.  If not, then something else is recreating your folders, or your folders aren't set up correctly, as rclone deletes source folders after uploading.

No, I'm talking about the folders in /local/gdrive, which in my case is /mount_upload/gdrive. I created a /media folder in /mount_mergefs/gdrive, then created movies, music folders, etc.

So /gdrive/media/movies/movie 1 gets uploaded and the files get deleted, but not the 'movie 1' folder.

Edited by jrdnlc
Link to comment
