Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


I need a little help here ... maybe a lot of help. I have everything set up to the point where, when I put a file and/or directory into the "local" share and run the "rclone_upload" script, everything is uploaded to Google Drive and deleted from the local share, as expected.

 

My Plex library is located on an unassigned-devices disk, /mnt/disks/plex. I would like to transfer that whole library to Google Drive, but I can't figure out how to mount /mnt/disks/plex via the rclone_mount script so that rclone has access to it and uploads it when I run the rclone_upload script.

26 minutes ago, Ultra-Humanite said:

I need a little help here ... maybe a lot of help. I have everything set up to the point where, when I put a file and/or directory into the "local" share and run the "rclone_upload" script, everything is uploaded to Google Drive and deleted from the local share, as expected.

 

My Plex library is located on an unassigned-devices disk, /mnt/disks/plex. I would like to transfer that whole library to Google Drive, but I can't figure out how to mount /mnt/disks/plex via the rclone_mount script so that rclone has access to it and uploads it when I run the rclone_upload script.

Set /mnt/disks/Plex as your local location in the mount and upload scripts
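That is, in both scripts, something like the fragment below (paths from this thread's setup; my reading of the scripts is that the upload job moves whatever sits under LocalFilesShare/RcloneRemoteName, so that subfolder is the one that matters):

```shell
# In both the mount and upload scripts (paths from this thread's setup).
# Assumption on my part: the upload job moves files from
# $LocalFilesShare/$RcloneRemoteName, per the scripts' own comments.
LocalFilesShare="/mnt/disks/Plex"     # local side, no trailing slash
RcloneRemoteName="gdrive_media_vfs"   # crypt remote, no ':'
UploadSource="$LocalFilesShare/$RcloneRemoteName"
```

So anything placed under /mnt/disks/Plex/gdrive_media_vfs should get picked up on the next upload run.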


Thanks for the fast response. In my initial question I misspoke when I said "local": what I meant is that anything I placed in "/mnt/user/local/gdrive_media_vfs" was uploaded without any issue.

 

If I set LocalFilesShare="/mnt/disks/Plex" in the mount and upload scripts, I get an empty gdrive_media_vfs folder in the Plex folder and this error message when I run the upload script:

 

2020/06/20 14:22:11 INFO : Starting bandwidth limiter at 20MBytes/s
2020/06/20 14:22:11 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/06/20 14:22:12 DEBUG : mountcheck: Excluded
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2020/06/20 14:22:12 DEBUG : {}: Removing directory
2020/06/20 14:22:12 DEBUG : Local file system at /mnt/disks/Plex/gdrive_media_vfs: deleted 1 directories
2020/06/20 14:22:12 INFO : There was nothing to transfer

 

Here is my config:

[gdrive]
type = drive
client_id =
client_secret =
scope = drive
token =
server_side_across_configs = true
root_folder_id =

 

[gdrive_media_vfs]
type = crypt
remote = gdrive:gdrive_media_vfs
filename_encryption = standard
directory_name_encryption = true
password =
password2 =

 

Here is the mount script:

 

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.7 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
LocalFilesShare="/mnt/disks/Plex" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{""\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

 

 

Here is the upload script:

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/disks/Plex" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="20M"
BWLimit3Time="16:00"
BWLimit3="20M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######
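For reference, the BWLimit1-3 pairs in the upload script above line up with rclone's --bwlimit timetable syntax, where "HH:MM,rate" entries are separated by spaces. How the script actually assembles the string is my assumption, but the idea is:

```shell
# Sketch (my assumption of the mechanism): the three BWLimit time/rate
# pairs combine into a single rclone --bwlimit timetable string,
# "HH:MM,rate" entries separated by spaces.
BWLimit1Time="01:00"; BWLimit1="off"
BWLimit2Time="08:00"; BWLimit2="20M"
BWLimit3Time="16:00"; BWLimit3="20M"
BWLimitSchedule="$BWLimit1Time,$BWLimit1 $BWLimit2Time,$BWLimit2 $BWLimit3Time,$BWLimit3"
# rclone would then be invoked with: --bwlimit "$BWLimitSchedule"
```

With the values above, that yields unlimited speed from 01:00 and 20MB/s from 08:00 and 16:00, matching the "Starting bandwidth limiter at 20MBytes/s" line in the log.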

 

12 minutes ago, Ultra-Humanite said:

Thanks for the fast response. In my initial question I misspoke when I said "local": what I meant is that anything I placed in "/mnt/user/local/gdrive_media_vfs" was uploaded without any issue.

If I set LocalFilesShare="/mnt/disks/Plex" in the mount and upload scripts, I get an empty gdrive_media_vfs folder in the Plex folder and the "There was nothing to transfer" error above when I run the upload script.

What folders are in /mnt/disks/Plex? I've got mergerfs mounts that include UD and they work fine.

13 minutes ago, Ultra-Humanite said:

If I move anything to /mnt/disks/plex/gdrive_media_vfs (for example, the tv folder from /mnt/disks/plex/tv) it gets uploaded to Google Drive no problem. I guess that's a solution.

That's what's supposed to happen!
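In other words, the upload source is the RcloneRemoteName subfolder inside LocalFilesShare, so the whole library can be queued for upload just by moving it there. A minimal illustration with a throwaway directory standing in for /mnt/disks/plex:

```shell
# Demonstration with a temp directory standing in for /mnt/disks/plex:
# files only get uploaded once they sit under
# <LocalFilesShare>/<RcloneRemoteName> (here, gdrive_media_vfs).
plex_disk=$(mktemp -d)
mkdir -p "$plex_disk/tv" "$plex_disk/gdrive_media_vfs"
touch "$plex_disk/tv/show.mkv"
# queue the tv folder for upload by moving it into the remote-named subfolder
mv "$plex_disk/tv" "$plex_disk/gdrive_media_vfs/tv"
```

After the move, the next upload run sees the files; anything left outside gdrive_media_vfs is ignored, which is why the earlier run reported "There was nothing to transfer".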

 


@DZMM Is your upgrade post from unionfs to mergerfs on page 46 still correct?

 

Just so I can type it out to sanity-check myself, to upgrade I need to:

 

1. Unmount the drive and finish any current uploads.

2. Copy the new mount and unmount scripts and replace my current mount script, using Ctrl+F to find and replace mount_mergerfs with mount_unionfs (does this also apply to the new upload script? I still have a lot of data in rclone_upload left to upload, but no transfer currently in progress).

3. Run the mount script and adjust the upload script to only upload at 1 AM every day.

11 hours ago, Bolagnaise said:

@DZMM Is your upgrade post from unionfs to mergerfs on page 46 still correct?

 

Just so I can type it out to sanity-check myself, to upgrade I need to:

 

1. Unmount the drive and finish any current uploads.

2. Copy the new mount and unmount scripts and replace my current mount script, using Ctrl+F to find and replace mount_mergerfs with mount_unionfs (does this also apply to the new upload script? I still have a lot of data in rclone_upload left to upload, but no transfer currently in progress).

3. Run the mount script and adjust the upload script to only upload at 1 AM every day.

 
 
 

Probably not up to date.  There are comments on the new scripts which make moving easy - just:

 

1. make sure you haven't got any rclone activity going on - stop old scripts, uploads and any dockers using the mount

2. set up new paths - put /mount_unionfs etc as your mergerfs mount paths if that's what you have set up now.  Be careful to put your existing paths in

3. choose other script options

4. run scripts and if all ok, launch dockers


I had to put this down for a bit ... but coming back around ... is anyone else concerned about that 400K object limit on a team drive? I think I'm already at 125K objects on my array.

 

Will it muck things up if you split your stuff into various team drives, say one for each media type in your library? 3D movies, Animation, TV Shows, etc.?

 

Or is that adding unnecessary complication? 

1 hour ago, Bjur said:

Hi in my mountfolders, I just want media\tv

When I do:

MountFolders=\{"media/tv"\}

I get two folders: media/tv, which is correct, but also a

{media

with subfolder

tv}

How do I stop that?

 

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

I guess it fails if you only add one folder.  Just create the folder manually in your mergerfs mount.
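The stray "{media" and "tv}" folders are consistent with how bash brace expansion works: a brace list with no comma (or range) is not expanded, so the braces end up as literal characters in the path. A quick demonstration in a throwaway directory:

```shell
# bash only expands {...} when it contains a comma (or a range); with a
# single folder the braces are taken literally, which matches the stray
# "{media" directory (containing "tv}") described above.
tmp=$(mktemp -d)
bash -c "cd '$tmp' && mkdir -p {media/tv}"          # no comma: literal '{media' dir
bash -c "cd '$tmp' && mkdir -p {downloads,movies}"  # comma: two real dirs
```

So with two or more comma-separated folders MountFolders expands as intended, but a single entry produces the literal-brace directories, hence DZMM's suggestion to just create a lone folder manually.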

12 hours ago, BRiT said:

That's what many already do, use multiple team drives.

Thanks,

 

Can you share some of your organization/folder structure? Also, do you have a bunch of mount scripts and a bunch of upload scripts? Or is it possible to use the extra sections on the script to manage these too? 

 

Thank you!

8 hours ago, axeman said:

Thanks,

 

Can you share some of your organization/folder structure? Also, do you have a bunch of mount scripts and a bunch of upload scripts? Or is it possible to use the extra sections on the script to manage these too? 

 

Thank you!

Multiple mounts, one upload and one tidy-up script.

 

@watchmeexplode5 did some testing and performance gets worse as you get closer to the 400k mark, so you'll need to do something like below soon:

 

1. My folder structure looks something like this:

 

mount_mergerfs/tdrive_vfs/movies

mount_mergerfs/tdrive_vfs/music

mount_mergerfs/tdrive_vfs/uhd

mount_mergerfs/tdrive_vfs/tv_adults

mount_mergerfs/tdrive_vfs/tv_kids

 

2. I created separate tdrives  / rclone mounts for some of the bigger folders e.g.

 

mount_rclone/tdrive_vfs/movies

mount_rclone/tdrive_vfs/music

mount_rclone/tdrive_vfs/uhd

mount_rclone/tdrive_vfs/adults_tv

                                        

For each of those I created a mount script instance where I do NOT create a mergerfs mount.

 

3. I mount each in turn and for the final main mount add the extra tdrive rclone mounts as extra mergerfs folders:

 

###############################################################
###################### mount tdrive   #########################
###############################################################

# REQUIRED SETTINGS
RcloneRemoteName="tdrive_vfs"
RcloneMountShare="/mnt/user/mount_rclone"
LocalFilesShare="/mnt/user/local"
MergerfsMountShare="/mnt/user/mount_mergerfs"

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="/mnt/user/mount_rclone/music" 
LocalFilesShare3="/mnt/user/mount_rclone/uhd"
LocalFilesShare4="/mnt/user/mount_rclone/adults_tv"

4. Run the single upload script - everything initially gets moved from /mnt/user/local/tdrive_vfs to the tdrive_vfs teamdrive

 

5. Overnight I run another script to move files from the folders that are in tdrive_vfs: to the correct teamdrive.  You have to work out the encrypted folder names for this to work.  Because rclone is moving the files, the mergerfs mount gets updated, i.e. it looks to Plex etc. like they haven't moved.
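For the "work out the encrypted folder names" step, rclone can do the translation itself: "rclone cryptdecode --reverse" encrypts a plain name using a crypt remote's settings. A sketch assuming the tdrive_vfs crypt remote from this thread (wrap it in a helper so it's easy to reuse for each folder):

```shell
# Sketch: ask rclone for the encrypted on-drive name of a plain folder
# name, using the crypt remote from this thread (tdrive_vfs:). Adjust
# the remote name to your own config.
encrypted_name() {
    # prints the plain name alongside its encrypted form
    rclone cryptdecode --reverse "tdrive_vfs:" "$1"
}
# usage: encrypted_name music
```

The encrypted names it prints are what go into the tdrive:crypt/... paths in the script below.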

 

#!/bin/bash

rclone move tdrive:crypt/music_tdrive_encrypted_folder_name gdrive:crypt/music_tdrive_encrypted_folder_name \
--user-agent="transfer" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

rclone move tdrive:crypt/tv_tdrive_encrypted_folder_name tdrive_t_adults:crypt/tv_tdrive_encrypted_folder_name \
--user-agent="transfer" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

rclone move tdrive:crypt/uhd_tdrive_encrypted_folder_name tdrive_uhd:crypt/uhd_tdrive_encrypted_folder_name \
--user-agent="transfer" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

exit
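Since the three rclone move blocks above differ only in their source and destination remotes, a reader adapting this could fold them into one helper. A sketch (same placeholder names as above, not DZMM's actual script):

```shell
# Sketch: the three near-identical "rclone move" blocks folded into one
# function. Source/destination remotes are the same placeholders used in
# the script above.
move_to_tdrive() {
    rclone move "$1" "$2" \
        --user-agent="transfer" \
        -vv \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,ascending \
        --exclude '*fuse_hidden*' \
        --exclude '*_HIDDEN' \
        --exclude '.recycle**' \
        --exclude '.Recycle.Bin/**' \
        --exclude '*.backup~*' \
        --exclude '*.partial~*' \
        --drive-stop-on-upload-limit \
        --delete-empty-src-dirs
}
# usage, one line per teamdrive:
# move_to_tdrive tdrive:crypt/music_tdrive_encrypted_folder_name gdrive:crypt/music_tdrive_encrypted_folder_name
```

That way adding a fourth teamdrive is one extra line rather than another fifteen-line block.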

 

 

14 minutes ago, DZMM said:

Multiple mounts, one upload and one tidy-up script.

Thanks, this is excellent. I am going to see if I have the cranial capacity to do this.

 

Turns out a LOT of the issues I had at the start of this endeavor were due to running it in the foreground. I saw in the user scripts post that you can force a script to be background-only by setting the #backgroundOnly=true directive.

 

Maybe add that to the next update of your scripts so that other fools like me don't repeat my mistake.


@DZMM

 

Yup, found out about that performance hit 2 weeks ago and already migrated to a bunch of team drives. I took the same approach with a server-side move after figuring out the corresponding encrypted names 👍

 

Currently playing around with a modified version of rclone's backend that rotates service accounts based on error callbacks. The developer has done a lot of performance tweaks, bypasses the 10TB/day download limit on mounts, and circumvents API limit troubles (though that's less of an issue these days).

 

I'd post it, but it's fairly unstable and I don't think the developer wants it being tossed around till it's ready. I'll keep you updated on it.


So I want to move to doing things with service accounts, but of course I hit every hurdle imaginable. I am literally stuck at step 1. This step requires me to input

sudo git clone https://github.com/xyou365/AutoRclone && cd AutoRclone && sudo pip3 install -r requirements.txt into the terminal, which I do very obediently. Here is my output.

----@Keep:~/AutoRclone# sudo pip3 install -r requirements.txt
Collecting oauth2client
  Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting google-api-python-client
  Using cached google_api_python_client-1.9.3-py3-none-any.whl (59 kB)
Collecting progress
  Using cached progress-1.5.tar.gz (5.8 kB)
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rcm7bl3g/progress/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rcm7bl3g/progress/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-rcm7bl3g/progress/pip-egg-info
         cwd: /tmp/pip-install-rcm7bl3g/progress/
    Complete output (11 lines):
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/lib64/python3.8/site-packages/setuptools/__init__.py", line 19, in <module>
        from setuptools.dist import Distribution, Feature
      File "/usr/lib64/python3.8/site-packages/setuptools/dist.py", line 36, in <module>
        from setuptools import windows_support
      File "/usr/lib64/python3.8/site-packages/setuptools/windows_support.py", line 2, in <module>
        import ctypes
      File "/usr/lib64/python3.8/ctypes/__init__.py", line 7, in <module>
        from _ctypes import Union, Structure, Array
    ImportError: libffi.so.7: cannot open shared object file: No such file or directory
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Some smacking around and pointing me in the correct direction would be much appreciated. Thank you, kind souls.


Sorry if this has been explained before, it's a long thread...

 

What is the reason for the upload script if I can just place files in the mounted folder and it'll upload on its own?

 

I just use the simple mount command below. Thanks

 

 

mntpoint="/mnt/disks/gdrive"    

remoteshare="gsuite:"

 

rclone mount --allow-other --drive-chunk-size 512M --dir-cache-time 5m --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --fast-list --vfs-cache-mode writes --max-read-ahead 2G $remoteshare $mntpoint &

37 minutes ago, DisposableHero said:

Sorry if this has been explained before, it's a long thread...

 

What is the reason for the upload script if I can just place files in the mounted folder and it'll upload on its own?

 

 

More control over how files are uploaded and when


Hello! 

 

I've been trying to get this set up for a good portion of today. I don't usually upload from my Unraid box so I just want the remote mounted so I can dockerize plex (it's currently running on a VM).  I don't need to merge any folders or start/stop any docker containers yet.

 

The script successfully mounts the remote but it always gives me the mount failed message. I can see the mountcheck file in the directory it is checking. 

 

Here are my settings:

# REQUIRED SETTINGS
RcloneRemoteName="GSuiteDrive_Storage" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/rclone_mount" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
LocalFilesShare="ignore" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="ignore" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=c\{""\} # comma separated list of folders to create within the mount

And script execution:

Script location: /tmp/user.scripts/tmpScripts/mount_rclone_gsuite_storage/script
Note that closing this window will abort the execution of this script
25.06.2020 12:22:03 INFO: Creating local folders.
25.06.2020 12:22:03 INFO: *** Starting mount of remote GSuiteDrive_Storage
25.06.2020 12:22:03 INFO: Checking if this script is already running.
25.06.2020 12:22:03 INFO: Script not running - proceeding.
25.06.2020 12:22:03 INFO: *** Checking if online
25.06.2020 12:22:10 PASSED: *** Internet online
25.06.2020 12:22:10 INFO: Mount not running. Will now mount GSuiteDrive_Storage remote.
25.06.2020 12:22:10 INFO: Recreating mountcheck file for GSuiteDrive_Storage remote.
2020/06/25 12:22:10 DEBUG : rclone: Version "v1.52.2" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "GSuiteDrive_Storage:" "-vv" "--no-traverse"]
2020/06/25 12:22:10 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/06/25 12:22:10 DEBUG : fs cache: renaming cache item "mountcheck" to be canonical "/"
2020/06/25 12:22:21 DEBUG : mountcheck: Modification times differ by -58.115950247s: 2020-06-25 12:22:10.156950247 -0700 PDT, 2020-06-25 19:21:12.041 +0000 UTC
2020/06/25 12:22:40 INFO : mountcheck: Copied (replaced existing)
2020/06/25 12:22:40 INFO :
Transferred: 32 / 32 Bytes, 100%, 1 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 19.6s

2020/06/25 12:22:40 DEBUG : 6 go routines active
25.06.2020 12:22:40 INFO: *** Creating mount for remote GSuiteDrive_Storage
25.06.2020 12:22:40 INFO: sleeping for 5 seconds
25.06.2020 12:22:50 INFO: continuing...
25.06.2020 12:22:50 CRITICAL: GSuiteDrive_Storage mount failed - please check for problems. Stopping dockers
"docker stop" requires at least 1 argument.
See 'docker stop --help'.

Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]

Stop one or more running containers

 

On 6/24/2020 at 2:37 AM, Hypner said:

So I want to move to doing things with service accounts, but of course I hit every hurdle imaginable. I am literally stuck at step 1.

Can anyone help me with this? Sorry, I don't mean to bump, but I didn't see a response. Much appreciated.

21 minutes ago, Hypner said:

Can anyone help me with this? Sorry dont mean to bump something but I didnt see a response. Much appreciated. 

Sorry, I don't really understand Python and I think I fluked completing this step, as I didn't really understand what I was doing! Hopefully someone else can help. Or, have you tried asking the AutoRclone author?

