Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

@DZMM

If anybody is interested in testing a modified rclone build with a new upload tool, feel free to grab the builds from my repository. You can run the builds side-by-side with stable rclone, so you don't have to take down rclone for testing purposes! It should go without saying, but only run this if you are comfortable with rclone / DZMM's scripts and how they function. If not, stick with DZMM's scripts and the official rclone build!

 

Users of this modified build have reported upload speeds ~1.4x faster than stock rclone and downloads ~1.2-1.4x faster. I fully saturate my gig line on uploads with lclone, where on stock rclone I typically got around 75-80% saturation.

 

I've also got some example scripts for pulling from git, mounting, and uploading. Config files are already set up, so you just have to edit them for your use case. The scripts aren't elegant, but they get the job done. If anybody likes it, I'll probably improve the scripts to build from source as opposed to just pulling the pre-builds from my GitHub.

 

https://github.com/watchmeexplode5/lclone-crop-aio

 

Feel free to use all or none of the stuff there. You can run just the lclone build with DZMM's scripts if you want (make sure to edit your rclone config to include these new tags):

drive_service_account_file_path = /folder/SAs   (no trailing slash on the service account folder path)
service_account_file = /folder/SAs/any_sa.json
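
For context, here's a minimal sketch of where those two tags could sit inside a remote definition (the remote name, x-ed out values, and folder path below are placeholders, not taken from anyone's actual config):

[gdrive]
type = drive
client_id = xxxxxxxxxxxx
client_secret = xxxxxxx
scope = drive
service_account_file = /folder/SAs/any_sa.json
drive_service_account_file_path = /folder/SAs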

 

 

All build credit goes to l3uddz, who is a heavy contributor to rclone and Cloudbox. You can follow his work on the Cloudbox Discord if you are interested.

 

-----Lclone (also called rclone_gclone) is a modified rclone build which rotates to a new service account upon quota/API errors. This effectively removes not only the upload limit but also the download limit (even via the mount command, solving Plex/Sonarr deep-dive scan bans), and it adds a bunch of optimization features.

 

-----Crop is a command-line upload tool which rotates service accounts once a limit has been hit, so it runs every service account to its limit before rotating. Not only that, you can keep all your upload settings in a single config file (handy for those using lots of team drives). You can also set up the config to sync after upload, so you can upload to one drive and server-side sync to all your other backup drives/servers with ease.

 

For more info and options on crop/rclone_gclone config files, check out:

l3uddz's repositories: https://github.com/l3uddz?tab=repositories

Edited by watchmeexplode5
  • Like 2
Link to comment
On 7/4/2020 at 8:39 PM, watchmeexplode5 said:


This is really nice. I'm currently playing around with this, and it will simplify my setup. Can you share in which order you run the custom scripts, and on what schedule?

 

Also, it would be awesome to have a short readme on GitHub to help with the setup.

Link to comment

@Thel1988

 

Currently I've kind of left everything barebones because it's more for advanced users; definitely not for those just getting into it. But yeah, I added a basic readme. Most settings can be viewed on the official project pages, and the rest are pretty self-explanatory within the configs/scripts if you read them.

 

For the script order: I run the install script on startup of the array, then I run the mount script so I can access my mounts. Finally, I cron my upload script to run every 20 minutes.
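
As a rough sketch of that schedule in the User Scripts plugin (script names are whatever you called yours): install and mount scripts set to "At Startup of Array", and the upload script on a custom schedule using a cron expression like:

*/20 * * * *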

 

I don't want to hijack DZMM's thread too much so if anybody has more questions feel free to PM me. 

 

 

Edited by watchmeexplode5
Link to comment

I'm still not 100% satisfied with my download speed. When I download something I only get around 20 Mbps, even though I have a gigabit line. I have tried both the local and the mergerfs folder. Could it perhaps be parity writes slowing things down, and if so, how do I solve it?

Link to comment

@Bjur

Download via what? Usenet/torrent?

Or download from your actual mount?

 

Are you utilizing a cache drive to avoid parity write bottlenecks?

 

Lots of different variables can affect your download speeds, and a lot are out of your control, like your distance from the server and peering to the server.

 

But on to what you can control. Generally the fastest way (and a good way to test for bottlenecks) is to download to a share that is set as "Use cache: Only" in Unraid. That way you avoid any parity write overhead. Also, kinda obvious, but an NVMe/SSD will trump any mechanical HDD, so for quick writes that's what you should be using.
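
If you want to confirm where the bottleneck is, a quick sketch (the share paths are examples, adjust to your setup; the test files are removed afterwards):

dd if=/dev/zero of=/mnt/cache/speedtest.tmp bs=1M count=4096 conv=fdatasync status=progress
dd if=/dev/zero of=/mnt/user/YourArrayShare/speedtest.tmp bs=1M count=4096 conv=fdatasync status=progress
rm -f /mnt/cache/speedtest.tmp /mnt/user/YourArrayShare/speedtest.tmp

If the cache write is fast and the array write is much slower, parity writes are your bottleneck.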

 

Other than that, you can play with the number of parallel workers, buffer and cache sizes, etc. With DZMM's scripts, these values are optimized for downloading/streaming from gdrive, but you can read up on other settings on the official rclone forum. Animosity022's GitHub has some great settings (heavily tested, and he is very active on the rclone forum). His recommendations are often the most widely accepted settings when it comes to general-purpose mounting!
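
For reference, these are the kinds of knobs being talked about; the values below are purely illustrative, not DZMM's or Animosity022's tested settings:

rclone mount \
	--buffer-size 256M \
	--dir-cache-time 720h \
	--vfs-read-chunk-size 128M \
	--vfs-read-chunk-size-limit off \
	gcrypt: /mnt/user/mount_rclone/gcrypt

The parallel-worker side lives on the upload command instead (e.g. --transfers and --checkers on rclone move).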

Edited by watchmeexplode5
  • Thanks 1
Link to comment

Hi, thanks for the fine answer :) I'm using Usenet and just downloading to a standard share. When I downloaded to UD I got fast speeds, so it must be parity writing. I initially downloaded to an SSD, also fast speeds, but I have a Samsung EVO 860 or 870 (can't remember) and I didn't want to use that because of the wear, so I only use it for Dockers. Does that make sense?

Link to comment

Hello all
I could use some help. Did I do it correctly?

I assume the mount script is not fully correct. I copied it from my Windows Intel NUC.

rclone mount --allow-other --allow-non-empty --cache-db-purge --buffer-size 32M --use-mmap --drive-chunk-size 32M  --timeout 1h  --vfs-cache-mode full --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G gcrypt:/ t:

 

Feel free to adjust the mount command.

Thank you


Link to comment

@DiniMuetter

 

It's best to use all the scripts on DZMM's GitHub: https://github.com/BinsonBuzz/unraid_rclone_mount

 

Instructions for setting up all the user settings are well documented on his GitHub. Read it fully and you should have no issue setting things up correctly.

Use the User Scripts plugin for easy editing and running.

 

 

Your current command is mounting like it's on a Windows file system, mounting your gcrypt: to the Windows "t:" drive.

For Unraid that won't work; it should look something like:

rclone mount \
	[your mount options] \
	gcrypt: /mnt/user/cloud   # or wherever you want your gcrypt mounted in Unraid
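
A filled-in sketch, purely as an illustration (the mount point and flags here are placeholders; DZMM's script assembles the real command from your settings):

mkdir -p /mnt/user/cloud
rclone mount \
	--allow-other \
	--dir-cache-time 720h \
	--vfs-read-chunk-size 128M \
	gcrypt: /mnt/user/cloud &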

But again, if you use DZMM's scripts you won't have to do any of the hard editing. Simply set your user settings at the beginning of his scripts and they automagically configure everything for you!

 

Feel free to chime back in if you have more questions/problems

Edited by watchmeexplode5
Link to comment
28 minutes ago, cinereus said:

How do you get this in the UK?

You need a fibre-to-the-building provider, which isn't typically widely available. My building, for example, has a provider offering 1000/1000, 500/500, and a "basic" package of 250/250.

Most other fibre services have an upload speed cap of 10 to 50 Mbps even with gigabit download, e.g. Virgin's M500 package is 500/52.

  • Thanks 1
Link to comment
2 hours ago, cinereus said:

How do you get this in the UK?

Back then I was in a Gigaclear area and I was loving my 1000/1000 service.   Now I only get 360/180, which is adequate.

 

Now if you're really lucky, there are some providers in the UK who are offering 10000/10000 - one day we'll all get speeds like that!

Link to comment
43 minutes ago, DZMM said:

Back then I was in a Gigaclear area and I was loving my 1000/1000 service.   Now I only get 360/180, which is adequate.

 

Now if you're really lucky, there are some providers in the UK who are offering 10000/10000 - one day we'll all get speeds like that!

One day I'll tell my grandchildren the tale of the 56k modem beeping. 😅

 

Link to comment
1 hour ago, testdasi said:

One day I'll tell my grandchildren the tale of the 56k modem beeping. 😅

 

Fancy! I started out with a 14.4, and it was like magic to get a 28.8 Zoom modem. That sucker even did voicemail.

Bit-something... I forget the name of the software.

By the time 56K hit my neighborhood, cable was rolling out 1, 3, and maybe 5 Mbit lines.

  • Haha 1
Link to comment

Hello!

I'm trying to mount my gdrive in Unraid but I'm facing some problems. I'm not a native English speaker, so maybe that's the main problem haha.

 

I have created 3 remotes in rclone: one called gdrive which connects to my gdrive, one called gcache which points to 'gdrive:media', and one called gcrypt which points to 'gcache:'. I think that's OK.

 

Now I have to create a user script with the rclone_mount script, but I'm seeing all the folder settings at the beginning of it and that's where I'm getting lost.

 

I have a user share (called Plex) with 3 disks. I have a folder called 'movies' inside that share, so /mnt/user/Plex/movies. I want to create another folder called gdrivemovies, so /mnt/user/Plex/gdrivemovies. So, in my case:

 

RcloneRemoteName="gcrypt"

RcloneMountShare=  ???

LocalFilesShare="/mnt/user/Plex/gdrivemovies"

MergerfsMountShare="ignore"

DockerStart="transmission plex sonarr radarr"

MountFolders= should these be folders inside my gdrive? I mean, I have two folders, one called 'media' (which I think I need for the cache) and one called 'movies'. Do I need more?

Thanks in advance and great work.

 

Link to comment

 

@Yeyo53

Do you plan on moving your local Plex files to the cloud, or keeping some files local and some in the cloud?

 

To start off, I wouldn't use the rclone cache system if you don't have to. In my tests, I haven't seen any performance gains from it compared to the scripts listed here. 

I recommend using just a remote pointing to your gdrive and a crypt pointing to that remote.
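
A minimal sketch of that two-remote layout (remote names, the 'crypt' folder, and the x-ed out values are placeholders):

[gdrive]
type = drive
client_id = xxxxxxxxxxxx
client_secret = xxxxxxx
scope = drive

[gcrypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxx
password2 = xxxxxxx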

 

 

Here is an explanation of the commands that I think you are struggling with:

 

RcloneMountShare="/mnt/user/mount_rclone" 

  • This is where your gdrive will be mounted on your server. So when you navigate to /mnt/user/mount_rclone you will see the content of your gdrive. In your case it sounds like you will see your two folders, "media" and "movies".

 

LocalFilesShare="/mnt/user/local"

  • This is where local media is placed to be uploaded to gdrive when you run the upload script. This is where you will have a download folder, a movie folder, a TV folder, or any folder you want.

MergerfsMountShare="ignore"

  • If you fill this in, it will combine your local and gdrive content into a single folder. So let's say you set it as /mnt/user/mount_mergerfs.
  • These files do not actually exist at that location but simply appear as though they are there. Here is a visual example to help, followed by a sketch of the mergerfs command behind it:
/mnt/user/
      │
      ├── mount_rclone (Google Drive Mount)
      │      └── movies
      │            └──FILE ON GDRIVE.mkv
      │           
      ├── local
      │     └── movies
      │           └──FILE ON LOCAL.mkv
      │
      └── mount_mergerfs
            └── movies
                  ├──FILE ON GDRIVE.mkv
                  └──FILE ON LOCAL.mkv 
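
A rough sketch of the kind of mergerfs command that produces the layout above (the paths follow the example; the option string is illustrative rather than copied from the script):

mergerfs /mnt/user/local:/mnt/user/mount_rclone /mnt/user/mount_mergerfs \
	-o rw,use_ino,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true

With the local branch listed first, new writes land in /mnt/user/local, which is exactly what the upload script later moves to gdrive.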

 

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

  • These are the folders created in your LocalFilesShare location. The folders here will be uploaded to gdrive when the uploader script runs (except the downloads folder; the uploader ignores that one).
  • So typically it's best to leave them at the default values. You can always make your own folders there if you want.

 

Link to comment

@Yeyo53

 

These are the settings I would recommend for starting out. Mostly default but adapted to work for your Plex mount. Keeping things default also makes initial setup and support easier!


Using gcrypt pointing to your gdrive

RcloneRemoteName="gcrypt"
RcloneMountShare="/mnt/user/mount_rclone"
LocalFilesShare="/mnt/user/local"
MergerfsMountShare="/mnt/user/mount_mergerfs"
DockerStart="transmission plex sonarr radarr"
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="/mnt/user/Plex/"

 

So your gdrive will be mounted at .../mount_rclone. 

Your local files will be at .../local (to be moved to gdrive on upload).

I added your /mnt/user/Plex/ folder to the LocalFilesShare2 setting so mergerfs can see it.

 

The merged folder will be at .../mount_mergerfs

  • If you go to .../mount_mergerfs you will have all your paths combined, so your gdrive, your .../local, and your /Plex files will all be there.
  • When you write/move/copy things to .../mount_mergerfs, they will be written to /mnt/user/local/.
  • When you run the upload script, anything in the .../local folder will be uploaded to your gdrive.

 

So with this configuration you should point Plex/Sonarr/NZBGet to "/mnt/user/mount_mergerfs"

It will still see your media that's in your /Plex folder because it's added to LocalFilesShare2.

 

This setup will keep your /Plex folder untouched while you make sure everything works well. If you want to move portions of your /Plex folder to your gdrive, simply move files from /mnt/user/Plex to /mnt/user/local (or /mnt/user/mount_mergerfs), then run the upload script.
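
As a purely hypothetical example (the exact subfolder layout depends on how the script created your folders):

mv "/mnt/user/Plex/movies/Some Movie (2019)" /mnt/user/local/movies/

Then run the upload script (or wait for its schedule) and that movie will be pushed to your gdrive.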

Edited by watchmeexplode5
Link to comment

Hi DZMM,

 

Great script. I've been using it for about a week. Everything is running perfectly.

But I just couldn't figure out how to monitor the bandwidth (traffic speed and amount of data transferred).

Could someone please point me in the right direction?

 

Thanks again !

 

Edited by Marcel_Costa
Link to comment
6 hours ago, Marcel_Costa said:

But I just couldn't figure out how to monitor the bandwidth (traffic speed and amount of data transferred).

The easiest way is to look at the upload script logs.
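
If you want more detail than the script's log summary, rclone itself can print periodic transfer stats; a sketch, assuming you can edit the upload command and that the source/remote match the usual layout here:

rclone move /mnt/user/local gcrypt: -v --stats 1m --stats-one-line

Otherwise, tail whatever log file your upload script writes (the path is setup-specific).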

Link to comment

Noob question for everyone. I've searched but cannot seem to find the answer, maybe because I'm using the wrong terminology. But is it possible to move all the files from one Google Drive account to another? I'm assuming I'd have to establish a new mount in addition to the existing one, but I'm just not sure how to update the user script to do this.

Link to comment
39 minutes ago, BigMal said:

Is it possible to move all the files from one Google Drive account to another?

No need for a script as long as your mounts have server_side_across_configs = true:

 

[gdrive]
type = drive
client_id = xxxxxxxxxxxx
client_secret = xxxxxxx
scope = drive
server_side_across_configs = true

Actual command:

rclone move gdrive1:Path_to_source_folder gdrive2:Path_to_destination_folder

You can pick and choose some typical arguments:

rclone move gdrive1:Path_to_source_folder gdrive2:Path_to_destination_folder \
--user-agent="transfer" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

It all happens server side, so it's mega fast. If you hit the 750GB/day limit, just run it daily, or create a service account rotation if you can't wait.
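
As an optional sanity check afterwards (same placeholder paths as above), confirm everything arrived at the destination and that the source has emptied out:

rclone size gdrive2:Path_to_destination_folder
rclone size gdrive1:Path_to_source_folder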

  • Thanks 1
Link to comment
5 minutes ago, DZMM said:

No need for a script as long as your mounts have server_side_across_configs = true ...

Thanks so much for the quick response.  I'll give it a try.

Link to comment
3 minutes ago, BigMal said:

Thanks so much for the quick response.  I'll give it a try.

 

It's best to create new rclone remotes for the server-side move, i.e. don't use the ones you use for daily usage, to avoid any API/transfer 24-hour bans.

 

Edit: Another way is to use team drives and just move within gdrive on the web. Quick, but any changes made might take time to be seen by rclone, whereas the method above captures the new paths straight away.
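
A sketch of what the dedicated move remotes could look like (the remote names are made up; ideally give them their own client IDs rather than reusing your daily ones):

[gdrive_move_src]
type = drive
client_id = xxxxxxxxxxxx
client_secret = xxxxxxx
scope = drive
server_side_across_configs = true

[gdrive_move_dst]
type = drive
client_id = yyyyyyyyyyyy
client_secret = yyyyyyy
scope = drive
server_side_across_configs = true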

Edited by DZMM
  • Like 1
Link to comment
On 6/28/2020 at 2:36 AM, DZMM said:

If you're using SAs you don't need APIs. If you're not, then unique client IDs are recommended.

So forgive me if this is dumb, but I can use the same service account files across all the team drives, right? I should just grant access to the Google Group that I created?

 

For some reason I get a 404 when trying to set up the rclone remote for a new team drive.

Link to comment
