Guide: How To Use Rclone To Mount Cloud Drives And Play Files


14 minutes ago, DZMM said:

Sorry, I don't really understand Python, and I think I fluked completing this step as I didn't really understand what I was doing! Hopefully someone else can help. Or have you tried asking the AutoRclone author?

I haven't tried asking the AutoRclone author. Thank you though.



Hey, so while restarting my server today I ran into an issue where, after 5-10 minutes or so (sometimes longer), my mount_mergerfs doesn't show everything. It only shows my downloads folder and nothing from mount_rclone. I can run the mount script and things work again, but I don't know what causes mount_mergerfs to stop showing anything outside of what's on local. Let me know what logs are needed. Thank you.

On 6/22/2020 at 6:14 PM, DZMM said:

Multiple mounts, one upload and one tidy-up script.

 

Do you use the same API/project, or do you create a separate one for each TeamDrive?

3 hours ago, Hypner said:

Hey, so while restarting my server today I ran into an issue where, after 5-10 minutes or so (sometimes longer), my mount_mergerfs doesn't show everything. It only shows my downloads folder and nothing from mount_rclone. I can run the mount script and things work again, but I don't know what causes mount_mergerfs to stop showing anything outside of what's on local. Let me know what logs are needed. Thank you.

Logs from the mount script and your script options, please.

2 hours ago, axeman said:

Do you use the same API/project, or do you create a separate one for each TeamDrive?

If you're using SAs you don't need APIs. If you're not, then unique client IDs are recommended.
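To illustrate the difference, here's a rough sketch of the two approaches in an rclone config (remote names, IDs, and paths below are placeholders I made up, not from this guide). With service_account_file set, rclone authenticates as the SA, so no personal client ID/token is needed:

[gdrive_oauth]
type = drive
scope = drive
client_id = your-own-id.apps.googleusercontent.com
client_secret = your-own-secret
team_drive = your-teamdrive-id

[gdrive_sa]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/sa/1.json
team_drive = your-teamdrive-id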

On 6/27/2020 at 11:32 PM, DZMM said:

Logs from the mount script and your script options, please.

I actually ended up figuring it out. Thanks.


Alright, so I have tried AutoRclone to create multiple service accounts, but I just ran into a brick wall and didn't get any support from the author. Anyway, no biggie. I was hoping someone here could help me instead, as it looks like a lot of users are running service accounts.

I have created one service account already; I just need to know what to do to use them once I create them. I know it's a dumb question, but I want to be sure I'm not going to jack up my current setup.

11 hours ago, Hypner said:

I have created one service account already; I just need to know what to do to use them once I create them. I know it's a dumb question, but I want to be sure I'm not going to jack up my current setup.

It's all explained on my GitHub.


@DZMM

If anybody is interested in testing a modified rclone build with a new upload tool, feel free to grab the builds from my repository. You can run the builds side-by-side with stable rclone, so you don't have to take down rclone for testing purposes! It should go without saying, but only run this if you are comfortable with rclone / DZMM's scripts and how they function. If not, you should stick with DZMM's scripts and the official rclone build!

 

Users of this modified build have reported upload speeds ~1.4x faster than stock rclone and ~1.2-1.4x on downloads. I fully saturate my gig line on uploads with lclone, whereas on stock rclone I typically got around 75-80% saturation.

 

I've also got some example scripts for pulling from git, mounting, and uploading. Config files are already set up, so you just have to edit them for your use case. The scripts aren't elegant, but they get the job done. If anybody likes it, I'll probably script it better to build from src as opposed to just pulling the pre-builds from my GitHub.

 

https://github.com/watchmeexplode5/lclone-crop-aio

 

Feel free to use all or none of the stuff there. You can run just the lclone build with DZMM's scripts if you want (make sure to edit the rclone config to include these new tags):

drive_service_account_file_path = /folder/SAs          (no trailing slash on the service account folder)
service_account_file = /folder/SAs/any_sa.json
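If it helps, here's a minimal sketch of running the modified build side-by-side with stock rclone (the binary name and config path are assumptions on my part; check the repo readme for the real ones):

# keep the modified binary under its own name so stock rclone is untouched
cp rclone_gclone /usr/local/bin/rclone_gclone
chmod +x /usr/local/bin/rclone_gclone

# mount to a separate test location with your existing config
rclone_gclone mount gcrypt: /mnt/user/mount_rclone_test \
	--config /boot/config/plugins/rclone/.rclone.conf &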

 

 

All build credit goes to l3uddz, who is a heavy contributor to rclone and Cloudbox. You can follow his work on the Cloudbox Discord if you are interested.

 

-----Lclone (also called rclone_gclone) is a modified rclone build which rotates to a new service account upon quota/API errors. This effectively removes not only the upload limit but also the download limit (even via the mount command, solving Plex/Sonarr deep-dive scan bans), and it adds a bunch of optimization features.

 

-----Crop is a command-line upload tool which rotates service accounts once a limit has been hit, so it runs every service account to its limit before rotating. Not only that, but you can keep all your upload settings in a single config file (easy for those using lots of team drives). You can also set up the config to sync after upload, so you can upload to one drive and server-side sync to all your other backup drives/servers with ease.

 

For more info and options on the crop/rclone_gclone config files, check out l3uddz's repositories: https://github.com/l3uddz?tab=repositories

On 7/4/2020 at 8:39 PM, watchmeexplode5 said:

If anybody is interested in testing a modified rclone build with a new upload tool, feel free to grab the builds from my repository. [full post quoted above]

This is really nice. I'm currently playing around with this, and it will simplify my setup. Can you share the order in which you run the custom scripts, and on what schedule?

 

Also, it would be awesome to have a short readme on GitHub to help with the setup.


@Thel1988

 

Currently I've kinda left everything barebones because it's more for advanced users, definitely not for those just getting into it. But yeah, I added a basic readme. Most settings can be viewed on the official project pages, and the rest is pretty self-explanatory within the configs/scripts if you read them.

 

For the script order --> I run the install script on startup of the array, then I run the mount script so I can access my mounts. Finally, I cron my upload script to run every 20 minutes.
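In User Scripts terms that works out to something like this (the schedules are just what I use, adjust to taste):

# install script -> schedule: "At Startup of Array"
# mount script   -> schedule: "At Startup of Array", runs after the install script
# upload script  -> schedule: "Custom", cron: */20 * * * *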

 

I don't want to hijack DZMM's thread too much so if anybody has more questions feel free to PM me. 

 

 


I'm still not 100% satisfied with my download speeds. When I download something I only get around 20 Mbps, and I have a gigabit line. I have tried both the local and mergerfs folders. Could it perhaps be that parity writes are slowing things down, and if so, how do I solve it?


@Bjur

Download via what -- Usenet/Torrent?

Or download from your actual mount?

 

Are you utilizing a cache drive to avoid parity write bottlenecks?

 

Lots of different variables can affect your DL speeds, and a lot are out of your control --> like distance from the server and peering to the server.

 

But on to what you can control. Generally the fastest way (and the way to test for bottlenecks) is to download to a share that is set to "Use cache: Only" in unraid. That way you avoid any parity write overhead. Also, kinda obvious, but an NVMe/SSD will trump any mechanical HDD, so for quick writes that's what you should be using.
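If you want to rule parity in or out, a quick and dirty write test against the cache pool vs. a parity-protected path looks like this (paths are examples, adjust for your shares):

# write 4 GB to the cache pool, no parity involved
dd if=/dev/zero of=/mnt/cache/speedtest.tmp bs=1M count=4096 oflag=direct status=progress

# same write straight to the parity-protected array
dd if=/dev/zero of=/mnt/user0/yourshare/speedtest.tmp bs=1M count=4096 oflag=direct status=progress

# clean up
rm /mnt/cache/speedtest.tmp /mnt/user0/yourshare/speedtest.tmp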

 

Other than that, you can play with the number of parallel workers, the buffer and cache sizes of files, etc. With DZMM's scripts these values are optimized for downloading/streaming from gdrive, but you can read up on other settings on the official rclone forum. Animosity022's GitHub has some great settings (heavily tested, and he's very active on the rclone forum). His recommendations are often the most widely accepted settings when it comes to general-purpose mounting!
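As an example of the kind of knobs I mean, an upload command can be tuned along these lines (values are purely illustrative, not recommendations, and the paths assume DZMM's default layout):

rclone move /mnt/user/local/gcrypt gcrypt: \
	--transfers 8 \
	--checkers 16 \
	--drive-chunk-size 128M \
	--buffer-size 64M \
	--min-age 15m \
	-v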


Hi, thanks for the fine answer :) I'm using Usenet and just DL to a standard share. When I DL to UD I get fast speeds, so it must be parity writing. I initially did DL to an SSD, also fast speeds, but I have a Samsung EVO 860 or 870 (can't remember), and I didn't want to use that because of the wear, so I only use it for dockers. Does that make sense?


@Bjur

Yeah, kinda a double-edged sword with the SSD/NVMe game. I let my NVMes get hit hard on the wear-and-tear front because it's so nice to write and unpack rapidly. It's a cost-vs-benefit debate, but I'm a sucker for their speed.


Hello all
I could use some help. Did I do it correctly?

I assume the mount script is not fully correct. I have copied it from my Windows Intel NUC.

rclone mount --allow-other --allow-non-empty --cache-db-purge --buffer-size 32M --use-mmap --drive-chunk-size 32M  --timeout 1h  --vfs-cache-mode full --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G gcrypt:/ t:

 

Feel free to adjust the mount command.

Thank you



@DiniMuetter

 

It's best to use all the scripts on DZMM's GitHub: https://github.com/BinsonBuzz/unraid_rclone_mount

 

Instructions for setting up all the user settings are well documented on his GitHub. Read it fully and you should have no issue setting things up correctly.

Use the User Scripts plugin for easy editing and running.

 

 

Your current command is mounting like it's on a Windows file system -- mounting your gcrypt: to the Windows "t:" drive.

For unraid that won't work; it should look something like:

rclone mount \
	....
	....
	gcrypt: /mnt/user/cloud    # or wherever you want your gcrypt mounted in unraid
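For example, keeping your own flags but pointing at an unraid path, something like this should work (the mount point is just an example; I've dropped --allow-non-empty, and --cache-db-purge only applies to the separate cache backend so it does nothing here):

mkdir -p /mnt/user/cloud

rclone mount \
	--allow-other \
	--buffer-size 32M \
	--use-mmap \
	--drive-chunk-size 32M \
	--timeout 1h \
	--vfs-cache-mode full \
	--vfs-read-chunk-size 128M \
	--vfs-read-chunk-size-limit 1G \
	gcrypt: /mnt/user/cloud &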

But again, if you use DZMM's scripts you won't have to do any of the hard editing. Simply set your user settings at the beginning of his scripts and they automagically configure it for you!

 

Feel free to chime back in if you have more questions/problems

On 11/7/2018 at 12:21 PM, DZMM said:

I have 300/300 which is good enough for about 4-5x 4K or about 20-30x 1080P streams, although my usage is nowhere near this

How do you get this in the UK?

28 minutes ago, cinereus said:

How do you get this in the UK?

You need a fibre-to-the-building provider, which typically isn't widely available. My building, for example, has a provider offering 1000/1000, 500/500, and a "basic" package of 250/250.

Most other fibre services cap uploads at 10 to 50 Mbps even with gigabit download. E.g. Virgin's M500 package is 500/52.

2 hours ago, cinereus said:

How do you get this in the UK?

Back then I was in a Gigaclear area and I was loving my 1000/1000 service.   Now I only get 360/180, which is adequate.

 

Now if you're really lucky, there are some providers in the UK who are offering 10000/10000 - one day we'll all get speeds like that!

43 minutes ago, DZMM said:

Back then I was in a Gigaclear area and I was loving my 1000/1000 service.   Now I only get 360/180, which is adequate.

 

Now if you're really lucky, there are some providers in the UK who are offering 10000/10000 - one day we'll all get speeds like that!

One day I'll tell my grandchildren the tale of the 56k modem beeping. 😅

 

1 hour ago, testdasi said:

One day I'll tell my grandchildren the tale of the 56k modem beeping. 😅

 

Fancy - I started out with a 14.4, and it was like magic to get a 28.8 Zoom modem. That sucker even did voicemail.

 

Bit-something, I forget the name of the software.

 

By the time 56K hit my neighborhood, cable was rolling out 1, 3, and maybe 5 Mbit lines.


Hello!

I'm trying to mount my gdrive in unraid but I'm facing some problems. I'm not a native English speaker, so maybe that's the main problem haha

 

I have created 3 remotes in rclone: one called gdrive which connects to my gdrive, one called gcache which points to 'gdrive:media', and one called gcrypt which points to 'gcache:'. I think that's OK.

 

Now I have to create a user script with the rclone_mount script, but I'm seeing all the folder settings at the beginning and that's where I'm getting lost.

 

I have a user share (called Plex) with 3 disks. I have a folder called 'movies' inside that share, so /mnt/user/Plex/movies. I want to create another folder called gdrivemovies, so /mnt/user/Plex/gdrivemovies. So, in my case:

 

RcloneRemoteName="gcrypt"

RcloneMountShare=  ???

LocalFilesShare="/mnt/user/Plex/gdrivemovies"

MergerfsMountShare="ignore"

DockerStart="transmission plex sonarr radarr"

MountFolders= should these be folders inside my gdrive? I mean, I have two folders: one called 'media' (which I think I need for the cache) and one called 'movies'. Do I need more?

Thanks in advance and great work.

 


 

@Yeyo53

Do you plan on moving your local plex files to the cloud? Or keeping some files local and some in the cloud?

 

To start off, I wouldn't use the rclone cache system if you don't have to. In my tests, I haven't seen any performance gains from it compared to the scripts listed here. 

I recommend using just a remote pointing to your gdrive and a crypt pointing to that remote.
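A minimal two-remote config along those lines would look something like this (names and the crypt folder are placeholders; the passwords are generated when you run rclone config):

[gdrive]
type = drive
scope = drive
client_id = your-client-id.apps.googleusercontent.com
client_secret = your-client-secret

[gcrypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = *** generated by rclone config ***
password2 = *** generated by rclone config ***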

 

 

Here is an explanation of the commands that I think you are struggling with:

 

RcloneMountShare="/mnt/user/mount_rclone" 

  • This is where your gdrive will be mounted on your server. So when you navigate to /mnt/user/mount_rclone you will see the contents of your gdrive. In your case, it sounds like you will see your two folders, "media" and "movies".

 

LocalFilesShare="/mnt/user/local"

  • This is where local media is placed to be uploaded to gdrive when you run the upload script. This is where you will have a downloads folder, a movies folder, a TV folder, or any folder you want.

MergerfsMountShare="ignore"

  • If you fill this in, it will combine your local and gdrive content into a single folder. So let's say you set it as /mnt/user/mount_mergerfs.
  • The files do not actually exist at that location; they simply appear as if they are there. Here is a visual example to help:
/mnt/user/
      │
      ├── mount_rclone (Google Drive Mount)
      │      └── movies
      │            └──FILE ON GDRIVE.mkv
      │           
      ├── local
      │     └── movies
      │           └──FILE ON LOCAL.mkv
      │
      └── mount_mergerfs
            └── movies
                  ├──FILE ON GDRIVE.mkv
                  └──FILE ON LOCAL.mkv 

 

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

  • These are the folders created in your LocalFilesShare location. The folders here will be uploaded to gdrive when the upload script runs (except the downloads folder; the uploader ignores that one).
  • So typically it's best to leave them at the default values. You can always make your own folders there if you want.

 


@Yeyo53

 

These are the settings I would recommend for starting out. Mostly default, but adapted to work with your Plex share. Keeping things default also makes initial setup and support easier!


Using gcrypt pointing to your gdrive

RcloneRemoteName="gcrypt"
RcloneMountShare="/mnt/user/mount_rclone"
LocalFilesShare="/mnt/user/local"
MergerfsMountShare="/mnt/user/mount_mergerfs"
DockerStart="transmission plex sonarr radarr"
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="/mnt/user/Plex/"

 

So your gdrive will be mounted at .../mount_rclone. 

Your local files will be at .../local (to be moved to gdrive on upload).

I added your /mnt/user/Plex/ folder to the LocalFilesShare2 setting for mergerfs to see.

 

The merged folder will be at .../mount_mergerfs

  • If you go to .../mount_mergerfs you will have all your paths combined, so your gdrive, your .../local, and your /Plex files will all be there.
  • When you write/move/copy things to .../mount_mergerfs, they will be written to /mnt/user/local/.
  • When you run the upload script, anything in the .../local folder will be uploaded to your gdrive.

 

So with this configuration you should point Plex/Sonarr/NZBGet to "/mnt/user/mount_mergerfs"

It will still see the media that's in your /Plex folder because it's added via LocalFilesShare2.

 

This setup will keep your /Plex folder untouched while you make sure everything works well. If you want to move portions of your /Plex folder to your gdrive, simply move files from /mnt/user/Plex to /mnt/user/local (or /mnt/user/mount_mergerfs), then run the upload script.
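For instance, migrating a single movie could be as simple as this (paths assume the settings above, and the movie name is made up):

# move it into the local share so the uploader picks it up
mv "/mnt/user/Plex/movies/Example Movie (2019)" /mnt/user/local/movies/

# then run the upload script, which moves everything in .../local up to your gdrive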

