Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


7 hours ago, animeking said:

Still can't get service accounts to work. I have followed and read the GitHub instructions, but no luck. Here are my logs:

 

In your logs:

 

 failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service/sa_gdrive.json: no such file or directory
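That error happens before rclone even reaches OAuth: the service-account JSON isn't at the path your config points to. A quick plain-shell sketch to check (the default path comes from the log above; the remote name in the comment is just an example):

```shell
#!/bin/bash
# Check that the service-account JSON rclone's config points to actually exists.
# Default path is taken from the error in the log above.
check_sa_file() {
    local sa_file="${1:-/mnt/user/appdata/other/rclone/service/sa_gdrive.json}"
    if [ -f "$sa_file" ]; then
        echo "found"
    else
        echo "missing: $sa_file"
    fi
}

check_sa_file
# If it's missing, create the directory, copy the JSON in, then re-test the
# remote (remote name "gdrive" is an example):
#   rclone lsd gdrive: --drive-service-account-file=/mnt/user/appdata/other/rclone/service/sa_gdrive.json
```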

On 10/23/2020 at 10:19 PM, learningunraid said:

Hello, I have been trying to install rclone on my Unraid server, but first I'd like a bit of help understanding how it fits the custom setup I've been running.

 

Disk Setup:

Parity: 8TB

Data: 8TB

Cache: 250GB SSD

Unassigned Drives: 1TB HDD (/mnt/disks/us_hdd1) and 120GB SSD (/mnt/disks/ua_ssd1)

 

Setup:

I use the 120GB unassigned SSD for the Docker appdata and the docker.img.

I use the 1TB unassigned HDD for downloads from SABnzbd or my torrent client, which means they skip the whole array (cache/data/parity). I mainly download movies/shows and want to use Google Drive for them, as I already have many other shows on Google Drive and want to use Google Drive only.

(In short, I don't want to use the data drives or the array at all for media/entertainment. If I lose that content I won't cry; I want to keep the array for data that is important to me, like work/CCTV.)


Docker.img: /mnt/disks/ua_ssd1/system/docker/docker.img
Appdata Storage: /mnt/disks/ua_ssd1/appdata/

 

Now, how do I make sure that rclone copies the content downloaded by Sonarr/Radarr to Google Drive, and that Plex can then stream that content from Google Drive?

 

Previous experience: yes, I have used rclone, but never with Unraid or Docker.

 

I have watched multiple videos but still can't follow it, and I have read your readme on GitHub too, but many things remain unclear.

 

I do have questions, but I don't know how to phrase them yet, since I don't know how to achieve this.

 

But,

 

1. I do not want to use the crypt/encryption feature. I want to use Google Drive and its mount directly.

2. I want to use unassigned 1TB HDD for the downloading.

3. Rest, I have shared the docker settings/config above.

 

Thanks.

Hi @DZMM, sorry - I am still looking for help here.

1 hour ago, learningunraid said:

1. I do not want to use the crypt/encryption feature. I want to use Google Drive and its mount directly.

You don't have to encrypt your files if you don't want to. Just create an unencrypted rclone remote. This is very easy to do; if you need help there are other threads (although you can probably work out what you need to do in this thread), as this thread is for support of my scripts.
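An unencrypted remote is just the normal rclone wizard with the crypt step skipped. A sketch (the remote name "gdrive" is an example; Google authorisation still happens in a browser either way):

```shell
# Interactive: pick "drive" as the storage type and don't create a crypt remote
rclone config

# Or non-interactively; rclone will still prompt you to authorise in a browser
rclone config create gdrive drive scope=drive

# Sanity check - list the top-level folders of the new remote
rclone lsd gdrive:
```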

1 hour ago, learningunraid said:

I want to use unassigned 1TB HDD for the downloading.

In my scripts, 

 

RcloneMountShare="/mnt/wherever_you_want/mount_rclone" - doesn't matter as these aren't actually stored anywhere

 

LocalFilesShare="/mnt/ua_hdd_or_whatever_you_called_it/local" - for the files that are pending upload to gdrive

 

MergerfsMountShare="/mnt/wherever_you_want/mount_mergerfs" - doesn't matter as these aren't actually stored anywhere

 

I've just checked my readme: once you've worked out how to set up your remotes (which isn't covered, although the readme shows what they should look like afterwards), all the information you need is there.

 

https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/README.md
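For the unassigned-drive setup described above, those three settings might look like this (paths are illustrative, based on the drive names quoted earlier, not a tested config):

```shell
# Mount script settings sketch - adjust names/paths to your own system
RcloneMountShare="/mnt/user/mount_rclone"       # virtual rclone mount; nothing stored here
LocalFilesShare="/mnt/disks/us_hdd1/local"      # files pending upload sit on the 1TB UD drive
MergerfsMountShare="/mnt/user/mount_mergerfs"   # combined view; nothing stored here either

echo "$LocalFilesShare"
```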

On 10/24/2020 at 8:17 AM, animeking said:

@DZMM in one of your previous posts you showed this setup. Is that encrypted folder name the remote drive we are sending the media/files to, or does it need to stay as the encrypted folder name?

[screenshot]

of course not

 

9 hours ago, animeking said:

Will this work with Dropbox as well??? I'm thinking about moving my storage to Dropbox

 

 

It supports whatever backends rclone supports. What the streaming experience is like for non-Google storage, I don't know - check the rclone forums.

6 minutes ago, DZMM said:

You don't have to encrypt your files if you don't want to. Just create an unencrypted rclone remote. This is very easy to do; if you need help there are other threads (although you can probably work out what you need to do in this thread), as this thread is for support of my scripts.

In my scripts, 

 

RcloneMountShare="/mnt/wherever_you_want/mount_rclone" - doesn't matter as these aren't actually stored anywhere

 

LocalFilesShare="/mnt/ua_hdd_or_whatever_you_called_it/local" - for the files that are pending upload to gdrive

 

MergerfsMountShare="/mnt/wherever_you_want/mount_mergerfs" - doesn't matter as these aren't actually stored anywhere

 

I've just checked my readme: once you've worked out how to set up your remotes (which isn't covered, although the readme shows what they should look like afterwards), all the information you need is there.

 

https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/README.md

Thanks for the reply.

 

1. I have already mounted Google Drive manually using the rclone command in the CLI, but I will leave that to your script now.

 

2. I am a little new at this, but in your script there are multiple locations that don't match my setup, as I use an unassigned drive for appdata. For example: [screenshot]

 


 

3. One more thing: I also use my Unraid server for CCTV, and I want to copy (not sync) from the local HDD to Google Drive. How can I do that using your script?

 

4. Can we mount multiple Google Drives using your script?

 

Thanks.

 

2 hours ago, learningunraid said:

I am a little new at this, but in your script there are multiple locations that don't match my setup, as I use an unassigned drive for appdata. For example:

Edit the script if you want to, or let the script create the directory.

 

2 hours ago, learningunraid said:

3. One more thing: I also use my Unraid server for CCTV, and I want to copy (not sync) from the local HDD to Google Drive. How can I do that using your script?

 

Create another instance of the upload script and choose copy, not move or sync.
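Stripped down to the underlying rclone call, a copy-style upload for CCTV might look like this (paths, remote name and log location are examples, not the script's exact command line); a copy leaves the local files in place, unlike a move:

```shell
# Copy (not move) CCTV footage to its own folder on the remote;
# --min-age avoids uploading recordings that are still being written
rclone copy /mnt/user/cctv gdrive:cctv_backup \
    --min-age 15m \
    -v --log-file=/mnt/user/appdata/other/rclone/cctv_upload.log
```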

 

2 hours ago, learningunraid said:

4. Can we mount multiple Google Drives using your script?

Yes - create more instances of the mount script and disable the mergerfs mount if you don't need it. If you want the other drives in your mergerfs mount, add the extra rclone mount locations as extra local folder locations in the mount script that creates the mergerfs mount.
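As a rough sketch of what two mount-script instances end up running (remote names, paths and options are examples, not the script's exact command lines):

```shell
# Instance 1: media remote, feeding the mergerfs mount
rclone mount gdrive_media: /mnt/user/mount_rclone/gdrive_media --allow-other --daemon

# Instance 2: a second remote, with mergerfs disabled in its script
rclone mount gdrive_other: /mnt/user/mount_rclone/gdrive_other --allow-other --daemon

# To pull the second drive into the combined view instead, add its mount
# as an extra branch (an extra "local folder location") of the mergerfs mount:
mergerfs \
  /mnt/user/local/gdrive_media:/mnt/user/mount_rclone/gdrive_media:/mnt/user/mount_rclone/gdrive_other \
  /mnt/user/mount_mergerfs/gdrive_media \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
```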

8 hours ago, DZMM said:

Edit the script if you want to, or let the script create the directory.

 

Create another instance of the upload script and choose copy, not move or sync.

 

Yes - create more instances of the mount script and disable the mergerfs mount if you don't need it. If you want the other drives in your mergerfs mount, add the extra rclone mount locations as extra local folder locations in the mount script that creates the mergerfs mount.

Thanks.

 

1. I don't know how to install or set up mergerfs.

 

2. Even though you said I don't have to use crypt drives, your script files still ask for encrypt/crypt drives only. Screenshot:

[screenshot]

11 hours ago, DZMM said:

You don't have to encrypt your files if you don't want to. Just create an unencrypted rclone remote. This is very easy to do; if you need help there are other threads (although you can probably work out what you need to do in this thread), as this thread is for support of my scripts.

 

3. If I let your script create the directory, that means it would use the array, right?

8 hours ago, DZMM said:

Edit the script if you want to, or let the script create the directory.

 

4. Following on from question 3: since I use an unassigned drive for downloading, how would mergerfs work? The download directory would then be different from the directory your script makes.

 

Thanks.

 

On 10/21/2020 at 3:57 PM, DZMM said:

Best thing to do is move the files within gdrive so you don't hit the 750GB/day limit.

 

I think the answer to your config question is yes...if you've set up the SAs, you've done the hard bit.

Thanks - I moved the files in the web interface and didn't hit the 750GB limit, although Google's documentation suggested I would.

 

Yes, the SAs are set up: group created, group added to the team/shared drive, SA JSONs generated.

I edited my old rclone gdrive_media_vfs mount, removed the secrets, linked the SA JSON and added the team drive. Then I used the same crypt mount as before.

Seems to work as expected; I haven't tried to hit 750GB yet to check for sure, though.

 

Do you use separate rclone gdrive mounts for stream and upload? Are there any benefits of doing so?

10 hours ago, niXta- said:

Do you use separate rclone gdrive mounts for stream and upload? Are there any benefits of doing so?

I do - just a bit of paranoia: if something went wrong with the upload, the streaming mount wouldn't be impacted. Probably overkill, as I think I've only had 1 or 2 API bans in over 2 years and none since I started doing this.
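In rclone.conf terms, separate stream and upload remotes are just two entries pointing at the same Drive, each with its own client ID and token so API trouble on one doesn't hit the other. A sketch (the names and placeholder values are illustrative, not anyone's actual config):

```
[gdrive_stream]
type = drive
client_id = <client id from project A>
client_secret = <secret A>
token = <token A>

[gdrive_upload]
type = drive
client_id = <client id from project B>
client_secret = <secret B>
token = <token B>
```

You would then mount gdrive_stream for Plex and point the upload script at gdrive_upload.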

On 10/23/2020 at 11:48 AM, DZMM said:

@live4ever all looks ok. Look in /mnt/user/mount_rclone and /mnt/user/local and you should see the source of the weird files - maybe you did a duff upload somewhere.

 

Either way - if you don't need them (unlikely), just delete and they should go away.

@DZMM I had to reboot the Unraid server to get it to clear up - unmounting with fusermount wasn't enough.

 

Quick question, does the mount script have to be used to create the subfolders within the mergerfs folder?

 

For example my mount script does:

MountFolders=\{"downloads/complete,downloads/incomplete,backup"\}

and if I manually create a folder like /mnt/user/mount_mergerfs/gdrive_media_vfs/test/video.mkv

the /test/video.mkv disappears (I guess when the mount script next runs, after 10 minutes?)

 

Thanks again

1 hour ago, live4ever said:

Quick question, does the mount script have to be used to create the subfolders within the mergerfs folder?

 

Nope, I added this to the script to try and help first-timers.

 

1 hour ago, live4ever said:

and if I manually create a folder like /mnt/user/mount_mergerfs/gdrive_media_vfs/test/video.mkv

the /test/video.mkv disappears (I guess when the mount script next runs, after 10 minutes?)

 
 

Do you do this before or after the mount script?  It's a bad idea to do it before the script runs, as you might get mounting issues - "mountpoint isn't empty" errors.  Once rclone and mergerfs are mounted, it's 100% safe to 'create' folders in mergerfs (in reality, the folder is created in /local until uploaded to gdrive), and this is what you should do.  That's the whole point - radarr/sonarr/manual rips etc. get added to mergerfs and are accessible to Plex regardless of what stage they are at - still residing locally or already moved to gdrive.
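The "mountpoint isn't empty" failure can be caught up front with a plain-shell check along these lines (a sketch, not the script's actual code; the example path is hypothetical):

```shell
#!/bin/bash
# Report "safe to mount" only if the mountpoint exists and is empty
check_mountpoint() {
    local mp="$1"
    mkdir -p "$mp"
    if [ -n "$(ls -A "$mp" 2>/dev/null)" ]; then
        echo "not empty"
    else
        echo "safe to mount"
    fi
}

# Example invocation against a throwaway path; use your real mountpoint
check_mountpoint "$(mktemp -d)/gdrive_media_vfs"
```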

 

Thanks for the beer just now - if only I could go somewhere to buy one right now!

18 hours ago, DZMM said:

Nope, I added this to the script to try and help first-timers.

 

Do you do this before or after the mount script?  It's a bad idea to do it before the script runs, as you might get mounting issues - "mountpoint isn't empty" errors.  Once rclone and mergerfs are mounted, it's 100% safe to 'create' folders in mergerfs (in reality, the folder is created in /local until uploaded to gdrive), and this is what you should do.  That's the whole point - radarr/sonarr/manual rips etc. get added to mergerfs and are accessible to Plex regardless of what stage they are at - still residing locally or already moved to gdrive.

 

@DZMM I do it after the mount script is up and running - and I did another test last night, creating two directories with the following paths:

/mnt/user/mount_mergerfs/gdrive_media_vfs/test/video.mkv
/mnt/user/mount_mergerfs/gdrive_media_vfs/test2/

The ../test/ directory (with a file in it) was still there this morning, while the ../test2/ directory (without any sub-folders or files) was deleted/removed. Not a big deal really, I was just a little concerned that (empty) folders I created in mergerfs were disappearing.

 

Also, I noticed that the mount script created shares/folders only on /mnt/disk14 and /mnt/disk15 (the least full disks of my array) - can I manually create the:

/mnt/local/gdrive_media_vfs

share on disks 1-13? Or how do I get the mergerfs share to appear on all disks (to prevent moving files between disks before upload)?

 

Thanks again

 

3 hours ago, live4ever said:

I did another test last night, creating two directories with the following paths

Ah, I understand what you are saying now.  I'm on the move so I can't check easily, but I think the upload script has --delete-empty-src-dirs set to on, which explains this behaviour.

 

Edit: I just checked - it is on
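Reduced to the underlying rclone call, the upload looks roughly like this (paths and remote name are examples, not the script's exact command line); `--delete-empty-src-dirs` is the flag that prunes emptied folders such as ../test2/ after their contents move:

```shell
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
    --delete-empty-src-dirs \
    --min-age 15m -v
```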

 

3 hours ago, live4ever said:

shares/folders only on /mnt/disk14 and /mnt/disk15

All of this is controlled in your share settings in unraid


Okay, I have this all set up. At first it was not working: I was trying to bypass Unraid's shfs and path everything directly to /mnt/cache, and while the rclone mount works that way, the mergerfs mount does not.

 

Regardless, I do have one dumb question: the mergerfs mount only persists as long as I don't close the script window, so most people must be running this in the background, or launching it with cron or CA User Scripts on a schedule. This is expected behaviour, I assume? I'm unfamiliar with mergerfs, but it looks like the script exits fully and the reporting seems to indicate it's all done and could be closed. The rclone mount is persistent, but the mergerfs mount is not, at least when run in the foreground in the CA User Scripts GUI.

5 hours ago, crazyhorse90210 said:

Okay, I have this all set up. At first it was not working: I was trying to bypass Unraid's shfs and path everything directly to /mnt/cache, and while the rclone mount works that way, the mergerfs mount does not.

 

Regardless, I do have one dumb question: the mergerfs mount only persists as long as I don't close the script window, so most people must be running this in the background, or launching it with cron or CA User Scripts on a schedule. This is expected behaviour, I assume? I'm unfamiliar with mergerfs, but it looks like the script exits fully and the reporting seems to indicate it's all done and could be closed. The rclone mount is persistent, but the mergerfs mount is not, at least when run in the foreground in the CA User Scripts GUI.

Are you running it using the "Run Script in Background" option?

 

If not, do it that way. 


OK, I keep getting this error even though my rclone is mounted and working:


29.10.2020 23:10:49 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs for superplex_vfs ***
29.10.2020 23:10:49 INFO: *** Starting rclone_upload script for superplex_vfs ***
29.10.2020 23:10:49 INFO: Script not running - proceeding.
29.10.2020 23:10:49 INFO: Checking if rclone installed successfully.
29.10.2020 23:10:49 INFO: rclone not installed - will try again later.
Script Finished Oct 29, 2020 23:10.49

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
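The "rclone not installed" line means the script's install check failed before any upload was attempted. It boils down to something like this (a sketch in the same spirit, not the script's exact code):

```shell
#!/bin/bash
# Report whether a binary is available on PATH, in the same spirit
# as the upload script's rclone check
check_installed() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1 installed"
    else
        echo "$1 not installed - will try again later"
    fi
}

check_installed rclone
```

If rclone really is mounted and working, it may be worth checking whether this script instance runs under a different environment (PATH) than the mount script, or runs at boot before the rclone plugin has finished installing.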

 

On 12/15/2018 at 7:36 AM, DZMM said:

I just made a very useful change to my scripts that has solved my problem with the limit of only being able to upload 750GB/day, which was creating bottlenecks on my local server as I couldn't upload fast enough to keep up with new pending content. 

 

I've added a teamdrive remote to my setup, which allows me to upload another 750GB/day in addition to the 750GB/day to my existing remote.  This is because the 750GB/day limit is per account - by sharing the teamdrive created by my Google Apps account with another Google account, I can upload more.  Theoretically I could repeat this for n extra accounts (each one would need a separate token for the teamdrive remote), but 1 is enough for me.

 

Steps:

  1. create new team drive with main google apps account
  2. share with 2nd google account
  3. create new team drive remotes (see first post) - remember to get the token from the account in step 2, not the account in step 1, otherwise you won't get the second upload quota
  4. amend mount script (see first post) to mount new tdrive and change unionfs mount from 2-way union to 3-way including tdrive
  5. new upload script to upload to tdrive - my first upload script moves files from the array, and the 2nd from the cache.  Another way to 'load-balance' the uploads could be to run one script against disks 1-3 and the other against 4-x
  6. add tdrive line to cleanup script
  7. add tdrive line to unmount script
  8. Optional: repeat if you need more upload capacity, e.g. change the 3-way union to 4-way
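The 2-way to 3-way change in step 4, sketched with unionfs-fuse (paths follow the post's naming but are examples, not the actual script lines):

```shell
# Before: 2-way union - local RW branch layered over the gdrive mount
unionfs -o cow,allow_other \
    /mnt/user/local/gdrive=RW:/mnt/user/mount_rclone/gdrive=RO \
    /mnt/user/mount_unionfs/gdrive

# After: 3-way union - the new tdrive mount added as a further RO branch
unionfs -o cow,allow_other \
    /mnt/user/local/gdrive=RW:/mnt/user/mount_rclone/gdrive=RO:/mnt/user/mount_rclone/tdrive=RO \
    /mnt/user/mount_unionfs/gdrive
```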

I'm trying this method, but I don't understand how to change the 2-way union to 3-way.


Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built-in rclone union backend/mount is quite good and much less complicated than the mergerfs setup.

I found a way to use rclone's VFS caching with the cloud mount and the local storage in tandem. Movies play instantly, and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.
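For anyone curious before the details are shared, a union remote along these lines is one plausible shape for it (the remote name, paths and policy choice are my assumptions, not MowMdown's actual config):

```shell
# Union remote combining a local folder with a gdrive remote;
# new files land in the first upstream (create_policy=ff)
rclone config create union_media union \
    upstreams="/mnt/user/local/media gdrive:media" \
    create_policy=ff

# Mount the union with VFS caching for instant playback starts
rclone mount union_media: /mnt/user/mount_union/media \
    --allow-other --daemon \
    --vfs-cache-mode full --vfs-cache-max-size 100G
```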

20 minutes ago, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built-in rclone union backend/mount is quite good and much less complicated than the mergerfs setup.

I found a way to use rclone's VFS caching with the cloud mount and the local storage in tandem. Movies play instantly, and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Yes please.  This thread was set up so we could improve the setup together.  Fingers crossed we can implement a one-provider solution using rclone union.

30 minutes ago, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built-in rclone union backend/mount is quite good and much less complicated than the mergerfs setup.

I found a way to use rclone's VFS caching with the cloud mount and the local storage in tandem. Movies play instantly, and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Yes please! I had too many other things going on and am finally getting back into this thread... I almost feel like holding off again until this gets done, so that I don't have to go back and redo it.

3 hours ago, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built-in rclone union backend/mount is quite good and much less complicated than the mergerfs setup.

I found a way to use rclone's VFS caching with the cloud mount and the local storage in tandem. Movies play instantly, and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Another interested person here. I don't actually need to upload, and my local mounts are 100% separate, so I don't really need mergerfs - but I'm not sure whether it offers better performance on top of raw rclone with VFS caching, so I would like to see all the options!

 

One more general question: is the consensus that rclone's built-in VFS caching is better than using a separate rclone cache remote? Is the cache remote outdated now?
