Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

49 minutes ago, teh0wner said:

In the scripts, when specifying the RcloneRemoteName, do you point it to the encrypted type?

You point it to the remote you want to mount, i.e. the decrypted remote.

 

50 minutes ago, teh0wner said:

Also, I'm not quite sure when RcloneUploadRemoteName would apply?

Many people have set up a separate remote for uploading to reduce the risk of their mounted 'streaming' remote getting an API ban because of odd uploading behaviour, strange subtitle behaviour, etc.
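
For illustration, the two settings could point at different remotes - something like this, using the variables from the upload script (the remote names here are just examples, not anyone's actual config):

RcloneRemoteName="gdrive_media_vfs"        # remote that gets mounted for streaming
RcloneUploadRemoteName="gdrive_upload_vfs" # example second remote used only for uploads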

Link to comment
19 hours ago, DZMM said:

You point it to the remote you want to mount, i.e. the decrypted remote.

 

Many people have set up a separate remote for uploading to reduce the risk of their mounted 'streaming' remote getting an API ban because of odd uploading behaviour, strange subtitle behaviour, etc.

The latter sounds quite neat actually - is there any tutorial showing how to set this up?

Thanks

Link to comment
3 hours ago, teh0wner said:

The latter sounds quite neat actually - is there any tutorial showing how to set this up?

Thanks

No tutorial needed.  If you've set up gdrive_media_vfs to be gdrive:crypt, then just create another remote with another name pointing to the same location, i.e. gdrive:crypt, with the same passwords.  The only difference is to create a different client_ID, so that only one of them gets the ban, if any.
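
As a rough sketch (all names and values below are placeholders, not a real config), the rclone config could look something like this - two drive remotes with different client IDs, and two crypt remotes pointing at the same gdrive:crypt folder with the same passwords:

# first drive remote with its own client_id/client_secret (token etc. omitted)
[gdrive]
type = drive
client_id = FIRST_CLIENT_ID
client_secret = FIRST_CLIENT_SECRET
scope = drive

# crypt remote that gets mounted for streaming
[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
password = SAME_PASSWORD
password2 = SAME_PASSWORD2

# second drive remote - same account and folder, different client_id/client_secret
[gdrive_upload]
type = drive
client_id = SECOND_CLIENT_ID
client_secret = SECOND_CLIENT_SECRET
scope = drive

# upload crypt remote - same location and passwords as gdrive_media_vfs
[gdrive_upload_vfs]
type = crypt
remote = gdrive_upload:crypt
password = SAME_PASSWORD
password2 = SAME_PASSWORD2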

 

To be honest, I think I've only had an API ban once, and that was yonks ago when I didn't know what I was doing.

Edited by DZMM
Link to comment
6 hours ago, Stevenson Chittumuri said:

Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

The script doesn't control how your apps access your mount.  Somewhere along the way, one of your apps is accessing your mount oddly, e.g. some users in this thread have had problems with Bazarr and have created a separate mount/client ID combo for this.

Link to comment
8 hours ago, Stevenson Chittumuri said:

But I thought the script was supposed to make sure I don't hit the threshold for API requests? Also (dumb question), couldn't I just make another API client ID + secret to bypass this?

You probably misunderstood the Service Accounts section.

SAs can allow you to bypass limits, including the API limit (just switch to a remote using a different SA), but that isn't how SAs are used in the script.

 

SAs are basically the same as having additional client IDs + secrets (so the answer to your second question is yes).

 

Generally though, you shouldn't be maxing out API requests.

The API limits are very generous, so you need to resolve the root cause first, i.e. whatever app/Docker is hammering it.

The first hunch is subtitle-related apps. They have been known to cause problems with API limits.
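
For reference, a service account remote is just a drive remote that authenticates with a JSON key file instead of an OAuth client ID + secret - roughly like this (the path and IDs are only examples):

# example drive remote using a service account key file instead of a client ID
[gdrive_sa]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/sa_example.json
team_drive = EXAMPLE_TEAM_DRIVE_ID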

Link to comment
3 hours ago, testdasi said:

You probably misunderstood the Service Accounts section.

SAs can allow you to bypass limits, including the API limit (just switch to a remote using a different SA), but that isn't how SAs are used in the script.

 

SAs are basically the same as having additional client IDs + secrets (so the answer to your second question is yes).

 

Generally though, you shouldn't be maxing out API requests.

The API limits are very generous, so you need to resolve the root cause first, i.e. whatever app/Docker is hammering it.

The first hunch is subtitle-related apps. They have been known to cause problems with API limits.

Ahh I see that makes a lot more sense. Thanks for clearing that up.

 

Then can't anyone make an SA to bypass this entire API thing? It's a few more steps, but I created one in a few minutes inside the credentials tab for the Cloud API. (Probably did it wrong though, lol.)

 

And the cause was either Plex scanning my libraries (which already had movies + TV shows in them) or Nextcloud uploading photos and videos?

 

I used /mnt/user/mergerfs_mount/"app"/"data"/ for both containers, so maybe Nextcloud was using the wrong directory too?

Edited by Stevenson Chittumuri
Link to comment

Hi DZMM! First thank you for the amazing scripts. I was too far down the rabbit hole when I found these to use them wholesale, but I pulled a lot of your code to add to my existing scripts and I have everything running perfectly... except one issue.

 

For Sonarr/Radarr, when I point them to my mergerfs mount (which I have set up with a local folder as read/write and my Google Cloud folder as read-only), I get a permission error when I try to add a movie and/or series to it. Using Midnight Commander I can read and write to the mergerfs mount with no issue, and Plex, Sonarr, and Radarr can all read from it with no issue, but it seems my Docker containers cannot write to it. Would you or anyone else here have an idea what permission issue I have to fix to allow this?


**UPDATE** I was able to get it to work by going into the /mnt/disks/merger folder that contains all of my mergerfs mount points and running:

sudo chmod -R -v 777 *

The command failed on every folder that was mounted via the read-only option, but succeeded on the RW local folders. Sonarr and Radarr can now add new items. However, the command is manual, and it appears I have to run it after every server reboot. Any ideas on how to automate it? Thanks again!

Edited by veritas2884
Link to comment
14 minutes ago, veritas2884 said:

... (which I have set up with a local folder as read/write and my Google Cloud folder as read-only) ...

You can't mix RO and RW like that with mergerfs. It will not let you write to any file that is on the RO source since it doesn't support CoW (copy-on-write).

To use a mix of RO and RW, you need to use unionfs (which was in older versions of the script) and of course lose the mergerfs benefits.

 

I guess the important point is why you would need the rclone mount to be RO.

Link to comment
3 minutes ago, testdasi said:

You can't mix RO and RW like that with mergerfs.

Thank you for the reply.

mergerfs -o defaults,allow_other,use_ino,fsname=mergerFS /mnt/user/mediacloud/Movies=RW:/mnt/disks/secure/Movies=RO:/mnt/user/media/Movies=RO /mnt/disks/merge/Movies

This is the command I use.

The mediacloud/Movies folder is the temporary location that Radarr copies the download to until my rclone cron job uploads it.

The secure/Movies folder is the encrypted rclone-mounted folder.

The media/Movies folder is my legacy folder with over 1,000 movies in it that I will eventually upload but am in no rush to.

 

When you said you can't mix those commands, I have to say it is working. When I write a file to /mnt/disks/merge/Movies, the file physically lands in the /mnt/user/mediacloud/Movies folder on my array, which is the only one set up as RW.

Link to comment
2 minutes ago, veritas2884 said:

When you said you can't mix those commands, I have to say it is working. When I write a file to /mnt/disks/merge/Movies, the file physically lands in the /mnt/user/mediacloud/Movies folder on my array, which is the only one set up as RW.

...but if you write to a file that exists on one of the RO locations, it will refuse to write.

Radarr/Sonarr require full write capability to ALL content of the folder.

 

Enabling full write on a mix of RO and RW content requires CoW, which is not supported by mergerfs.
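
For comparison, the old unionfs approach looked roughly like this (paths are placeholders) - the cow option is what provides copy-on-write, so files on the RO branch can still be "modified" by copying them up to the RW branch first:

# rough unionfs-fuse example: writes go to the RW branch; editing a file that
# lives on the RO branch copies it up to the RW branch first (cow = copy-on-write)
unionfs -o cow,allow_other /mnt/user/local=RW:/mnt/user/mount_rclone=RO /mnt/user/mount_unionfs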

 

Link to comment

Hello again!

 

I have been trying to get the rclone upload to work and the script keeps stopping here:

 

26.02.2020 13:54:01 INFO: rclone not installed - will try again later.

Here are my settings

 

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="secure" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/mediacloud" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/disks/" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

My rclone-mounted files are in /mnt/disks/secure, so I have tried changing RcloneMountShare= to both RcloneMountShare=/mnt/disks/secure and RcloneMountShare=/mnt/disks and I get the same error.

 

I have two rclone remotes:

Name                 Type
====                 ====
gcloud               drive
secure               crypt

 

These are my mount commands:

 

rclone mount --max-read-ahead 1024k --allow-other gcloud: /mnt/disks/gcloud &
rclone mount --max-read-ahead 1024k --allow-other secure: /mnt/disks/secure &

 

 

I'm banging my head against the wall trying to figure out why I can't get the script to work. Thanks for any help!

Edited by veritas2884
Link to comment
19 minutes ago, veritas2884 said:

I have been trying to get the rclone upload to work and the script keeps stopping here:

26.02.2020 13:54:01 INFO: rclone not installed - will try again later.

These are my mount commands:

rclone mount --max-read-ahead 1024k --allow-other gcloud: /mnt/disks/gcloud &
rclone mount --max-read-ahead 1024k --allow-other secure: /mnt/disks/secure &

You're not using my mount script, which would have created a mountcheck file in /mnt/disks/secure - the upload script stops if it can't find that file.
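
Simplified, the handshake works roughly like this - the local path below is just an example, and the real scripts do more checking:

# mount script (simplified): create a marker file and push it to the remote so it
# shows up inside the mount at /mnt/disks/secure/mountcheck
touch /mnt/user/local/mountcheck
rclone copy /mnt/user/local/mountcheck secure: --no-traverse

# upload script (simplified): if the marker isn't visible through the mount, bail out
if [[ ! -f /mnt/disks/secure/mountcheck ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    exit
fi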

Edited by DZMM
Link to comment

One thing I'm trying to wrap my head around, which sounds a bit silly.

The mount script does the following:

a) Creates/uses a LocalFilesShare (local folder), which is used by mergerfs later (not quite sure what the purpose of MountFolders is)

b) Mounts an rclone drive
c) mergerfs takes a + b and creates a 'unified' folder


The upload script does the following:

a) Takes whatever is in LocalFilesShare and moves/copies/syncs it (whatever you set) to the rclone drive
 

I have a few questions:

1) What will happen if I put something directly into the rclone mount?
2) If I put something in the mergerfs folder, will it be immediately uploaded to the rclone drive?
3) Is there a way I can have local data + remote data, i.e. make mergerfs 'bind' my local data whilst also having remote data?

I hope this makes sense.

Thanks

Link to comment
21 minutes ago, teh0wner said:

 

I have a few questions:

1) What will happen if I put something directly into the rclone mount?
2) If I put something in the mergerfs folder, will it be immediately uploaded to the rclone drive?
3) Is there a way I can have local data + remote data, i.e. make mergerfs 'bind' my local data whilst also having remote data?

 

1) rclone will do a direct write to your cloud storage. But I have come to find out that rclone isn't reliable at doing direct writes, and that is why the upload script was created: to move local files to the cloud in the background.

2) This is something I asked the creator via DM. I am hoping to find out what a direct write to the mergerfs folder actually does with the file.

3) That is what it is doing. It creates a merged folder that appears to Unraid/your Docker containers as if the content of your local folder and the content of your cloud were one folder, so Plex or whatever you have addressing your data doesn't know whether it is local or in the cloud.

Link to comment

To follow up on the previous person's question, I too want to know where the file physically resides when writing to the mergerfs folder. So if mergerfs merges /rclonemount and /localstorage into /merge and I write a file to /merge, where does the file actually sit until the upload script runs?

Edited by veritas2884
Link to comment
1 hour ago, veritas2884 said:

 

1) rclone will do a direct write to your cloud storage. But I have come to find out that rclone isn't reliable at doing direct writes, and that is why the upload script was created: to move local files to the cloud in the background.

2) This is something I asked the creator via DM. I am hoping to find out what a direct write to the mergerfs folder actually does with the file.

3) That is what it is doing. It creates a merged folder that appears to Unraid/your Docker containers as if the content of your local folder and the content of your cloud were one folder, so Plex or whatever you have addressing your data doesn't know whether it is local or in the cloud.

For point 3 though, if I'm understanding this correctly, it will eventually 'move' whatever is in LocalFilesShare to the rclone mount. Or will the upload script only move what's in the mergerfs folder that hasn't been 'moved' yet, leaving LocalFilesShare untouched?

Link to comment
51 minutes ago, teh0wner said:

For point 3 though, if I'm understanding this correctly, it will eventually 'move' whatever is in LocalFilesShare to the rclone mount. Or will the upload script only move what's in the mergerfs folder that hasn't been 'moved' yet, leaving LocalFilesShare untouched?

From my understanding, it is best to think of the mergerfs folder not as an actual folder but as a shortcut to the local and cloud folders. mergerfs writes data to the local storage but reads from both. Then the upload script moves the local files to the cloud. I don't believe mergerfs actually moves files after the initial write. But, again, this is a point I am fuzzy on as well, so I am hoping someone who knows 100% will chime in.

Link to comment
15 hours ago, veritas2884 said:

To follow up on the previous person's question, I too want to know where the file physically resides when writing to the mergerfs folder. So if mergerfs merges /rclonemount and /localstorage into /merge and I write a file to /merge, where does the file actually sit until the upload script runs?

New files added to the mergerfs mount get added to the local folder and then moved to the cloud via the upload script.  Changes to files already in the cloud happen in the cloud, without the downloading and re-uploading that unionfs required.  You can safely add files to the local folder if you want, but adding them directly to the rclone mount isn't advised, as writing directly to the mount outside of rclone move is not 100% reliable.

14 hours ago, teh0wner said:

For point 3 though, if I'm understanding this correctly, it will eventually 'move' whatever is in LocalFilesShare to the rclone mount. Or will the upload script only move what's in the mergerfs folder that hasn't been 'moved' yet, leaving LocalFilesShare untouched?

mergerfs isn't a physical folder, so files can't be 'moved' from it - files are moved to the cloud from the real local location.

 

13 hours ago, veritas2884 said:

From my understanding, it is best to think of the mergerfs folder not as an actual folder but as a shortcut to the local and cloud folders. mergerfs writes data to the local storage but reads from both. Then the upload script moves the local files to the cloud. I don't believe mergerfs actually moves files after the initial write. But, again, this is a point I am fuzzy on as well, so I am hoping someone who knows 100% will chime in.

Correct, although it's the folder you should be using for all activities, i.e. Sonarr, Plex, etc., as it allows them to see all available files, regardless of whether they are in the cloud or local waiting to be uploaded to the cloud.
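
To make that concrete, a mergerfs mount along these lines behaves as described - the paths and options below are illustrative rather than the exact ones in the script:

# local branch first, rclone mount second; the merged view is what Sonarr/Plex use.
# category.create=ff ("first found") makes new files land on the first branch,
# i.e. the local folder, until the upload script moves them to the cloud.
mergerfs /mnt/user/local/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs \
  /mnt/user/mount_mergerfs/gdrive_media_vfs \
  -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff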

Edited by DZMM
Link to comment
8 hours ago, DZMM said:

New files added to the mergerfs mount get added to the local folder and then moved to the cloud via the upload script.  Changes to files already in the cloud happen in the cloud, without the downloading and re-uploading that unionfs required.  You can safely add files to the local folder if you want, but adding them directly to the rclone mount isn't advised, as writing directly to the mount outside of rclone move is not 100% reliable.

mergerfs isn't a physical folder, so files can't be 'moved' from it - files are moved to the cloud from the real local location.

 

Correct, although it's the folder you should be using for all activities, i.e. Sonarr, Plex, etc., as it allows them to see all available files, regardless of whether they are in the cloud or local waiting to be uploaded to the cloud.

Excellent, thanks for the explanation. So just to confirm, the script today has no way of doing a mergerfs merge of a 100% local folder (which you never want to move to the cloud) and the cloud? The LocalFilesShare will always eventually move to the cloud.

I guess the above can be achieved by just modifying the mergerfs command in the script slightly to also include a 100% local folder I want to 'merge'.

I would also like to ask: what's the purpose of MountFolders? Wouldn't everything in the local share eventually get moved anyway?

Edited by teh0wner
Link to comment
1 hour ago, teh0wner said:

Excellent, thanks for the explanation. So just to confirm, the script today has no way of doing a mergerfs merge of a 100% local folder (which you never want to move to the cloud) and the cloud? The LocalFilesShare will always eventually move to the cloud.

I guess the above can be achieved by just modifying the mergerfs command in the script slightly to also include a 100% local folder I want to 'merge'.

I would also like to ask: what's the purpose of MountFolders? Wouldn't everything in the local share eventually get moved anyway?

The "moving to the cloud" is done through the upload script, so if you don't run the upload script then your local files will remain local forever.

Link to comment

@teh0wner mergerfs doesn't have anything to do with moving files from local to the cloud - the upload script does that.

 

- If you don't want files from local to be uploaded then don't run the upload script.

 

- If you want to add 2 local folders to mergerfs for Plex etc. and not have one of them uploaded, then you'll have to do some mergerfs tinkering. It's not hard - you just have to read up a bit on mergerfs.
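
As a rough starting point (all paths here are examples), a three-branch merge would do it - only the first branch is the LocalFilesShare that the upload script moves from, so the second 'keep local' branch never gets uploaded:

# branch 1: local upload queue (what the upload script pushes to the cloud)
# branch 2: permanent local folder that never gets uploaded
# branch 3: the rclone mount (cloud)
mergerfs /mnt/user/local/gdrive_media_vfs:/mnt/user/keep_local:/mnt/user/mount_rclone/gdrive_media_vfs \
  /mnt/user/mount_mergerfs/gdrive_media_vfs \
  -o rw,use_ino,allow_other,category.create=ff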

Edited by DZMM
Link to comment
