Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


4 hours ago, Michel Amberg said:

Last few rows of the mount script: it checks whether a backup is running and does not start the containers in that case.

Ah, good find! I don't use that mount script myself, so I never caught it. I just checked, but I still have the same folder in /tmp even with the current CA Backup (V3). Are you sure the path changed?

 

4 hours ago, Michel Amberg said:

I have issues with direct-playing huge files lately. I am trying to stream a 40 GB remux file and it just stops every 5-10 minutes, stating my server is not powerful enough. Looking at the router, I am only downloading 5 MB/s, which is about 1/4 of my internet speed. Why is this? Can we make it cache the file faster so it does not stop during playback?

What client are you streaming from? I've noticed this problem when I'm using my Mi Box, and I need to restart the device. My Shield Pro never has the issue. Does direct play through your file browser show the same issue?

On 12/29/2022 at 4:19 PM, Kaizac said:

Ah, good find! I don't use that mount script myself, so I never caught it. I just checked, but I still have the same folder in /tmp even with the current CA Backup (V3). Are you sure the path changed?

 

Sorry, I just assumed it would have changed! That's my bad.

 

On 12/29/2022 at 4:19 PM, Kaizac said:

What client are you streaming from? I've noticed this problem when I'm using my Mi Box, and I need to restart the device. My Shield Pro never has the issue. Does direct play through your file browser show the same issue?


The problem was with Plex and not the mount. It seems PGS subtitle files cause major issues when transcoding remux files. I turned them off and the problem is gone!

3 minutes ago, Michel Amberg said:

 

Sorry, I just assumed it would have changed! That's my bad.

 


The problem was with Plex and not the mount. It seems PGS subtitle files cause major issues when transcoding remux files. I turned them off and the problem is gone!

PGS causes a single-threaded transcode, so that's often too heavy for the client/server.


Hello,

 

I've got Rclone to work, and I'm currently uploading my content to the provider.

 

There's one thing I can't get to work though.

That's getting Sonarr (and the other *arrs) to see the files in the folder I've got in the cloud.
How do I make it show the encrypted files and folders without the encryption, so I can hardlink and read the files and folders?

On 1/2/2023 at 8:51 AM, Nanobug said:

Hello,

 

I've got Rclone to work, and I'm currently uploading my content to the provider.

 

There's one thing I can't get to work though.

That's getting Sonarr (and the other *arrs) to see the files in the folder I've got in the cloud.
How do I make it show the encrypted files and folders without the encryption, so I can hardlink and read the files and folders?

 

Do you have MergerFS set up? You should be able to see the rclone mount there and all the files unencrypted.
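A minimal sketch of that layout, assuming the share names this guide's scripts typically use (the real mount script builds the command from its config variables, so treat the paths as placeholders). The command is assembled as a string here for illustration:

```shell
# Hypothetical paths following this guide's conventions; adjust to your setup.
LOCAL="/mnt/user/local/gdrive_vfs"          # hardlink-capable local branch
CLOUD="/mnt/user/mount_rclone/gdrive_vfs"   # decrypted view of the crypt remote
MERGED="/mnt/user/mount_mergerfs/gdrive_vfs"

# mergerfs overlays the local branch and the decrypted rclone mount;
# point the *arr containers at the merged path so they see everything.
CMD="mergerfs $LOCAL:$CLOUD $MERGED -o rw,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true"
echo "$CMD"
```

New files land on the first branch (`category.create=ff`), i.e. the local one, which is what lets hardlinks from the download folder work.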

On 12/16/2022 at 8:22 PM, hohoho said:

Hi


I'm having a problem with my Plex (rclone) server.
All my local files run without a problem and I can keep over 10 concurrent streams.

 

I am using SharePoint (OneDrive?) to stream media from via rclone.
Everything runs smoothly and I can stream movies when I'm using Plex.
I want family and friends to be able to use my server.
My problem is that when I open more streams (because I'm testing), it starts buffering (and basically freezing) if I try to stream more than one movie at a time (possibly two).

These are my current mount settings:


mount Crypt: R: --volname \rclone\crypt --use-mmap --cache-dir "E:\rclonecach" --vfs-cache-max-size 200G --dir-cache-time 1000h --vfs-cache-mode full --tpslimit 10 --rc --rc-web-gui --rc-user=XXX --rc-pass=XXXX --rc-serve --log-level INFO --log-file=mylogfile.txt

 

Can you see what I am doing wrong and why this keeps happening?

I have 1 Gbit upload and download.
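Not a diagnosis, but for multiple concurrent streams the generic rclone VFS read flags below are often worth experimenting with; the values are illustrative starting points, not recommendations, and note that the `--tpslimit 10` in the mount above may also be capping parallel chunk requests.

```shell
# Illustrative extra flags for parallel streaming (values are guesses to
# tune, not a known fix); they would be appended to the existing mount.
EXTRA="--vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 2G --buffer-size 64M"
echo "rclone mount Crypt: R: --vfs-cache-mode full --vfs-cache-max-size 200G $EXTRA"
```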


Hello hohoho, can you share how you configured SharePoint in your rclone/Unraid setup?

 

Thanks

Ronan

On 11/29/2022 at 10:34 AM, Kaizac said:

Well, we've discussed this a couple of times already in this topic, and it seems there is not one fix for everyone.

 

What I've done is added to my mount script:

--uid 99

--gid 100

 

For --umask I use 002 (I think DZMM uses 000, which allows read and write for everyone and which I find too insecure, but that's your own decision).

 

I've rebooted my server without the mount script active, so just a plain boot without mounting. Then I ran the fix permissions on both my mount_rclone and local folders. Then you can check again whether the permissions of these folders are properly set. If that is the case, you can run the mount script. And then check again.

 

After I did this once, I never had the issue again.

When I did this I still had issues, so I did what you guys suggested: creating a script that sets the mount permissions at array startup, and that seems to have fixed it.
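For reference, a sketch of that "fix permissions, then mount" order described above, demonstrated on a scratch directory so it is safe to run anywhere; on the real server the targets would be the actual share paths (e.g. /mnt/user/mount_rclone and the local share), run as root with the mount script not yet started.

```shell
# Stand-in for the real share paths; substitute them on the server.
target=$(mktemp -d)
mkdir -p "$target/Media"
chmod -R u=rwX,g=rwX,o=rX "$target"   # directories 775, files 664
# chown -R nobody:users "$target"     # uid 99 / gid 100 on Unraid (needs root)
stat -c '%a' "$target/Media"          # directories end up as 775
```

Only once the permissions look right should the rclone mount script be started again.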


Hello Everyone,

 

I could sure use a bit of help. I am new to Unraid; I just installed it last night and am still learning it.

 

I am trying to get my gdrive mounted, but I am failing miserably at it. I followed the directions on the main page but I still can't seem to get it to mount.

I also don't get a log file in /tmp/user.scripts/tmpScripts/rclone_mount. Also, when I try to go into my gdrive mount directory I get an error: "/mnt/user/mount_rclone# cd gdrive-media" gives "bash: cd: gdrive-media: Transport endpoint is not connected".

gdrive-media is my remote name; I don't use crypt. Any help would be appreciated.

 

 

Thank you!
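"Transport endpoint is not connected" usually means a previous FUSE mount died and left a stale mountpoint behind, so the mount script cannot mount over it. A common recovery sketch (the path is assumed from the remote name in the post above; adjust it to yours):

```shell
# Hypothetical mountpoint based on the remote name above; adjust as needed.
MOUNTPOINT="/mnt/user/mount_rclone/gdrive-media"
fusermount -uz "$MOUNTPOINT" 2>/dev/null   # lazy-unmount the stale FUSE mount
# then re-run the rclone_mount user script so it can mount cleanly
echo "cleaned $MOUNTPOINT"
```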

 


Thank you so much for this script. I've been using rclone for years, but never had the need for mergerfs until recently, and your scripts made it work in under 5 minutes, haha.

I'm planning to use local storage for my data, with gdrive mostly as a backup.

 

After copying my local stuff to gdrive, will the local version still be used when playing media from the mergerfs mount point?

 

Edit: Yep, it will. This is brilliant!

 


Hey everyone, for the past few days the upload script has not been working correctly. It's stuck on this:

 

Found the issue, apparently Google is blocking Hetzner requests....

 

2023/02/08 11:43:04 ERROR : media/movies/Insurgent (2015)/Divergente 2 l insurrection 2015   1080p VFF EN x264 ac3 mHDgz.mkv: Not deleting source as copy failed: googleapi: got HTTP response code 429 with body: [Google's "Sorry..." page: "We're sorry... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now."]

 

 

For anyone wondering, the fix is to change the Hetzner DNS to Google or Cloudflare.
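A sketch of what that resolver change looks like, written to a scratch file for illustration; on a real host this content would go in /etc/resolv.conf (or wherever the host's network configuration sets DNS, e.g. Hetzner's interface config or Unraid's Settings > Network Settings):

```shell
# Public resolvers (Google 8.8.8.8, Cloudflare 1.1.1.1) written to a
# temp file here; apply via the host's network configuration in practice.
conf=$(mktemp)
printf 'nameserver 8.8.8.8\nnameserver 1.1.1.1\n' > "$conf"
cat "$conf"
```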


How does the unmount script work? I ran the mount script and everything looked OK, but the unmount script still leaves the mergerfsMountShare, the RcloneCacheShare and the RcloneMountShare. I figured it would unmount all of those.

 

When I try to use fusermount -uz on those locations (/mnt/user/mountrclone, /mnt/user0/mount_rclone, and /mnt/user/mount_mergerfs) it gives me an invalid-action error.


Hello,

I use the script with the flags --uid 99 --gid 100, but unfortunately the directories never get the right permissions:

# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 002 \
	--uid 99 \
	--gid 100 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

I also used this script, which works, but it is not the right solution because the upload script deletes the directories, and when they are recreated they are owned by root.

 

#!/bin/sh
for dir in "/mnt/user/Local"
do
	echo "$dir"
	chmod -R ug+rw,ug+X,o-rwx "$dir"
	chown -R nobody:users "$dir"
done

Does anyone know why the rclone mount doesn't get the correct permissions?


I updated my Unraid and now I'm having some permission issues. Radarr and Sonarr are unable to create new folders when importing to my mounted drives.

 

I can manually create folders, so I know the process works. Any idea why Radarr and Sonarr can't? I went down to trace-level logging and there wasn't anything useful that I could see.

 

Anyone run into this before and know what caused it, and how to solve it?

 

EDIT: error from sonarr

 

Couldn't import episode /downloads/completed/[tv episode name].mkv: Access to the path '/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/[Name of Series]' is denied.

 

The folder ('/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/[Name of Series]') has not been created for the series and does not exist. No clue what's causing this.

21 hours ago, privateer said:

I updated my Unraid and now I'm having some permission issues. Radarr and Sonarr are unable to create new folders when importing to my mounted drives.

 

I can manually create folders, so I know the process works. Any idea why Radarr and Sonarr can't? I went down to trace-level logging and there wasn't anything useful that I could see.

 

Anyone run into this before and know what caused it, and how to solve it?

 

EDIT: error from sonarr

 

Couldn't import episode /downloads/completed/[tv episode name].mkv: Access to the path '/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/[Name of Series]' is denied.

 

The folder ('/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/[Name of Series]') has not been created for the series and does not exist. No clue what's causing this.

 

The problem went away after some time. Maybe it was a gdrive or rclone issue.

On 2/20/2023 at 6:38 AM, Nono@Server said:

Hello,

I use the script with the flags --uid 99 --gid 100, but unfortunately the directories never get the right permissions:

# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 002 \
	--uid 99 \
	--gid 100 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

I also used this script, which works, but it is not the right solution because the upload script deletes the directories, and when they are recreated they are owned by root.

 

#!/bin/sh
for dir in "/mnt/user/Local"
do
	echo "$dir"
	chmod -R ug+rw,ug+X,o-rwx "$dir"
	chown -R nobody:users "$dir"
done

Does anyone know why the rclone mount doesn't get the correct permissions?

 

Have you tried --umask 000?

 

Is this error still happening?
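For context on what's being suggested: the mount's --umask works like a shell umask, masking permission bits off newly created entries. A quick shell demo of 002 versus 000 on directories:

```shell
# umask 002 clears the "other" write bit (dirs become 775);
# umask 000 clears nothing (dirs become 777).
tmp=$(mktemp -d)
( umask 002; mkdir "$tmp/d002" )
( umask 000; mkdir "$tmp/d000" )
stat -c '%a' "$tmp/d002" "$tmp/d000"   # prints 775 then 777
```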


Hey Everyone,

 

I'm having an issue where, after I run the rclone_mount script, it works fine for about 3-5 minutes and then suddenly all the directories lose their contents, and trying to go into mount_rclone/gdrive_vfs throws "Transport endpoint is not connected" at me. I've tried restarting and changing things in the unmount script, and still no luck.

