Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

8 hours ago, francrouge said:

@DZMM Hi,

Can you tell me what user your script is supposed to create folders as: root or nobody?

My folders have root permissions and the script seems to glitch a lot. Is that normal?

Thanks

It shouldn't be root; that has caused problems. It should be:

user: nobody

group: users

Numerically, that's usually 99/100 (uid/gid).
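
On a stock Unraid install you can confirm those numeric IDs from the terminal (a quick check, not part of the guide's scripts):

id nobody
# uid=99(nobody) gid=100(users) groups=100(users) on a default Unraid box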

12 minutes ago, francrouge said:

Here is my folder:

[screenshot of folder permissions]

What should I do?

Thanks

Well, we've discussed this a couple of times already in this topic, and it seems there isn't one fix for everyone.

What I've done is add this to my mount script:

--uid 99

--gid 100

For --umask I use 002. (I think DZMM uses 000, which allows everyone to read and write; I find that too insecure, but that's your own decision.)
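
For context, here is roughly where those flags sit on the mount line (a sketch only; the remote and mount-point names assume the guide's defaults, and the remaining flags are just illustrative):

# --uid 99 --gid 100 makes the mounted files appear owned by nobody:users;
# --umask 002 gives owner/group read-write and everyone else read-only
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
  --uid 99 --gid 100 --umask 002 \
  --allow-other --dir-cache-time 5000h --vfs-cache-mode full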

 

I rebooted my server without the mount script active, so just a plain boot without mounting. Then I ran the fix-permissions tool on both my mount_rclone and local folders. Then check again whether the permissions of those folders are properly set. If they are, you can run the mount script, and then check again.
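
If you prefer the terminal over the Tools > New Permissions page, the equivalent is roughly this (a sketch; run it while nothing is mounted, and the folder names assume the guide's defaults):

# reset ownership to Unraid's default nobody:users (99:100)
chown -R nobody:users /mnt/user/local /mnt/user/mount_rclone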

 

After I did this once, I never had the issue again.

9 minutes ago, Kaizac said:

Well, we've discussed this a couple of times already in this topic, and it seems there isn't one fix for everyone. […]

Cool, thanks, I will try it. Thanks a lot!

Would it be faster to just delete the mount folder directly, with no script loaded, and let the script create it again? 🤔

2 minutes ago, francrouge said:

Would it be faster to just delete the mount folder directly, with no script loaded, and let the script create it again? […]

Ah, I forgot: the mount_mergerfs or mount_unionfs folder should also have its permissions fixed.

I don't know whether the problem lies with DZMM's script. I think the script creates the union/merger folders as root, which causes the problem. So I just kept my union/merger folders and fixed their permissions as well. But maybe letting the script recreate them will fix it; you can test with a simple mount script to see the difference, of course.
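
If you want to test recreating versus keeping the folders, pre-creating the merger folder with the correct owner would look something like this (a sketch; the path assumes the guide's defaults, and it must be done before the mount script runs):

mkdir -p /mnt/user/mount_mergerfs/gdrive_media_vfs
chown -R nobody:users /mnt/user/mount_mergerfs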

 

It's sometimes difficult for me to advise, because I'm not using DZMM's mount script but my own, so it's easier for me to troubleshoot my own system.

1 minute ago, Kaizac said:

Ah, I forgot: the mount_mergerfs or mount_unionfs folder should also have its permissions fixed. […]

I understand. I will try both options and see. Thanks a lot, I will keep you updated :D

On 11/29/2022 at 4:34 AM, Kaizac said:

Well, we've discussed this a couple of times already in this topic, and it seems there isn't one fix for everyone. […]

Hi,

Just to let you know that I tried what you told me on my main and backup servers, and it seems to be working: the owner permissions were changed to nobody:users.

Thanks a lot


Hello guys, is there an updated tutorial for this? With so many pages of instructions I get confused. Where can I sign up for a Gdrive account with a lot of space and use it through rclone on my Unraid?

I already mounted my personal Gdrive, OneDrive and Dropbox on my Unraid, but the space is low. Thanks in advance!

On 11/17/2022 at 3:29 AM, DZMM said:

That's one of the drawbacks of the cache: it caches all reads, e.g. even when Plex, Sonarr etc. are doing scans. You could turn off any background scans that your apps are doing. I accept it as a necessary evil in return for the amount of storage I'm getting for £11/pm (I think that's what I pay).

Ok, sounds good. My ISP hasn't said a thing so far, so let's hope it stays that way. 650GB a day is a fair bit of traffic. :)


I've been trying to find an answer to this but the unraid forum search engine isn't treating me well :D

What folder should I point my downloads to in Deluge if I want them uploaded to Gdrive and kept in sync there? Or is that not an option?

mnt/user/local gets uploaded to the cloud.
mnt/user/mount_rclone is the cloud files mounted locally.
mnt/user/mount_mergerfs is a merge of both, mapped in Plex/Sonarr/etc.

But I can't download to mount_mergerfs, right? Then the files won't be uploaded? Or?

38 minutes ago, martikainen said:

What folder should I point my downloads to in Deluge if I want them uploaded to Gdrive and kept in sync there? […]

The only folder you need to work with, for all your mappings, is mount_mergerfs. So you download to mount_mergerfs, and at first the file will be placed in your local folder. From there the upload script will move it to the cloud. So it basically migrates from user/local to user/mount_rclone, but using rclone to prevent file corruption and the like. The mount_mergerfs folder won't show a difference; it just shows the files and doesn't care about their location.

With Deluge, and torrents generally, this setup is a bit more tricky. Seeding from Google Drive is pretty much impossible: you will get API banned quickly, and then your mount won't work until the reset (often midnight, or after 24 hours). So you'll have to seed from your local drive, which means you need to prevent those files from being uploaded. You can do that based on the age of the files, or you can use a separate folder for your seed files and add that folder as a branch of your mount_mergerfs union (see the sketch below). Then, after you have seeded them enough, you can move them to your local folder to be uploaded.
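
In practice those two approaches might look something like this (a sketch, not taken from DZMM's scripts; /mnt/user/seeds is a hypothetical extra folder, and --min-age is a standard rclone filter flag):

# three-branch union: seeds stay visible in the merged view but live
# outside the folder the upload script watches
mergerfs /mnt/user/local/gdrive_media_vfs:/mnt/user/seeds:/mnt/user/mount_rclone/gdrive_media_vfs /mnt/user/mount_mergerfs/gdrive_media_vfs -o rw,allow_other,category.create=ff

# or: only upload files older than 15 days, so fresh torrents keep seeding locally
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: --min-age 15d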

 

I don't have much experience with torrents; DZMM had a setup with them, though. Maybe he knows some tricks, but the most important thing to realise is that seeding from your Google Drive will not work.

6 hours ago, Kaizac said:

The only folder you need to work with, for all your mappings, is mount_mergerfs. […]

Thanks! Much appreciated!
I just tried using Gdrive for everything (before reading your answer :)) and noticed the API ban pretty quickly.

 


Hi,

I'm having a problem with my Plex (rclone) server.
All my local files play without a problem and I can sustain over 10 concurrent streams.

I am using SharePoint (OneDrive) to stream media from via rclone.
Everything runs smoothly and I can stream movies when I'm using Plex.
I want family and friends to be able to use my server.
My problem is that when I open more streams (because I'm testing), it starts buffering (and basically freezing) if I try to stream more than one movie at a time (possibly two).

These are my current mount settings:

rclone mount Crypt: R: --volname \rclone\crypt --use-mmap --cache-dir "E:\rclonecach" --vfs-cache-max-size 200G --dir-cache-time 1000h --vfs-cache-mode full --tpslimit 10 --rc --rc-web-gui --rc-user=XXX --rc-pass=XXXX --rc-serve --log-level INFO --log-file=mylogfile.txt

Can you see what I am doing wrong and why this keeps happening?

I have 1 Gbit upload and download.

  • 2 weeks later...

Hello everyone, I receive this log when trying to run the rclone-upload script.

 

root@WZLCPFS01:/tmp/user.scripts/tmpScripts/rclone-upload# cat log.txt
Script Starting Dec 25, 2022  22:16.32

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone-upload/log.txt

25.12.2022 22:16:32 INFO: *** Rclone move selected.  Files will be moved from /mnt/user0/data/media/gdrive_media_vfs for gdrive_media_vfs ***
25.12.2022 22:16:32 INFO: *** Starting rclone_upload script for gdrive_media_vfs ***
25.12.2022 22:16:32 INFO: Script not running - proceeding.
25.12.2022 22:16:32 INFO: Checking if rclone installed successfully.
25.12.2022 22:16:32 INFO: rclone not installed - will try again later.
Script Finished Dec 25, 2022  22:16.32

 

Rclone is installed via the instructions on GitHub, and the config is set up the same way as well. Any ideas?

16 hours ago, Ronan C said:

Hello everyone, which drive provider is the best option today in terms of price versus benefit?

I'm thinking of OneDrive Business Plan 2; they say it's unlimited and the price is fine. Anyone using this option? How is it?

Thanks

Google Drive is still the best, at around 18 USD for unlimited, IF you can get unlimited. I think they changed it so that new signups only get 5TB, and you need multiple users (x 18 USD) to grow by 5TB each time. Some countries/website versions still show unlimited for Google, others don't, so it's hard to give a definite answer for your situation. You can do the trial and see whether they still offer unlimited if you were to subscribe.

Regarding OneDrive, you have to read the fine print. You will need 5 users, and then they will give you 5x 25TB, if I understand correctly. Beyond that it will be SharePoint, and I have no idea about speeds and how rclone deals with that for streaming.

Dropbox was another alternative, but it seems they killed unlimited storage recently, so it's only available for Enterprise?

On 12/25/2022 at 10:22 PM, WenzelComputing said:

Hello everyone, I receive this log when trying to run the rclone-upload script: "25.12.2022 22:16:32 INFO: rclone not installed - will try again later". […]

 Bumping this, any ideas?

7 hours ago, Michel Amberg said:

@DZMM CA Backup is deprecated and there is a new plugin, CA Backup V3, that we should migrate to. You might want to update your script to reflect this :)

 

I'm trying to remember and find where CA Backup is mentioned in his scripts. Can you point me to it? AFAIK there is no mention of, or reliance on, CA Backup for this functionality.

On 12/28/2022 at 10:03 AM, Kaizac said:

 

I'm trying to remember and find where CA Backup is mentioned in his scripts. […]

It's in the last few rows of the mount script: it checks whether a backup is running, and in that case it does not start the containers.
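
For anyone else looking for it, the check is along these lines (a sketch from memory; the marker file is the one CA Backup v2 wrote, and the v3 plugin may well use a different path):

# don't start the Docker containers while a CA Backup run is in progress
if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]]; then
    echo "CA Backup running - not starting containers."
else
    docker start plex sonarr radarr   # container names are examples
fi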


I've been having issues direct playing huge files lately. I'm trying to stream a 40GB remux file and it just stops every 5-10 minutes, stating my server is not powerful enough. Looking at the router, I am only downloading at 5 MB/s, which is about 1/4 of my internet speed. Why is this? Can we make it cache the file faster so it does not stop during playback?
