Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

41 minutes ago, axeman said:

How do I check that mergerfs is installed?

 

 

41 minutes ago, axeman said:

26.05.2020 17:23:29 INFO: Check successful, gdrive_media_vfs mergerfs mount in place.

 

Or look in your mergerfs location.  Or look in /bin
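If you want to check from the command line, something like this should do it (a minimal sketch - the script normally drops the binary in /bin):

ls -l /bin/mergerfs      # the mount script copies the binary here
mergerfs --version       # prints the version if it's installed
mount | grep mergerfs    # lists any active mergerfs mounts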

Link to comment

@DZMM

 

I deleted my rclone remotes and recreated them, and now all seems to be fine. Every now and then the upload script seems to crash, though - not sure why it's happening.

 

two more questions.

 

1. My rclone mode is set to "sync" and I'm using the backup = 'Y' option... does "sync" imply that if I go to Google Drive and manually delete something, it'll be deleted from my array too? So I'm guessing I want "copy".

 

2. When I look at my mergerfs_mount\videos... I just see the "MountFolders" that I specified. I thought that when I go to the mergerfs_mount, I would see all of my shares/files from the local array and Google Drive?

 

Thanks - I was really close to just giving up on this - and am almost seeing the light at the end of the tunnel. 

 

Link to comment

@axeman

 

#1 If you delete a file locally that rclone has synced, then rather than being deleted on the remote it is moved to the backup dir for your chosen number of days (see the sketch after #2)

 

#2 It should show your files. Are you using Krusader? If so, restart it, as it has problems. Or try SSH or Windows Explorer.
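To illustrate #1, here's roughly what the upload side looks like with a backup dir (a minimal sketch with illustrative remote names and paths - not the exact upload script):

rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: --backup-dir gdrive_media_vfs:backup --min-age 15m
# anything deleted or overwritten on the remote is moved into gdrive_media_vfs:backup
# instead of being removed outright; a scheduled job can then purge old backups, e.g.
rclone delete gdrive_media_vfs:backup --min-age 30d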

Link to comment

@watchmeexplode5 @Kaizac @testdasi and everyone else - I need help testing the rclone union remote please, which landed yesterday as part of rclone 1.52.

 

https://forum.rclone.org/t/rclone-1-52-release/16718

 

I've created a test union OK and playback seems good - better than mergerfs - although I've only tried a few files.

 

It'd be great if we can get this running, as I think it'll be easier to support than mergerfs, which has been brilliant but must be installed via a docker. We'll also be using just one app.

 

I've encountered one problem so far: --dir-cache-time applies to local folder changes as well, so a small value is needed to spot any changes made to /mnt/user/local. I've asked if there's a way to have a long cache for just the remotes.

 

I've asked for advice here: https://forum.rclone.org/t/rclone-union-cache-time/16728/1

 

My settings so far:

 

[tdrive_union]
type = union
upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive_uhd_vfs: tdrive_t_adults_vfs: gdrive_media_vfs:
action_policy = all
create_policy = ff
search_policy = ff

rclone mount --allow-other --buffer-size 256M --dir-cache-time 1m --drive-chunk-size 512M --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes tdrive_union: /mnt/user/mount_mergerfs/union_test
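If anyone wants to sanity-check their own union before pointing dockers at it, something like this should work (a minimal sketch using my remote names above):

rclone lsd tdrive_union:      # top-level dirs merged from all upstreams
touch /mnt/user/local/tdrive_vfs/union_write_test
rclone lsf tdrive_union: | grep union_write_test    # a new local file should appear in the union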

 

Edited by DZMM
  • Thanks 1
Link to comment
25 minutes ago, Kaizac said:

@DZMM did you try --vfs-cache-poll-interval? Or the normal --poll-interval?

 

https://forum.rclone.org/t/cant-get-poll-interval-working-with-union-remote/13353

Nope, but it defaults to 1m so it wouldn't help. I think rclone looks for new changes based on --dir-cache-time: when I had it set to 720h as usual, changes weren't getting picked up (I waited about 5 mins). With --dir-cache-time 1m they got picked up pretty quickly.

Link to comment

This new rclone union backend feature looks awesome!

 

I just gave it a try with your mount settings; playback starts almost instantly!

 

We'll see how it behaves with a few concurrent streams tonight...

 

I saw your posts on rclone's forum regarding hardlinks, fingers crossed!

Link to comment
7 hours ago, DZMM said:

@axeman

 

#1 If you delete a file locally that rclone has synced, then rather than being deleted on the remote it is moved to the backup dir for your chosen number of days

 

#2 It should show your files. Are you using Krusader? If so, restart it, as it has problems. Or try SSH or Windows Explorer.

Thanks - the backup will be awesome for whoopsies.

 

I don't see anything at all. I'm starting to think something is wrong with mergerfs... I don't see it in my Dockers. Should it be there as a docker? Also, whenever I reboot the server, on the first run of the mount script it says mergerfs is not installed and proceeds to "install" it, but I never see anything in Docker.

 

Maybe I can be a test dummy for the union backend you guys are talking about above? I'm ready, willing, and (somewhat) able to test for you, with some guidance.

Link to comment
4 hours ago, DZMM said:

Nope, but it defaults to 1m so it wouldn't help. I think rclone looks for new changes based on --dir-cache-time: when I had it set to 720h as usual, changes weren't getting picked up (I waited about 5 mins). With --dir-cache-time 1m they got picked up pretty quickly.

 

I've just read the full post you linked to and saw Nick's comment - I'm going to try after work removing the --dir-cache-time and setting poll to maybe 1s. Need to read up a bit first/have a refresher on what both are doing.
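For reference, the variant I'm going to test looks roughly like this (a sketch, untested - whether union propagates polling from the upstream remotes is exactly what the linked thread is asking):

# sketch: drop the short --dir-cache-time and lean on polling instead (untested)
rclone mount --allow-other --poll-interval 1s --buffer-size 256M --vfs-read-chunk-size 128M --vfs-cache-mode writes tdrive_union: /mnt/user/mount_mergerfs/union_test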

Edited by DZMM
Link to comment
5 minutes ago, axeman said:

I don't see anything at all. I'm starting to think something is wrong with mergerfs... I don't see it in my Dockers. Should it be there as a docker? Also, whenever I reboot the server, on the first run of the mount script it says mergerfs is not installed and proceeds to "install" it, but I never see anything in Docker.

It doesn't appear on your /dockers page - it's a bit of an odd case. It has to be re-installed every time unRAID starts, as part of the script.

 

It's why I want to remove it if possible - because unRAID can't support it natively, e.g. via a plugin or a 'normal' docker, it's a bit confusing for unRAID users. Not mergerfs' problem, but it just makes things a bit clunky. Having everything in rclone will be a lot cleaner.
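For anyone wondering what "install" means here, the script does something along these lines at array start (a rough sketch - the docker image name is from memory and the build path may differ in the current script on GitHub):

# boot-time mergerfs bootstrap (sketch - verify names against the actual script)
if [[ ! -f /bin/mergerfs ]]; then
    mkdir -p /mnt/user/appdata/other/mergerfs
    docker run --rm -v /mnt/user/appdata/other/mergerfs:/build trixing/mergerfs
    mv /mnt/user/appdata/other/mergerfs/mergerfs /bin
fi
# the container exits as soon as the build finishes, which is why nothing lingers on the Docker tab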

Link to comment

Well, after a night of trying it out:

 

Read from the union mount is solid: up to 5 concurrent streams without any issue whatsoever. The overnight Plex library scan went fine but took slightly more time than when it was reading from mergerfs (we're talking about 30 more seconds, which I think is completely OK).

A deep scan of a manually added MKV went faster than mergerfs, though.

I had a look at the Google dev console to check the API key metrics; they look exactly the same as when using mergerfs, so I'd say we're safe from rate-limit bans (I made a new key to be able to tell the difference).

Write to the union mount, on the other hand, is sketchy, at least with my current mount settings (same as yours). I couldn't figure out a way to let Radarr import a movie to the mount (I need to investigate this further; it may be a permission issue). And while messing with mount settings (playing with --dir-cache-time mostly), I even managed to prevent myself from writing to the mount via SMB at some point (from my hackintosh AND from a regular Windows laptop).

 

Did you have any more luck?

 

Disclaimer: I'm not a dev; I'm just geek enough to understand your scripts and tweak them to my liking, but I couldn't write them from scratch. Just so you know, I might not be of big help here...

 

EDIT :

Further testing and wondering...

 

Sonarr seems happier with the union backend: it imports files fine to the union-mounted remote, but files are being written straight to Google Drive 🤔

Edited by Tecneo
Avoid double post
  • Like 1
Link to comment
On 5/28/2020 at 9:58 AM, DZMM said:

It doesn't appear on your /dockers page - it's a bit of an odd case. It has to be re-installed every time unRAID starts, as part of the script.

 

It's why I want to remove it if possible - because unRAID can't support it natively, e.g. via a plugin or a 'normal' docker, it's a bit confusing for unRAID users. Not mergerfs' problem, but it just makes things a bit clunky. Having everything in rclone will be a lot cleaner.

Okay... is that why I can't seem to shut down the array safely? For some reason mergerfs never stops, and without a manual kill command my array refuses to shut down.

 

There must be something wrong with my mergerfs setup. It's just not mounting anything at all. Mind-boggling.

Link to comment

@DZMM

 

So I've been doing some tinkering... and noticed something.

 

If I specify LocalFilesShare="/mnt/user/Videos", the script won't upload anything.

 

However, if I create a new folder at "/mnt/user/local", set LocalFilesShare="/mnt/user/local", and start putting files in there, those files get happily uploaded. Will I need to reorganize my existing videos into a new folder for the script to upload them? Could this be a permissions thing?

 

Thanks!
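For reference, the two configurations being compared differ only in this one script variable (paths from the post above):

LocalFilesShare="/mnt/user/Videos"   # existing share - nothing gets uploaded
LocalFilesShare="/mnt/user/local"    # fresh share - uploads work as expected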

Link to comment
On 5/19/2020 at 11:24 PM, DZMM said:

Not sure of the context of @watchmeexplode5's advice, but it's best to forget /mnt/user/local exists and focus all file management activity on /mnt/user/mount_mergerfs, as there's less chance of something going wrong

Hi, sorry I have a couple of extra questions:

 

1. If I set my download folder to /mnt/user/mount_mergerfs/downloads I only get around 10 MB/s; if I set it to /mnt/local/downloads I get around 70 MB/s. So when you write there's less chance of something going wrong, I want to make sure I actually get more benefits from using the mergerfs folder, since my download speed is severely limited with that method. Are others experiencing the same, and are there any major disadvantages to using local instead?

 

2. How often do you scan your library in Plex, and will "Run a partial scan when changes are detected" quickly result in an API ban?

Edited by Bjur
Link to comment
On 5/7/2020 at 2:03 AM, remedy said:

Also having this issue. I don't restart my array often so it's not a huge problem, but fusermount -uz /path/to/mount alone isn't getting it done; I have to do ps -ef | grep /mnt/user and kill the rclone PIDs.

 

I've also been having this issue since moving to mergerfs. I stop all my VMs/dockers first and this still happens.

 

Once I kill /usr/local/sbin/shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=330, the array stops.
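For anyone else stuck at shutdown, the manual cleanup being described amounts to something like this (a sketch - the mount paths are assumptions from the guide's defaults, so adjust to your own setup before killing anything):

fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs     # lazy-unmount the rclone mount
fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs   # lazy-unmount the mergerfs mount
ps -ef | grep "rclone mount" | grep -v grep                # see what's still running
pkill -f "rclone mount"                                    # kill any leftover rclone mount processes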

Link to comment

Super exciting to see that you're looking at going to an even quicker option with union over mergerfs. Regarding that, will you be doing a write-up once you get the kinks ironed out? I truly appreciate your GitHub and your help in this forum.

I have two questions that I am hoping someone can help with.

I have my mergerfs and rclone setup thanks to the previously mentioned GitHub by DZMM - thank you. I wanted to add a second container for 4K movies and 4K TV. I created two folders, one named 4kmovies and the other 4ktv, in /mnt/user/mount_mergerfs/name_of_remote.

 

I first added 4kmovies and 4ktv to the mount script folders.

I then set up a second Sonarr and Radarr from different repos (is this the easiest way to do this?). When I set everything up like I did with my original Sonarr and Radarr, the only things I changed were the port number in the container and the folder path within the GUI of the container. My problem is that when a file is completed and moved, it goes into the basic movies or tv folder I created previously. Did I miss a step - should I have restarted my mount script?

Link to comment
On 5/29/2020 at 8:33 AM, Tecneo said:

Well, after a night of trying it out:

 

Read from the union mount is solid: up to 5 concurrent streams without any issue whatsoever. The overnight Plex library scan went fine but took slightly more time than when it was reading from mergerfs (we're talking about 30 more seconds, which I think is completely OK).

A deep scan of a manually added MKV went faster than mergerfs, though.

I had a look at the Google dev console to check the API key metrics; they look exactly the same as when using mergerfs, so I'd say we're safe from rate-limit bans (I made a new key to be able to tell the difference).

Write to the union mount, on the other hand, is sketchy, at least with my current mount settings (same as yours). I couldn't figure out a way to let Radarr import a movie to the mount (I need to investigate this further; it may be a permission issue). And while messing with mount settings (playing with --dir-cache-time mostly), I even managed to prevent myself from writing to the mount via SMB at some point (from my hackintosh AND from a regular Windows laptop).

 

Did you have any more luck?

 

Disclaimer: I'm not a dev; I'm just geek enough to understand your scripts and tweak them to my liking, but I couldn't write them from scratch. Just so you know, I might not be of big help here...

 

EDIT :

Further testing and wondering...

 

Sonarr seems happier with the union backend: it imports files fine to the union-mounted remote, but files are being written straight to Google Drive 🤔

@Tecneo I haven't had a chance to play with union this week - have you made any progress before I start?

Link to comment
On 5/29/2020 at 5:29 PM, axeman said:

Okay... is that why I can't seem to shut down the array safely? For some reason mergerfs never stops, and without a manual kill command my array refuses to shut down.

Could be - we're not sure. At the moment my system has been shutting down OK.

 

On 6/1/2020 at 3:16 AM, axeman said:

So I've been doing some tinkering... and noticed something.

 

If I specify LocalFilesShare="/mnt/user/Videos", the script won't upload anything.

 

However, if I create a new folder at "/mnt/user/local", set LocalFilesShare="/mnt/user/local", and start putting files in there, those files get happily uploaded. Will I need to reorganize my existing videos into a new folder for the script to upload them? Could this be a permissions thing?

Don't know - but if /mnt/user/local is working, I'd just run with it!

 

On 6/2/2020 at 12:15 PM, Bjur said:

1. If I set my download folder to /mnt/user/mount_mergerfs/downloads I only get around 10 MB/s; if I set it to /mnt/local/downloads I get around 70 MB/s. So when you write there's less chance of something going wrong, I want to make sure I actually get more benefits from using the mergerfs folder, since my download speed is severely limited with that method. Are others experiencing the same, and are there any major disadvantages to using local instead?

 

2. How often do you scan your library in Plex, and will "Run a partial scan when changes are detected" quickly result in an API ban?

 
 

#1 I have no idea what's going on there, as I see a negligible performance difference, and I think @watchmeexplode5 has said the same

#2 I got my first API ban in over a year a few weeks ago, but I was doing some big scans in Plex as well as in Radarr, Sonarr etc., all at the same time

 

11 hours ago, Hypner said:

I then set up a second Sonarr and Radarr from different repos (is this the easiest way to do this?). When I set everything up like I did with my original Sonarr and Radarr, the only things I changed were the port number in the container and the folder path within the GUI of the container. My problem is that when a file is completed and moved, it goes into the basic movies or tv folder I created previously. Did I miss a step - should I have restarted my mount script?

 

Sounds like something is wrong with your Radarr mappings, not rclone - rclone just uploads anything it sees in the local share; Radarr controls where files are moved.
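As a generic illustration of what to check (container paths here are hypothetical, not taken from Hypner's setup): both Radarr instances can share the same host mapping, but each needs its own root folder inside the GUI, e.g.:

# hypothetical docker path mapping, identical for both radarr containers
#   host: /mnt/user/mount_mergerfs/gdrive_media_vfs  ->  container: /media
# then, inside each GUI:
#   original radarr root folder: /media/movies
#   4K radarr root folder:       /media/4kmovies
# if the 4K instance's root folder still points at /media/movies, completed files
# will land in the old movies folder regardless of what rclone does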

Link to comment
Sounds like something is wrong with your Radarr mappings, not rclone - rclone just uploads anything it sees in the local share; Radarr controls where files are moved.

Agreed, thanks.

Hey DZMM, I had a follow-up, and I'm sorry if this has been covered and answered and I didn't see the answer in previous posts, but is there a way to upload more than the 750GB cap? Many thanks.


Sent from my iPhone using Tapatalk
Link to comment
1 hour ago, Hypner said:


Agreed, thanks.

Hey DZMM, I had a follow-up, and I'm sorry if this has been covered and answered and I didn't see the answer in previous posts, but is there a way to upload more than the 750GB cap? Many thanks.


Sent from my iPhone using Tapatalk

Set up service accounts - covered on GitHub.
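From memory, the relevant upload-script settings look something like this (variable names are from an older version of the repo and may have changed - check the GitHub readme):

UseServiceAccountUpload="Y"                                                 # rotate service accounts to beat the 750GB/day cap
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts"   # where the sa_*.json files live
ServiceAccountFile="sa_gdrive_upload"                                       # base name - the script appends a counter
CountServiceAccounts="10"                                                   # number of accounts to cycle through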

  • Thanks 1
Link to comment

Hey guys,

 

Getting:

Script Starting Jun 06, 2020 14:03.56

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

06.06.2020 14:03:56 INFO: *** Rclone move selected. Files will be moved from /mnt/user/downloads/local_storage/gcrypt/cloud/requests for gcrypt ***
06.06.2020 14:03:56 INFO: *** Starting rclone_upload script for gcrypt ***
06.06.2020 14:03:56 INFO: Exiting as script already running.
Script Finished Jun 06, 2020 14:03.56

despite running the script after a complete reboot. Not sure if there's a file I need to manually delete somewhere, from which the script is wrongly reading that it's already running? The rclone mount is working fine.

 

Link to comment
1 hour ago, oldsweatyman said:

despite running the script after a complete reboot. Not sure if there's a file I need to manually delete somewhere, from which the script is wrongly reading that it's already running? The rclone mount is working fine.

Make sure you are running the latest version of the unmount script at array start, or run it manually now to fix your problem. There was an error in an earlier version.
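For background: the upload script decides it's "already running" via a checker file that the (fixed) unmount script clears at array start. If you need to clear it by hand, it's something like this (path assumed from the script defaults - verify against your copy of the script):

rm -f /mnt/user/appdata/other/rclone/remotes/gcrypt/upload_running   # assumed location of the stale checker file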

Link to comment

I've been successfully running the original version (unionfs) for a while and finally decided to take the plunge into team drives, service accounts, and mergerfs.

 

While trying to upgrade, I ran the following command as listed on the AutoRclone git page:

sudo git clone https://github.com/xyou365/AutoRclone && cd AutoRclone && sudo pip3 install -r requirements.txt

This command resulted in an error:

sudo: pip3: command not found

The rest of the command worked fine. Any idea what's going on here?
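unRAID doesn't ship Python or pip in the stock shell, so git working but pip3 not being found is expected. A couple of hedged options, assuming python3 is (or gets) installed via the NerdPack/NerdTools plugin:

# option 1: install python3 + pip from the NerdPack/NerdTools plugin, then:
python3 -m pip install -r requirements.txt

# option 2: python3 present but pip missing - bootstrap pip first:
python3 -m ensurepip --upgrade
python3 -m pip install -r requirements.txt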

Link to comment
