Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


Huge thank you to @DZMM for all the work you have done on this. Been running without issues for 6+ months with your scripts!

 

I just migrated to mergerfs and with your instructions it's working smoothly. 

 

I did have one issue when trying to migrate. I followed your migration guide, using your new scripts from GitHub, and replaced every occurrence of mount_mergerfs with mount_unionfs.

 

The mount script initially failed at the mv mergerfs /bin step because the file/directory didn't exist.

 

I didn't look into the error much, but it appeared the script wasn't running the Docker container for some reason.

I resolved the issue by manually running the commands below via SSH:

# Delete old binary as a precaution
rm /bin/mergerfs

# Build mergerfs in the author's static-build container
docker run -v /mnt/user/appdata/other/mergerfs:/build --rm -it trapexit/mergerfs-static-build

# Move the binary to /bin so it's on the PATH
mv /mnt/user/appdata/other/mergerfs/mergerfs /bin

After that, all the scripts work great. Again, thanks so much @DZMM for making and maintaining these scripts!

 

15 minutes ago, watchmeexplode5 said:


Thanks for confirming the migration works.  My scripts are quite different in places so I wasn't sure.

 

I've added mkdir -p /mnt/user/appdata/other/mergerfs/mergerfs to my post and GitHub to see if that helps.

 

Glad you got there.


Hey folks..

 

I have been running UnRaid for a number of years. A while back I heard about plexguide, and I have been using it with cloud servers to populate a G Suite account. I plan on still using plexguide as a feeder for the G Suite drive, but I would like to move the Plex server back to my UnRaid server, as I now have decent bandwidth at home and can support my family's requirements.

 

Where I am running into problems is that I did not use encryption when I set up my drives, and I store everything in a tdrive share.

 

 

Any directions or posts on installing without encryption? I have rclone up and configured. I did not set up the two VFS portions of rclone config. Using the mount script from DZMM, it does run with some errors, but the mounts are there and I am able to point Plex at the shares.

 

I am now trying to work through the upload script and cleanup script, but I'm running into more errors.

 

Any direction or advice is appreciated!

 

Thanks! 

 

 


@DZMM

 

I think I figured out my issues. 

These are the updates I made to your script. Now I'm able to mount and unmount with no issues. 

 

When I looked into the errors I was receiving, they came down to "the input device is not a TTY" causing the docker run command to fail (and thus mergerfs never being pulled). I simply removed the flags. I'm not extremely versed in Docker, so hopefully there's no downside to removing the interactive and TTY flags.

 

I also edited your new mkdir. The file was dropping in the wrong place, which later interfered with moving it to /bin.

### Avoids placing *mergerfs file in /mergerfs/mergerfs/*mergerfs file structure ###
Line 30: mkdir -p /mnt/user/appdata/other/mergerfs/mergerfs ---> mkdir -p /mnt/user/appdata/other/mergerfs/


### Minor fix ###
Line 50: cho "$(date "+%d.%m.%Y %T") INFO: continuing..." ---> echo "$(date "+%d.%m.%Y %T") INFO: continuing..."


### Remove Interactive and TTY flag from Docker Run ###
Line 81: docker run -v /mnt/user/appdata/other/mergerfs:/build --rm -it trapexit/mergerfs-static-build  
	---> docker run -v /mnt/user/appdata/other/mergerfs:/build --rm  trapexit/mergerfs-static-build
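For anyone applying these by hand, the fixed section can be sketched roughly as below. This is a sketch, not the exact GitHub script: BUILD_DIR defaults to a throwaway temp directory so it is safe to dry-run (on unRAID it would be /mnt/user/appdata/other/mergerfs), and the container build is gated behind RUN_BUILD=1 so nothing is pulled unless you opt in.

```shell
# Sketch of the corrected mergerfs bootstrap (not the exact script from GitHub).
BUILD_DIR="${BUILD_DIR:-$(mktemp -d)/mergerfs}"
RUN_BUILD="${RUN_BUILD:-0}"   # set to 1 to actually run the container build

# Fix 1: create the build dir itself, not a nested .../mergerfs/mergerfs
mkdir -p "$BUILD_DIR"

echo "$(date "+%d.%m.%Y %T") INFO: continuing..."   # Fix 2: echo, not cho

# Fix 3: no -i/-t flags, since a scheduled script has no TTY attached
if [ "$RUN_BUILD" = "1" ] && command -v docker >/dev/null 2>&1; then
    docker run -v "$BUILD_DIR":/build --rm trapexit/mergerfs-static-build
fi

# Only install the binary if the build actually produced it
if [ -f "$BUILD_DIR/mergerfs" ]; then
    mv "$BUILD_DIR/mergerfs" /bin/
fi
```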

 

2 hours ago, watchmeexplode5 said:


Thanks - updated. As I stated earlier, my scripts are a bit different, so it's hard for me to test the GitHub version.

 

You're probably right about the Docker changes. I set up my mergerfs in PuTTY and I haven't rebooted yet, and I cobbled the command together from the one I use to edit dockers, so the interactive bit definitely isn't needed in a script and is probably a bad idea.

3 hours ago, bedpan said:


Easy. Either use rclone config to rename your remote to gdrive_media_vfs:, or:

 

change line 45 of the mount script:

rclone mount --allow-other --buffer-size 256M --dir-cache-time 720h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

to:

rclone mount --allow-other --buffer-size 256M --dir-cache-time 720h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes NAME_OF_UNENCRYPTED_REMOTE: /mnt/user/mount_rclone/google_vfs &

and 43 of upload script:

rclone move /mnt/user/local/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude downloads/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~*  --delete-empty-src-dirs --fast-list --bwlimit 9500k --tpslimit 3 --min-age 30m

to:

rclone move /mnt/user/local/google_vfs/ YOUR_REMOTE: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude downloads/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~*  --delete-empty-src-dirs --fast-list --bwlimit 9500k --tpslimit 3 --min-age 30m
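If you're unsure what to put in place of NAME_OF_UNENCRYPTED_REMOTE, a quick way to check is to list your configured remotes first. A small sketch (gdrive_media_vfs: is the name the scripts assume; yours may be something like tdrive:):

```shell
# List configured remotes to find the exact name to substitute into the mount
# and upload scripts (the trailing colon matters when you paste it in).
ASSUMED_REMOTE="gdrive_media_vfs:"   # name the scripts assume by default
if command -v rclone >/dev/null 2>&1; then
    rclone listremotes               # prints one remote per line, e.g. "tdrive:"
else
    echo "rclone not installed; scripts default to $ASSUMED_REMOTE"
fi
```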

 

2 hours ago, DZMM said:


change line 45 of the mount script:

 

 

Much thanks!

 

Took some tweaking and playing, but things seem to be working. It will be a work in progress, I am sure!

 

Another question... Tweaking the rclone parameters....

 

As I am doing my downloading on another server, how do you suggest changing the rclone command so it sees new media sooner as it is downloaded? Is there a way to kick-start the process and manually refresh the content?

 

Specifically --dir-cache-time 720h

How low can I go with this safely? Or is there a better option to get rclone to see new files on the remote side?

 

Thanks!!

14 minutes ago, bedpan said:

Specifically --dir-cache-time 720h

How low can I go with this safely? Or is there a better option to get rclone to see new files on the remote side?

@bedpan Somebody correct me if I'm wrong... but --dir-cache-time only controls how long rclone keeps directory entries cached. It shouldn't stop rclone from picking up newly added content; that is controlled by --poll-interval (the default is 1m).

 

You can see a reference for rclone mount options here: https://rclone.org/commands/rclone_mount/
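To illustrate the distinction, here is a sketch of how the two flags sit together in a mount line (flag values from this thread; --poll-interval 1m is rclone's documented default):

```shell
# --dir-cache-time bounds how long cached listings live; --poll-interval is what
# actually notices new files on the remote. Assembled as a string here so the
# sketch is safe to run without rclone installed.
MOUNT_FLAGS="--dir-cache-time 720h --poll-interval 1m"
echo "rclone mount $MOUNT_FLAGS gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &"
```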

 


I remember why I loved the UnRaid forums so much now. You guys rock...

 

39 minutes ago, watchmeexplode5 said:


Thanks for the info. I will do some more reading on this..

 

3 minutes ago, Spladge said:

To monitor the changes you could try a PAS (Plex Autoscan) docker; I have a combined Plex/PAS docker but have not set it up. Alternatively, @Stupifier uses a slightly modified version of another script (plex_rcs) that monitors the log file and initiates a Plex scan of that directory via the API.

https://github.com/zhdenny/plex_rcs
 

Thanks Spladge. This looks like exactly what I would like to do. More learning though!

 

Cheers folks.. As of right now, Plex is scanning in the libraries. Once it is done I will test some reboots to make sure everything is running correctly, then move on to getting Plex to see new stuff at a decent pace.

 

Much thanks!

 

Mike

5 hours ago, Spladge said:


Also try https://github.com/l3uddz/plex_autoscan


So I've read on the internetz that -it means interactive mode, which isn't needed:

 

"The -it instructs Docker to allocate a pseudo-TTY connected to the container’s stdin; creating an interactive bash shell in the container."

 

I've looked at GitHub and you didn't add a mkdir command there. Is it needed or not? :D

 

What I really wonder, though: I also got the TTY error but thought that was normal... why did everything seem to work when that command didn't? O.O


@nuhll

When I ran the script with the command, the script would finish, but due to the Docker error it would never pull and build mergerfs. At the end I would have my rclone share mounted, but the local and rclone mounts were not merged into my unionfs/mergerfs folder (because the mergerfs binary was never pulled in the first place).

 

I'm not sure why yours worked with -it and mine didn't. Mine's been working with the arguments removed.

I'm also unsure if the mkdir is needed but it doesn't hurt to have it there 😛
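If you hit the same symptom, a quick check along these lines (a sketch; it just looks for the binary on the PATH) tells you whether the docker step ever produced mergerfs:

```shell
# Report whether the mergerfs binary made it onto the PATH; if not, the
# docker build/pull step in the mount script most likely failed silently.
if command -v mergerfs >/dev/null 2>&1; then
    STATUS="mergerfs found at $(command -v mergerfs)"
else
    STATUS="mergerfs missing: the docker build step probably failed"
fi
echo "$STATUS"
```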

2 hours ago, watchmeexplode5 said:


Run the docker run command through the terminal and see what happens.

4 minutes ago, Kaizac said:

@DZMM Currently reading through your scripts. What I don't understand is why you have to create a Docker container for mergerfs. Is this the way it's implemented in rclone?

unRAID is based on Slackware, and mergerfs doesn't support Slackware. Luckily the author just built a Docker version we can use.

23 minutes ago, DZMM said:


Do we need to install something from Apps to do that, or do you just use the direct repo link? So in the future the repo could get moved/deleted and it would stop working?

20 minutes ago, Kaizac said:


Everything is self-contained in the script: no need to touch CA, nerd tools, etc., except to install the rclone plugin.

 

Re the mergerfs docker: I'm not an expert, but it's built directly from the mergerfs author's repo, so the script will only need changing if he updates his build options, which I think is unlikely:

 

https://github.com/trapexit/mergerfs#build-options

1 hour ago, DZMM said:


Ok understood.

 

In your mount command you have --dir-cache-time 720h. This used to be 72h. Why the change?

And you also started using --fast-list in the mount command. I thought this didn't work in the mount command, only when doing transfers, for example. Has that changed?

1 hour ago, Kaizac said:


--dir-cache-time can be as large as you want; uploads flush the cache. No real reason, I just decided to put in a larger number for the (rare) days when no new content is added.

--fast-list: yep, that shouldn't be there. I forgot to delete it when I removed the rc command.

 

On 11/6/2018 at 12:44 PM, DZMM said:

 

--fast-list: Improves speed but only in tandem with rclone rc --timeout=1h vfs/refresh recursive=true

 

 

1 minute ago, DZMM said:


You have a double --fast-list in your upload code. Probably not an issue, but might want to remove it.

 

So far I've just migrated everything over, and it seems to be working fine! I don't understand the hardlinking much yet; I don't use torrents much, so I don't have the seeding issue. I will have to change parts of my folder structure, though, to get in line with the new standard.


@DZMM sorry but in your first post you wrote this:

 

Quote

Either finish any existing uploads from rclone_upload before updating, or move pending downloads from /mnt/user/rclone_upload to the new /mnt/user/local folder, or create a version of the new upload script to upload from /mnt/user/local 

I've tried to understand what you're saying here, but I really can't. What exactly is the difference between user/rclone_upload and user/local? They are both local shares that you include in your union/merge. Maybe I'm missing something in your changes, since my configuration was a bit different because of more local folders.
