Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


Hi, thanks a lot for your scripts! This is going to completely change the way I use my server. I'm hoping you can help me out with some questions. First, keep in mind my use case is a bit different: I don't use Sonarr, Radarr, etc. I download from direct downloads and want to manually move files to a directory from which they will be uploaded to the cloud. I will be using Plex to stream those files, though.

 

1. My local path is on a UD (Unassigned Device); should I change the mounting of the remote and merged vfs to /mnt/disks?

2. I should never use the remote mount for uploading, right? I should put the files in the merged vfs instead? Since most of the files I upload will be on the same UD as my local path, it would be faster to move them straight to the local path instead of to the merged vfs. Is it OK to do that?

I will try to explain myself better with an example.

local path --> /mnt/disks/UDrive/GoogleDriveUploads

folder to upload --> /mnt/disks/UDrive/FolderToUpload

If I move FolderToUpload to the merged folder, it ends up in GoogleDriveUploads as expected, but the transfer is not instantaneous as one might expect given that FolderToUpload and GoogleDriveUploads are on the same drive; instead it takes time, as a copy would. My question: is it OK to avoid this by moving the files straight into the GoogleDriveUploads folder?

3. What are the pros of having this line in the script?

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

Can't I just create those folders manually?

4. After running the script I have an orphan image of mergerfs in my dockers, can I delete it?

5. This is the unmount script on GitHub:

#!/bin/bash

#######################
### Cleanup Script ####
#######################
#### Version 0.9.1 ####
#######################

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_cleanup script ***"

####### Cleanup Tracking Files #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Removing Tracking Files ***"

find /mnt/user/appdata/other/rclone/remotes -name dockers_started -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running -delete
echo "$(date "+%d.%m.%Y %T") INFO: ***Finished Cleanup! ***"

exit

I think this is the wrong script.

6. What happens if I delete a file while it is being uploaded?

7. How can I interrupt an upload?

 

Thanks


Having a bit of trouble getting this set up. I uploaded all of my media to my team drive with the crypt remote.

 

My paths are currently:

 

/mnt/user/Downloads/Media/{TV or Movies} - Media folder Plex uses and where Sonarr/Radarr hardlink files to

/mnt/user/Downloads/data - newly downloaded files not yet imported by Sonarr/Radarr

 

So, to see if I understand this correctly:

 

/mnt/user/mount_rclone/gdrive_vfs -  this is where rclone will mount my team drive, which should mirror my current Media folder given everything is uploaded

/mnt/user/local/gdrive_vfs - this is where newly downloaded files not yet uploaded will go, becoming my new Downloads/data folder

/mnt/user/mount_mergerfs/gdrive_vfs - this is where I navigate to within dockers, after pointing them at /mnt/user, as this is the merged contents of my team drive and local data

 

So since I already have everything uploaded, I shouldn't have to bother moving my current Media folder, given it's already on my team drive? And my mount script settings should then be:

 

# REQUIRED SETTINGS

RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data

RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone

MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable

DockerStart="deluge plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page

LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable

MountFolders=\{"Downloads/data,Movies,TV"\} # comma separated list of folders to create within the mount

 

 

 

1 hour ago, FabrizioMaurizio said:

Hi, thanks a lot for your scripts! [...] My use case is a bit different: I don't use Sonarr, Radarr, etc. [...]

1. My local path is on a UD; should I change the mounting of the remote and merged vfs to /mnt/disks?

2. I should never use the remote mount for uploading, right? [...] Is it OK to avoid the slow cross-filesystem move by putting files straight into the GoogleDriveUploads folder?

3. What are the pros of having the MountFolders line in the script? Can't I just create those folders manually?

4. After running the script I have an orphan image of mergerfs in my dockers; can I delete it?

5. [quotes the cleanup script] I think this is the wrong script.

6. What happens if I delete a file while it is being uploaded?

7. How can I interrupt an upload?

1. It shouldn't matter, but I've had problems mounting to /mnt/disks in the past, so I recommend /mnt/user.

2. Correct: add files to the mergerfs location. The local location can be anywhere, so there are no files to move; just make your local path /mnt/disks/UDrive/GoogleDriveUploads, which will also be the upload folder.

3. I was just trying to help users create folders in the right place; feel free to make your own.

4. I think so; it sounds like an incomplete install. It will get recreated if needed.

5. Nope, that's right. The name is probably bad, as it doesn't 'unmount' anymore.

6. I think the upload will fail

7. Yes, you can just stop it; rclone will resume on the next run.
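On answer 2: the reason a move into the merged folder is slow is that it crosses filesystems, so `mv` degrades to copy+delete, while a move within the same UD is just a rename. A quick demo of the rename case (a temp dir stands in for the UD folders from the example above):

```shell
# Demo: mv within one filesystem is a rename (inode unchanged, instant).
# Across filesystems -- e.g. from a UD into a mergerfs branch on another
# device -- mv falls back to copy+delete, which is the slow case described.
D="$(mktemp -d)"                       # stands in for /mnt/disks/UDrive
mkdir -p "$D/FolderToUpload" "$D/GoogleDriveUploads"
echo data > "$D/FolderToUpload/file.bin"
BEFORE="$(ls -i "$D/FolderToUpload/file.bin" | awk '{print $1}')"
mv "$D/FolderToUpload/file.bin" "$D/GoogleDriveUploads/"
AFTER="$(ls -i "$D/GoogleDriveUploads/file.bin" | awk '{print $1}')"
echo "inode before: $BEFORE, after: $AFTER"
```

Same inode before and after means the file was renamed, not copied.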
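And on answer 3: if you'd rather make your own folders, the MountFolders line is roughly a mkdir -p over each path under your LocalFilesShare. A minimal sketch; a temp dir stands in for the real local share (e.g. /mnt/user/local/gdrive_vfs) so it runs anywhere:

```shell
# Hand-rolled equivalent of the default MountFolders list. In real use
# LOCAL would be LocalFilesShare plus the remote name, e.g.
# /mnt/user/local/gdrive_vfs; a temp dir is used here for illustration.
LOCAL="$(mktemp -d)"
for F in downloads/complete downloads/intermediate downloads/seeds movies tv; do
    mkdir -p "$LOCAL/$F"    # -p creates parents and ignores existing dirs
done
find "$LOCAL" -mindepth 1 -type d | sort
```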

23 minutes ago, remedy said:

Having a bit of trouble getting this set up. I uploaded all of my media to my team drive with the crypt remote. [...] So since I already have everything uploaded, I shouldn't have to bother moving my current Media folder, given it's already on my team drive? And my mount script settings should then be: [settings as above]

Correct; just make sure gdrive_vfs is using your existing team drive.

 

I would also make sure all your dockers have the mapping /user --> /mnt/user and then, within each docker, point them at e.g. /user/mount_mergerfs/downloads and /user/mount_mergerfs/media/tv or something similar. ALL mappings have to be to the mount_mergerfs mount if you want the full file-transfer benefits, hardlinking etc.
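The mapping advice above, sketched as a docker command. The container name and image are illustrative only, and the command is built as a string (a dry run) rather than executed:

```shell
# One host->container mapping shared by every docker, so paths look the
# same everywhere and hardlinks stay on one filesystem. Image and
# container name below are illustrative, not from the thread.
DOCKER_CMD="docker run -d --name sonarr -v /mnt/user:/user linuxserver/sonarr"
echo "$DOCKER_CMD"
# Inside the container you would then point the app at paths like
# /user/mount_mergerfs/gdrive_vfs/downloads and /user/mount_mergerfs/gdrive_vfs/tv
```

On unRAID the same `-v` mapping is entered as a path pair on the docker's settings page.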


If you need to update Plex paths, I wrote a post a few posts up on how to preserve metadata.


So I am finally sitting down to migrate to mergerfs and the new scripts, and the first thing I am wondering is how people are coping with mounting multiple team drives here. Just adding them as a manual mount? Or using variables like...
 

RcloneRemoteName01
RcloneRemoteName02.....

Or am I missing something obvious?

11 hours ago, Spladge said:

So I am finally sitting down to migrate to mergerfs and the new scripts and first thing I am wondering is how people are coping with mounting multiple teamdrives here. Just adding them as a manual mount? Or using variables like...
 


RcloneRemoteName01
RcloneRemoteName02.....

Or am I missing something obvious?

I just had a quick go at making the script support multiple remotes, but I couldn't find ways to do certain bits.

 

At the moment, I just run the script once per mount. Annoying, but once it's set up you'll forget you've done it.
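The once-per-mount approach can at least be wrapped in a loop. A hedged sketch, assuming you keep one copy of the mount script per remote; the script path and remote names below are made up, and it only prints the commands it would run:

```shell
# Dry run: build one mount-script invocation per remote rather than
# maintaining RcloneRemoteName01/02/... variables. Path and remote
# names are illustrative -- in practice each copy of the script carries
# its own settings block, so this just shows the shape of the loop.
PLAN=""
for REMOTE in tdrive1_vfs tdrive2_vfs tdrive3_vfs; do
    PLAN="${PLAN}bash /boot/config/custom/rclone_mount_${REMOTE}.sh
"
done
printf "%s" "$PLAN"
```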

48 minutes ago, Spladge said:

Ha - I did not even think of that!
I'll see if I can modify my Ubuntu version somehow. It has all the pieces but currently relies on systemd.

Do you remove the mergerfs section and run it in a single script to avoid non-empty errors?

I only create one mergerfs mount - I mount the other remotes in other scripts, and then add those remote paths to the final script as extra "local" folders to include.


Got everything working nicely, but I noticed my system log is spamming:

 

"unRAID emhttpd: error: get_filesystem_status, 6618: Operation not supported (95): getxattr: /mnt/user/gdrive"

 

every second. gdrive is where I have my rclone mount. Anyone else getting this? Everything works fine, but that's a bit odd.


In your initial post, you mention:

Quote

Why do this? Well, if set-up correctly Plex can play cloud files regardless of size e.g. I play 4K media with no issues, with start times of under 5 seconds i.e. comparable to spinning up a local disk.

Dumb question: is that because of your download speeds, or have you modified some settings in Plex? I ask because I'm having a lot of buffering on 200 Mbps down and decent (but not spectacular) hardware. I'm definitely not seeing 4K with no issues and would like to remedy that if at all possible!

 

Thanks for your work on this. 

50 minutes ago, DZMM said:

@drogg does the 4K file play OK if it's local? Assuming you've got enough bandwidth, there shouldn't be a problem if you can play a local version. I'm on 360 and I have tonnes of concurrent activity when I'm playing 4K.

Yeah, plays perfectly fine locally but streaming on Plex from the mount is giving me some issues. Could be Plex, could be my download speed (though luckily I'm switching to fiber this weekend). Just didn't know if there was anything I needed to change to avoid buffering! 

6 hours ago, drogg said:

Yeah, plays perfectly fine locally but streaming on Plex from the mount is giving me some issues. Could be Plex, could be my download speed (though luckily I'm switching to fiber this weekend). Just didn't know if there was anything I needed to change to avoid buffering! 

Try experimenting with different buffer sizes (usually higher) and the vfs chunk size in the mount script. The vfs changes will affect startup times: higher values will slow launch, but usually fix buffering problems.
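For reference, the knobs mentioned correspond to real rclone mount flags. The values below are illustrative starting points, not recommendations, and the command is printed rather than executed:

```shell
# Larger read chunks usually cure buffering at the cost of slower stream
# starts; --buffer-size is allocated per open file, so watch total RAM.
# Values here are illustrative starting points only.
TUNE="--buffer-size 256M --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 2G"
MOUNT_CMD="rclone mount --allow-other $TUNE gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs"
echo "$MOUNT_CMD"
```

`--vfs-read-chunk-size-limit` lets the chunk size grow as a stream plays, so startup stays quick while long playback reads bigger chunks.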

 

I'm surprised you're having problems with 200 Mbps, though. What average Mbps is Plex/Tautulli reporting for the file?

 

Anyone else got any ideas? Buffering problems are very rare.

 

 

21 minutes ago, remedy said:

@DZMM I found your thread where you were getting the same getxattr error. I see you said in another post that you've had issues mounting to /mnt/disks, and that having the rclone mount in /mnt/user is what's causing it.

 

Did you end up resolving it, or just ignore it?

Where are you mounting? Directly in /mnt/user, or in a share like /mnt/user/&lt;share&gt;? If it's the former, that's why you're getting errors.

7 hours ago, DZMM said:

Where are you mounting? Directly in /mnt/user, or in a share like /mnt/user/&lt;share&gt;? If it's the former, that's why you're getting errors.

Directly in /mnt/user. So I need to mount in a share inside /mnt/user? unRAID normally makes a share for new folders in /mnt/user, but I didn't notice that it hadn't for /mnt/user/gdrive. Can I just make one named the same, and it'll work after unmounting and remounting?

 

Also, does it matter what the share settings are? I should leave "Use cache drive" set to No, right, since the underlying mergerfs is handling it?

 

Does that matter for /mnt/user/local? That's where I have my newly downloaded files in a subdirectory /data, with Sonarr/Radarr hardlinking to subdirectories /Movies and /TV.

3 hours ago, remedy said:

Directly in /mnt/user. So I need to mount in a share inside /mnt/user? [...] Also, does it matter what the share settings are? [...] Does that matter for /mnt/user/local?

Yes, you need to create a share folder, e.g. mount_rclone like the default in the script, and then mount the remote there.

 

If you don't use the share for anything but mounting, the share settings don't matter, as nothing should get stored there.
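Sketched as commands, the fix looks like this. The share name matches the script's default and the remote name is assumed; the commands are printed as a dry run rather than executed:

```shell
# Mount inside a share (e.g. mount_rclone), never directly in /mnt/user,
# to avoid the emhttpd getxattr spam. Printed rather than executed;
# share and remote names are the thread's defaults, adjust to taste.
SHARE="/mnt/user/mount_rclone"
PLAN="mkdir -p $SHARE/gdrive_vfs
rclone mount --allow-other gdrive_vfs: $SHARE/gdrive_vfs"
echo "$PLAN"
```

The share itself is created once in the unRAID GUI; the mkdir only adds the per-remote subfolder inside it.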

35 minutes ago, DZMM said:

Yes, you need to create a share folder e.g. mount_rclone like the default in the script and then mount the remote there.

 

 If you don't use the share for anything but mounting, the share settings don't matter as nothing should get stored there.

Tried it, and the error is gone! Thank you. Last question: for the local share I have, if I want it to download to the cache drive (since it's an NVMe SSD, so it doesn't bottleneck as easily as the array), I should change the share settings for /mnt/user/local to cache: Yes, right? That won't mess with anything on the mergerfs side?

48 minutes ago, Thel1988 said:

I get the same issue, where I need to kill off rclone and reconnect the mount points; rerunning the script does nothing to fix that error.

 

You can just run fusermount -u /mnt/disks/Remote/xxx

Would be nice if the script could do it (and restart the Plex docker).

 

I'm not really a programmer but will give it a shot later.

10 hours ago, remedy said:

Tried it, and the error is gone! Thank you. Last question: for the local share I have, if I want it to download to the cache drive (since it's an NVMe SSD, so it doesn't bottleneck as easily as the array), I should change the share settings for /mnt/user/local to cache: Yes, right? That won't mess with anything on the mergerfs side?

Correct.

6 hours ago, bar1 said:

 

You can just run fusermount -u /mnt/disks/Remote/xxx

Would be nice if the script could do it (and restart the Plex docker).

 

I'm not really a programmer but will give it a shot later.

The old unmount script used to actually unmount, but it can't now, since I added the ability for the mount to be anywhere.
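For anyone who wants a manual unmount anyway, a minimal sketch; the paths match the thread's defaults (adjust to your mounts), and it prints the commands as a dry run rather than running them:

```shell
# Lazily unmount the mergerfs layer first, then the rclone layer.
# -z detaches even while files are open. Dry run: commands are printed,
# not executed; pipe the output to sh to actually unmount.
UNMOUNT_PLAN="$(
    for M in /mnt/user/mount_mergerfs/gdrive_vfs /mnt/user/mount_rclone/gdrive_vfs; do
        echo "fusermount -uz $M"
    done
)"
echo "$UNMOUNT_PLAN"
```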


Having an issue where, when Plex is being used pretty heavily (4-ish streams of 1080p 20 Mbps bitrate content), rclone can cause my server to run OOM, which kills the mount. I have 16 GB of RAM; buffer size is 256 MB for the mount.

 

You can see the OOM errors in my attached syslog around 8pm and then at midnight yesterday. I'm on a symmetrical gigabit line, writing to an NVMe cache drive for any new writes, and I don't see any iowait at all.

 

Below is from right now, with the mount running fine and no upload running. There still seem to be 4 instances of rclone running; not sure why there isn't only 1 (or 2 if the upload were running).

 

root@unRAID:~# ps -ef | grep rclone
root      9394  1603  0 14:28 pts/0    00:00:00 grep rclone
root     22963     1  0 03:44 ?        00:00:00 /bin/bash /usr/sbin/rclone mount --allow-other --buffer-size 256M --dir-cache-time 1000h --log-level DEBUG --log-file /mnt/user/appdata/other/rclone/mount.log --poll-interval 15s --timeout 1h --bind= gdrive_vfs: /mnt/user/gdrive/mount
root     22966 22963  0 03:44 ?        00:03:11 rcloneorig --config /boot/config/plugins/rclone-beta/.rclone.conf mount --allow-other --buffer-size 256M --dir-cache-time 1000h --log-level DEBUG --log-file /mnt/user/appdata/other/rclone/mount.log --poll-interval 15s --timeout 1h --bind= gdrive_vfs: /mnt/user/gdrive/mount
root     24014     1  0 00:26 ?        00:00:00 /bin/bash /usr/sbin/rclone mount --allow-other --buffer-size 256M --dir-cache-time 720h --drive-chunk-size 256M --log-level INFO --vfs-cache-mode writes --bind= gdrive_vfs: /mnt/user/gdrive/mount
root     24017 24014  0 00:26 ?        00:03:14 rcloneorig --config /boot/config/plugins/rclone-beta/.rclone.conf mount --allow-other --buffer-size 256M --dir-cache-time 720h --drive-chunk-size 256M --log-level INFO --vfs-cache-mode writes --bind= gdrive_vfs: /mnt/user/gdrive/mount

 

I killed the mounts at 3:40am today with fusermount -uz /mnt/user/gdrive/mount, so that some slightly changed settings could be applied and maybe fix the issue. The mount originally restarted after going OOM around midnight, with the old settings, so the time of 00:26 there also makes sense. When I ran "ps -ef | grep rclone" after running the fusermount command at 3:40am to apply the new settings when remounting, there were no rclone processes shown. So it's weird that, running that command now, it shows the mount from around midnight.

 

I think the /bin/bash rclone mounts are from the User Scripts plugin, but I'm not sure where the rcloneorig one is from, as I don't have anything that runs that command in any script. Even still, if it were the uploads, I think it'd show the upload command line, which of course isn't "mount".

 

From reading a bit on the rclone forums, it seems like one of the scripts is calling the mount more often than it should, so I think it's duplicating and using way more resources than it should. Not sure how to track it down.

 

Edit: after reading more about it, I think it's just the user script calling the rclone plugin, since the parent process of rcloneorig is 22963. The 24014/24017 process dropped off on its own. Really unsure why it's running OOM and being killed if it's not being duplicated.

 

Any ideas?
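One way to keep an eye on duplicate mounts and their memory use, along the lines of the `ps -ef | grep rclone` check above (standard procps `ps`; RSS is in KB):

```shell
# List rclone-related processes with parent pid and resident memory so
# wrapper (/usr/sbin/rclone) and worker (rcloneorig) pairs line up;
# the "[r]clone" pattern stops grep from matching itself.
ps -eo pid,ppid,rss,args | grep "[r]clone" || echo "no rclone processes running"
N="$(ps -eo args= | grep -c "[r]clone" || true)"
echo "rclone process count: $N"
```

Two processes per mount (the bash wrapper plus rcloneorig) is normal for the plugin; more pairs than mounts would point at a script starting the mount repeatedly.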

 

syslog.txt


# REQUIRED SETTINGS
RcloneRemoteName="MeJoMediaDrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/MeJoMedia" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/MeJoMediaMount" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="JJ MeJoServerTv MeJoServerMovies MeJoServer4K" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/MeJoMediaServer" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"MeJoMedia/MeJoMedia\ Tv\ Shows/JJ\ Tv\ Shows/Action\ Tv,MeJoMedia/MeJoMedia\ Tv\ Shows/JJ\ Tv\ Shows/Animated\ Tv,MeJoMedia/MeJoMedia\ Tv\ Shows/JJ\ Tv\ Shows/Classic\ Action\ Tv,MeJoMedia/MeJoMedia\ Tv\ Shows/JJ\ Tv\ Shows/Classic\ Comedy\ Tv,MeJoMedia/MeJoMedia\ Tv\ Shows/JJ\ Tv\ Shows/Classic\ Drama\ Tv,MeJoMedia/MeJoMedia\ Tv\ Shows/JJ\ Tv\ Shows/Classic\ Sci-Fi\ Tv"\} # comma separated list of folders to create within the mount

 

 

Thanks again for the scripts. Here are my settings. My question is: can RcloneMountShare, MergerfsMountShare and LocalFilesShare be the same location? My actual files are at /mnt/user/MeJoMediaServer/. I would like some files to just be on my unRAID server and some on both gdrive and unRAID. The upload script is set to move files from my MountFolders, into which I currently copy or move files from my shares, but I'd like to avoid that copy if it's not necessary. Second thing: I haven't moved all my files over from my Windows server. I was hoping to use my service accounts to move files straight to gdrive instead of moving them to unRAID and then to gdrive. Any advice on using the accounts with a simple script on Windows? I don't need it to auto-rotate accounts like it does in this script.
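On the Windows side, a single-service-account upload can be as simple as one rclone move. A hedged sketch: the remote name, source path and the .json location are all illustrative, there is no account rotation, and the command is built as a dry run rather than executed (`--drive-service-account-file` is a real rclone flag for the Google Drive backend):

```shell
# One-shot upload from Windows using a fixed service account -- no
# rotation logic. Remote name, source path and SA file path are
# illustrative; run the echoed line from cmd or PowerShell.
SA_FILE="C:/rclone/sa/sa1.json"
UPLOAD_CMD="rclone move D:/Media gdrive_vfs: --drive-service-account-file $SA_FILE --transfers 4 --checkers 8 --min-age 10m"
echo "$UPLOAD_CMD"
```

`--min-age` just keeps rclone from grabbing files that are still being written; drop it if the source is static.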

