Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


@Tabris just did a bit of quick research as I'm stuck somewhere boring, and it looks like you need to use the epmfs option:

 

epmfs (existing path, most free space)
Of all the drives on which the relative path exists, choose the drive with the most free space.

 

This should ensure that mergerfs writes /media_array files to the actual array; then just exclude /media_array from the upload script.
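
Something like this, based on the format of the mount commands elsewhere in this thread (just a sketch - swap in your own paths and options, and note epmfs only picks a branch where the relative path already exists):

mergerfs /mnt/user/media:/mnt/user/media_array:/mnt/disks/media_remote:/mnt/disks/media_team /mnt/disks/media -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=epmfs,cache.files=partial,dropcacheonclose=true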

 

 

Edited by DZMM

@DZMM Thanks for looking into it. Unfortunately this isn't what I was looking for. I'm fine with it writing to the cache at all times; that's the intent. I only want it to read from the first available location in the chain. But thanks to your hint about the policies, I actually found the solution.

 

It's the "category.search=ff" setting. By adding this I can ensure that the merged directory will follow the order of the sources in the mergerfs command. I just tested it and it works exactly as I wanted it to.

 

If a file with the same name exists in all four merged sources, it will first display the one on the cache (/mnt/user/media), then the array (/mnt/user/media_array), then the regular Gdrive (/mnt/disks/media_remote), and finally the one on the Team Share (/mnt/disks/media_team). As soon as I delete the file on the first path in the chain, it will show the next available one.

 

My mergerfs command now looks like this:

mergerfs /mnt/user/media_cache:/mnt/user/media_array:/mnt/disks/media_remote:/mnt/disks/media_team /mnt/disks/media -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,category.search=ff,cache.files=partial,dropcacheonclose=true

I renamed media to media_cache to make it easier to see which source it is. The only added entry is "category.search=ff" which does exactly what I'm looking for.

Edited by Tabris

Mergerfs reads left to right. So in your example, when it looks for a file it will load it from the cache first, media_array second, and so on.

You can use RO, RW and NC (no create) tags to determine where your files get written in the merge.
 

mergerfs /mnt/user/media_cache=RW:/mnt/user/media_array=NC:/mnt/disks/media_remote=NC:/mnt/disks/media_team=NC /mnt/disks/media
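
With the usual options from the other commands in this thread, the full command would look roughly like this (a sketch - adjust the paths and options to your own setup):

mergerfs /mnt/user/media_cache=RW:/mnt/user/media_array=NC:/mnt/disks/media_remote=NC:/mnt/disks/media_team=NC /mnt/disks/media -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,category.search=ff,cache.files=partial,dropcacheonclose=true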

 


I'm a bit confused about how to map Sonarr et al with the updated scripts.

This is what I have for Sonarr, but the uploader doesn't move the files at all (it tries to delete them instead):

/config <-> /mnt/user/appdata/sonarr
/dev/rtc <- /dev/rtc
/tv <-> /mnt/cache/local/google_vfs/tv
/downloads <-> /mnt/cache/local/google_vfs/downloads/

I have the local mergerfs folder on my cache drive so I can saturate my line, as it's an SSD and capable of handling my full gigabit.

Am I doing something wrong here? It seems like rclone is excluding the tv folder in /mnt/cache/local.

 

5 minutes ago, Roken said:

I'm a bit confused about how to map Sonarr et al with the updated scripts. [...]

 

You mount your dockers to /mnt/user/mount_mergerfs/google_vfs and then the proper subfolder (tv/movies/downloads/etc.). If you just point them at your cache they will only see the locally stored files.
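
For Sonarr, for example, the mappings would then look roughly like this (a sketch using the google_vfs folder and container paths from your post - adjust to your own setup):

/tv <-> /mnt/user/mount_mergerfs/google_vfs/tv
/downloads <-> /mnt/user/mount_mergerfs/google_vfs/downloads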

32 minutes ago, Kaizac said:

You mount your dockers to /mnt/user/mount_mergerfs/google_vfs and then the proper subfolder (tv/movies/downloads/etc.). [...]

Ok so I don't need to download/move files to the local folder then?

Just wondering how it knows what's new and what to upload?

Also the upload script points to /mnt/user/local/google_vfs, but that directory is always empty.

 

For example, so I can better understand this:

Nzbget:

/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads

Sonarr: 

/tv <-> /mnt/user/mount_unionfs/google_vfs/tv
/downloads <-> /mnt/user/downloads

 

Edited by Roken

Everything seems to work fine for me ATM, but I am having the same issue as Kaizac.

 

When Sonarr grabs an upgraded show and a previous version exists (which it does, since it's an upgrade), Sonarr won't move it; I get "failed to import episode". If I manually delete the file from gdrive, then Sonarr will process it fine.

 

 

Edited by Viperkc

2 PSAs:

 

1. If you want to add more local folders to your union/merge folder as RO, you can use the following merge command and Sonarr will still work - no more access denied errors. Use either mount_unionfs or mount_mergerfs depending on what you use.

mergerfs /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

2. If the mount script isn't working at array start because the docker daemon is still starting, just put your mount script on a custom schedule and run it every minute (* * * * *). It will then run after the array starts and will work.
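
If running it that often worries you, a small guard at the top of the mount script makes repeat runs a no-op once the mount is up (just a sketch - the path is an example, and the scripts in this guide may already do an equivalent check):

# exit early if the rclone remote is already mounted
if mountpoint -q /mnt/user/mount_rclone/google_vfs; then
    echo "rclone mount already active, nothing to do."
    exit 0
fi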

 

@nuhll both these fixes should be interesting for you.


I got the scripts set up and working. Now I can't decide how I actually want to use it with my setup. I see that the upload script omits "downloads" and pushes the hard-linked movies/tv/etc. to the Google Drive. However, I basically permaseed everything and hard link. If my file is already using up space on my server by seeding in Downloads, I'm not using any additional space in my array for my media library. I also don't have symmetric gigabit with Xfinity, so my 40Mbps upload is rather slow too.

 

Anyone have a use-case like mine? I'm thinking I could replace my CrashPlan backups with this by setting up a backup folder in the mount_mergerfs folder.  Not really sure what to do, but I do have access to an unlimited google account.

8 hours ago, bryansj said:

I got the scripts set up and working. Now I can't decide how I actually want to use it with my setup. [...]

I also only have 40mbps upload and it works great with my setup. Locally, content is instant and there is no difference from when content was stored physically locally. For remote users, as long as they also have a decent connection, everything direct plays just fine. After some tweaking and testing I've ended up limiting remote connections to 8mbps 1080p in Plex, which does transcode most content, but that's not a worry for me with GPU hardware transcoding.

 

The initial upload of my content took a longgggg time (literally weeks), but it has been worth it a hundred times over already, given how much extra content, and at higher qualities, I've been able to grab for my own personal use. 100mbps down allows me to stream even high-bitrate 1080p content with no issues. The only thing to consider is that you cannot effectively stream with that download while torrenting content at the same time unless you limit your download speed; Plex will just stutter a lot. This however isn't a big problem for me, as most of my downloads occur during the working day anyway due to my timezone.


I think I just don't really run a setup with a problem that this solution solves. First of all, I'm up to 84TB of local storage. Second, I like my 4K HDR remuxes to direct play on my Shield through my Atmos/DTS-X AVR. Third, I hardlink my downloads and want to seed long term. So if Downloads gets omitted from the script, I'll still have to maintain a local copy, and then I'm just uploading it for the hell of it and forcing myself to download everything from my library even though the source copy is still local. A solution could be to not omit Downloads and see if seeding works from the cloud drive share, and to stop buying EasyStores on sale.

 

I already had the google account with unlimited and had tried it as a CrashPlan replacement back when they stopped doing their peer to peer backup.  It turned out that Duplicati sucked so I just paid for CrashPlan.  I decided to dust off the account after coming across the rclone plugin and these scripts.  I think with these scripts I could revisit it for backup.  I also may consider pointing my NextCloud to it and having it be a Google Drive hybrid of sorts.  So if anyone has any other ideas on slick ways to use this then let me know.


@bryansj you can remove the /downloads exclusion if you want. But I've read (not tried myself) that seeding from gdrive is a bad idea, as it can lead to an API ban because of all the calls. Similarly, you can add an exclusion for your 4K content folder to keep it local, and then use gdrive for non-4K, non-seeding content on top of your local 84TB, i.e. have access to xxxTBs of extra storage. This is what I did initially, until I went all in, uploaded everything and sold my HDDs.
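
For example, an extra exclusion in the upload command could look something like this (a sketch - the remote name, folder names and flags are placeholders for whatever your upload script already uses):

rclone move /mnt/user/local/google_vfs gdrive_media_vfs: --exclude "downloads/**" --exclude "4K/**" --min-age 15m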

 

I would definitely use it for backups.


I remember from my attempt a couple years ago that gdrive and downloads didn't get along, but I couldn't remember where the problem was between them.  The API ban would cause plenty of headaches.

 

I also remember there was a catch-22 back when Plex would work straight from a gdrive before they canned that service.  You could point Plex to gdrive and the users would be able to stream from there and not use your bandwidth.  However, you couldn't encode your media and you risked Google deleting your content.  If you encode your media it has to pass through your pipe to decode so you are back to using Plex "locally".

46 minutes ago, bryansj said:

I remember from my attempt a couple years ago that gdrive and downloads didn't get along, but I couldn't remember where the problem was between them. [...]

Why are you on torrents? Move to usenet and get rid of that seeding bullshit. Also, you can just direct play 4K from your gdrive; I do with files up to 80 GB and it's fine. You might consider a seedbox though: you can use torrents and then move to gdrive at gigabit speed.


I started as far back as MP3s in the 1990s with Usenet and moved to private trackers a few years ago.  You might have a different opinion, but I'm not going back.  I'm not talking about crappy public trackers here.  I've done seed boxes, but they don't really meet my use case anymore.

1 hour ago, bryansj said:

I started as far back as MP3s in the 1990s with Usenet and moved to private trackers a few years ago. [...]

Well, to each his own. For mainstream media, usenet is vastly superior if set up right. If you have access to private trackers and also need non-mainstream media, then torrents can bring more to the table.

Either way, I think with your setup/wishes you can use rclone for your backups and replace CrashPlan with it. But you don't need all this elaborate configuration for it. Just create a Gdrive/Team Drive and DO NOT mount it. Just upload to it, and let removed/older data be written to a separate folder within Gdrive. If you get infected, the malware can't directly access your files through a mount. And in case encrypted/infected files get uploaded, you will have your old media to roll back to.
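
A minimal sketch of what that upload could look like (remote and folder names are placeholders; --backup-dir is the rclone flag that moves replaced or deleted files into a separate folder instead of discarding them):

rclone sync /mnt/user/backups gdrive_backup:backup --backup-dir gdrive_backup:backup_old/$(date +%Y-%m-%d)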

 

Just remember that when you want to access your backups, you have to mount the rclone remote/gdrive first to see the files. Or, if you don't use encryption, you can just see them through the browser.

Edited by Kaizac

Don't seed from Google Drive - you'll most likely receive an API ban. Also, I've seen most people recommend that if you do have a large amount of local storage, you keep 4K content local too.
 

It sounds like, in your position with 84TB of local storage and seemingly no real need to increase that exponentially, this just isn't for you.

On 1/25/2020 at 5:16 AM, Kaizac said:

Asking it again cause I'm very curious. Can you share your merger command?

Sorry Kaizac,

 

Been offline for a couple of weeks. Mergerfs command below.

mergerfs /mnt/user/Media:/mnt/user/mount_rclone/google_vfs/Media:/mnt/user/mount_rclone/nachodrive/Media /mnt/user/mount_unionfs/google_vfs/Media -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

 

