Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


@Bjur

Ember takes me back haha. I haven't used that for like 8 years but what a godsend that program was back in the mysql+XBMC days!

 

Preview Images:

Personally, if preview images aren't a 100% must for you, don't enable them. They are a huge pain when gdrive is involved, take up a huge amount of local space (RIP 2TB SSD), and have caused so many users headaches due to API bans when enabled. They can be done with gdrive but it's kind of like playing with fire. Maybe something to explore once you have everything stable.

 

File Structure:
With regards to your local location: you are correct. In your case that's /mnt/user/local/googleFI_crypt (your remote name being googleFI_crypt).

All your locations look good for SAB/Filebot. You should point Sonarr/Radarr/etc. to /mergerfs/remote/[movie/tv].
This will allow those programs to see the combined local/gdrive folder so they can monitor both as if it's one drive. 

/mnt/user/local/
└── googleFI_crypt/ (YOUR REMOTE NAME)
    ├── Downloads (Stuff here excluded from upload)
    ├── Movies (Moved when running upload)
    ├── TV (Moved when running upload)
    └── Whatever_you_want (Moved when running upload)
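To make that concrete, here's a rough sketch (not the exact command the upload script builds, just an illustration) of how the Downloads exclusion maps to an rclone flag:

rclone move /mnt/user/local/googleFI_crypt googleFI_crypt: --exclude Downloads/**

i.e. everything under your local folder gets moved to the remote except anything in Downloads, which stays put for your download clients.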

 

Metadata:

Regarding the XBMCnfoMoviesImporter. That is a tricky situation. I've never used the plugin but I looked at it and it just adds a custom agent to import the nfo files. You might want to try plex with some foreign titles. I have no experience with this but you might be surprised at how well it can scrape (fix anything with the fix match tool). 

 

With regards to the .nfo files. You could exclude them and that in theory would work fine. But your /local/ folder is going to get very busy quickly (lots of folders and subfolders just for a single nfo file). Might make it difficult to manage just from a file view standpoint. I personally like to open the /local folder and only see my /downloads folder (my only excluded folder) and nothing else. Someone else can chime in on this practice. If your nfo files are perfectly curated, simply excluding them might be the best way to do it. That is if you really want the nfo files in the first place. 
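If you did go the exclusion route, it should just be another filter line in the upload script, following the same pattern as the downloads exclusion (untested sketch):

Command2="--exclude *.nfo"

But as above, I'd question whether you want the nfo files at all.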

 

I use no renaming or metadata programs for management. I let sonarr/radarr grab the files, rename them, and move them to their location. Then I let plex/emby scrape them and it works 99% of the time. Anime needs a custom agent but now it scrapes just like the rest with ease. Your mileage may vary with foreign media (no experience there). I'd do a test on 20 or so titles you think plex would struggle with and see what it spits out without any nfo file. 

 

Exclusion Time:
New files can be excluded for X amount of time. YOU define this in your upload script - it's the minage variable. Set it to whatever you want. Mine's at 15 minutes.
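For reference, in the upload script that's just the MinimumAge variable, which (as far as I understand the script) ends up as rclone's --min-age filter:

MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
# roughly equivalent to passing: --min-age 15m

so anything newer than 15 minutes stays in /local until the next upload run.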

 

Regarding MergerFS mounts:

I think in your case it's best to have just one: one crypt with subfolders /movie and /tv. My crypt has tons of subfolders (movie, movie4k, documentary, tv, tv4k, tvcomedy). If you run into the 400K object limit you can simply move the teamdrive folders ---> gdrive and make a new mergerfs mount for them there (i.e. combine /local/movie, /tdrive/movie, /gdrive/movie --> /mergerfs/movie). The limit is fairly hard to reach, honestly, if you don't have music uploaded (lots of subfolders and files). It's something you can deal with once you get close to the limit.  
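For what it's worth, a bare-bones sketch of what that combined mergerfs mount could look like if you built it by hand (the paths and remote names are just examples - the mount script handles this for you with its own option set):

mergerfs /mnt/user/local/movie:/mnt/user/mount_rclone/tdrive_vfs/movie:/mnt/user/mount_rclone/gdrive/movie /mnt/user/mount_mergerfs/movie -o rw,use_ino,allow_other,category.create=ff,cache.files=partial,dropcacheonclose=true

With category.create=ff new files land in the first branch listed (the local one), and the tdrive/gdrive branches are read through the same merged folder.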

 

Link to comment

@DZMM, @testdasi

 

Quick question with regards to the 400K object limit. I don't have any experience with it but plan on hitting it sometime in the future.

  • Are there any advantages to moving from tdrive ---> gdrive vs. tdrive ----> tdrive2?
  • Both should still be able to do server side transfers with no quota issues, correct?
  • Ever experienced this 150k/400K+ objects slowdown on a gdrive?

 

Link to comment
18 minutes ago, watchmeexplode5 said:

Quick question with regards to the 400K object limit.

 

gdrive has no 400k limit so I'm using it for my music.

 

I've just set up 2 extra teamdrives - one for UHD content and one for my grown-up TV (not kids).  The UHD server-side switch went ok although the larger TV one is a bit worrying - I can't see the files on google, but the move has happened on the mount and plex is still playing everything (although appearing in both the source and destination)!  It's a fair few TBs so I assume google will catch up soon....
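For anyone wanting to do the same switch, a hedged sketch of the kind of server-side move I mean (the remote names and path here are just examples - test on a small folder first):

rclone move tdrive_vfs:tv_adults tdrive_t_adults: --drive-server-side-across-configs --fast-list -P

With --drive-server-side-across-configs rclone asks Google to move the objects between the drives itself rather than downloading and re-uploading, which is why it shouldn't chew through the normal daily upload quota in the same way.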

Edited by DZMM
Link to comment
8 hours ago, testdasi said:

The 400k object count limit (object = file + folder) is the hard limit per team drive. In practice, anything approaching about 150k objects will cause that particular teamdrive to be perceivably slower (been there, done that).

 

Ok, I've finished splitting my movie & tv teamdrive into 3 and I can testify that performance is better with launch times back to around 3-4 seconds, whereas before they were at times 10+ seconds. 

 

I don't know when the tipping point is for creating extra team drives - I'm sharing my teamdrive sizes below to see if we can figure it out. 

 

I've got 3 now:

 

- Main:

rclone size tdrive_vfs: --fast-list
Total objects: 66914
Total size: 258.232 TBytes (283928878628609 Bytes)

- Adult TV:

rclone size tdrive_t_adults: --fast-list
Total objects: 55550
Total size: 118.501 TBytes (130292861122569 Bytes)

- UHD (TV & Movies):

rclone size tdrive_uhd:  --fast-list
Total objects: 4706
Total size: 69.986 TBytes (76950733274565 Bytes)

and an extra rclone mount for music that isn't in a teamdrive:

rclone size gdrive: --fast-list
Total objects: 95393
Total size: 4.418 TBytes (4857512752388 Bytes)

I've only got a total of 223K objects (451TB) but I think my experience proves it's not worth going anywhere near the 400k limit if you want good performance. 

 

I might create a 4th teamdrive and move my kids movies and tv shows to a new teamdrive to see if that knocks another second or two off launch times.

Link to comment

Working well now. The only difficulty is when seeking in higher bitrate files, it takes like 30-60 seconds to start playing again. I was testing it on a transcode of a 1080p remux. Any ideas on what to adjust for that?

 

Transcoding is on a GTX 1660, so not an issue of power there. Also on an 850/850Mbps connection.

 

My settings are all the defaults in your scripts.

Link to comment

Hello. I just updated to Mergerfs and everything is almost working except for uploading. I keep getting: rclone not installed - will try again later

 

Quote

04.05.2020 22:04:42 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_upload_vfs ***
04.05.2020 22:04:42 INFO: *** Starting rclone_upload script for gdrive_upload_vfs ***
04.05.2020 22:04:43 INFO: Script not running - proceeding.
04.05.2020 22:04:43 INFO: Checking if rclone installed successfully.
04.05.2020 22:04:43 INFO: rclone not installed - will try again later.
Script Finished May 04, 2020 22:04.43

rclone is installed so no idea why it's saying this. I can browse /mnt/user/mount_rclone and see my files.

Link to comment
21 minutes ago, bugster said:

Hello. I just updated to Mergerfs and everything is almost working except for uploading. I keep getting: rclone not installed - will try again later

rclone is installed so no idea why it's saying this. I can browse /mnt/user/mount_rclone and see my files.

Does it matter that my appdata folder is on Unassigned Devices?

Link to comment
4 hours ago, remedy said:

Working well now. The only difficulty is when seeking in higher bitrate files, it takes like 30-60 seconds to start playing again.

Is initial playback faster?  You could try increasing the buffer size

Link to comment

I use sabnzb/deluge and just to be sure about this point you made:

 

Quote

2. Hardlink support: Unionfs didn't support hardlinks so any torrents had to be copied to rclone_upload and then uploaded.  Mergerfs supports hardlinks so no wasted transfer.  I've added an upload exclusion to /mnt/user/local/downloads so that download files (intermediate, pending sonarr/radarr import, or seeds) are not uploaded.  For hardlinks to work, transmission/torrent clients HAVE to be mapped to the same disk 'storage' so files need to be in /mnt/user/mount_unionfs or for newer users /mnt/user/mount_mergerfs...hope this makes sense = MUCH BETTER FILE HANDLING AND LESS TRANSFER AND FILES NOT STORED TWICE

I need to change the download path to /mnt/user/local/downloads for these?

 

Sabnzb right now maps to "/mnt/user/downloads/sabnzb/" - should I change it to /mnt/user/local/downloads/sabnzb/?

 

Thanks

Link to comment
42 minutes ago, bugster said:

Sabnzb right now maps to "/mnt/user/downloads/sabnzb/" - should I change it to /mnt/user/local/downloads/sabnzb/?

The trick is to ensure that sab/nzb/deluge etc. are all in alignment with radarr/sonarr etc. to get hardlink support.  So you need wherever your download clients are downloading to to appear to be on the same drive when radarr etc. look at it, i.e. it's the docker mappings that count.

 

To make this as easy and foolproof as possible, ALL of my dockers have the same mappings:

 

- /user ---> /mnt/user

- /disks --> /mnt/disks

 

Just these - nothing else.  So, no matter what paths I use within dockers, they will always appear as being on the same 'drive' for inter-docker moves, and I get the maximum file transfer benefit.

 

I.e. for you within Sab, you would download to /user/downloads/sabnzb via Sab's settings.  Radarr would look at /user/downloads/sabnzb (same /user mapping) and then move the files to /user/media/movies/whatever i.e. everything 'stays' in the 'user' drive.
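As a rough illustration (container names and images are just examples, not my exact templates), the mappings boil down to something like:

docker run -d --name=sabnzbd -v /mnt/user:/user -v /mnt/disks:/disks linuxserver/sabnzbd
docker run -d --name=radarr -v /mnt/user:/user -v /mnt/disks:/disks linuxserver/radarr

Because both containers share the identical /user mapping, whatever path Sab downloads to looks like the same 'drive' to Radarr, which is what makes hardlinks and instant moves possible.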

 

 

Edited by DZMM
Link to comment
8 minutes ago, DZMM said:

The trick is to ensure that sab/nzb/deluge etc. are all in alignment with radarr/sonarr etc. to get hardlink support.

 

 

Oh, and also make sure that dockers' paths are all pointing to paths inside the mergerfs mount, not the local path.  You should only ever have to use the local path in special circumstances. 

Link to comment
19 hours ago, watchmeexplode5 said:

Ember takes me back haha. I haven't used that for like 8 years but what a godsend that program was back in the mysql+XBMC days!

 

@watchmeexplode5 Thanks again for your feedback.

 

- Ember is a fantastic and customizable tool, which has really given me a lot of help.

Preview Images:

I can see I haven't enabled the video preview, which you are referring to, and I'll just keep it disabled now.

 

File Structure:

In regards to the file structure, that looks like what I want, with the exception that I think it'll make more sense to divide it into two drives: 1 for Movies and 1 for TV. I can't see the benefit of having it in 1 location when there is a limit on the drive. Sonarr/Radarr/whatever will just point to each drive and Sab/Filebot can do the same, so I can only see advantages to keeping them separate. Right?

 

Regarding MergerFS mounts:

Again, why is it better in my case to have one, when the clients can already move files to the correct location from the start, instead of having to move things around after reaching the limit - and according to some users it will slow down after 150k? Why should I not separate them? :)

 

@DZMM What is the benefit for you of having Movies & TV in 1 shared drive and not dividing it? Plex can use multiple locations if I'm not mistaken.

Link to comment

@Bjur

Feel free to separate them as you see fit. I was just proposing the simplest file structure for ease of setup. 

 

Having two remotes would certainly reduce the chance of having too many objects in the drive. In reality, I'd say 90% of users are not likely to hit that object limit so I really only consider it a [more-advanced user/special use case] issue. But again, the two remotes would work fine. Just a little more effort on the end user during setup.   

 

Benefits of using 1 shared drive vs multiple are that you only have to run a single script for upload. Makes for simple setup of the scripts. If I'm not mistaken, with multiple drives you would need another instance of the script to upload to the second drive. 

 

But it does sound like DZMM saw a bit of a performance bump with separate drives. I might have to run a few tests and see if I can squeeze out a little more performance. On my fiber gigabit line I've got about a 2-3s startup time and like 1/2s for all media. On high bitrate files (80+Mbps) there might be 1 extra second or so added.  

 

Are you going to stick with the nfo metadata and keep them locally stored? Let me know how it works. I'm always looking to improve my server haha!

 

 

Edited by watchmeexplode5
Link to comment

@DZMM

Interesting finding about the object count. 

I've got ~95K objects and I'm seeing load times similar to your time after you split your mounts. 

Are you still using mergerfs to put all the mounts in a single folder for access? Could it be a mergerfs/rclone issue due to high object count? Or are we pretty sure there is some "soft-limit" in place?

 

I might make a test t-drive and stuff 350K objects in it and see if my r/w speeds suffer.
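If I do, it'll probably be something along these lines (quick-and-dirty sketch, the remote name is just a placeholder):

# create a pile of small dummy files locally
mkdir -p /tmp/objtest && cd /tmp/objtest
for i in $(seq 1 350000); do : > "file_$i.txt"; done

# push them to a scratch teamdrive and time a full listing
rclone move /tmp/objtest tdrive_test:objtest --transfers 16
time rclone lsf tdrive_test: --fast-list -R | wc -l

and then compare mount browsing / launch times against a near-empty drive.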

Link to comment
4 hours ago, remedy said:

Initial playback is totally fine - starts within 5s most of the time, 10s max. Which buffer should I increase?

--buffer-size in the mount script.
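For reference, a minimal sketch of the sort of mount flags involved (values are only examples to tweak, not a recommendation):

rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs --allow-other --buffer-size 256M --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --dir-cache-time 720h &

A bigger --buffer-size gives rclone more in-memory read-ahead per open file, which can help recovery after seeks at the cost of RAM when several streams are open.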

 

What client are you using and is it the same for all?  I've read in the past that iOS keeps opening and closing files which can cause problems.

 

7 hours ago, watchmeexplode5 said:

Are you still using mergerfs to put all the mounts in a single folder for access?

Yes, so I don't think mergerfs was the culprit.  Browsing the mounts is perceptibly faster so I think the fault was Google's or rclone's.

Link to comment

@DZMM,

 

Sorry to bother you, but I am stuck and it's getting to be painful. I need your help :)

 

Here is my config:

[gdrive]
type = drive
client_id = -...
client_secret = -
scope = drive
token = 
team_drive = 

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = --
password2 = 

 

Here is my mount script:

RcloneRemoteName="gdrive_media_vfs" 
RcloneMountShare="/mnt/user/mount_rclone" 
MergerfsMountShare="/mnt/user/mount_mergerfs" 
DockerStart="nzbget plex sonarr radarr deluge" 
LocalFilesShare="ignore"
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount



# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="172.168.4.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

Here is my upload script:

 

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="ignore" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="off"
BWLimit3Time="16:00"
BWLimit3="off"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="172.168.4.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

I am not sure, so I'll ask: do I need the upload script for anything if I want to use MergerfsMountShare="/mnt/user/mount_mergerfs"?

anyhow

 

I mapped my dockers to /user -> /mnt/user

 

I've got a problem with downloads.

Nzbget main folder is: /user/mount_mergerfs/gdrive_media_vfs/downloads

If I map the app to this folder the whole process is very slow and I receive this error all the time:

Could not create file /user/mount_mergerfs/gdrive_media_vfs/downloads/intermediate/The.Christmas.Bunny.2010.1080p.AMZN.WEBRip.AAC2.0.x264-FGT.#1/61.out.tmp: (null)

 

I receive a similar permission problem with deluge...

 

What am I missing? I've been through the github page for a clean guide and I think my setup is correct.

 

thank you

Edited by norbertt
Link to comment
8 hours ago, watchmeexplode5 said:

Benefits of using 1 shared drive vs multiple are that you only have to run a single script for upload. Makes for simple setup of the scripts.

 

 

@watchmeexplode5 In regards to benefits: I can see what you mean, but if I'm going to go with 2 crypts, will I then only have to create 2 mount scripts and 2 upload scripts and adjust them to the different crypts? If that's the case it's only a one-time thing I'll have to do to gain performance on the teamdrive. I don't think running 2 extra scripts should take up much more resources on Unraid?

 

In regards to nfo you got me thinking. I made a separate Plex server to test, and it did find almost all of them without nfo files. The problem right now is that, if you choose a foreign language when adding a library, it won't use the English fallback language, so many TV shows would have empty plots.

Also, when using a foreign language and putting the local media assets in as an agent (lowest in the list) and using Plex Movie, the genres will be mixed up.

What I think I will do is move my library to the mergerfs folder, add the folder to my existing Plex library, refresh all metadata so it still uses my existing .nfos and posters (which are customized with higher quality), remove the old folder, move all nfos out of my folders, remove the xbmcnfo agent from Plex and then finally run the upload scripts. Does that sound reasonable?

 

@DZMM Why are you creating a separate UHD mergerfs mount + drive? Plex can filter by UHD resolution, so wouldn't it be better to separate by media type (movies/shows) instead?

 

Link to comment

@Bjur

Yeah, you should not have any resource issues running 2+ mount/upload scripts. You can run them at the same time or staggered. I'd run them staggered just to keep the api calls down. 

 

I don't know much about the nfos but I think that should work fine. I'm making the assumption that plex imports the data to its library based on the nfo, so after import the nfo file is unneeded. If the nfo file is only accessed on import you could even leave those with the files and upload them to gdrive as well. If plex is constantly referencing the nfo file, then that might lead to slow scans/navigation. Again, I don't know much about that since I've never used them or that addon. 

Link to comment
8 hours ago, norbertt said:

Sorry to bother you, but I am stuck and it's getting to be painful. I need your help :)

Nzbget main folder is: /user/mount_mergerfs/gdrive_media_vfs/downloads

If I map the app to this folder the whole process is very slow and I receive this error all the time:

Could not create file /user/mount_mergerfs/gdrive_media_vfs/downloads/intermediate/The.Christmas.Bunny.2010.1080p.AMZN.WEBRip.AAC2.0.x264-FGT.#1/61.out.tmp: (null)

All your problems are because your mount script should have:

 

LocalFilesShare="/mnt/user/local"

not:

LocalFilesShare="ignore"

ignore is saying you don't want a mergerfs mount.  So, what's happening is nzbget is writing direct to the rclone mount i.e. direct to gdrive, so I imagine it will be very slow and cause problems!

 

LocalFilesShare2, LocalFilesShare3 and LocalFilesShare4 are the ones you want 'ignore' for if you don't want to add extra paths to your mergerfs mount.
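So, roughly, the top of your mount script should end up looking something like this (everything else staying as you have it):

RcloneRemoteName="gdrive_media_vfs"
RcloneMountShare="/mnt/user/mount_rclone"
MergerfsMountShare="/mnt/user/mount_mergerfs"
LocalFilesShare="/mnt/user/local" # local branch for mergerfs - NOT 'ignore'
LocalFilesShare2="ignore" # the extra paths stay 'ignore' if unused
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"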

 

Edit: Your docker mappings are correct i.e. to /mnt/user and then within the docker to mount_mergerfs.  You just need to fix the mount script.

 

Easiest way is to correct the settings, stop all the dockers that are trying to add new or move files or are accessing the mount e.g. plex, then re-run the script and then start the dockers.

Edited by DZMM
Link to comment
20 minutes ago, watchmeexplode5 said:

Yeah, you should not have any resource issues running 2+ mount/upload scripts. You can run them at the same time or staggered.

Thanks. I think I will try again with 2 mounts. Do you know how the mount point works? After my last reboot I couldn't get the mergerfs mounted.

If I delete the mountcheck file on gdrive and mergerfs, will it recreate a new one after running the mount script? I also had a problem with creating folders in mergerfs after editing the mount script, but perhaps it was because I changed movies to Movies. After changing it back it worked again. 

Link to comment
