Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


4 hours ago, Kaizac said:

@DZMM sorry but in your first post you wrote this:

 

I've tried to understand what you're saying here, but I really can't. What exactly is the difference in user/rclone_upload with user/local? They are both local shares which you include in your union/merge. Maybe I'm missing something in your changes, since my configuration was a bit different because of more local folders.

You can ignore that. Just make sure all your dockers, including NZBGet etc., use mappings within the mergerfs folder rather than the local folders, so you get the full benefit of mergerfs's faster file transfers.

 

I decided to ditch rclone_upload to make it harder for users to mess things up, and to clean up the flows.

 

E.g. my flow before was:

 

/mnt/user/downloads/movie --> /mnt/user/import/movie --> /mnt/user/mount_unionfs/movies/movie (via /mnt/user/rclone_upload/movies/movie), i.e. 3 different shares.

 

Now I have /mnt/user/mount_mergerfs/downloads/movie --> /mnt/user/mount_mergerfs/complete/movie --> /mnt/user/mount_mergerfs/movies/movie, where any local files live in /mnt/user/local, and I exclude /mnt/user/local/downloads, /mnt/user/local/complete and /mnt/user/local/seeds in the upload script.

 

This makes my life a hell of a lot easier, and by including my 'pre-Plex' files in the mount I get the full transfer benefits of mergerfs. I also think it will be easier for anyone new to the scripts.
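As a rough sketch of what that exclusion looks like in an upload script (the share names come from the post above; the remote name gdrive_media_vfs and the flags are assumptions — adjust to your own setup):

```shell
# Hypothetical upload command: move local files to the cloud remote,
# skipping the download/working folders that should stay local.
rclone move /mnt/user/local gdrive_media_vfs: \
  --exclude "downloads/**" \
  --exclude "complete/**" \
  --exclude "seeds/**" \
  --min-age 15m \
  --delete-empty-src-dirs
```

Because these folders never get uploaded, moves between them and the merged mount stay local and fast.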

Link to comment
17 hours ago, testdasi said:

I'll stick to unionfs for now. It's not broken for me, so I'm not seeing any need to migrate to mergerfs.

If it's not broken, don't fix it!

Same as you, for the moment I'm afraid to touch anything as it is working flawlessly... even though my fingers are itching to try mergerfs :D

Link to comment
1 hour ago, yendi said:

If it's not broken, don't fix it!

Same as you, for the moment I'm afraid to touch anything as it is working flawlessly... even though my fingers are itching to try mergerfs :D

@testdasi and @yendi I understand.  But, it's worth it.  This isn't a small improvement - it's a major one with worthwhile performance improvements. 

 

(I) simplest migration: just swapping your unionfs mount command for the new mergerfs one is worth doing. You can test that it works first by adding the mergerfs command to a new blank script and mounting in a new location.

(II) best migration: do (I) + move all your mappings to within /user/mount_mergerfs, e.g. /user/mount_mergerfs/downloads, and point your download dockers at this location to get hardlinking, file transfer benefits etc.
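A minimal sketch of that side-by-side test (the branch paths follow the examples in this thread; the test mount point is an assumption):

```shell
# Test mergerfs alongside the existing unionfs mount: merge the local
# share and the rclone mount into a NEW, empty location.
mkdir -p /mnt/user/mount_mergerfs_test
mergerfs /mnt/user/local/google_vfs:/mnt/user/mount_rclone/google_vfs \
  /mnt/user/mount_mergerfs_test \
  -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# If browsing /mnt/user/mount_mergerfs_test shows the merged content,
# switch the real mount script over and retire the unionfs command.
```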

Edited by DZMM
Link to comment
1 hour ago, DZMM said:

@testdasi and @yendi I understand.  But, it's worth it.  This isn't a small improvement - it's a major one with worthwhile performance improvements. 

 

(I) simplest migration: just swapping your unionfs mount command for the new mergerfs one is worth doing. You can test that it works first by adding the mergerfs command to a new blank script and mounting in a new location.

(II) best migration: do (I) + move all your mappings to within /user/mount_mergerfs, e.g. /user/mount_mergerfs/downloads, and point your download dockers at this location to get hardlinking, file transfer benefits etc.

I have a working mergerfs mount, but it just isn't used by anything.

  • I don't do torrents or hardlinks, so no benefit there
  • My mount performance is bottlenecked by rclone, not unionfs, so no performance benefit either
  • I consider unionfs COW a protection layer against cryptoviruses (and accidental changes). A silent infection + COW would cause my local UD drive to fill up inexplicably, which would alert me that something is amiss.

So unless I'm missing something, there isn't anything that motivates me to fully switch at the moment.

 

Link to comment

Hmm, today I found some errors in Radarr:

20-1-11 12:23:02.4|Warn|ImportApprovedMovie|Couldn't import movie /downloads/completed/Filme/movie.1995.German.AC3.BDRip.x264-DHARMA-xpost/Jumanji.1995.German.AC3.BDRip.x264-DHARMA.mkv [v0.2.0.1459]
System.UnauthorizedAccessException: Access to the path "/Archiv/Filme/movie (1995)/movie 1995.avi" is denied.
  at System.IO.File.Delete (System.String path) [0x00073] in <254335e8c4aa42e3923a8ba0d5ce8650>:0
  at NzbDrone.Common.Disk.DiskProviderBase.DeleteFile (System.String path) [0x00068] in C:\projects\radarr-usby1\src\NzbDrone.Common\Disk\DiskProviderBase.cs:205
  at NzbDrone.Core.MediaFiles.RecycleBinProvider.DeleteFile (System.String path) [0x00054] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\RecycleBinProvider.cs:90
  at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeMovieFile (NzbDrone.Core.MediaFiles.MovieFile movieFile, NzbDrone.Core.Parser.Model.LocalMovie localMovie, System.Boolean copyOnly) [0x0005b] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:52
  at NzbDrone.Core.MediaFiles.MovieImport.ImportApprovedMovie.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.MovieImport.ImportMode importMode) [0x00258] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\MovieImport\ImportApprovedMovie.cs:109

 

Any idea? It can't import some new movies. I can access that file via SMB, though: mount_unionfs\google_vfs\Filme\movie (1995)

 

 

root@Unraid-Server:~# ls -ls /mnt/user/mount_unionfs/google_vfs/Filme/movie\ *
/mnt/user/mount_unionfs/google_vfs/Filme/movie (1995):
total 1493104
1493104 -rw-rw-rw- 1 root root 1528938496 Aug 14  2011 movie\ 1995.avi

Edited by nuhll
Link to comment
Just now, nuhll said:

Hmm today i found some errors in radarr:

20-1-11 12:23:02.4|Warn|ImportApprovedMovie|Couldn't import movie /downloads/completed/Filme/movie.1995.German.AC3.BDRip.x264-DHARMA-xpost/Jumanji.1995.German.AC3.BDRip.x264-DHARMA.mkv [v0.2.0.1459]
System.UnauthorizedAccessException: Access to the path "/Archiv/Filme/movie (1995)/movie 1995.avi" is denied.
  at System.IO.File.Delete (System.String path) [0x00073] in <254335e8c4aa42e3923a8ba0d5ce8650>:0
  at NzbDrone.Common.Disk.DiskProviderBase.DeleteFile (System.String path) [0x00068] in C:\projects\radarr-usby1\src\NzbDrone.Common\Disk\DiskProviderBase.cs:205
  at NzbDrone.Core.MediaFiles.RecycleBinProvider.DeleteFile (System.String path) [0x00054] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\RecycleBinProvider.cs:90
  at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeMovieFile (NzbDrone.Core.MediaFiles.MovieFile movieFile, NzbDrone.Core.Parser.Model.LocalMovie localMovie, System.Boolean copyOnly) [0x0005b] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:52
  at NzbDrone.Core.MediaFiles.MovieImport.ImportApprovedMovie.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.MovieImport.ImportMode importMode) [0x00258] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\MovieImport\ImportApprovedMovie.cs:109

 

Any idea? It can't import some new movies. I can access that file via SMB, though: \\192.168.86.103\mount_unionfs\google_vfs\Filme\movie (1995)

Radarr has had some issues lately. I had the same problem the last couple of days, but seem to have fixed it now. Change your appdata mapping from /user/ to /cache/.

Link to comment
9 minutes ago, Kaizac said:

Radarr has had some issues lately. I had the same problem the last couple of days, but seem to have fixed it now. Change your appdata mapping from /user/ to /cache/.

Sorry, what exactly do you mean by "appdata link"?

 

My appdata is always set to "cache only".

 

edit:

No, something is wrong with the mount.

 

I can't even delete the file via SMB. Well, I can delete it and it vanishes, but after a few refreshes it's back... wtf

Edited by nuhll
Link to comment

Ok, first I'll install the new update.

 

Thanks.

 

That's my mount:

 

fusermount -uz /mnt/user/mount_unionfs/google_vfs

#old unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RO:/mnt/user/Archiv=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

#new
# removing old binary as a precaution
rm /bin/mergerfs

docker run -v /mnt/user/appdata/other/mergerfs:/build --rm trapexit/mergerfs-static-build
mv /mnt/user/appdata/other/mergerfs/mergerfs /bin

mergerfs /mnt/user/Archiv:/mnt/user/rclone_upload/google_vfs:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

#new

Edited by nuhll
Link to comment

Ok, I've upgraded Unraid and changed it to cache.

 

Still the same problem: I can't delete \mount_unionfs\google_vfs\Filme\movie (1995)\file.avi

 

root@Unraid-Server:~# rm /mnt/user/mount_unionfs/google_vfs/Filme/movie\ \(1995\)/Jumanji\ 1995.avi
rm: cannot remove '/mnt/user/mount_unionfs/google_vfs/Filme/movie (1995)/movie 1995.avi': Read-only file system

Edited by nuhll
Link to comment
5 minutes ago, nuhll said:

Ok ive upgraded unraid and changed it to cache.

 

Still same problem I cant delete \mount_unionfs\google_vfs\Filme\movie (1995)\file.avi

 

root@Unraid-Server:~# rm /mnt/user/mount_unionfs/google_vfs/Filme/Jumanji\ \(1995\)/Jumanji\ 1995.avi 
rm: cannot remove '/mnt/user/mount_unionfs/google_vfs/Filme/Jumanji (1995)/Jumanji 1995.avi': Read-only file system

Check the R/W settings for your mappings in your docker settings: RW/Slave for the mount_unionfs mapping and RW for the rest.

Link to comment
2 minutes ago, Kaizac said:

Check the R/W settings for your mappings in your docker settings: RW/Slave for the mount_unionfs mapping and RW for the rest.

I never changed these. I tried RW/Slave, but no difference. But it's nothing to do with Radarr: I can't even delete it in the terminal on Unraid...

 

rm: cannot remove '/mnt/user/mount_unionfs/google_vfs/Filme/movie (1995)/movie 1995.avi': Read-only file system

 

There's something wrong with the mergerfs mount command:

 

mergerfs /mnt/user/Archiv:/mnt/user/rclone_upload/google_vfs:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

 

 

/mnt/user/Archiv is my local storage (should be RW).

Link to comment

I never used the upload folder, so I just removed it. Now it looks more like the one in the tutorial:

mergerfs /mnt/user/Archiv:/mnt/user/mount_rclone/google_vfs /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
 

 

I'll report back after a restart.


Edit: the mount script sometimes runs too fast, so I get:

 

docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
mv: cannot stat '/mnt/user/appdata/other/mergerfs/mergerfs': No such file or directory
/tmp/user.scripts/tmpScripts/rclone mount/script: line 84: mergerfs: command not found

 

I've added a sleep 10.
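A fixed sleep 10 works, but polling until the Docker daemon is actually up is more robust. A minimal sketch (the wait_for helper and the retry limits are my own, not from the original script):

```shell
#!/bin/bash
# Retry a command until it succeeds, sleeping between attempts.
# Usage: wait_for <max_tries> <sleep_secs> <command...>
wait_for() {
  local max_tries=$1 sleep_secs=$2
  shift 2
  local tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$max_tries" ]; then
      return 1  # gave up
    fi
    sleep "$sleep_secs"
  done
}

# In the mount script, wait for the Docker daemon before building mergerfs:
# wait_for 30 2 docker info || { echo "docker daemon not up"; exit 1; }
```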

 

Entering the commands by hand in the terminal now.

Edited by nuhll
Link to comment
On 1/4/2020 at 9:25 PM, Spladge said:

To monitor the changes you could try a PAS docker. I have a combined Plex/PAS docker but have not set it up. Alternatively, @Stupifier uses a slightly modified version of another script (plex_rcs) to do this by monitoring the log file and initiating a Plex scan of that dir via the API.

https://github.com/zhdenny/plex_rcs
 

 

On 1/4/2020 at 9:30 PM, bedpan said:

I remember why I loved the UnRaid forums so much now. You guys rock...

 

Thanks for the info. I will do some more reading on this..

 

Thanks Spladge. This looks like exactly what I would like to do. More learning though!

 

Cheers folks.. As of right now plex is scanning in the libraries. Once it is done I will test some reboots to make sure everything is running correctly. Then move onto getting plex to see new stuff at a decent pace.

 

Much thanks!

 

Mike

 

On 1/5/2020 at 2:41 AM, DZMM said:

Sooooo.....I stopped using plex_rcs....I'm zhdenny on Github and I'm NOT by any means a programmer or have any talent in that arena. I merely did slight modifications to the original author's version of plex_rcs....just to keep it kicking along. That script is basically dead.

Instead, I use plex_autoscan as @DZMM also suggested. I avoided using this at first because of all the dependencies and some of the dockers for it looked intimidating. Anyway, I took the dive and was able to get a plex_autoscan docker container to work for me on Unraid.

For those curious, there are basically two options:

  1. A docker container which has Plex AND plex_autoscan all rolled into one docker. This is the easiest, as it should be configured straight out of the box. The only issue is if you ALREADY have your own Plex docker set up and configured: people do not typically want to migrate their Plex setup into another container. It can be done, but it's just more to do.
    https://hub.docker.com/r/horjulf/plex_autoscan
  2. A standalone plex_autoscan container. This is what I ended up using. You'll have to very carefully read the plex_autoscan docker container readme AND the plex_autoscan readme; all the container mappings and the config.json file can get confusing. But once you finally figure it out, it just plain works. Beware, you'll also need to grant the plex_autoscan docker access to /var/run/docker.sock and chmod 666 the docker.sock. This is typically a no-no, but it is necessary for plex_autoscan to communicate with the Plex docker container.
    https://hub.docker.com/r/sabrsorensen/alpine-plex_autoscan

I'm not gonna go into detail with this stuff....cuz frankly, everyone's plex setups are different and I really REALLY don't want to write a guide or explain in detail how to do this stuff.
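For option 2, the container setup might look roughly like this (the container name, port and host paths are illustrative assumptions, not taken from the post; check the two READMEs it mentions for the real mappings):

```shell
# Allow the container to talk to the Docker API (normally a no-no):
chmod 666 /var/run/docker.sock

# Hypothetical run command for the standalone plex_autoscan container.
docker run -d --name plex_autoscan \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /mnt/user/appdata/plex_autoscan:/config \
  -v /mnt/user/mount_mergerfs:/mnt/user/mount_mergerfs \
  -p 3468:3468 \
  sabrsorensen/alpine-plex_autoscan
```

The media path must be mapped identically to how the Plex container sees it, or the scan requests will point at folders Plex cannot find.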

Edited by Stupifier
Link to comment

@DZMM

 

Hey, I haven't been on here for a bit to see the changes you've made.

Just looked over them and wanted to say that all the revisions line up with the fixes I had made to mine. So everything should work smoothly. 

 

____________________________________________

If anybody is on the fence, migration should be error/headache free now

 

I've been running mine 24/7 for 2+ weeks now without a single issue. Much cleaner script, and even though I wasn't hitting bottlenecks or utilizing hardlinks, more optimized is always a plus in my book (+ a minor bump in pull/push speed is always appreciated).

____________________________________________

 

And as always -- Thank you so much @DZMM for the work you have done. 

Edited by watchmeexplode5
Link to comment
