Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


I never used the upload folder, so I just removed it. Now it looks more like the tutorial:

mergerfs /mnt/user/Archiv:/mnt/user/mount_rclone/google_vfs /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
 

 

I'll report back after a restart.


Edit: the mount script sometimes runs too early, so I get:

 

docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
mv: cannot stat '/mnt/user/appdata/other/mergerfs/mergerfs': No such file or directory
/tmp/user.scripts/tmpScripts/rclone mount/script: line 84: mergerfs: command not found

 

I've added a sleep 10.
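A fixed sleep works, but a small poll loop is sturdier: it waits only as long as needed and gives up cleanly if the daemon never comes up. A sketch (the `wait_for` helper and its parameters are my own, not from the scripts in this thread):

```shell
# wait_for CMD MAX DELAY: retry CMD until it succeeds, at most MAX
# times, sleeping DELAY seconds between attempts. Returns non-zero
# if CMD never succeeds.
wait_for() {
    local max="$2" delay="$3" n=0
    until $1 >/dev/null 2>&1; do
        n=$((n + 1))
        [ "$n" -ge "$max" ] && return 1
        sleep "$delay"
    done
}

# Near the top of the mount script, before any docker/mergerfs calls:
#   wait_for "docker info" 12 5 || { echo "docker not up" >&2; exit 1; }
```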

 

Entering the commands by hand in the terminal now.

Edited by nuhll
Link to post
On 1/4/2020 at 9:25 PM, Spladge said:

To monitor the changes you could try a PAS (plex_autoscan) docker - I have a combined Plex/PAS docker but haven't set it up yet. Alternatively, @Stupifier uses a slightly modified version of another script (plex_rcs) that does this by monitoring the log file and initiating a Plex scan of that directory via the API.

https://github.com/zhdenny/plex_rcs
 

 

On 1/4/2020 at 9:30 PM, bedpan said:

I remember why I loved the UnRaid forums so much now. You guys rock...

 

Thanks for the info. I will do some more reading on this..

 

Thanks Spladge. This looks like exactly what I would like to do. More learning though!

 

Cheers folks.. As of right now plex is scanning in the libraries. Once it is done I will test some reboots to make sure everything is running correctly. Then move onto getting plex to see new stuff at a decent pace.

 

Much thanks!

 

Mike

 

On 1/5/2020 at 2:41 AM, DZMM said:

Sooooo.....I stopped using plex_rcs....I'm zhdenny on Github and I'm NOT by any means a programmer or have any talent in that arena. I merely did slight modifications to the original author's version of plex_rcs....just to keep it kicking along. That script is basically dead.

Instead, I use plex_autoscan as @DZMM also suggested. I avoided using this at first because of all the dependencies and some of the dockers for it looked intimidating. Anyway, I took the dive and was able to get a plex_autoscan docker container to work for me on Unraid.

For those curious, there are basically two options:

  1. A docker container which has Plex AND plex_autoscan all rolled into one. This is the easiest, as it should be configured straight out of the box. The only issue is if you ALREADY have your own Plex docker set up and configured.....people do not typically want to migrate their plex setup into another container....can be done, but it's just more to do.
    https://hub.docker.com/r/horjulf/plex_autoscan
  2. A standalone plex_autoscan container. This is what I ended up using. You'll have to very carefully read the plex_autoscan docker container readme AND the plex_autoscan readme. All the container mappings and the config.json file can get confusing, but when you finally figure it out, it just plain works great. Beware: you'll also need to grant the plex_autoscan docker access to /var/run/docker.sock, and you'll have to chmod 666 the docker.sock. This is typically a no-no, but it's necessary for plex_autoscan to communicate with the plex docker container.
    https://hub.docker.com/r/sabrsorensen/alpine-plex_autoscan
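For option 2, the socket wiring looks roughly like this. The /config path and run flags below are illustrative assumptions, not taken from this thread - check both readmes for the real mappings:

```shell
# Relax the socket permissions so the plex_autoscan container can talk
# to the host Docker daemon (a security trade-off, as noted above).
chmod 666 /var/run/docker.sock

# Illustrative run command for the standalone container; the /config
# path is an assumption for this sketch.
docker run -d --name='plex_autoscan' \
  -v '/var/run/docker.sock':'/var/run/docker.sock':'rw' \
  -v '/mnt/cache/appdata/plex_autoscan':'/config':'rw' \
  'sabrsorensen/alpine-plex_autoscan'
```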

I'm not gonna go into detail with this stuff....cuz frankly, everyone's plex setups are different and I really REALLY don't want to write a guide or explain in detail how to do this stuff.

Edited by Stupifier
Link to post

@DZMM

 

Hey, I haven't been on here for a bit to see the changes you've made.

Just looked over them and wanted to say that all the revisions line up with the fixes I had made to mine. So everything should work smoothly. 

 

____________________________________________

If anybody is on the fence, migration should be error/headache free now

 

I've been running mine 24/7 for 2+ weeks now without a single issue. Much cleaner script and even though I wasn't getting bottlenecks or utilizing hardlinks... More optimized is always a plus in my books (+ a minor bump in pull/push speed is always appreciated). 

____________________________________________

 

And as always -- Thank you so much @DZMM for the work you have done. 

Edited by watchmeexplode5
Link to post

I have migrated from unionfs to mergerfs, but moves from Sonarr or Radarr to my media folder are very, very slow (about 3 minutes for a 4 GB file), so I think it is somehow copying instead of moving.

Also, doing the same thing from within the Sonarr docker takes just as long to MOVE a file.

 

I'm using the following MergerFS command:

mergerfs /mnt/user/local/google_vfs:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/media=RO /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

 

It seems Sonarr is really taking a long time to move files from /data to /media in the Sonarr docker.

Path mappings: 

/data <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

I can see that the SSD cache is hard at work when it is moving files.

Link to post
34 minutes ago, Thel1988 said:

 

It seems Sonarr is really taking a long time to move files from /data to /media in the Sonarr docker.

Path mappings: 

/data <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

I can see that the SSD cache is hard at work when it is moving files.

Using both /data and /media is your problem - your dockers think these are two separate disks, so you don't get the hardlinking and move-instead-of-copy benefits.

 

Within your containers, point nzbget etc to /media/downloads
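A quick way to verify the single-disk view from inside a container: hardlinks (and instant moves) are only possible when both paths resolve to the same device. A sketch (`same_fs` is my own helper; the paths in the comment are the ones from this post):

```shell
# same_fs A B: succeed when A and B are on the same filesystem/device,
# i.e. when hardlinks between them are possible and mv is a rename.
same_fs() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# From inside the container, after remapping everything under /media:
#   same_fs /media/downloads /media && echo "hardlinks possible"
```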

Link to post
37 minutes ago, DZMM said:

Using both /data and /media is your problem - your dockers think these are two separate disks, so you don't get the hardlinking and move-instead-of-copy benefits.

 

Within your containers, point nzbget etc to /media/downloads

Okay, good point.

 

I have changed to this:

Radarr:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

for sabnzb:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

 

It is still awfully slow - do the cache settings on the local share have anything to do with this?

Link to post

@DZMM do you ever run into the Docker daemon not being up yet when the mount script runs at startup? Nuhll has the same problem as me. I've put in a sleep of 30, but that's not enough; I'll keep increasing it to try to get it fixed. I just find it strange that you don't have the same issue.

 

@nuhll unfortunately I have the permission denied error again. Did it come back for you?

Edited by Kaizac
Link to post
1 hour ago, Thel1988 said:

Okay, good point.

 

I have changed to this:

Radarr:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

for sabnzb:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

 

It is still awfully slow - do the cache settings on the local share have anything to do with this?

Okay, I got it to work - I needed to delete the extra /media/downloads from the path mappings, and now it works :)

Anyway, thanks for your help @DZMM, you do a fantastic job on these scripts :)

Link to post
21 hours ago, watchmeexplode5 said:

I've been running mine 24/7 for 2+ weeks now without a single issue. Much cleaner script and even though I wasn't getting bottlenecks or utilizing hardlinks... More optimized is always a plus in my books (+ a minor bump in pull/push speed is always appreciated). 

I'm glad it's working perfectly for you.  I've been doing this for about 20 months now and I've only had one issue, when Google had problems with rclone user_IDs for about 2 days.  I moved home recently and lost my 1Gbps connection, but even on my 360/180 with lots of users I've not had any buffering, even with a lot of other traffic occurring at the same time.

Link to post
7 hours ago, Thel1988 said:

Okay, I got it to work - I needed to delete the extra /media/downloads from the path mappings, and now it works :)

Anyway, thanks for your help @DZMM, you do a fantastic job on these scripts :)

Glad you got it working, but looking again at my post I'm not sure if doing:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/
/media <-> /mnt/user/mount_unionfs/google_vfs/

will work.  I'm no expert, and to make sure I also don't mess up when moving stuff around in dockers, I just use these mappings for all my dockers:

 

/user <-> /mnt/user/
  
/disks <-> /mnt/disks/ (RW Slave)

That way all dockers are consistent and I don't have to remember mappings.

Link to post

Hardlink support: Unionfs didn't support hardlinks, so any torrents had to be copied to rclone_upload and then uploaded.  Mergerfs supports hardlinks, so no wasted transfer.  I've added an upload exclusion for /mnt/user/local/downloads so that download files (intermediate, pending sonarr/radarr import, or seeds) are not uploaded.  For hardlinks to work, transmission/torrent clients HAVE to be mapped to the same disk 'storage', so files need to be in /mnt/user/mount_unionfs or, for newer users, /mnt/user/mount_mergerfs...hope this makes sense = MUCH BETTER FILE HANDLING AND LESS TRANSFER AND FILES NOT STORED TWICE

 

Thanks for this update, I plan to change very soon.  Regarding your instructions above, I am still unclear on how torrents will work the new way.

 

Will they still get uploaded to tdrive?  Is it automated?  Do they get seeded from tdrive?  Or do they stay local until ratio is met?

Can you please elaborate on how this part will work now?

 

Thanks

 

Link to post
3 minutes ago, Viperkc said:

Thanks for this update, I plan to change very soon.  Regarding your instructions above, I am still unclear on how torrents will work the new way.

 

Will they still get uploaded to tdrive?  Is it automated?  Do they get seeded from tdrive?  Or do they stay local until ratio is met?

Can you please elaborate on how this part will work now?

 

Thanks

 

Torrents with Unionfs:

  1. torrent gets downloaded
  2. torrent gets copied to the unionfs folder - disk write (time + wear) plus 2x torrent space taken up
  3. copied torrent gets uploaded whilst the original is seeding
  4. delete the seed whenever

Torrents with mergerfs:

  1. torrent gets downloaded
  2. hardlink created in the unionfs folder - no disk write, no noise, no time to copy, no 2x torrent space taken up
  3. hardlinked torrent gets uploaded whilst the original is seeding
  4. delete the seed whenever
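Step 2 can be demonstrated with throwaway files - a hardlink is just a second directory entry for the same inode, so nothing is copied (the paths below are temporary ones made up for the demo, not the thread's real mounts):

```shell
# Demo with a throwaway directory: after ln, both names share one
# inode, so the "copy" into the library costs no space and no I/O.
demo=$(mktemp -d)
mkdir -p "$demo/downloads" "$demo/movies"
echo "torrent payload" > "$demo/downloads/film.mkv"

# What the import step does when both paths are on one mergerfs branch:
ln "$demo/downloads/film.mkv" "$demo/movies/film.mkv"

stat -c %h "$demo/movies/film.mkv"   # prints 2: seed and library entry
rm -rf "$demo"
```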

 

Link to post

This isn't exactly Unraid related, but if anyone is interested, Google Cloud gives away $300 of credits to try the platform.

 

I spun up an Ubuntu server, put SABnzbd on it, and installed rclone pointing to my team drive (6 mounts with 6 different accounts shared to the one team drive).

 

I set Radarr to point to my GCP SAB server and wrote a simple post-processing shell script that uploads to one of the 6 mounts depending on the time.  Based on the upload speed I am getting from GCP to GDrive (around 45-50 MB/s), I rotate the mount every 4 hours.  That way I don't hit the 750GB per user per day limit.

 

The shell script does an rclone move from my cloud VM to my Google Drive.  As the folder is mounted in the union on my local server, Radarr just hardlinks it (mergerfs FTW).  Side note: I pause downloads during post-processing, as I sometimes had malformed downloads, which started to cause a post-processing queue that, left unattended, would make my GCP server run out of disk space.
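The time-based rotation can be sketched like this. The remote names `gdrive_sa1`..`gdrive_sa6` are placeholders - the post doesn't name the actual mounts:

```shell
# pick_remote [HOUR]: map the hour of day (0-23) onto one of six
# remotes, so each remote is used for one 4-hour window per day and
# stays under the 750GB/day/user upload limit.
pick_remote() {
    local hour=${1:-$(date +%-H)}
    echo "gdrive_sa$(( hour / 4 + 1 ))"
}

# In the SAB post-processing script (paths illustrative):
#   rclone move /mnt/downloads "$(pick_remote):backlog"
```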

 

Radarr then has a remote mapping to translate my cloud mount path to the local path.

 

Should be able to get through my backlog in a few days!

Link to post

@DZMM did you configure the recycle bin in your Sonarr instance? If not, would you mind sharing your docker settings - which folders you put on R/W slave and which on normal R/W? I'm still having import issues, but only when upgrading files; I'm getting an access denied error.

Link to post

@Kaizac - why do you need the recycling bin?   Maybe that's the problem

 

docker run -d --name='sonarr' --net='br0.55' --ip='192.168.50.95' --cpuset-cpus='1,8,9,17,24,25' --log-opt max-size='50m' --log-opt max-file='3' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/':'/user':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/boot/config/plugins/user.scripts/scripts/':'/scripts':'rw' -v '/boot/config/plugins/user.scripts/scripts/unrar_cleanup_sonarr/':'/unrar':'rw' -v '/mnt/cache/appdata/dockers/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'

 

Link to post
12 minutes ago, DZMM said:

@Kaizac - why do you need the recycling bin?   Maybe that's the problem

 


docker run -d --name='sonarr' --net='br0.55' --ip='192.168.50.95' --cpuset-cpus='1,8,9,17,24,25' --log-opt max-size='50m' --log-opt max-file='3' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/':'/user':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/boot/config/plugins/user.scripts/scripts/':'/scripts':'rw' -v '/boot/config/plugins/user.scripts/scripts/unrar_cleanup_sonarr/':'/unrar':'rw' -v '/mnt/cache/appdata/dockers/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'

 

I'm not using the recycling bin, but I thought you might be doing that. I just don't get why Sonarr can't upgrade files and gets an access denied, when Radarr is working fine with the same settings.

 

For downloads I point to unionfs/Tdrive/Downloads and for series I point to unionfs/Tdrive/Series, both on R/W, with mount_unionfs on RW slave. I'm doubting whether I need remote path mapping because Sonarr and SAB are on different IPs, but that isn't needed for Radarr either.

Link to post
1 hour ago, Kaizac said:

I'm not using the recycling bin, but I thought you might be doing that. I just don't get why Sonarr can't upgrade files and gets an access denied, when Radarr is working fine with the same settings.

 

For downloads I point to unionfs/Tdrive/Downloads and for series I point to unionfs/Tdrive/Series, both on R/W, with mount_unionfs on RW slave. I'm doubting whether I need remote path mapping because Sonarr and SAB are on different IPs, but that isn't needed for Radarr either.

Do you mount unionfs in /mnt/user/ or /mnt/disks? I had problems with /mnt/disks when I first started.

Link to post
On 1/14/2020 at 10:29 AM, Kaizac said:

@DZMM Do you never have the problem of the daemon docker not running when you run the mount script at startup? Nuhll has the same problem as me. I've put in a sleep of 30 but that's not enough. Will be increasing it more to try to get it fixed. But I find it strange that you don't have the same issue.

 

@nuhll unfortunately I have the permission denied error again. Did it come back for you?

Didn't really restart after it worked... but up till now, knock on wood, all working. :)

 

It was really the RO; it seems like mergerfs doesn't support too many directories...??!

Link to post
