Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

I have migrated from unionfs to mergerfs, but I'm getting very slow move speeds from Sonarr or Radarr to my Media folder (about 3 minutes to move a 4 GB file), so I think it is somehow copying instead of moving.

If I do the same thing from within the Sonarr docker, it also takes this long to MOVE a file.

 

I'm using the following MergerFS command:

mergerfs /mnt/user/local/google_vfs:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/media=RO /mnt/user/mount_unionfs/google_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
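(For readers following along, roughly what those options do - based on my reading of the mergerfs documentation, so treat it as a sketch and double-check against the current docs:)

# rw / allow_other       - read-write mount that other users (e.g. dockers) can access
# use_ino                - pass through the branches' inode values so hardlinks look right
# async_read=false       - often recommended when one of the branches is an rclone mount
# func.getattr=newest    - if a file exists in several branches, report the newest one's attributes
# category.action=all    - actions (rename, chmod, delete, ...) apply to every branch holding the file
# category.create=ff     - new files are created on the first branch listed, i.e. the local folder
# cache.files=partial + dropcacheonclose=true - page-cache settings commonly suggested for this setup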

 

It seems Sonarr is taking a really long time to move from /data to /media inside the Sonarr docker.

Path mappings: 

/data <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

I can see that the SSD cache is hard at work when it is moving files.

Link to comment
34 minutes ago, Thel1988 said:

 

It seems Sonarr is taking a really long time to move from /data to /media inside the Sonarr docker.

Path mappings: 

/data <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

I can see that the SSD cache is hard at work when it is moving files.

Using both /data and /media is your problem - your dockers think these are two separate disks, so you don't get the benefits of hardlinking and of moving instead of copying.  

 

Within your containers, point nzbget etc. to /media/downloads.
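A quick way to confirm this from inside the container is to try a hardlink across the two paths - just a sketch, and the test file name is made up:

# run inside the Sonarr container, e.g. docker exec -it sonarr bash
# if the link succeeds, both paths sit on one filesystem and imports become instant renames;
# if it fails with "Invalid cross-device link", Sonarr has to fall back to copy + delete
touch /media/downloads/hardlink-test
ln /media/downloads/hardlink-test /media/hardlink-test && echo "same filesystem" || echo "cross-device"
rm -f /media/downloads/hardlink-test /media/hardlink-test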

Link to comment
37 minutes ago, DZMM said:

Using both /data and /media is your problem - your dockers think these are two separate disks, so you don't get the benefits of hardlinking and of moving instead of copying.  

 

Within your containers, point nzbget etc. to /media/downloads.

Okay, good point.

 

I have changed to this:

Radarr:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

For SABnzbd:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

 

It is still awfully slow - do the cache settings on the local share have anything to do with this?

Link to comment

@DZMM Do you never have the problem of the Docker daemon not running yet when you run the mount script at startup? Nuhll has the same problem as me. I've put in a sleep of 30, but that's not enough. I'll increase it further to try to get it fixed, but I find it strange that you don't have the same issue.
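(Instead of guessing at a sleep value, the top of the mount script could simply poll until the daemon answers - a rough sketch, not taken from DZMM's script:)

# wait for the Docker daemon instead of a fixed sleep; gives up after ~5 minutes
tries=0
until docker info > /dev/null 2>&1; do
    tries=$((tries+1))
    if [ "$tries" -ge 60 ]; then echo "docker daemon still not up, giving up"; break; fi
    echo "waiting for docker daemon..."
    sleep 5
done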

 

@nuhll unfortunately I have the permission denied error again. Did it come back for you?

Edited by Kaizac
Link to comment
1 hour ago, Thel1988 said:

Okay, good point.

 

I have changed to this:

Radarr:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

/media <-> /mnt/user/mount_unionfs/google_vfs/

 

For SABnzbd:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

 

It is still awfully slow - do the cache settings on the local share have anything to do with this?

Okay, I got it to work - I needed to delete the extra /media/downloads in the path mappings and then it works :)

Anyway, thanks for your help @DZMM - you do a fantastic job on these scripts :)

Link to comment
21 hours ago, watchmeexplode5 said:

I've been running mine 24/7 for 2+ weeks now without a single issue. It's a much cleaner script, and even though I wasn't hitting bottlenecks or utilizing hardlinks, more optimized is always a plus in my book (plus a minor bump in pull/push speed is always appreciated). 

I'm glad it's working perfectly for you. I've been doing this for about 20 months now and I've only had one issue, when Google had problems with rclone user IDs for about 2 days. I moved home recently and lost my 1 Gbps connection, but even on my 360/180 connection with lots of users I've not had any buffering, with a lot of other traffic occurring at the same time.

Link to comment
7 hours ago, Thel1988 said:

Okay, I got it to work - I needed to delete the extra /media/downloads in the path mappings and then it works :)

Anyway, thanks for your help @DZMM - you do a fantastic job on these scripts :)

Glad you got it working, but looking again at my post I'm not sure if doing:

/media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/
/media <-> /mnt/user/mount_unionfs/google_vfs/

will work. I'm no expert, and what I do to make sure I don't mess things up when moving stuff around in dockers is just use these mappings for all my dockers:

 

/user <-> /mnt/user/
  
/disks <-> /mnt/disks/ (RW Slave)

That way all dockers are consistent and I don't have to remember mappings.
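In docker run terms those two mappings are just the volume flags below (the same ones appear in the Sonarr command further down this page); the slave propagation on /disks is what lets a running container see rclone/mergerfs mounts that appear on the host later:

-v '/mnt/user/':'/user':'rw'
-v '/mnt/disks/':'/disks':'rw,slave'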

  • Like 1
Link to comment

Hardlink support: unionfs didn't support hardlinks, so any torrents had to be copied to rclone_upload and then uploaded. Mergerfs supports hardlinks, so there's no wasted transfer. I've added an upload exclusion for /mnt/user/local/downloads so that download files (intermediate files, files pending Sonarr/Radarr import, or seeds) are not uploaded. For hardlinks to work, Transmission/torrent clients HAVE to be mapped to the same disk 'storage', so files need to be in /mnt/user/mount_unionfs or, for newer users, /mnt/user/mount_mergerfs... hope this makes sense = MUCH BETTER FILE HANDLING, LESS TRANSFER, AND FILES NOT STORED TWICE
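The exclusion mentioned above is just a filter on the rclone upload command; a minimal sketch, where the source path follows the mount command used in this thread and your_remote: is a placeholder for your own rclone remote:

rclone move /mnt/user/local/google_vfs your_remote: --exclude downloads/** --delete-empty-src-dirs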

 

Thanks for this update, I plan to change very soon. Regarding your above instructions, I am still unclear on how torrents will work the new way.

 

Will they still get uploaded to the tdrive? Is it automated? Do they get seeded from the tdrive? Or do they stay local until the ratio is met?

Can you please elaborate on how this part will work now?

 

Thanks

 

Link to comment
3 minutes ago, Viperkc said:

Thanks for this update, I plan to change very soon. Regarding your above instructions, I am still unclear on how torrents will work the new way.

 

Will they still get uploaded to the tdrive? Is it automated? Do they get seeded from the tdrive? Or do they stay local until the ratio is met?

Can you please elaborate on how this part will work now?

 

Thanks

 

Torrents with unionfs:

  1. torrent gets downloaded
  2. torrent gets copied to the unionfs folder - a disk write (time + wear) plus 2x the torrent's space taken up
  3. the copied torrent gets uploaded while the original is seeding
  4. delete the seed whenever

Torrents with mergerfs:

  1. torrent gets downloaded
  2. a hardlink is created in the union folder - no disk write, no noise, no copy time, no 2x torrent space taken up (see the check below)
  3. the hardlinked torrent gets uploaded while the original is seeding
  4. delete the seed whenever
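If you want to see the hardlink for yourself, a link-count check like the one below shows it (the paths and file name are only illustrative, based on the union mount used in this guide):

# both names should report the same inode (%i) and a link count (%h) of 2 -
# one in the torrent client's folder, one in the media folder
stat -c '%i %h %n' /mnt/user/mount_unionfs/google_vfs/downloads/Some.Movie.mkv /mnt/user/mount_unionfs/google_vfs/movies/Some.Movie/Some.Movie.mkv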

 

Link to comment

This isn't exactly Unraid-related, but if anyone is interested, Google gives away $300 of credits to try Google Cloud.

 

I spun up an Ubuntu server, put SABnzbd on it, and installed rclone pointed at my team drive (6 mounts with 6 different accounts shared to the one team drive).

 

I set Radarr to point to my GCP SAB server and then wrote a simple post-processing shell script that uploads to one of the 6 mounts depending on the time. Based on the upload speed I'm getting from GCP to Gdrive (around 45-50 MB/s), I rotate the mount every 4 hours. That way I don't hit the 750 GB per user per day limit.

 

The shell script does an rclone move from my cloud VM to my Google Drive. As the folder is mounted in the union on my local server, Radarr just hardlinks it (mergerfs FTW). Side note: I pause downloads during post-processing, as I found I sometimes had malformed downloads, and these started to cause a post-processing queue which, if left unattended, would make my GCP server run out of disk space.
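The rotation logic can be as simple as picking a remote from the hour of day. A rough sketch of such a post-processing script - the remote names and destination folder are made up, and it assumes SABnzbd passes the job's completed directory as the first argument:

#!/bin/bash
# pick one of 6 rclone remotes based on the current 4-hour window,
# so no single account uploads more than 750GB in a day
remotes=(gdrive1: gdrive2: gdrive3: gdrive4: gdrive5: gdrive6:)
slot=$(( 10#$(date +%H) / 4 ))   # 0-5; 10# avoids octal errors on hours 08 and 09
rclone move "$1" "${remotes[$slot]}media/" --delete-empty-src-dirs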

 

Radarr then has a remote mapping to translate my cloud mount path to the local path.

 

Should be able to get through my backlog in a few days!

  • Thanks 1
Link to comment

@Kaizac - why do you need the recycling bin? Maybe that's the problem.

 

docker run -d --name='sonarr' --net='br0.55' --ip='192.168.50.95' --cpuset-cpus='1,8,9,17,24,25' --log-opt max-size='50m' --log-opt max-file='3' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/':'/user':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/boot/config/plugins/user.scripts/scripts/':'/scripts':'rw' -v '/boot/config/plugins/user.scripts/scripts/unrar_cleanup_sonarr/':'/unrar':'rw' -v '/mnt/cache/appdata/dockers/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'

 

Link to comment
12 minutes ago, DZMM said:

@Kaizac - why do you need the recycling bin? Maybe that's the problem.

 


docker run -d --name='sonarr' --net='br0.55' --ip='192.168.50.95' --cpuset-cpus='1,8,9,17,24,25' --log-opt max-size='50m' --log-opt max-file='3' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/':'/user':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/boot/config/plugins/user.scripts/scripts/':'/scripts':'rw' -v '/boot/config/plugins/user.scripts/scripts/unrar_cleanup_sonarr/':'/unrar':'rw' -v '/mnt/cache/appdata/dockers/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'

 

I'm not using the recycling bin, but I thought you might be. I just don't get why Sonarr can't upgrade files and gets an access-denied error when Radarr works fine with the same settings.

 

For downloads I point to unionfs/Tdrive/Downloads and for series I point to unionfs/Tdrive/Series. Both are R/W, and mount_unionfs is RW-slave. I doubt I need a remote mapping because Sonarr and SAB are on different IPs, but that isn't needed for Radarr either.

Link to comment
1 hour ago, Kaizac said:

I'm not using the recycling bin, but I thought you might be. I just don't get why Sonarr can't upgrade files and gets an access-denied error when Radarr works fine with the same settings.

 

For downloads I point to unionfs/Tdrive/Downloads and for series I point to unionfs/Tdrive/Series. Both are R/W, and mount_unionfs is RW-slave. I doubt I need a remote mapping because Sonarr and SAB are on different IPs, but that isn't needed for Radarr either.

Do you mount unionfs in /mnt/user/ or /mnt/disks? I had problems with /mnt/disks when I first started.

Link to comment
On 1/14/2020 at 10:29 AM, Kaizac said:

@DZMM Do you never have the problem of the Docker daemon not running yet when you run the mount script at startup? Nuhll has the same problem as me. I've put in a sleep of 30, but that's not enough. I'll increase it further to try to get it fixed, but I find it strange that you don't have the same issue.

 

@nuhll unfortunately I have the permission denied error again. Did it come back for you?

I haven't really restarted since it worked... but up to now, knock on wood, everything is working. :)

 

It really was the RO setting - it seems like mergerfs doesn't support too many directories...?!

Link to comment
13 hours ago, nuhll said:

I haven't really restarted since it worked... but up to now, knock on wood, everything is working. :)

 

It really was the RO setting - it seems like mergerfs doesn't support too many directories...?!

Yep, I removed one of my local folders that was in my mergerfs, and now Sonarr works. Too bad that doesn't work.

  • Like 1
Link to comment

First of all, thanks a lot for your scripts. They were very helpful in setting up my Unraid environment in combination with my encrypted Google cloud drive for Plex streaming and downloading. :)

 

I've got a question though which someone here may know the answer to: Does MergerFS have a concept of drive priorities like UnionFS does? My idea was to have certain files duplicated on the local Unraid array for offline playback. I know I could do this by naming them differently than what they're called on the cloud drive, but then they would appear in Plex as two separate versions or I'd have to create an entirely separate library for this, but I wanted to make this completely transparent to the user. Unfortunately I wasn't able to find any information on setting any kind of access priority when merging drives with MergerFS. The order of the drives only seems to determine which drive is used for writing.

 

Does anybody know if this is possible at all with MergerFS?

Link to comment
38 minutes ago, Tabris said:

First of all, thanks a lot for your scripts. They were very helpful in setting up my Unraid environment in combination with my encrypted Google cloud drive for Plex streaming and downloading. :)

 

I've got a question though which someone here may know the answer to: Does MergerFS have a concept of drive priorities like UnionFS does? My idea was to have certain files duplicated on the local Unraid array for offline playback. I know I could do this by naming them differently than what they're called on the cloud drive, but then they would appear in Plex as two separate versions or I'd have to create an entirely separate library for this, but I wanted to make this completely transparent to the user. Unfortunately I wasn't able to find any information on setting any kind of access priority when merging drives with MergerFS. The order of the drives only seems to determine which drive is used for writing.

 

Does anybody know if this is possible at all with MergerFS?

I'm not a mergerfs expert, but that's the way it works in my script, i.e. it writes to the first directory in the union.

 

I'm not sure how your media folder is set up, but the way I'd handle your problem is to have the files you want uploaded in /google_vfs/movies and the ones you want to keep offline (or is it online?!) in google_vfs/movies_offline, and then add an exclusion for /movies_offline to your upload script:

--exclude movies_offline/**
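If you want to check the filter catches the right folder before anything actually moves, rclone's dry-run mode lists what would be transferred (the source path and your_remote: below are placeholders):

rclone move /mnt/user/local/google_vfs your_remote: --exclude movies_offline/** --dry-run -v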

 

Edited by DZMM
Link to comment
2 minutes ago, DZMM said:

I'm not a mergerfs expert, but that's the way it works in my script, i.e. it writes to the first directory in the union.

 

I'm not sure how your media folder is set up, but the way I'd handle your problem is to have the files you want uploaded in /google_vfs/movies and the ones you want to keep offline (or is it online?!) in google_vfs/movies_offline, and then add an exclusion for /movies_offline to your upload script:


--exclude movies_offline/**

 

My merge command looks like this:

mergerfs /mnt/user/media:/mnt/user/media_array:/mnt/disks/media_remote:/mnt/disks/media_team /mnt/disks/media -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

/mnt/user/media = media folder on the cache drive that everything is written to

/mnt/user/media_array = media folder on the array where I'd like to put offline versions of some of my media

/mnt/disks/media_remote = encrypted Gdrive mounted via rclone

/mnt/disks/media_team = encrypted Gdrive team share mounted via rclone, only used if I hit the daily upload limit; I later move those files to the regular Gdrive

Sonarr and Radarr are moving the downloaded files to /mnt/user/media and I'm using an upload script to move those files to the cloud drive later. Plex can see the files from all four sources via /mnt/disks/media.

This all works fine, so there isn't any issue with the expected functionality of MergerFS. I'm only curious about the specific edge case when a file simultaneously exists in more than one source.

 

Where would MergerFS load the file from in that case? In UnionFS you can change that via the order of the merged drives in the mount command, but I haven't found any defined logic for how MergerFS handles this. I tested it by creating four files with different content but the same name in all of the merged sources, and when I accessed the file via /mnt/disks/media it loaded the one from /mnt/disks/media_remote, even if I switched the drive order in the command around. My first thought was that it chooses the newest file by default, because when I edited one of them the timestamp did update accordingly, but when I opened the file it still loaded the same one as before, not the one with the newest timestamp. So it seems to me that this isn't really an expected use case for MergerFS - it expects a file to exist only once across the merged sources.
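For anyone who wants to reproduce that test, it boils down to something like this (the file name and contents are arbitrary; the paths are the branches from the mount command above):

# put the same file name with different contents in each branch
echo local  > /mnt/user/media/dupe-test.txt
echo array  > /mnt/user/media_array/dupe-test.txt
echo remote > /mnt/disks/media_remote/dupe-test.txt
echo team   > /mnt/disks/media_team/dupe-test.txt

# then see which copy the union actually serves
cat /mnt/disks/media/dupe-test.txt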

I guess the only way to achieve what I'd like to do is by naming the local file differently and having it as an additional version in Plex. It's too bad that you can't choose a name for a different version to make it look a bit nicer in Plex, at least not without editing the database entry manually.

Link to comment
