Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


8 minutes ago, aesthetic-barrage8546 said:

I mean I can't run two scripts on the same port at the same time.
I need some help mapping my scripts to different ports, or merging the two rclone mounts into one script.

For now I've added the custom command --rc-addr :5573 and everything seems to have started, but I haven't tested it yet.

The port problem you can fix the way you're doing it now. But I wonder if it isn't another rclone script (maybe the upload script?) that's using the --rc. I can't find the remote control command in the mount script.

 

Anyway, that's not what you are trying to do. You want to combine multiple mounted remotes into 1 merged folder. That's perfectly possible, but not with this script. You will need a custom script which does another mount and then changes the merge. You could try copying part of the script to the top for the first mount.

 

The snippet for the merge would look like this:

mergerfs /mnt/user/local/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs:/mnt/user/mount_rclone/onedrive /mnt/user/mount_mergerfs/gdrive_media_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

 

Test this first with some separate test folders to make sure it works as you'd expect.
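For the extra mount itself, a rough sketch could look like this (the rclone flags here are just an assumption, copy whatever options your main mount already uses):

mkdir -p /mnt/user/mount_rclone/onedrive
rclone mount \
  --allow-other \
  --dir-cache-time 720h \
  --vfs-cache-mode off \
  onedrive: /mnt/user/mount_rclone/onedrive &

Run that before the mergerfs line so both remotes are mounted when the merge is created.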

Link to comment

Moving from Google Drive to UnRaid - SaltBox

 

Problem:

I think I'm not the only one with the same problem as stated in the previous post.

Plex/Emby users with huge libraries got stuck after the recent changes.

Google put Workspace accounts in read-only mode and will surely delete them at some point.

Dropbox was a temporary solution, but they are limiting the "unlimited" plan (after everyone started moving there) and will eventually shut it down as well.

My local UnRaid server can't handle 350TB yet and will require significant investment to do so.

I've decided to do a hybrid setup for now and start spending money to eventually go completely local.

This script is the key to that. Thank you for creating it; before I found it I was hopeless, as I couldn't afford a one-time investment to move all the data.

I had a Saltbox server (a modern version of Cloudbox), so I was able to copy the rclone config with SA accounts and it's mounting the Team Drives nicely.

Media were split across a few Team Drives, which are mounted using rclone's "45. Union" option and show up basically as one Media folder, which is great (same as Saltbox).

 

Idea:

Merge the local folder with the cloud drive under the name unionfs, to keep all Plex/RR's/etc. configs when moving apps.

In that case:

1) all new files would be downloaded locally to the "mnt/user/data/local/downloads" folder

2) the RR's would rename/move them locally to the "mnt/user/data/local/Media" folder and they would never get uploaded

3) the old GDrive files would be mounted as "mnt/user/data/remote/Media"

4) the merged folder would be "mnt/user/data/unionfs/Media"

5) Plex and the RR's would use "mnt/user/data/" linked as "/mnt" in the Docker settings (this is just to keep the folder scheme from Saltbox).

 

Questions:

- How do I avoid the cache mount? I would love the RR's or Plex to write directly to the "mnt/user/data/local/Media" folder. If I create a file or directory there on the command line it works as intended and is visible in the merged "mnt/user/data/unionfs/Media". But when Plex scanned one library (using the "mnt/user/data/unionfs/Media" path), it created metafiles with the proper directory structure, but in the cache mount.

 

- What would be the script/command etc. to start moving data from Gdrive to this "mnt/user/data/local/Media" folder, which at the end of this long process will hold all the media? If it could somehow be manually controlled by folder or by a data cap, that would be great, as I would love to do it while adding one or a few disks at a time (budget restricted).

 

So far the only thing I've had to change in the script was to delete "$RcloneRemoteName" from the path in all 3 variables (to have the local and Team Drive root content directly in the merged "mnt/user/data/unionfs" folder):

 

RcloneMountLocation="$RcloneMountShare"
LocalFilesLocation="$LocalFilesShare"
MergerFSMountLocation="$MergerfsMountShare"
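(For comparison, in the unmodified script these presumably have the remote name appended to each share path, something like:)

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName"
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName"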

 

I hope my thoughts can be useful to someone... I'll gladly help with the parts of the process I was able to figure out (with my limited Linux skills), and I'm hoping for some insights/help as well.

 

Link to comment
9 hours ago, remserwis said:


 

I'm a bit confused by your post, since you seem to be sharing info and also asking questions?

 

Firstly, I think people with big libraries should obviously first decide what they want to do with their cloud media. Do you need all those files, or can you sanitize? Secondly, something many people haven't done until now (there was no need to) is being more restrictive about media quality and file size. Another option, which I think not many have used, is to add something like Tdarr to your flow to re-encode your media.

 

What I think the problem with moving local right now is that we have often nested the download location within our merge/union folder. Now that you disable the uploading, the files are stuck on your local drive (often a cache drive, for speed). So when your mover starts moving your media to your array, your download folder structure also gets moved, breaking dockers like Sab/NZBget.
 

I've thought a lot about how to work around this. One idea might be to create a separate share and add it to the merged folder. But I still expect problems with the RR's not being able to move the files, because your local folder is still the write folder.

 

I think the easiest choice is to leave everything as it is and install the Mover Tuning plugin from the app store. In it, you can point to a file containing the paths of files/directories that the mover should ignore. This way you keep native Unraid functionality with the mover, no need for rclone, and you keep your structure intact whilst changing your share from cache-only to cache-to-array.
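As an illustration, that file is just a plain text list of paths, one per line. The paths below are assumptions based on the setup discussed here (check the plugin's help for whether it expects /mnt/cache or /mnt/user style paths):

/mnt/cache/data/local/downloads
/mnt/cache/data/local/Media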


And in case you didn't know, you can use the /mnt/user0/ path to write directly to your array, bypassing your cache. However, within the union/merger folder I don't see how we can use that and also have the speed advantages of using the merged folder with a root structure of /mnt/user/ as /user.

 

Regarding the script, I think you have no other option than to define the directories you want to move over manually. Something like this:

#!/bin/bash

rclone copy gdrive_media_vfs:Movies '/mnt/user0/local/gdrive_media_vfs/Movies' -P --buffer-size 128M --drive-chunk-size 32M --checkers 8 --fast-list --transfers 6 --bwlimit 80000k --tpslimit 12

exit

 

I don't see why you would want to pump all the media over without being selective, though. Getting 350TB local is not something you will achieve without a big investment in drives and a place to put them. You'd almost certainly need to move to server equipment to house that many drives. And at current prices, you'd be looking at roughly 5k just for the drives you need.

 

So you could also add another folder for each category, like Movies_Keep, and then add that folder to your Plex libraries. Then within the RR's you can determine per item whether you want it local or not. The RR's will take care of moving the files, and Plex won't notice a difference. And you can just run your rclone copy script to move those folders' contents, without needing any further specification.

Link to comment

Thank you for taking the time, Kaizac.

I'm sorry for a little confusion... it's because I haven't mastered or completely understood the background mechanics of rclone or mergerfs.

I just learn by changing settings and watching the results.

 

So this cache I was talking about... it looks like it's the mergerfs mount created with the accessed folder structure for fast access. I don't care too much about it (it's tricking the system so the cloud files look like they're offline). Originally I thought it was content and files downloaded by Plex or the RR's. It looks like my RR's, through SABnzbd, download properly to the local folder in my data share, and it gets merged with the online files provided by rclone in the unionfs folder. LOVE IT.

 

I will check today, but my Unraid cache should be irrelevant... the data folder will keep freshly downloaded files on the Unraid cache until the mover puts them on the array, but that happens invisibly at the share level, so it shouldn't affect the merged unionfs etc.

 

I will set up this mergerfs cache mount to stay on the cache all the time (backed up to the array only) so the mover won't touch it, and access should be very fast.

 

And finally, I mentioned my 350TB library because in the last 2 months Google shut down its unlimited cloud plan... Dropbox, being the only alternative, is limiting/shutting it down as we speak... so I know (also from previous replies in this thread) there are plenty of data hoarders like me, with libraries between 100TB and 1PB for Plex purposes, who are looking for a way out.

Yes, the only way out is a costly investment, $8K-$10K minimum, in drives, SAS cards and better servers... My hybrid approach is supposed to be a "one disk at a time" solution for them, where you can upgrade your Unraid server gradually as you download data from Gdrive. I think Google will give some notice before deleting all the currently read-only accounts with hundreds of TBs on them.

 

Thank you also for the script to copy folders. Going folder by folder won't work for me, or at least will be hard to use, as I only have 6 of them.

Hopefully there is another option, maybe moving (not copying) data up to, for example, a 20TB quota (when one disk is added), so that when it's executed again a month later with another disk added, it moves the next batch of files.

 

I've managed directories and files in Plex for years now... and splitting them into many folders while managing them with the RR's is a pain in the ass.

Constantly changing root folders to match the RR's, Plex and genre (kids, movies etc.) doesn't work well with 15K+ movies and 3K+ TV shows.

 

To make things funnier, I intend to encode some old stuff to H.265 once I have it locally... some of it the RR's will re-download in that format, but as the library of 4K HDR UHD movies with lossless quality keeps growing (each one is 60-80GB), I'm looking at an even higher total space requirement.

 

Anyway, this post has derailed from the main topic... I'm sorry for that... but reading past replies to learn, I've seen a lot of similar interest from others.

 

 

 

Link to comment
5 hours ago, remserwis said:


I think you're talking about the rclone VFS cache? That's just there to keep recently played files cached locally, so you don't have to download them again every time you start accessing them. It is not some file system you can access.

I was talking about the Unraid cache, which you use for a share.
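(For reference, the size and location of that VFS cache are controlled by the rclone mount flags; a minimal illustration, with placeholder values:)

rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
  --allow-other \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 96h \
  --cache-dir /mnt/user/mount_rclone/cache &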

 

About the moving and Google deleting accounts: I think people are expecting too long a period during which they can keep access to the files. Maybe Google has some kind of legal obligation to keep access to the files as long as the user is paying, but I doubt it. I would count on 6 months maximum before accounts get deleted.

I think you are also one of the VERY few who will go the full local route. The investment is just way too big, and there are much cheaper solutions that cause a lot fewer headaches.

 

Moving files only when you have space again is still foreign to me. I think most will first make a selection of which data they want to keep locally, instead of just moving everything over without prioritizing. But if that's what you really want to do, you could use the rclone move script to move from your cloud to your /mnt/disks/diskXXX once you've added a new one. Once that drive is full, the script won't work anymore.
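If you really want to do it in batches, a rough sketch using rclone's --max-transfer flag could cap each run at roughly one new disk's worth of data (the remote name and target path here are assumptions):

#!/bin/bash
# Hypothetical example: move at most 18T per run, then stop until the next disk is added.
rclone move gdrive_media_vfs:Movies /mnt/disks/disk_new/Movies -P --max-transfer 18T --transfers 4 --checkers 8

exit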

 

I still believe the 2 root paths per folder with the RR's is the way to go for most people who want to move files locally, but selectively. Setting that up and adding the paths in Plex would take 15 minutes maximum. After that, you can go through Radarr and Sonarr per item.

 

Obviously, with your volume, this is time-consuming. But like I said, your plan is not something many will follow. I don't know your personal and financial situation, so I don't want to make too many assumptions. But knowing that I just purchased 88 TB of storage on deals and still spent 1,300 euros, I don't think going to 300+ TB is reasonable. And I don't even plan to go local; I just needed drives for backups and such.

 

I don't think you need to worry about the main post anymore. It's not possible anymore, so now it's mostly about moving local again ;).

 

Link to comment
15 minutes ago, fzligerzronz said:

I have a problem. I have my uploader script set to run every 2 hours or so, to keep my drives from getting too full.

When it runs, though, it says the script is already running, and I have to run the rclone unmount script just to get the uploader to run properly.

 

Has anyone had this problem before?

It's a checker file in appdata/other/rclone/remote/gdrive_media_vfs. Just delete that file and you can start the upload.
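From the terminal that would be something like this (the exact checker file name may differ on your system, so list the folder first and delete whatever checker file you find there):

ls /mnt/user/appdata/other/rclone/remote/gdrive_media_vfs/
rm /mnt/user/appdata/other/rclone/remote/gdrive_media_vfs/upload_running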

Link to comment
10 hours ago, fzligerzronz said:

Will I have to keep deleting this all the time, or will it permanently stop the checking?

No, it usually happens when you've had an unclean shutdown, or when the script didn't finish correctly. In that case you need to remove the file manually, or with some automation, for example at the start of the array.

 

If the script runs correctly, it will delete the file at the end. It's there to prevent the same script from starting a second instance of the same job.
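If you want to automate that, a small User Scripts script set to run at the start of the array could clear any leftover checker files; a hypothetical sketch (paths and file names are assumptions, match them to your own install):

#!/bin/bash
# Clear stale rclone checker files left behind by an unclean shutdown (hypothetical example).
rm -f /mnt/user/appdata/other/rclone/remote/*/mount_running*
rm -f /mnt/user/appdata/other/rclone/remote/*/upload_running*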

Link to comment
  • 3 weeks later...

I just came across an issue after restarting my server where mergerfs wasn't installing. After some investigation, it seems the latest build of mergerfs does not work properly, and I solved the problem by adding the previous (and currently listed as latest) version to the tag (line 184). I'm not sure why the latest build is looking for 2.37 when 2.36 is listed by trapexit as the latest on his GitHub.

I hope this helps someone escape some frustration, and that the problem resolves itself soon.

 

Link to comment
16 hours ago, ianstagib said:


Just what I was looking for! Thanks!

Link to comment
On 9/4/2023 at 11:21 AM, ianstagib said:


 

Thanks for this...

Link to comment

Am I the only one for whom mergerfs is no longer working?

 

edit:

Thanks for the fix, that's working again now... -e TAG worked. https://github.com/trapexit/mergerfs/issues/1246

 

edit2:

 

I came up with 


cd /tmp

wget https://github.com/trapexit/mergerfs/releases/download/2.36.0/mergerfs-static-linux_amd64.tar.gz
tar -xzvf mergerfs-static-linux_amd64.tar.gz
mv /tmp/usr/local/bin/mergerfs /bin
rm -r /tmp/usr/

sleep 10
 

 

edit3:

 

trapexit came up with a better version:


VER=2.36.0

wget -qO- https://github.com/trapexit/mergerfs/releases/download/${VER}/mergerfs-static-linux_amd64.tar.gz | tar xvz -C /
 

 

It installs it in /usr/local/bin; I'm not quite sure if that makes a difference or not, but I'll test with the next restart. Anyway, my approach should definitely work (just not as elegant).
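(If you want to check which binary is actually being picked up after a restart, something like this should tell you:)

which mergerfs
mergerfs --version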

Link to comment
On 9/7/2023 at 4:19 PM, NewDisplayName said:


 

Thank you thank you.

I just modified my user mount script line from 

docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
to 
docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm -e TAG=2.36.0 trapexit/mergerfs-static-build

and all is working again.

Link to comment

Hi, I just had a power cut, and after running my scripts I can't mount them anymore.

It says:

make[2]: Leaving directory '/tmp/mergerfs/libfuse'
make[2]: *** [Makefile:125: build/fuse_loop.o] Error 1
make[1]: *** [Makefile:104: objects] Error 2
make[1]: Leaving directory '/tmp/mergerfs/libfuse'
make: *** [Makefile:257: libfuse] Error 2
strip: 'build/mergerfs': No such file
/tmp/build-mergerfs: line 18: build/mergerfs: not found
cp: can't stat 'build/mergerfs': No such file or directory
mv: cannot stat '/mnt/user/appdata/other/rclone/mergerfs/mergerfs': No such file or directory
12.09.2023 11:37:06 INFO: *sleeping for 5 seconds
12.09.2023 11:37:11 ERROR: Mergerfs not installed successfully. Please check for errors. Exiting.

 

Hope someone can help.

 

 

Thanks @Josephgrosskopf, the last line made it work again.

Link to comment

For those switching to local storage who don't want to buy HDDs constantly, I have used the power of ChatGPT and created a script to clean up TV and movies after a certain time period or when a minimum free-space threshold is met. Combined with Overseerr and Sonarr/Radarr integration, this means I can just request anything I want to watch and it will stay on my server until the time or space threshold is met, at which point it gets deleted. If in the future you want to download it again, you use the unmonitor feature in the ARRs to allow Overseerr to re-request it. Combined with FileFlows for HEVC conversion, this should allow me to store content for a significant amount of time. The trick is to add stuff you want to your Plex watchlist so you don't forget about it!

 

https://github.com/bolagnaise/plex-cleaner-script
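For anyone who prefers to see the idea without the full script, here is a stripped-down, hypothetical sketch (not the linked script itself; paths and thresholds are placeholders, and since it deletes files you should test it with -print before using -delete):

#!/bin/bash
# Hypothetical cleanup sketch: only delete old media when free space runs low.
MEDIA="/mnt/user/data/local/Media"   # assumed media path
MIN_FREE_GB=500                      # assumed free-space threshold
MAX_AGE_DAYS=90                      # assumed age threshold

free_gb=$(df -BG --output=avail "$MEDIA" | tail -1 | tr -dc '0-9')
if [ "$free_gb" -lt "$MIN_FREE_GB" ]; then
  # while testing, swap -delete for -print to see what would be removed
  find "$MEDIA" -type f -mtime +"$MAX_AGE_DAYS" -delete
  find "$MEDIA" -type d -empty -delete
fi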

Link to comment
On 9/12/2023 at 5:20 AM, Rysz said:

Also leaving this here in case someone should need the latest mergerFS 2.37.1 for permanent installation on their UNRAID system without having to rely on the Docker image anymore:

 

 

 

Thanks. Is this meant to replace the script entirely? Or if we use it, what do we need to change in the script? I'm looking forward to not having to install MergerFS on boot via script.

Link to comment
  • 3 weeks later...

I have a question I hope someone can help with.

Now that Google Drive is read-only, I'm facing the problem that Sonarr sometimes deletes a show after finding another version. The problem is that it now downloads locally, since it can't upload anymore, and therefore deletes the version on Google Drive. Can I somehow have Sonarr not delete the Google Drive version? Would the recycle bin be an option, or would I not get a chance to move the files back afterwards since it's read-only?

Link to comment