Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

On 8/29/2023 at 2:12 AM, axeman said:

 

Yup. Cloud storage is dead for many of us.

 

What I have to work out is how to have files on a read-only cloud drive, and then have them deleted from that cloud drive and stored locally whenever one of the starr apps wants to upgrade the file (usually radarr).

Link to comment
14 hours ago, JonathanM said:

I think I know what you mean, but if it's read only, you can't delete. Deleting means updating the table of contents, which is a write.

 

I see. Well, I assume deleting the files is something Google would let me do, since they're the ones wanting me to free up space. I'm not actually at the read-only stage yet, but it will happen eventually. To quote some random from Reddit:

Quote

Mine is already in read-only mode. You can add folders, rename folders, rename files & move existing files & folders around. The only thing you cannot do is, obviously, add new files.

 

So I'll call it 'read-only' in the Google sense of the word (they offered unlimited storage for years and years and are no longer doing so). What I'm thinking is: maybe I can have unionfs/mergerfs mount the local media folder and the cloud files, so that Radarr/Lidarr/Sonarr think they're all housed in the same directory on the same file system, and then just skip the next step that seedbox/cloud solutions (like Cloudbox/Saltbox) do, which is uploading the files to the cloud with Cloudplow. I assume Radarr/Lidarr/Sonarr would still be able to delete the previous file (the one located in the cloud) and then simply store the new one locally on Unraid.

Edited by Lebowski89
Link to comment
3 hours ago, Lebowski89 said:

 

I see. Well, I assume deleting the files is something Google would let me do, since they're the ones wanting me to free up space. I'm not actually at the read-only stage yet, but it will happen eventually. To quote some random from Reddit:

 

So I'll call it 'read-only' in the Google sense of the word (they offered unlimited storage for years and years and are no longer doing so). What I'm thinking is: maybe I can have unionfs/mergerfs mount the local media folder and the cloud files, so that Radarr/Lidarr/Sonarr think they're all housed in the same directory on the same file system, and then just skip the next step that seedbox/cloud solutions (like Cloudbox/Saltbox) do, which is uploading the files to the cloud with Cloudplow. I assume Radarr/Lidarr/Sonarr would still be able to delete the previous file (the one located in the cloud) and then simply store the new one locally on Unraid.

 

What @JonathanM gave is the technically correct explanation of how read-only permissions work with file systems.

What Google is doing is just limiting uploads and calling it "read only". So if you have set up your system as written in this topic, you can just disable your upload script. Everything will still appear as one folder, but your new files will simply stay stored locally. Just make sure /mnt/user/local/ is not cache-only, or you will run out of space eventually.
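For illustration, a minimal sketch, assuming the upload script ultimately boils down to a single rclone move from the local folder to the remote (the paths, remote name and flags below are assumptions, not the guide's exact line):

# Hypothetical, simplified upload step from an upload script.
# To keep new files local, comment this line out (or disable the upload
# script's schedule in the User Scripts plugin); the mergerfs mount will
# keep presenting local and remote as one folder either way.
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: --min-age 15m -v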

Edited by Kaizac
Link to comment
On 8/6/2023 at 10:47 AM, remserwis said:

Idea:

Merge local folder with cloud drive under the name unionfs to keep all Plex/RR's/etc configs when moving Apps.

In that case:

1) all new files would be downloaded locally to the "/mnt/user/data/local/downloads" folder

2) the RR's would rename them locally into the "/mnt/user/data/local/Media" folder, and they would never get uploaded

3) old GDrive files would be mounted at "/mnt/user/data/remote/Media"

4) the merged folder would be "/mnt/user/data/unionfs/Media"

5) Plex and the RR's would use "/mnt/user/data/", mapped as "/mnt" in the Docker settings (this is just to keep the folder scheme from Saltbox).
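As a rough illustration of that layout (a sketch only; the branch order and mergerfs options are assumptions based on later posts in this thread, not remserwis's exact command):

# mount the local branch first, then the rclone remote, merged into the unionfs folder
mergerfs /mnt/user/data/local:/mnt/user/data/remote /mnt/user/data/unionfs \
    -o category.create=ff,cache.files=partial,dropcacheonclose=true,allow_other,umask=002,fsname=unionfs
# category.create=ff sends newly created files to the first branch (the local folder),
# so imports and renames from the RR's land on the array rather than on GDrive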

 

This is what I'm aiming for, btw. Missed this post before I made mine. I'm just about there, just dealing with some permission issues with the docker containers trying to access the mergerfs folder. In my case, I have:

 

Quote

# Rclone & mergerfs mount points
MOUNT_POINT="/mnt/user/data/remote"
MOUNT_POINT_LOCAL="/mnt/user/data/local"
MOUNT_POINT_MERGERFS="/mnt/user/data/mergerfs"

 

The remote GDrive files correctly show up at the remote mount point and are successfully merged into the mergerfs mount point, with Radarr and co's Docker data path pointed at /mnt/user/data/.

 

But I just tried to add the mergerfs folder as the root folder in Radarr and it was a no-go. Permission issue: 'Folder '/data/mergerfs/Media/Movies/' is not writable by user 'hotio'' (using hotio containers for the most part). Will double-check permissions and mount points and go from there.

Edited by Lebowski89
Link to comment
13 minutes ago, Lebowski89 said:

 

This is what I'm aiming for, btw. Missed this post before I made mine. I'm just about there, just dealing with some permission issues with the docker containers trying to access the mergerfs folder. In my case, I have:

 

 

The remote GDrive files correctly show up at the remote mount point and are successfully merged into the mergerfs mount point, with Radarr and co's Docker data path pointed at /mnt/user/data/.

 

But I just tried to add the mergerfs folder as the root folder in Radarr and it was a no-go. Permission issue: 'Folder '/data/mergerfs/Media/Movies/' is not writable by user 'hotio'' (using hotio containers for the most part). Will double-check permissions and mount points and go from there.

You're merging a subfolder with its parent folder. That doesn't work.

Link to comment
10 hours ago, Kaizac said:

You're merging a subfolder with its parent folder. That doesn't work.

What do you mean? I did it in the same format as Saltbox. It works: remote files are mounted in remote and merged into mergerfs. And to test if it was working correctly, I copied some files into the local folder and they were successfully merged into the mergerfs folder, showing up alongside the remote files.

Edited by Lebowski89
Link to comment
11 hours ago, Lebowski89 said:

What do you mean? I did it in the same format as Saltbox. It works: remote files are mounted in remote and merged into mergerfs. And to test if it was working correctly, I copied some files into the local folder and they were successfully merged into the mergerfs folder, showing up alongside the remote files.

 

Sorry, the mobile view screwed up your quote, and it looked like you merged everything into /mnt/user/data/, which is the parent folder. But it merges into /mnt/user/data/mergerfs, which is fine.

 

The permission problems have been discussed a couple of times in this topic; you don't need to go back too far to find them.

 

Can you show your docker container template for radarr? And are you downloading with Sabnzbd or something? What path mapping is used there?

 

Also, you can go to the separate /mnt/user/data/xx folders (local/remote/mergerfs) in your terminal:

cd /mnt/user/data/local/

and type

ls -la

 

It shows you the owners of the folder. I would expect it to say "nobody/users".
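If the folders turn out to be owned by root instead, a hedged fix on Unraid (where nobody is UID 99 and users is GID 100) would be something along these lines:

# fix ownership and permissions on the local branch only
chown -R nobody:users /mnt/user/data/local
chmod -R u=rwX,g=rwX,o=rX /mnt/user/data/local
# the rclone mount itself takes its apparent ownership from the --uid/--gid
# mount options, so the local branch is the one worth fixing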

Link to comment
  • 2 weeks later...

How much work would it take to port this over to Ubuntu?

I have already tried running this, but I'm getting:

Failed to create file system for "teldrive:": didn't find section in config file

 

There is a config, up and running. Does anybody have an idea of how much work it would take to port it over?
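That rclone error generally means it can't find a [teldrive] section in the config file it is actually reading, which is worth ruling out first; a quick, hedged check:

rclone config file     # shows which rclone.conf is being used
rclone listremotes     # the teldrive: remote should be listed here
# if it isn't, point the command at the right config explicitly, e.g.
# rclone mount teldrive: /mnt/remote --config /path/to/rclone.conf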

 

thanks all! :)

Link to comment
On 10/20/2023 at 5:40 PM, drpudding said:

Made this using the mergerFS and rclone plugins that were already on the CA page.
It supports rclone subfolder mounting and two rclone remotes (for those that need it).

https://github.com/drpoutine/rclone_mount/tree/latest/Unraid_Scripts

hope someone finds this useful :)

 

Good script. I found the rclone/mergerfs options a bit off for keeping new files in the local directory: files were being uploaded straight to the remote. That's sub-optimal if you want to use something like Cloudplow for uploads, or don't want uploads at all. I found these do the job for a Google remote (the directory cache time is set high because the remote will be polled frequently for changes; if you were running something like an SFTP remote you'd want a very low directory cache time):

 

Quote

COMMON_RCLONE_OPTIONS="--use-mmap --dir-cache-time 8760h --timeout 10m --umask 002 --uid 99 --gid 99 --allow-other --poll-interval=15s --buffer-size 64M --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2048M --drive-pacer-min-sleep=10ms --drive-pacer-burst=1000 --log-level $LOG_LEVEL"

 

Quote

 $MERGERFS_BIN $MOUNT_POINT_LOCAL:$MOUNT_POINT $MOUNT_POINT_MERGERFS -o category.create=ff,async_read=true,cache.files=partial,dropcacheonclose=true,minfreespace=0,fsname=mergerfs,xattr=nosys,statfs=base,statfs_ignore=nc,umask=002,noatime

 

(With the local folder mounted first.) Essentially the same options as Saltbox uses, with category.create=ff being the key setting to have new files actually show up in the local directory.
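A quick way to confirm that behaviour (paths assume the mount points quoted a few posts up):

touch /mnt/user/data/mergerfs/Media/Movies/mergerfs-test.txt
ls -la /mnt/user/data/local/Media/Movies/    # the new file should appear on the local branch
ls -la /mnt/user/data/remote/Media/Movies/   # and not on the remote
rm /mnt/user/data/mergerfs/Media/Movies/mergerfs-test.txt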

Edited by Lebowski89
Link to comment
  • 2 months later...
On 12/21/2021 at 12:43 PM, MadMatt337 said:

I am having an issue on a new setup in which I do not have permission to add files to the created folders under /mnt/user/mount_mergerfs/gdrive_media_vfs

 

The mount script appears to run fine without errors, but I can't add media to the above folder to test; I tried using both Windows via SMB and Krusader. I changed very little in the mount script other than the rclone remote name.

 

This is my mount script:

 

And my log:

 

Where is this mount script from?

Link to comment
  • 2 weeks later...
On 9/21/2023 at 7:08 PM, thekiefs said:

 

Thanks. Is this meant to replace the script entirely? Or if we use it, what do we need to change in the script? I'm looking forward to not having to install MergerFS on boot via script.

 

Bumping this up: does anyone know how I can stop relying on the script to build MergerFS? The script on boot is not very stable and I sometimes have to go in and force-start it.

Link to comment
  • 3 weeks later...
4 hours ago, Bjur said:

Can anyone tell me the best way to move the files back locally from Google?

I got an email from Google saying they will delete my files within x days.

 

What kind of Google account do you have? Mine has been in read-only for some time. Been putting off moving the files back but might have to up the priority if they're threatening deletion 😳

Link to comment
5 hours ago, Bjur said:

Can anyone tell me the best way to move the files back locally from Google?

I got an email from Google saying they will delete my files within x days.

I think it depends on your connection. But my download speed was a lot faster than my upload speed (1 Gbps down vs 40 Mbps up), so I just ended up moving groups of folders manually. But I thought a few posts up somebody had noted how to reverse the script to change it to download.
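A hedged sketch of that reversal, using the remote and folder names from this guide (adjust to your own setup):

# pull everything off the remote into the local folder
rclone copy gdrive_media_vfs: /mnt/user/local/gdrive_media_vfs \
    --transfers 8 --checkers 16 -v --progress
# swap "copy" for "move" if you want the remote emptied as files come down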

Link to comment
19 hours ago, MTA99 said:

 

What kind of Google account do you have? Mine has been in read-only for some time. Been putting off moving the files back but might have to up the priority if they're threatening deletion 😳

I had Google Workspace after being forced off the old G Suite. It's been read-only since July, I think, so I've got 30 days to move it over before they delete all of it.

@axeman Thanks, I have 1 Gbps download speed, so I think I will move it manually then. Did you use Double Commander or mc to move it over manually? I want to keep all the attributes, timestamps, etc.

Edited by Bjur
Link to comment

Similar to everyone else, I have until the end of the month to move my files back. How can I limit the speed of rclone? I want to run it 24/7, but I need it not to hog my whole connection since I work from home.

Personally, I've been using Krusader to move them manually, but it can crash during the night if the move job is too large.
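rclone's --bwlimit flag covers the throttling; a hedged example, using the remote and paths from earlier in this guide:

# flat cap of 8 MB/s
rclone copy gdrive_media_vfs: /mnt/user/local/gdrive_media_vfs --bwlimit 8M -v --progress
# or run flat out overnight and throttle during working hours
rclone copy gdrive_media_vfs: /mnt/user/local/gdrive_media_vfs --bwlimit "08:00,4M 18:00,off" -v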

Link to comment
On 2/8/2024 at 6:49 PM, axeman said:

I think it depends on your connection. But my download speed was a lot faster than my upload speed (1 Gbps down vs 40 Mbps up), so I just ended up moving groups of folders manually. But I thought a few posts up somebody had noted how to reverse the script to change it to download.

Did you change something in the mount script?

I also have a 1 Gbps line but only get 67 MB/s when downloading; it should be almost double that. I'm using Double Commander, but time is important, so I hope someone has a better solution.

Link to comment
On 9/22/2023 at 3:08 AM, thekiefs said:

 

Thanks. Is this meant to replace the script entirely? Or if we use it, what do we need to change in the script? I'm looking forward to not having to install MergerFS on boot via script.

I would love to know if anyone has managed to use the CA plugin for this in the script.

I am unable to get my script running anymore; it fails at installing MergerFS. Any ideas what's going on?

Here are the logs:

Script location: /tmp/user.scripts/tmpScripts/rclone_mount script/script
Note that closing this window will abort the execution of this script
19.02.2024 00:30:32 INFO: Not creating local folders as requested.
19.02.2024 00:30:32 INFO: Creating MergerFS folders.
19.02.2024 00:30:32 INFO: *** Starting mount of remote Gdrive
19.02.2024 00:30:32 INFO: Checking if this script is already running.
19.02.2024 00:30:32 INFO: Script not running - proceeding.
19.02.2024 00:30:32 INFO: *** Checking if online
19.02.2024 00:30:33 PASSED: *** Internet online
19.02.2024 00:30:34 INFO: Success Gdrive remote is already mounted.
19.02.2024 00:30:34 INFO: Mergerfs not installed - installing now.
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/community/x86_64/APKINDEX.tar.gz
v3.18.6-35-g28268c5b274 [https://dl-cdn.alpinelinux.org/alpine/v3.18/main]
v3.18.6-36-g70c32235660 [https://dl-cdn.alpinelinux.org/alpine/v3.18/community]
OK: 20076 distinct packages available
(1/9) Installing ca-certificates (20230506-r0)
(2/9) Installing brotli-libs (1.0.9-r14)
(3/9) Installing libunistring (1.1-r1)
(4/9) Installing libidn2 (2.3.4-r1)
(5/9) Installing nghttp2-libs (1.57.0-r0)
(6/9) Installing libcurl (8.5.0-r0)
(7/9) Installing libexpat (2.6.0-r0)
(8/9) Installing pcre2 (10.42-r1)
(9/9) Installing git (2.40.1-r0)
Executing busybox-1.36.1-r2.trigger
Executing ca-certificates-20230506-r0.trigger
OK: 18 MiB in 24 packages
Cloning into 'mergerfs'...
2.39.0
Note: switching to '2.39.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

git switch -c

Or undo this operation with:

git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at ae6c4f7c Rework mergerfs vs X section of readme (#1298)
/tmp/build-mergerfs: line 12: ./tools/install-build-pkgs: not found
/tmp/build-mergerfs: line 14: make: not found
/tmp/build-mergerfs: line 16: strip: not found
/tmp/build-mergerfs: line 18: build/mergerfs: not found
cp: can't stat 'build/mergerfs': No such file or directory
mv: cannot stat '/mnt/user/appdata/other/rclone/mergerfs/mergerfs': No such file or directory
19.02.2024 00:30:45 INFO: *sleeping for 5 seconds
19.02.2024 00:30:50 ERROR: Mergerfs not installed successfully. Please check for errors. Exiting.

I am at a loss as to how to get it to install MergerFS. Any ideas?
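One hedged thing to check before digging into the build error itself: if the mergerfs plugin from the CA page (mentioned earlier in this thread) is installed, the binary may already be on the system, in which case the script's Docker-based build step isn't strictly needed:

command -v mergerfs && mergerfs --version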

Link to comment
