[Support] Rclone-mount - with FUSE support for Plex/Emby/Sonarr etc. (beta)



Looking at trying this versus the plugin. When I run `rclone version` from the console, it shows version 1.45.

I have version v1.47.0-073-gcff85f0b-beta running using the plugin. How do I pull the most recent beta from the Rclone-mount docker?

 

I tried 

curl https://rclone.org/install.sh | sudo bash -s beta

from within the Rclone docker console, but it won't run; the container throws errors:

 

/ # curl https://rclone.org/install.sh | sudo bash -s beta
sh: curl: not found
sh: sudo: not found
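The script can't even start here: the image ships neither curl nor sudo. A small sketch to see which downloader (if any) the container has before retrying (assuming an Alpine/BusyBox base, which the `/ #` prompt suggests):

```shell
# Check which download tool the container actually has before piping install.sh.
if command -v curl >/dev/null 2>&1; then
  echo "curl is available"
elif command -v wget >/dev/null 2>&1; then
  echo "falling back to wget"
else
  # Alpine images can usually add curl via apk (assumption: the image is Alpine-based).
  echo "no downloader found; try: apk add curl bash"
fi
```

There is also no sudo because the container console is already root; if apk is available, `apk add curl bash` and then `curl https://rclone.org/install.sh | bash -s beta` may work, but anything installed this way is lost when the container is recreated.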

Executing => rclone mount --config=/config/.rclone.conf --allow-other --read-only cache: /data

2020/01/03 22:32:52 Failed to create file system for "cache:": failed to create cache directory /root/.cache/rclone/cache-backend: mkdir /root/.cache: permission denied

I have a cache of an encrypted GDrive mount.
Mounting the normal gdrive or the secure mount works fine, but when trying to mount the cache, the above is what I get.
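The failure is rclone falling back to its default cache location under /root/.cache, which isn't writable in this container. A possible workaround (a sketch, not verified against this image): point --cache-dir at a mapped, writable path, for example under /config:

```
mkdir -p /config/rclone-cache
rclone mount --config=/config/.rclone.conf \
  --cache-dir=/config/rclone-cache \
  --allow-other --read-only cache: /data
```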


So I have a share called "Recordings" that I have Shinobi record the security cam footage into. I tried to let rclone mount it to my designated sync folder on my Google Drive. But it appears to change the directory permissions so that only user "911" has write permission.

 

With "rclone mount" docker turned off:

root@Tower:/mnt/user# ll -d Recordings
drwxrwxrwx 1 nobody users 6 Jul 13 04:13 Recordings/

 

With "rclone mount" docker running:

root@Tower:/mnt/user# ll -d Recordings
drwxr-xr-x 1 911 911 0 Jul 13 04:17 Recordings/

This prevents me from writing anything to it when it's mounted on my PC as an SMB share. I'm pretty sure it will also prevent Shinobi from writing videos into it. How can I resolve this issue? I googled various combinations of "rclone" and "user 911" but wasn't able to find anything other than this thread and the other rclone thread on the Unraid forum.

On 7/13/2020 at 4:33 AM, Phoenix Down said:

So I have a share called "Recordings" that I have Shinobi record the security cam footage into. … But it appears to change the directory permissions so that only user "911" has write permission. …

In case anyone is looking for answers, I figured it out. To get the rclone mount to have the same user/group/permissions, you have to add the following to your RCLONE_MOUNT_OPTIONS (under Show More Settings):

 

--uid=99 --gid=100 --umask=0000

User "nobody" has a uid of 99, and group "users" has a gid of 100. The umask matches the default permissions of the share directory.
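A quick demonstration of how a umask maps to those permissions (this is plain shell umask behaviour; rclone's --umask flag follows the same subtraction from 0777 for directories and 0666 for files):

```shell
# umask 0000 leaves directories at 777 (drwxrwxrwx) and files at 666,
# matching the nobody:users defaults on an Unraid share.
demo=$(mktemp -d)
(
  umask 0000
  mkdir "$demo/open"
  touch "$demo/open/file"
  stat -c '%a %n' "$demo/open" "$demo/open/file"   # 777 ... then 666 ...
)
(
  umask 0022   # the usual root default; this is what produces drwxr-xr-x
  mkdir "$demo/masked"
  stat -c '%a %n' "$demo/masked"                   # 755 ...
)
rm -rf "$demo"
```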

On 7/19/2020 at 2:07 AM, Phoenix Down said:

In case anyone is looking for answers, I figured it out. To get the rclone mount to have the same user/group/permissions, you have to add --uid=99 --gid=100 --umask=0000 to your RCLONE_MOUNT_OPTIONS (under Show More Settings). …

Actually, I give up... I can't figure this out. The /mnt/user/ (and /mnt/user0/) level directories (i.e. the top-level share directories) are owned by user "nobody" of group "users". However, the directories and files under the share directory are owned by different users; in the case of my Recordings share, user "root" of group "root". The issue is that while the Google Drive sync now works, using rclone mount on the share directory /mnt/user0/Recordings causes it to become detached from /mnt/user/Recordings:

root@Tower:~# ll /mnt/user/Recordings/EVSxZIbF5O/v9rIt25frt -t | head -n 5
total 10046608
-rwxrwxrwx 1 root root  42222057 Jul 19 16:01 2020-07-19T16-00-01.mp4*
-rwxrwxrwx 1 root root 149096106 Jul 19 16:00 2020-07-19T15-55-02.mp4*
-rwxrwxrwx 1 root root 148676838 Jul 19 15:55 2020-07-19T15-50-03.mp4*
-rwxrwxrwx 1 root root 147915613 Jul 19 15:50 2020-07-19T15-45-02.mp4*

root@Tower:~# ll /mnt/user0/Recordings/EVSxZIbF5O/v9rIt25frt -t | head -n 5
total 10018788
-rw-rw-rw- 1 root root  41576895 Jul 20 10:31 2020-07-20T10-30-07.mp4
-rw-rw-rw- 1 root root 150686208 Jul 20 10:30 2020-07-20T10-25-06.mp4
-rw-rw-rw- 1 root root 145905208 Jul 20 10:25 2020-07-20T10-20-10.mp4
-rw-rw-rw- 1 root root 144632439 Jul 20 10:20 2020-07-20T10-15-09.mp4

This means that if I mount the share on my Windows box, I don't see any new files written into /mnt/user0/Recordings (which is where I have Shinobi set to write). And if I try to rclone mount /mnt/user/Recordings instead, rclone fails completely with permission errors, regardless of whether I set the uid/gid to nobody/users or root/root.

 

I guess the moral of this story is: don't try to rclone mount a Share directory.

 


I don't think I'm the only one missing something; a brief tutorial would be super helpful.  In Unraid, I can't create the conf file until I install the container using CA; not a big deal, but moving on.  I pulled the container and then tried to create the config file (note the container has to be up). It's also not clear if I should run the command from the host terminal or the container console; I did the former.  I made it partway through the guided setup and got to the part where I need to load a webpage to allow rclone access to my personal gdrive (just testing).  The URL uses localhost... (127.0.0.1/ etc.)  Basically stuck, as I can't get an API key from Google to continue.  Did anyone else encounter this?

Any other tutorials I could find seem to be running in a VM/baremetal instead of a container, and I didn't see any that gave the base URL + whatever AUTH code request is being made to Google.  Appreciate any help in advance.

On 7/24/2020 at 4:40 PM, loond said:

I don't think I'm the only one missing something, a brief tutorial would be super helpful. … Basically stuck as I can't get an API key from Google to continue. Did anyone else encounter this? …

Figured it out; here is a brief how-to for Unraid specifically. Alternatively, @SpaceInvaderOne made an excellent video about the plugin a few years ago, which inspired me to try the following:

 

  1. Pull the container using CA, and make sure you enter the mount name like "NAME:"
  2. In the host (aka Unraid) terminal run the provided command on page 1 of this post:
    docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config
  3. Follow the onscreen guide; most steps follow other tutorials and the video referenced above.  UNTIL - you get to the part about using "auto config" or "manual".  Turns out it is WAY easier to just use "manual", as you'll get a one-time URL to allow rclone access to your GDrive.
  4. After logging in and associating the auth request to your gmail account you'll get an auth key with a super easy copy button.
  5. Paste the auth/token key into the terminal window
  6. Continue as before, and complete the config setup
  7. CRITICAL - go to:
    cd /mnt/disks/
    ls -la

    Make sure the rclone_volume is there, and then correct the permissions so the container can see the folder as noted previously in this thread

    chown 911:911 /mnt/disks/rclone_volume/

    *assuming you're logged in as root, otherwise add "sudo"

  8. Restart the container, and verify you're not seeing any connection issues in the logs

  9. From the terminal

    cd /mnt/disks/rclone_volume
    ls -la

    Now you should see your files from GDrive

 

I was just testing to see if I could connect without risking anything in my drive folder, so everything was read only, including the initial mount creation with the config. As such, I didn't confirm any other containers could see the mount, but YMMV. Have a great evening and weekend.
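For copy-paste convenience, the steps above condense to roughly this (Rclone-mount and /mnt/disks/rclone_volume are the template defaults; adjust to your names):

```
docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config
ls -la /mnt/disks/                      # confirm rclone_volume exists
chown 911:911 /mnt/disks/rclone_volume/ # let the container's user own the mountpoint
docker restart Rclone-mount
ls -la /mnt/disks/rclone_volume         # files from GDrive should appear here
```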

51 minutes ago, loond said:

I don't think I'm the only one missing something, a brief tutorial would be super helpful. … Basically stuck as I can't get an API key from Google to continue. Did anyone else encounter this? …

First of all, as I've learned the hard way, support and development for this rclone container is basically dead. Most people have moved on to the rclone plugin, which is actively developed and supported, and its support thread is pretty active. Also, one side effect I found is that no matter what I do, the container writes to the docker image somewhere every time rclone runs, which wakes up my SSDs (cache pool).

 

With that said, to answer your question: you run it from the container's console. When it gets to this part:

 

Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No

DO NOT choose "yes". Choose "no" instead, and it will give you a link to Google's website to get the access code.
 


Hmm, I've been banging my head against the desk all day. Can someone here give me some advice on how to fix this?

This issue was already mentioned several times, and I'm hitting it too. But the solution mentioned does not work after a server restart.

Executing => rclone mount --config=/config/.rclone.conf --allow-other --read-only --allow-other --acd-templink-threshold 0 --buffer-size 1G --timeout 5s --contimeout 5s my_gdrive: /data
2020/09/02 14:00:21 mount helper error: fusermount: user has no write access to mountpoint /data

2020/09/02 14:00:21 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

 

First of all, I have the docker installed, and all the settings as mentioned throughout this thread. I do also have a couple of extra rclone flags passed in, but these aren't the issue.

 

So, let's say the mount point defined is `/mnt/disks/rclone_volume`. When I restart the server (docker auto starts), I see the above-mentioned error. If I stop the docker and do `ls -la`, I see the ownership is `root:root` for `/mnt/disks/rclone_volume`. Alright, sure: `chmod 777`, `chown 911:911` the rclone_volume, then restart the docker, and cool, everything works. `/mnt/disks/rclone_volume` gets mounted correctly (`ls -la` shows 911:911, great), I can browse the files, no errors in the docker logs. Sweet, everything is sorted, right? No, unfortunately not.

The moment I reboot the unraid server (remember the docker auto starts), I get the above-mentioned error in the docker logs, and obviously the drive is not mounted. So back to `ls -la` on `/mnt/disks/rclone_volume`, and it's back to `root:root` and `755`. So basically, every time I start my server, I have to manually `chmod 777` and/or `chown 911:911` the `/mnt/disks/rclone_volume`, and then start the docker?

 

Any idea what's causing this? I can't be the only one having this issue, can I?

 

So, essentially, for this docker to successfully mount a drive, it needs the mount destination to either be `777` or owned by `911:911`. But for whatever reason, at reboot/start of unraid, the ownership of `/mnt/disks/rclone_volume` gets reset to `root:root` even if you had set it to `911:911` prior to restart (I assume user 911 doesn't exist at the very start, so it defaults to root?). At the start of the boot, unraid (?) also sets `/mnt/disks/rclone_volume` to 755 (even if you had it set to 777 before restart). Could this be related to another plugin I might have?


Alright, here is my "hacky" solution to the above problem. It works for now; if someone has a better solution, let me know.

 

Install the User Scripts plugin (if you don't have it already), and add the following script:

#!/bin/bash
mkdir /mnt/disks/rclone_volume
chmod 777 /mnt/disks/rclone_volume

Obviously you can add the -p flag to mkdir if you need nested directories or if you have issues with subdirectories not being there, but from trial and error on my unraid setup, `/mnt/disks/` exists at boot (before the array starts). Edit the script to include all the mount folders you want (if you have multiple mounts), and chmod 777 each of them.

 

Set the above user script to run on every array start.
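If you have several mounts, the same script generalises; a sketch (the volume names below are examples, edit to match yours):

```shell
#!/bin/bash
# Recreate and open up each rclone mountpoint on array start,
# before the containers launch.
for vol in rclone_volume rclone_volume2; do
  mkdir -p "/mnt/disks/$vol"
  chmod 777 "/mnt/disks/$vol"
done
```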

 

Just to make sure my container doesn't start before this finishes (unsure if that can happen?), I added a random other container above my rclone container (one that doesn't need drives to be mounted), and set a delay of 5 secs (so the rclone container waits 5 seconds). This might be unnecessary.

 

Hope it helps someone.

On 9/3/2020 at 7:30 AM, syrys said:

...Set the above user script to run on every array start. … Hope it helps someone.

Thanks a lot syrys, it works perfectly!


So happy. This container isn't just read only. I mounted gdrive, and Sonarr, Radarr, etc. can write to it with all the content I want.

 

I was getting major issues with the rclone plugin, so I am so thankful you made this docker. I am a huge fan, as I pulled my hair out trying to get the plugin to work, but it just randomly stopped working one day and I could never get it working again.

 

Please keep this container alive, as it definitely is a great alternative to the plugin.

On 7/27/2021 at 2:41 PM, ritty said:

So happy. This container isn't just read only. I mounted gdrive and can write sonarr and radarr etc to it with all the content I want. …

Don't suppose you'd be interested in helping me with mine?

1 hour ago, Mat1987 said:

Dont suppose would be interested in helping me with mine?

Let people know your problem...

 

Pull the container using CA, and make sure you enter the mount name like "NAME:" …

docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" config

… Now you should see your files from GDrive

 


Looking for some help. I have the restarting-Unraid issue of having to reset the owner etc., but that's not the real issue for me. The issue I have is that Plex sees the mount, and I can see the files when adding /data/[MySubFolder] to the Plex library, but Plex doesn't have permission to open the files to scan them and actually add them to the library (thus the library is empty).

 

When I `ls -la`, the files have:

 

-rw-r--r--

 

Do they need execute also?

 

I have set the ownership to 911:911; chmod 777 has no impact.

 

What am I doing wrong?

 

I'm considering installing rclone 'in' the Plex docker at this rate, but I'm sure that'll be a whole new world of pain, and not as 'clean'.
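For what it's worth: directories need the execute bit to be traversed, but regular files only need read for Plex to scan them. Newer rclone builds expose this directly via --dir-perms/--file-perms (a sketch with example values; the old binary in this container may only support --umask, and "remote:" is a placeholder):

```
rclone mount --config=/config/.rclone.conf \
  --uid=99 --gid=100 \
  --dir-perms=0777 --file-perms=0644 \
  remote: /data
```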


Hello Everyone.

I'm still pretty new to Unraid, so please be gentle :)

I installed rclone-mount yesterday and played around a bit.

This morning I had this warning waiting from the "Fix Common Problems" plugin:

"Docker application Rclone-mount has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option"

Problem is, the docker template doesn't really have any option to set a path's access mode to slave, as far as I can see.

Please advise.

Thank you!
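A possible fix (a sketch based on how Unassigned Devices paths are usually passed to containers; not verified against this template): edit the container, switch the /mnt/disks host path to advanced view, and change its Access Mode from RW to RW/Slave. In raw docker terms that corresponds to a propagation flag on the bind mount:

```
-v /mnt/disks/rclone_volume:/data:rw,slave
```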

 


I can't connect to gdrive using the "no" / headless option.
Google returns

 

Error 400: invalid_request

The out-of-band (OOB) flow has been blocked in order to keep users secure. Follow the Out-of-Band (OOB) flow migration guide linked in the developer docs below to migrate your app to an alternative method.

Request details: redirect_uri=urn:ietf:wg:oauth:2.0:oob

 

Any way to get past this? Or does the developer need to fix it?
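Google retired the out-of-band (oob) OAuth flow in 2022, so the outdated rclone in this container can no longer complete headless auth on its own. A possible workaround (a sketch, assuming you have any other machine with a current rclone and a browser): authorize there and paste the token into the container's config:

```
# On any machine with an up-to-date rclone and a browser:
rclone authorize "drive"
# Paste the token it prints into the headless config prompt,
# or directly into the token field of /config/.rclone.conf.
```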


Thank you @thomast_88 for this excellent container, it does exactly what I need.

 

Unfortunately the Rclone version (and perhaps other components in the image) is quite outdated, so I tried to exchange the executable for the newest version manually.

 

After updating the Rclone executable inside the docker container, I've been facing challenges with mounting.

 

Symptoms: After the update, when attempting to mount using Rclone, I receive errors indicating that the data directory is already mounted (directory already mounted error) or issues related to FUSE (fusermount: exec: "fusermount3": executable file not found in $PATH).

 

Configuration: I'm using the following mount command: rclone mount --config=/config/.rclone.conf --allow-other --vfs-cache-mode full -v --bwlimit 10M --vfs-cache-poll-interval 10m --vfs-cache-max-age 6h --vfs-cache-max-size 250G --dir-cache-time 24h --vfs-write-back 30s --cache-dir=/var/rclonecache M-Z: /data.

 

Attempts to Resolve: I've tried unmounting and remounting, checking for processes using the mount, restarting docker, rebooting, etc. The directory /data appears to be empty, yet the issues persist. In the meantime I have learned that you are not supposed to simply modify a running docker image like that.
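On the fusermount3 error specifically: newer rclone binaries shell out to fusermount3, which this image's old fuse2 package doesn't provide. A possible workaround from inside the container (an untested sketch, assuming an Alpine base):

```
apk add fuse3            # provides /usr/bin/fusermount3
# or, if fuse3 isn't available in the repo, alias the old binary to the new name:
ln -s "$(command -v fusermount)" /usr/local/bin/fusermount3
```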

 

I need the newer rclone features because "--vfs-write-back" seems not to work with the older version, plus there are general improvements in the newer version. So in summary I think an update of the container is due; however, this is far beyond my capabilities unfortunately, so I would rely on your help or any other skilled docker developers.

 

my wishlist for the new version would be:

  1. updated rclone version
  2. in the docker template: cache / VFS-cache Path
  3. design it with running multiple containers in mind. I would assume the best practice is to run multiple dockers (one for each mount). In this case the mounting path should have some kind of subfolder, e.g. /mnt/disks/rclone_volumes/Mount1 instead of /mnt/disks/rclone_volume;
    alternatively, it should have a way of autostarting multiple mounts in one container

 

I would greatly appreciate any insights or suggestions you might have. If additional information is required, I am happy to provide it. Thank you for your continued development and support of Rclone; it's an invaluable tool for many of us.

 

 

Best regards, timetraveler

