Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

44 minutes ago, Bjur said:

 

@Kaizac I tried using the UID/GID/UMASK in the User Scripts mount script and added it to this section:

# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &
    --uid 99
    --gid 100
    --umask 002
 

Sonarr still won't get import access to the local completed-downloads folder.

 

My rclone mount folders are still showing root:

[screenshot]

 

 

@Bolagnaise If I try the permissions docker tool, I risk breaking the Plex transcoder, which I don't want.

 

Also, if I run the tool, would I only have to run it once, or each time I reboot?

 

@DZMM Regarding the rclone share going missing: it has happened a few times, even while watching a movie, and I need to reboot to get that specific share working again while the others keep working.

 

Why would it break your Plex transcoder?

 

You can try running the tool when you don't have your rclone mounts mounted. So reboot the server without running the mount script, then run the tool on mount_mergerfs (and its subdirectories) and mount_rclone. Maybe that will be enough for Sonarr.
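If you'd rather do it from the terminal, the tool boils down to something like this (a rough command-line equivalent, not the tool's exact recipe; paths assume the share names used above):

    # Reset ownership and permissions recursively, Unraid-style (nobody:users = 99/100)
    chown -R nobody:users /mnt/user/mount_mergerfs /mnt/user/mount_rclone
    chmod -R u=rwX,g=rwX,o=rX /mnt/user/mount_mergerfs /mnt/user/mount_rclone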

Link to comment

Yeah, it won't break your Plex transcode folder. If it does, all you need to do is stop Plex, delete the transcode folder, and restart the container; it will recreate the folder with the correct nobody:users permissions.
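In practice that recovery is just a few commands (a sketch; the container name and transcode path are examples, so check your own container's mappings first):

    docker stop plex
    rm -rf /mnt/user/appdata/plex/transcode   # example path - verify your transcode mapping
    docker start plex                         # the folder is recreated as nobody:users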

 

BTW, the root permissions on the rclone mount mean absolutely nothing; Google Drive has no ability to store permission metadata inside the folder structure. The --allow-other flag means that even if a file is owned by root inside your local mergerFS location prior to upload, other users can still access it.

 

The reason your Google Drive mount still shows the root user is that you have not added a \ after each entry; your syntax is wrong.

 

Here's what I would do to fix your issues:

 

1. Fix your mount script syntax.

2. Disable your mount script schedule.

3. Make sure all your containers are using PUID 99 / PGID 100 and umask 022 (see the container sketch after this list).

4. Reboot your server.

5. Upon restart, run the New Permissions tool for all shares, including cache.

6. Delete the Plex transcode folder if you have not set it up as a user share or put it on /tmp.

7. Run your mount script.

8. Start your Dockers.
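For step 3, the variables look like this on a typical container (a sketch only; the container name, image, and volume mappings are examples, not from this thread):

    # Example of step 3 on a linuxserver-style container - adjust the name,
    # image, and volume mappings to your own setup:
    docker run -d --name=sonarr \
        -e PUID=99 \
        -e PGID=100 \
        -e UMASK=022 \
        -v /mnt/user/mount_mergerfs:/data \
        lscr.io/linuxserver/sonarr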

 

 

 

Edited by Bolagnaise
Link to comment

Thanks for the help guys.

I've stopped all my rclone mounts and ran the permissions tool on my disks. I didn't include the cache, since that wasn't advised, so I haven't included the dockers either.

 

Should I also run it on the cache with the dockers? The folders there seem correct.

 

I added the UID, GID, and umask to all my user scripts just in case.

 

I will try and test now.

Link to comment
7 minutes ago, Bjur said:

Thanks for the help guys.

I've stopped all my rclone mounts and ran the permissions tool on my disks. I didn't include the cache, since that wasn't advised, so I haven't included the dockers either.

 

Should I also run it on the cache with the dockers? The folders there seem correct.

 

I added the UID, GID, and umask to all my user scripts just in case.

 

I will try and test now.

Don't overthink it. You have the Tools > New Permissions functionality, which you can use to fix permissions at the folder level. For you that would be (I suppose) mount_rclone and mount_mergerfs, plus your localdata folder if you have it and the permissions are not correct there. You don't need to run these permissions on your appdata/dockers.

 

I don't know if you saw Bolagnaise's edit above? Read it, because your mount script won't work like this. Add a \ after every flag you added. I would put it like this:

 

# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    --uid 99 \
    --gid 100 \
    --umask 002 \
    $RcloneRemoteName: $RcloneMountLocation &

 

You have to finish with that "&" so the mount runs in the background and the rest of the script can continue.
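Once the script has run, a quick sanity check from the terminal (the path is an example; use your own mount location):

    # Should report nobody:users (uid 99 / gid 100) rather than root:
    ls -ld /mnt/user/mount_rclone/gdrive
    # Confirm the FUSE mount is actually up:
    mountpoint /mnt/user/mount_rclone/gdrive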

Link to comment
5 hours ago, Bjur said:

@DZMM Regarding the rclone share going missing: it has happened a few times, even while watching a movie, and I need to reboot to get that specific share working again while the others keep working.

How stable is your connection? The mount can occasionally drop, and the script is designed to stop the Dockers, test, and re-mount. I'm pretty sure mine's only done this about 3 times, though.
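For anyone curious, that recovery logic boils down to a small check file on the remote; a minimal sketch of the idea (the mountcheck file name follows the GitHub script, while the paths and container names here are placeholders):

    #!/bin/bash
    # If the check file is readable, the mount is alive; otherwise remount.
    if [[ -f /mnt/user/mount_rclone/gdrive/mountcheck ]]; then
        echo "Mount is healthy."
    else
        echo "Mount dropped - stopping dockers and remounting."
        docker stop sonarr radarr plex                  # placeholder container names
        fusermount -uz /mnt/user/mount_rclone/gdrive    # lazily unmount the stale mount
        rclone mount gdrive: /mnt/user/mount_rclone/gdrive --allow-other &
        # ...wait for the mount to return, then restart the dockers
    fi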

Link to comment

Hi there,

I am attempting to use rclone to back up and sync with Google Drive.
I went through the installation, where the token and encryption were generated, following Space Invader's tutorial.

I was careful to correctly set up paths while installing Krusader.

 

The issue I'm having: when I try to move/copy a file to Google Drive mounted in Unassigned Devices, Krusader claims that there is not enough space on the disk to write the file.

Moving a file via Krusader to Google Drive mounted as a remote disk via SMB, without using rclone, works just fine.

The second issue I have is that the google drive and encrypted "secure" directories created while connecting rclone with Google Drive are empty.
The way I understand it, rclone maps Google Drive so that Unraid thinks the cloud is actually a local unassigned disk. Shouldn't all the files from Google Drive already be visible there?
 

Could someone direct me where to look for the cause?

Thanks,
DS

Edited by Digital Shamans
Link to comment
2 hours ago, Digital Shamans said:

Hi there,

I am attempting to use rclone to back up and sync with Google Drive.
I went through the installation, where the token and encryption were generated, following Space Invader's tutorial.

I was careful to correctly set up paths while installing Krusader.

 

The issue I'm having: when I try to move/copy a file to Google Drive mounted in Unassigned Devices, Krusader claims that there is not enough space on the disk to write the file.

Moving a file via Krusader to Google Drive mounted as a remote disk via SMB, without using rclone, works just fine.

The second issue I have is that the google drive and encrypted "secure" directories created while connecting rclone with Google Drive are empty.
The way I understand it, rclone maps Google Drive so that Unraid thinks the cloud is actually a local unassigned disk. Shouldn't all the files from Google Drive already be visible there?
 

Could someone direct me where to look for the cause?

Thanks,
DS

Are you using the scripts in this thread? It sounds like you're trying to do something different with rclone, which should probably be posted somewhere else.

Link to comment
On 9/8/2022 at 2:47 PM, Kaizac said:

Don't overthink it. You have the Tools > New Permissions functionality, which you can use to fix permissions at the folder level. For you that would be (I suppose) mount_rclone and mount_mergerfs, plus your localdata folder if you have it and the permissions are not correct there. You don't need to run these permissions on your appdata/dockers.

 

I don't know if you saw Bolagnaise's edit above? Read it, because your mount script won't work like this. Add a \ after every flag you added. I would put it like this:

 

# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    --uid 99 \
    --gid 100 \
    --umask 002 \
    $RcloneRemoteName: $RcloneMountLocation &

 

You have to finish with that "&" so the mount runs in the background and the rest of the script can continue.

No, sorry, I didn't see the \.

I've tried it now and will test if it works. Will let you guys know.

 

Link to comment
On 9/9/2022 at 9:50 AM, DZMM said:

Are you using the scripts in this thread? It sounds like you're trying to do something different with rclone, which should probably be posted somewhere else.

Thank you for the reply.

No, I used Space Invader's scripts from a five-year-old YouTube video. ;)

I just ran the mount script from the GitHub linked in the 1st post in this topic:
https://github.com/BinsonBuzz/unraid_rclone_mount

The script ran successfully (so it says) and did tonnes of something (many lines were produced).

At the end, it changed nothing, except nesting another gdrive folder within the gdrive folder.
None of the files are visible in Krusader (they are in Unraid).
When trying to copy a file into the secure or gdrive folder, I get a prompt about no access.

Why use Krusader at all?

Shouldn't rclone itself be sufficient?

 

Link to comment

Hi all

 

So I use the drive API, I think (I don't use team folders).

I tried to use an SA (service account), but I think I messed that up (ended up losing files/making duplicates, etc.).

 

But back to my question. 

 

Today I have:

 

User 1 using a crypt with a folder in it for everything (let's call this the standard setup).

 

Can I add a second user (user 2) to help with the upload and get 2x 750GB a day?

 

 

Feel free to PM if you feel you can help.

 

 

 

 

 

 

Link to comment

Hi,
Root user is driving me nuts!

Since I upgraded Unraid to 6.10, I can't manage to keep my dockers working.

Every docker is using -e 'PUID'='99' -e 'PGID'='100' (I have 2 Sonarrs (1080/4K) and 2 Radarrs, 2 syncars, 1 qBittorrent, and Cloudplow).


My folders are :
Movies/HD and Movies/UHD.

 

Every once in a while, my HD and UHD folders switch owner to root, and then Sonarr/Radarr can't import anymore!

My script is up to date, and I now think it's mergerfs running as root (like my scripts).

 

What should I try to keep it working?


Thank you,

 

Link to comment
11 hours ago, Logopeden said:

Hi all

 

So I use the drive API, I think (I don't use team folders).

I tried to use an SA (service account), but I think I messed that up (ended up losing files/making duplicates, etc.).

 

But back to my question. 

 

Today I have:

 

User 1 using a crypt with a folder in it for everything (let's call this the standard setup).

 

Can I add a second user (user 2) to help with the upload and get 2x 750GB a day?

 

 

Feel free to PM if you feel you can help.

 

 

 

 

 

 

You can only use service accounts with team drives. And I don't think multiple actual accounts work for your own drive. You would have to share the folder with the other account, but I think it will then fail on creating the rclone mount because rclone can't see the shared folder.

If you have the option, team drives are the easiest option for everything, including staying consistent with the storage limits of Google Workspace.

7 hours ago, HpNoTiQ said:

Hi,
Root user is driving me nuts!

Since I upgraded Unraid to 6.10, I can't manage to keep my dockers working.

Every docker is using -e 'PUID'='99' -e 'PGID'='100' (I have 2 Sonarrs (1080/4K) and 2 Radarrs, 2 syncars, 1 qBittorrent, and Cloudplow).


My folders are :
Movies/HD and Movies/UHD.

 

Every once in a while, my HD and UHD folders switch owner to root, and then Sonarr/Radarr can't import anymore!

My script is up to date, and I now think it's mergerfs running as root (like my scripts).

 

What should I try to keep it working?


Thank you,

 

Read a few posts up; the responses to Bjur from me and Bolagnaise should solve your issue.

 

On 9/16/2022 at 11:30 AM, Digital Shamans said:

Thank you for the reply.

No, I used Space Invader's scripts from a five-year-old YouTube video. ;)

I just ran the mount script from the GitHub linked in the 1st post in this topic:
https://github.com/BinsonBuzz/unraid_rclone_mount

The script ran successfully (so it says) and did tonnes of something (many lines were produced).

At the end, it changed nothing, except nesting another gdrive folder within the gdrive folder.
None of the files are visible in Krusader (they are in Unraid).
When trying to copy a file into the secure or gdrive folder, I get a prompt about no access.

Why use Krusader at all?

Shouldn't rclone itself be sufficient?

 

I don't want to be rude, but you have no idea what you are doing, and I don't think we can understand what you are trying to do. I don't see why you bring in Krusader, which is just a file browser that can also browse mounted files. I suggest you first read up on rclone, and then on what the scripts in this topic do, before you continue. Or be precise about what you are trying to accomplish.

Link to comment
53 minutes ago, Kaizac said:

You can only use service accounts with team drives. And I don't think multiple actual accounts work for your own drive. You would have to share the folder with the other account, but I think it will then fail on creating the rclone mount because rclone can't see the shared folder.

If you have the option, team drives are the easiest option for everything, including staying consistent with the storage limits of Google Workspace.

Read a few posts up; the responses to Bjur from me and Bolagnaise should solve your issue.

 

I don't want to be rude, but you have no idea what you are doing, and I don't think we can understand what you are trying to do. I don't see why you bring in Krusader, which is just a file browser that can also browse mounted files. I suggest you first read up on rclone, and then on what the scripts in this topic do, before you continue. Or be precise about what you are trying to accomplish.

Forgot to say, I've already tried Bolagnaise's steps and Docker Safe New Perms.


I already have this in my script:

 

# Add extra commands or filters
Command1="--rc"
Command2="--uid=99 --gid=100 --umask=002"

Link to comment
2 minutes ago, HpNoTiQ said:

Forgot to say, I've already tried Bolagnaise's steps and Docker Safe New Perms.


I already have this in my script:

 

# Add extra commands or filters
Command1="--rc"
Command2="--uid=99 --gid=100 --umask=002"

You ran Docker Safe New Perms while you didn't have your rclone mounts mounted?

And I think the default script already sets the umask, so it seems like you are doubling that flag now?

Link to comment
56 minutes ago, Kaizac said:

You can only use service accounts with team drives. And I don't think multiple actual accounts work for your own drive. You would have to share the folder with the other account, but I think it will then fail on creating the rclone mount because rclone can't see the shared folder.

If you have the option, team drives are the easiest option for everything, including staying consistent with the storage limits of Google Workspace.

Read a few posts up; the responses to Bjur from me and Bolagnaise should solve your issue.

 

 

So I tried to make SAs work in the past but did not end up making it work (it uploaded files, but something with the ownership of the different files made it odd when trying to read the files/mount, etc.).

If you could help me make it work with SAs, I'd love to learn how to do it.

 

1. The team drive limitations (how does one get around those?)

2. Is there a way to make a seedbox do all the downloading and uploading to Google so I don't have to do it?

 

 

 

Link to comment
1 minute ago, Logopeden said:

 

 

So I tried to make SAs work in the past but did not end up making it work (it uploaded files, but something with the ownership of the different files made it odd when trying to read the files/mount, etc.).

If you could help me make it work with SAs, I'd love to learn how to do it.

 

1. The team drive limitations (how does one get around those?)

2. Is there a way to make a seedbox do all the downloading and uploading to Google so I don't have to do it?

 

 

 

What team drive limitations are you talking about? If your seedbox can use rclone, it can mount directly to Google Drive. But a seedbox also implies torrent seeding, and that's generally a problem with Google Drive because of the API hits and the resulting temporary bans.

 

If you just want to use your seedbox to download and move the files to your Google Drive, that's entirely possible, and it's what DZMM seems to be doing.

Link to comment
4 minutes ago, Kaizac said:

What team drive limitations are you talking about? If your seedbox can use rclone, it can mount directly to Google Drive. But a seedbox also implies torrent seeding, and that's generally a problem with Google Drive because of the API hits and the resulting temporary bans.

 

If you just want to use your seedbox to download and move the files to your Google Drive, that's entirely possible, and it's what DZMM seems to be doing.

 

I'm talking about the file and folder limitations, or have I been reading this wrong?

 

I just want it to move files to Google Drive, and then have Unraid/Radarr/Sonarr move the files to the crypt, etc.

 

 

Do I need to have one mount for every one of the 100 users I've made?

Link to comment
14 minutes ago, Logopeden said:

 

I'm talking about the file and folder limitations, or have I been reading this wrong?

 

I just want it to move files to Google Drive, and then have Unraid/Radarr/Sonarr move the files to the crypt, etc.

 

 

Do I need to have one mount for every one of the 100 users I've made?

As far as I know, there is only a file count limitation, around 400k files per team drive, which is a big amount. And if you are smart about using separate team drives for separate purposes, you should be fine.

 

You can use just one service account for the mount itself, but the uploading is done through multiple service accounts. That setup is meant for continuous use, though, not for a one-time upload.

 

So let's say you have a backlog of multiple terabytes waiting to be uploaded: you will have to stick to 750GB per day until you are caught up. Or you create multiple mounts to the same team drive with separate service accounts and find a way to split the files you upload, per folder or per disk drive, for example.

 

Once you have a continuous download and upload going, you can use the script from DZMM. It will see the files you downloaded, start uploading, then switch to the next service account, and so on. Keep in mind that download speed must not outpace upload speed, or you will be troubled by the upload quota again.
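Under the hood, the rotation just points rclone at a different service-account key file on each run. A minimal sketch of the idea (the file locations and counter logic here are illustrative; the GitHub script handles all of this for you):

    #!/bin/bash
    # Rotate through numbered service-account key files on each upload run.
    SA_DIR="/mnt/user/appdata/other/rclone/service_accounts"   # sa_1.json ... sa_15.json (example)
    COUNTER_FILE="$SA_DIR/counter"                             # remembers which SA ran last
    COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 1)

    rclone move /mnt/user/local/gdrive gdrive: \
        --drive-service-account-file "$SA_DIR/sa_${COUNT}.json" \
        --drive-stop-on-upload-limit

    # Advance the counter, wrapping around after 15 accounts.
    echo $(( COUNT % 15 + 1 )) > "$COUNTER_FILE"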

Link to comment
On 9/18/2022 at 6:23 PM, Kaizac said:

You ran Docker Safe New Perms while you didn't have your rclone mounts mounted?

And I think the default script already sets the umask, so it seems like you are doubling that flag now?

I've done:

 

1. Fix your mount script syntax. -> Updated to the latest version + deleted --umask 000 (line 138).

2. Disable your mount script schedule. -> Done.

3. Make sure all your containers are using 99/100 umask 022. -> Done, but maybe a typo: is it 002 or 022? In the script it is --umask 002 \

4. Reboot your server. -> Done.

5. Upon restart, run the New Permissions tool for all shares, including cache. -> Done on all disks/all shares.

6. Delete the Plex transcode folder if you have not set it up as a user share or put it on /tmp. -> Done.

7. Run your mount script. -> Done.

8. Start your Dockers. -> Done by script.

 

[screenshot]

After a period of time, my HD and UHD folders switch from nobody to root (not the parent folder, nor the child folders).

[screenshot]

 

 

[screenshot]

 

I have these dockers accessing this folder:

2 Sonarr (1080/4K), 2 Radarr (1080/4K), Cloudplow, and Plex.
They are all running with -e 'PUID'='99' -e 'PGID'='100', and some with -e 'UMASK'='022'.


It's driving me nuts! (SAs, team drive, cache, downloads, Traktarr...) Everything is automated, but I'm stuck on a permission error cropping up every day!

 

 

 

Link to comment
29 minutes ago, HpNoTiQ said:


It's driving me nuts! (SAs, team drive, cache, downloads, Traktarr...) Everything is automated, but I'm stuck on a permission error cropping up every day!

 

 

 

I have the same problem and tried the suggested fixes. My upload script deletes empty folders, my mount script is set to run every 10 minutes, and when it recreates the folders they are owned by root.

 

It's not very elegant, but I just run a user script that changes those permissions on the local folder daily, set to a couple of hours after the upload script has done its thing.
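Such a script can be as small as this (a sketch; the share path is an example, so point it at your own local folder):

    #!/bin/bash
    # Daily fix-up: hand the local share back to nobody:users (99/100).
    chown -R nobody:users /mnt/user/local
    chmod -R u=rwX,g=rwX,o=rX /mnt/user/local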

  • Like 1
Link to comment
15 minutes ago, tsmebro said:

I have the same problem and tried the suggested fixes. My upload script deletes empty folders, my mount script is set to run every 10 minutes, and when it recreates the folders they are owned by root.

 

It's not very elegant, but I just run a user script that changes those permissions on the local folder daily, set to a couple of hours after the upload script has done its thing.

Indeed, I just figured out that Cloudplow deleted my UHD and HD folders (which are shared with my dockers; I think Unraid recreated them as root).
I've changed

"remove_empty_dir_depth": 2 to "remove_empty_dir_depth": 3 (adapt it to your architecture if you read this later).

It's now fully working!

Edited by HpNoTiQ
  • Thanks 1
Link to comment

Hi guys

 

I'm having a problem with file permissions inside my share. Every folder created a long time ago (like years) is OK, but in every new one I create at the root of my main folder, files are blocked: they only get 666 permissions and I cannot rename them, etc.

 

Can anyone help me, please?

 

Unraid 6.10.3

 

# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 000 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 500m \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

 

 

thx

Link to comment

Hi all, I've been using this command for the past year and never had issues. I see this script/guide includes cache and other things; can someone explain what it does?
 

#!/bin/bash
#mounts cloud storage to unraid
rclone mount --max-read-ahead 1024k --allow-other --uid 99 --gid 100 --umask 002 --allow-non-empty -vv --tpslimit=10 plex: /mnt/user/data/media/gdrive/



I have no permission issues, disconnects, or API bans, but I will gladly take suggestions for improving it.

Link to comment
