Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


10 hours ago, Syaoran68 said:

After you run it the first time are you going into your logs and checking what it says?

I would do that first. I have no problems just running it once, even from a cold start of my entire Unraid machine. It does take a minute or two since it has to download the mergerFS repo and build it manually.

I have tried that. My logs aren't showing anything unusual. But hey, it works. I tried waiting up to 10 minutes, but when I run it again it installs fuse right away.

 

I was just wondering. As long as it works, then all is good :)

 

I just wanted to know if I'm the only one who has to run it twice :)

14 hours ago, Syaoran68 said:

 

when you run the mount script and go into your google_merged folder do you see any files in there? if not there is something wrong with your mount set up.

The naming is always a bit wonky, but here is an easy explanation.

google_remote: this is a local mount point showing what exists on your remote (in this case, whatever is inside your Google Drive).

google_local: this is a local location for all the files that are added into the mergerFS (files stay here until the upload script is run).

user0/google_remote: this is rclone's VFS caching system, which pulls items from the remote (Google) and caches them locally on your machine. (In this case I would refrain from using user0, since user0 is specifically for items that exist on your array ONLY, plus it makes it hard to differentiate from the other google_remote mount above. I would use something like [user/google_remote_cache].)

google_merged: this is the amalgamation of the google_remote and google_local mounts. (When you run the upload script, it processes files in your google_local folder and pushes them into the cloud. After that, the files are deleted from the local mount and should be accessible inside your merged folder.)
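To make the folder roles above concrete, here is a tiny shell sketch that simulates the local/remote/merged flow with plain directories (no rclone or mergerfs needed; all paths are throwaway examples, and the union is only imitated with ls, since mergerfs presents it live):

```shell
#!/bin/bash
# Simulate the google_local / google_remote / google_merged flow with plain
# directories. mergerfs presents a live union; here we only imitate the effect.
base=$(mktemp -d)
mkdir -p "$base/local" "$base/remote" "$base/merged"

# A new file is written to the local branch first (mergerfs category.create=ff)
echo "new film" > "$base/local/movie.mkv"

# The upload script later moves local files to the remote...
mv "$base/local/movie.mkv" "$base/remote/movie.mkv"

# ...so the union (local + remote) still contains the file afterwards
ls "$base/local"                                      # now empty
ls "$base/remote"                                     # movie.mkv
ls "$base/local" "$base/remote" | grep -c movie.mkv   # still visible in the union
```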

 

I would first check on mount that you have a few things. I do this every time I mount, just to make sure everything is running correctly:

1. Run the mount script. (Sometimes it takes a minute or two, since it downloads mergerFS on the first run. After that, check the logs and make sure everything is mounted correctly.)

2. Check the merged folder to make sure all your files are in there.

 

In your case I would also check google_local for the mount folders. They should be there, BUT all of them should be empty.

Check your google_remote folder to make sure all your remote files are there, and also check your google_remote_cache folder. You should see a vfs folder for the rclone VFS cache, along with maybe a metadata folder.

 

Test playing a file that you know exists only on the remote; you should see the same file populate in the VFS cache.

 

Then test moving some files into the merged folder. You should also see those files show up in the google_local folder.

 

Then lastly, run the upload script. You should see the file disappear from the google_local folder but stay in your merged folder.
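The checks above can be scripted. This is a rough sketch; the paths match the share names used in this thread and are assumptions, so adjust them to your own setup:

```shell
#!/bin/bash
# Sketch: post-mount sanity checks. check_dir reports whether a directory
# exists and whether it is (non)empty as expected; it only prints a status.
check_dir() {
  local dir=$1 expect=$2        # expect is "nonempty" or "empty"
  if [ ! -d "$dir" ]; then
    echo "MISSING: $dir"
  elif [ "$expect" = nonempty ] && [ -z "$(ls -A "$dir")" ]; then
    echo "EMPTY (should have files): $dir"
  else
    echo "OK: $dir"
  fi
}

check_dir /mnt/user/google_merged/gdrive nonempty   # merged view: all files
check_dir /mnt/user/google_local/gdrive  empty      # local branch: empty after upload
check_dir /mnt/user/google_remote/gdrive nonempty   # remote mount
```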

 

That's the weird thing, when I run the mount script, I don't see any errors. Within google_local and google_merged, there are the directories that are supposed to be created (There is also a gdrive folder which shouldn't be there, like mnt/user/google_merged/gdrive/gdrive, but that's not a huge issue for now). 

 

google_remote has directories cache and gdrive, but when I try to access the gdrive directory, I get errors. Through windows explorer (this user has read write perms on the share), I get 'Windows can't access ....', through winscp logged in as root I get general failure error code 4, and if I try to cd or ls the /mnt/user/google_remote/gdrive, I get the error 'Transport endpoint is not connected'
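For reference, 'Transport endpoint is not connected' generally means the FUSE process behind the mount point has died while the kernel still has the mount registered. A lazy unmount clears the stale endpoint so the script can remount. A minimal sketch, where the path is the one from this thread and may differ on your system:

```shell
# Clear a stale FUSE endpoint before remounting. Falls back to a lazy
# umount, and reports if there was nothing to clean up.
MOUNT=/mnt/user/google_remote/gdrive    # example path from this thread
fusermount -uz "$MOUNT" 2>/dev/null \
  || umount -l "$MOUNT" 2>/dev/null \
  || echo "nothing to unmount at $MOUNT"
```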

 

If I mount the same rclone settings (nothing changed) using 'rclone mount --max-read-ahead 1024k --allow-other gdrive: /mnt/user/rclone/google &', it mounts normally without any issues and I can access all the files on the drive (This is just run by itself, not within the mergerfs mount script). 

 

Clearly something seems to be going wrong with the mount within the script, but I'm not sure what or how to troubleshoot that.

 

Edit:

I did some tests by running the mount alone within another script with all the same arguments and removing the variables and just hard coded them. It mounts fine there, and when the hardcoded mount is put into the main mount script, it also works before the mergerfs stuff happens. Once that happens, then I get all the errors that I posted about above. At a loss for what the issue could possibly be tbh.

Edited by TacosWillEatUs
3 hours ago, TacosWillEatUs said:

There is also a gdrive folder which shouldn't be there, like mnt/user/google_merged/gdrive/gdrive, but that's not a huge issue for now).

 

I think this is exactly the problem. You made an error when setting up your rclone mounts through rclone config. Can you show your rclone config for your gdrive mount, and the crypt if you use that? Just wipe any identifying info before posting.

9 hours ago, Kaizac said:

 

I think this is exactly the problem. You made an error when setting up your rclone mounts through rclone config. Can you show your rclone config for your gdrive mount, and the crypt if you use that? Just wipe any identifying info before posting.

[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/google/accounts/sa_gdrive_upload1.json
team_drive = XXXXXXXX
root_folder_id = 

I swear I looked over this multiple times, and it just occurred to me that, comparing against the service account layout in the main post, I'm missing server_side_across_configs = true, and I have a 'root_folder_id = ', which may also not need to be there.

 

I'll adjust the config and see if that sorts it out

4 minutes ago, TacosWillEatUs said:
[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/google/accounts/sa_gdrive_upload1.json
team_drive = XXXXXXXX
root_folder_id = 

I swear I looked over this multiple times, and it just occurred to me that, comparing against the service account layout in the main post, I'm missing server_side_across_configs = true, and I have a 'root_folder_id = ', which may also not need to be there.

 

I'll adjust the config and see if that sorts it out

Root folder is fine. server_side_across_configs = true should indeed be added. After that, I would do a fresh reboot and run the mount script once, and then check the mapping. gdrive/gdrive should not be happening unless you made that folder within the folder.
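For reference, the corrected remote from this exchange would look something like this (team drive ID redacted, as in the post above):

```
[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/google/accounts/sa_gdrive_upload1.json
team_drive = XXXXXXXX
root_folder_id = 
server_side_across_configs = true
```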

4 hours ago, MrCrispy said:

Can I use this outside Unraid? I will just be running Debian for now since it only has 2 HDDs.

This can be used outside of Unraid, but you'd just have to put it inside its own files and run them manually, with ./ or something like that. Make sure you set a log file though; since it's running in the background, you're going to need something that holds logs for you, otherwise you're fucked lol. If you ever need to kill the processes manually, you can use ps -ef | grep rclone and it should pick up the 3 processes that get started through the mount script.
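As a sketch of the "run it manually with a log" part (SCRIPT and LOG here are placeholders, not files from this thread; point SCRIPT at your own mount script):

```shell
#!/bin/bash
# Run a long-lived mount script in the background on plain Debian, keeping a
# log. SCRIPT/LOG are placeholders for illustration only.
SCRIPT=${SCRIPT:-/bin/true}
LOG=${LOG:-$(mktemp)}

nohup "$SCRIPT" >>"$LOG" 2>&1 &   # nohup: keeps running after you log out
echo "started pid $!, logging to $LOG"

# Later, find the processes the mount script started:
ps -ef | grep '[r]clone' || echo "no rclone processes found"
# kill <pid>                      # stop one of them by pid if needed
```

The `[r]clone` bracket trick keeps grep from matching its own command line in the ps output.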

On 5/22/2023 at 10:21 AM, TacosWillEatUs said:

That's the weird thing, when I run the mount script, I don't see any errors. Within google_local and google_merged, there are the directories that are supposed to be created (There is also a gdrive folder which shouldn't be there, like mnt/user/google_merged/gdrive/gdrive, but that's not a huge issue for now). 

 

google_remote has directories cache and gdrive, but when I try to access the gdrive directory, I get errors. Through windows explorer (this user has read write perms on the share), I get 'Windows can't access ....', through winscp logged in as root I get general failure error code 4, and if I try to cd or ls the /mnt/user/google_remote/gdrive, I get the error 'Transport endpoint is not connected'

 

If I mount the same rclone settings (nothing changed) using 'rclone mount --max-read-ahead 1024k --allow-other gdrive: /mnt/user/rclone/google &', it mounts normally without any issues and I can access all the files on the drive (This is just run by itself, not within the mergerfs mount script). 

 

Clearly something seems to be going wrong with the mount within the script, but I'm not sure what or how to troubleshoot that.

 

Edit:

I did some tests by running the mount alone within another script with all the same arguments and removing the variables and just hard coded them. It mounts fine there, and when the hardcoded mount is put into the main mount script, it also works before the mergerfs stuff happens. Once that happens, then I get all the errors that I posted about above. At a loss for what the issue could possibly be tbh.

Are you running the mount script in the background from the User Scripts plugin? If not, you should always run it in the background; otherwise the script will just end when you close the pop-up.

 

I've altered my own mount script to your situation. I'm not using the VFS cache (a waste of storage, I think), so it's a simpler script than the one you're using now. If this one still has the same issues, there is something wrong in your setup that you're missing with folders, or maybe an unstable connection.
Just save this as a new script in User Scripts and run it in the background. Make sure your folders are not mounted right now, so you have a fresh/clean start before running my script. Also delete the checker files in /mnt/user/appdata/other/rclone if you have those.

 

#!/bin/bash

##################  Check if script is already running  ###################

sleep 1
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_mount script ***"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."

mkdir -p /mnt/user/appdata/other/rclone

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting, script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End check if script is already running  ##########

########## Create directories for rclone mount and mergerfs mount ##############

mkdir -p /mnt/user/appdata/other/mergerfs/
mkdir -p /mnt/user/google_remote/gdrive
mkdir -p /mnt/user/google_local/gdrive
mkdir -p /mnt/user/google_merged/gdrive

#######  Start rclone gdrive mount  ##########

# Check if gdrive mount already created
if [[ -f "/mnt/user/google_remote/gdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

    rclone mount --allow-other --umask 002 --buffer-size 256M --dir-cache-time 9999h --drive-chunk-size 512M --attr-timeout 1s --poll-interval 1m --drive-pacer-min-sleep 10ms --drive-pacer-burst 500 --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes --uid 99 --gid 100 gdrive: /mnt/user/google_remote/gdrive &

    sleep 15

    # Check if mount successful, with a slight pause to give the mount time to finalise
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping 5 sec"
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."

    if [[ -f "/mnt/user/google_remote/gdrive/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End rclone gdrive mount  ##########

#######  Start mergerfs mount  ##########

if [[ -f "/mnt/user/google_merged/gdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, mergerfs already mounted."
else
    # Build mergerfs binary, deleting any old binary first as a precaution
    rm -f /bin/mergerfs

    # Build via Docker
    docker run -v /mnt/user/appdata/other/mergerfs:/build --rm trapexit/mergerfs-static-build

    # Move to /bin to use for commands
    mv /mnt/user/appdata/other/mergerfs/mergerfs /bin

    # Create mergerfs mount
    mergerfs /mnt/user/google_local/gdrive:/mnt/user/google_remote/gdrive /mnt/user/google_merged/gdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

    sleep 5

    if [[ -f "/mnt/user/google_merged/gdrive/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, mergerfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: mergerfs remount failed."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End mergerfs mount  ##########

rm /mnt/user/appdata/other/rclone/rclone_mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

 

1 hour ago, Bjur said:

@DZMM @Kaizac Hi, what are the updated recommended buffer settings, etc.? It's been a while since I checked, and I have upgraded to 64 GB memory since then.

I'm using 256M, DZMM is using 512M. It depends on the number of users/transfers you have and how stable/fast your internet connection is, I think. I'm running 1 gig fiber with no hiccups and I have almost no users. But each transfer uses the buffer, so it can quickly add up.
I run a lot of sync/backup jobs which also eat up RAM, so even though I have 64GB RAM I try to stay a bit conservative. Say you open a file: it will buffer 512M first, for example. If you close it and then re-open it, it will again start to buffer the 512M. I find that wasteful for my situation.

 

The default is 16M, so both 256M and 512M are already way over default.
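As a rough rule of thumb for sizing: each file rclone has open can hold up to one full --buffer-size in RAM, so the worst case is roughly the buffer size times the number of concurrently open files. The numbers below are just example assumptions:

```shell
# Back-of-the-envelope worst-case RAM estimate for rclone's --buffer-size.
BUFFER_MB=256      # value passed to --buffer-size
OPEN_FILES=8       # e.g. concurrent streams + library scans (an example guess)
echo "worst case: $((BUFFER_MB * OPEN_FILES)) MB"   # prints: worst case: 2048 MB
```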

15 hours ago, francrouge said:

Hi guys, since most of us are going to lose the unlimited Gdrive, is there any other provider or config to be able to play from it? Thx

Sent from my Pixel 2 XL using Tapatalk

What do you mean, lose the unlimited 😬

1 hour ago, Bjur said:

It says my account will need to be changed within 52 days or interruptions will be made... 

What do you guys do? I'm guessing everyone has the same problem? 

Can you share a screenshot of where you see that? I don't see that on my admin console

1 hour ago, Bjur said:

It says my account will need to be changed within 52 days or interruptions will be made... 

What do you guys do? I'm guessing everyone has the same problem? 

I'm also curious where you see it. Somehow some people get it and some don't. I think it might depend on whether you have an actual company connected to the account, whether you are using encryption or not, and whether you are storing on team drives or mostly personal Google Drive.

 

I think the cheapest solution is to get a Dropbox Advanced account (you need 3 accounts). You might be able to pool together with others if you trust them. But it also depends on how much you store; local storage could be more interesting financially.

6 minutes ago, Kaizac said:

I'm also curious where you see it. Somehow some people get it and some don't. I think it might depend on whether you have an actual company connected to the account, whether you are using encryption or not, and whether you are storing on team drives or mostly personal Google Drive.

 

I think the cheapest solution is to get a Dropbox Advanced account (you need 3 accounts). You might be able to pool together with others if you trust them. But it also depends on how much you store; local storage could be more interesting financially.

I see it now - as soon as I log into my Admin Console, it's right up top. 

 

I do have an organization tied to it, but only 1 user. Does your organization have more than 1 user? 

11 minutes ago, axeman said:

I see it now - as soon as I log into my Admin Console, it's right up top. 

 

I do have an organization tied to it, but only 1 user. Does your organization have more than 1 user? 

Only 1 user and I don't see any notification. Maybe it happens when you run close to your next billing cycle? No idea so far.


I have two accounts with different subscriptions, and I just received the email for my other account, so both accounts have too much storage.

Dropbox seems good, but would the feature of playing from it still work?

Also, Dropbox is a minimum of 3 users, so for me it's like $90 CAD per month.

1 hour ago, francrouge said:

I have two accounts with different subscriptions, and I just received the email for my other account, so both accounts have too much storage.

Dropbox seems good, but would the feature of playing from it still work?

Also, Dropbox is a minimum of 3 users, so for me it's like $90 CAD per month.
 

Still nothing for me, no banner and no e-mail.

 

Dropbox works just as well as Google Drive, from what I'm reading. Some people who got annoyed by the API limits of Google already switched a while ago.

Still nothing for me, no banner and no e-mail.
 
Dropbox works just as well as Google Drive, from what I'm reading. Some people who got annoyed by the API limits of Google already switched a while ago.
What plan do you have? I have a Business Standard and got the email today.

Trying to switch to Enterprise to see if they are going to bother me.

What plan do you have? I have a Business Standard and got the email today.

Trying to switch to Enterprise to see if they are going to bother me.

Just spoke with Google, and Enterprise Standard is a 5-user minimum to get unlimited, so like $135 CAD.

1 hour ago, francrouge said:

What plan do you have? I have a Business Standard and got the email today.

Trying to switch to Enterprise to see if they are going to bother me.
 

Enterprise Standard, always had this.

 

 

41 minutes ago, francrouge said:

Just spoke with Google, and Enterprise Standard is a 5-user minimum to get unlimited, so like $135 CAD.
 

Even with Enterprise Standard and 5 users, you are not guaranteed to get the storage. You will still have to ask each time, and you only get 5TB; you might even have to explain your use case to get more. Plus it's more expensive than Dropbox.
But paying 800 USD per year or more for storing some media is just nonsense, in my opinion. Better to get more hard drives yourself, or use alternatives like debrid or Plex shares.

Even with Enterprise Standard and 5 users, you are not guaranteed to get the storage. You will still have to ask each time, and you only get 5TB; you might even have to explain your use case to get more. Plus it's more expensive than Dropbox.
But paying 800 USD per year or more for storing some media is just nonsense, in my opinion. Better to get more hard drives yourself, or use alternatives like debrid or Plex shares.
Yeah, it's just that it was easy to stream from Gdrive, but I will find another solution. If it's not Dropbox, then I will just pick another one and keep it for archive things only.

