Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

7 minutes ago, workermaster said:

Right. I got pip working (just had to install it in the nerdpack)

image.thumb.png.50cff1e4f63afc5360e225309098ed88.png

 

image.thumb.png.b8a6a3595138b380e2b12efc101e93d0.png

 

I am now getting a lot of errors while running the requirements command (screenshot below). I didn't get these before on Windows. What could be wrong here?

image.png.3ff3ec3590914813665733a28cecda8b.png

 


What does your unraid terminal show when you type in:

python

 

and

 

python3

 

 

Link to comment
17 hours ago, Kaizac said:

Try running

pip3 install --upgrade setuptools

And then run the requirements command again. (don't forget to cd to the right folder again).

I ran that command and then ran the requirements again. It worked. The errors were gone. I then looked again at step 2 of creating the service accounts, and since I already have a project (you need to make one to enable the drive api) I thought this command: 

python3 gen_sa_accounts.py --quick-setup -1

was the best one to use. I entered that into the console and this time, it told me to go to a link to give access to the script. This is where I get my next problem. 

image.thumb.png.438368e0fa1b1f737bd57065261f6b49.png

 

The request seems to be invalid:

image.png.794a3600ad6b215299c79c48d4d6760a.png

image.png.0102c5c81ccef79271d1d4545e9b51f0.png

It says that the access is denied and that I should contact the developer about this problem. I tried doing this on a PC that has the Unraid UI open, and also on the server itself (booted into GUI mode). I get the same error. 

 

I also tried running the other commands:

python3 gen_sa_accounts.py --quick-setup 1 --new-only
python3 gen_sa_accounts.py --quick-setup 1

to see if they gave a different result, but nothing helped. Do you know why the request is invalid?

Edited by workermaster
Link to comment
27 minutes ago, workermaster said:

I ran that command and then ran the requirements again. It worked. The errors were gone. I then looked again at step 2 of creating the service accounts, and since I already have a project (you need to make one to enable the drive api) I thought this command: 

python3 gen_sa_accounts.py --quick-setup -1

was the best one to use. I entered that into the console and this time, it told me to go to a link to give access to the script. This is where I get my next problem. 

image.thumb.png.438368e0fa1b1f737bd57065261f6b49.png

 

The request seems to be invalid:

image.png.794a3600ad6b215299c79c48d4d6760a.png

image.png.0102c5c81ccef79271d1d4545e9b51f0.png

It says that the access is denied and that I should contact the developer about this problem. I tried doing this on a PC that has the Unraid UI open, and also on the server itself (booted into GUI mode). I get the same error. 

 

I also tried running the other commands:

python3 gen_sa_accounts.py --quick-setup 1 --new-only
python3 gen_sa_accounts.py --quick-setup 1

to see if they gave a different result, but nothing helped. Do you know why the request is invalid?

Look at https://github.com/xyou365/AutoRclone/issues/89

 

You have to edit some code in the script. You can use notepad++ for that.

Link to comment
2 hours ago, Kaizac said:

Look at https://github.com/xyou365/AutoRclone/issues/89

 

You have to edit some code in the script. You can use notepad++ for that.

I have already found the next problem. I did try and search for it on the Github link you gave me, but couldn't find it. 

I have now created a new API according to step 3 of the process (see below):

image.thumb.png.78539cb53bc96df1fbdd4395c0abaa63.png

and am trying to run the command:

python quickstart.py

that I have to run according to this page:

https://developers.google.com/admin-sdk/directory/v1/quickstart/python

 

I now get the error that there are insufficient permissions. I expected a screen where I had to give permission to rclone.

image.png.72b5ec356480ed852f5edb13899ec3eb.png

 

Since I thought that the problem was most likely the previous command:

python3 gen_sa_accounts.py --quick-setup -1

because I am not 100% sure if that was the option I should have used, I tried figuring out if the service accounts belong to that existing project. They appear to do so. Here is one of the accounts:

image.png.265b2bb763749847fbe39b6762d843fc.png

Edited by workermaster
Link to comment

I tried running the same command directly on the server, but that made no difference. I also tried removing all files in the project folder and starting from scratch, to rule out any mistakes or other weird things, since I first tried most commands on Windows. I got to the same point where I am now. 

 

Since I am stuck at the first step of this part:

image.thumb.png.8887e776ddbc67f06f7041b5e86e8c2b.png

 

I decided to try and do step 2. Step 2 is completed, and I think that the group I made should work for the next few steps. 

 

I did make sure that the credentials file used in activating the Directory API is the correct one. 

 

I tried running step 3, but that gave me this error:

image.png.a2ad23b7255b5eec8761dbfefbc66a3d.png

 

I think that I first need to get step 1 to work, or figure out how to manually add them to the group.

Edited by workermaster
Link to comment

Right, so I kept trying things and thought that maybe I should try to run:

python3 gen_sa_accounts.py --quick-setup 1

and have Python create a new project with accounts. There were no errors during this step. 

 

Then I went ahead to step 3 and tried the steps again to enable the API:

image.thumb.png.9ff727377a2a7c1516ff597b6272d9ce.png

https://developers.google.com/admin-sdk/directory/v1/quickstart/python

 

I can do all the steps on that page, except the last one where I have to run:

python quickstart.py

 

I now get a different error when I run that command:

image.png.76c3a9c155d6f6f95cfd651868bad6e9.png

 

I also did not see a second project:

 

 

So I am not entirely sure that a new project was created in the previous step. Maybe that is the reason why I keep getting errors in this step?

 

I am not touching anything for now in fear of making an even bigger mess. 


 

 

 

EDIT: how do I manually add the service accounts? I have this screen:

image.thumb.png.27f7fc3b5c095b9697102db4999bbfa2.png

 

And these accounts:

image.png.67588e90d6819befdd12b3b9ba432aee.png

 

What do I need to copy into what field?

Edited by workermaster
Link to comment
4 hours ago, workermaster said:

 

And these accounts:

image.png.67588e90d6819befdd12b3b9ba432aee.png

 

What do I need to copy into what field?

I'm not an expert here and I think I got lucky doing this at the first attempt, but it looks like you've got your service accounts right there - they just need renaming!

Link to comment
12 minutes ago, DZMM said:

I'm not an expert here and I think I got lucky doing this at the first attempt, but it looks like you've got your service accounts right there - they just need renaming!

I think that I managed to create 100 of them, but according to the manual, I now need to enable an SDK API and I am stuck there. I get the errors mentioned in the posts above. I can skip those steps and manually add these service accounts to the group I created, but I don't know how to do that. 

 

What do you mean by renaming? None of the steps mention that. Do you know how I should rename them and how to add them to my project? I am trying these steps (step 2 is complete, a group is made, but step 1 gives me trouble):

image.thumb.png.d9fe2f699009e3ab08ae1d601d828e48.png

 

In the meantime, I was reading up on the next steps. I see that I need to have a teamdrive. When I login to Google drive, I only have 2 options:

image.png.ba8d9c78702f63c4c89e9e70880996f5.png

 

The top one is where I am currently uploading data. The bottom one is a shared drive. Is a shared drive the same thing as a team drive? 


Link to comment
31 minutes ago, workermaster said:

 

In the meantime, I was reading up on the next steps. I see that I need to have a teamdrive. When I login to Google drive, I only have 2 options:

image.png.ba8d9c78702f63c4c89e9e70880996f5.png

 

The top one is where I am currently uploading data. The bottom one is a shared drive. Is a shared drive the same thing as a team drive? 

The rclone drive is a team drive, i.e. a shared drive.

If you created the Group and added the service accounts as per Step 3, and then added the Group address to the team drive as per Step 4, you have finished setting up your SAs.

All that's left is to rename the SAs to whatever you want, store them in a folder somewhere, and then use them in the script where it tells you what to do. 

 

If you're unsure, please search this thread for service accounts where I ran through how to use them - find the first instance and go from there.  Everything you need is in here several times.

 

Quote

 

Optional: Create Service Accounts (follow steps 1-4). To mass rename the service accounts, use the following steps:

Place Auto-Generated Service Accounts into /mnt/user/appdata/other/rclone/service_accounts/

Run the following in terminal/ssh

Move to directory: cd /mnt/user/appdata/other/rclone/service_accounts/

Dry Run:

n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done

Mass Rename:

n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done
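For anyone who wants both steps in one place, the rename loop above can be wrapped in a small function. This is a sketch only, reusing the same sa_gdrive_upload naming from the instructions; the demo runs against a throwaway folder rather than the live one:

```shell
#!/bin/bash
# Sketch combining the rename step above. mass_rename points at a folder of
# SA .json files and renames them sa_gdrive_upload1.json, 2, ... (same
# pattern as the instructions); mv -n refuses to overwrite existing files.
mass_rename() {
  local dir="$1" n=1 f
  for f in "$dir"/*.json; do
    [ -e "$f" ] || return 0   # no .json files present
    mv -n "$f" "$dir/sa_gdrive_upload$((n++)).json"
  done
}

# Demo against a throwaway folder; in practice you would pass
# /mnt/user/appdata/other/rclone/service_accounts
demo=$(mktemp -d)
touch "$demo/first-sa.json" "$demo/second-sa.json"
mass_rename "$demo"
ls "$demo"   # sa_gdrive_upload1.json  sa_gdrive_upload2.json
```

Running the original dry-run first is still a good idea before touching the real folder.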

 

 

Edited by DZMM
Link to comment
14 minutes ago, DZMM said:

The rclone drive is a team drive, i.e. a shared drive.

If you created the Group and added the service accounts as per Step 3, and then added the Group address to the team drive as per Step 4, you have finished setting up your SAs.

All that's left is to rename the SAs to whatever you want, store them in a folder somewhere, and then use them in the script where it tells you what to do. 

 

If you're unsure, please search this thread for service accounts where I ran through how to use them - find the first instance and go from there.  Everything you need is in here several times.

 

 

Thanks! Good to know that the shared drive I have already created is a team drive. That clears up some confusion for me. 

 

While I have created the group, I have trouble adding the service accounts to the group. I hope that this is the last of the problems I encounter, so I can start the upload process. As I mentioned, I need to enable an SDK API, and then run a command to open quickstart.py and give permission to something. But I get an error when I do that. 

image.thumb.png.dc7b5649f9661ae6f86e3852460c433a.png

 

I hope @Kaizac or someone else can help with that error. I get the feeling that this is the last of the problems still in my way. Then again, I am a professional idiot and tend to find all the problems that can be found, so who knows. 

Link to comment
On 10/8/2022 at 10:20 AM, FabrizioMaurizio said:

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_t1_1" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/gdrive_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="100G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
#MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--vfs-read-ahead 30G"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 15s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
	sleep 5
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

  


 

Sorry for the late reply, I've been away. Have you gotten it to work yet? I noticed you're on 0.96.9.2 instead of 0.96.9.3, which fixed some permission issues with the umask. Update your script to the latest version and let us know if you're still having issues.

Link to comment
On 10/8/2022 at 6:04 PM, francrouge said:

I did try it just now and it does not seem to have helped.

 

I can create files and folders, but I can't delete or rename them, and only in my mount_rclone folder.

 

 

thx

 

 

Trying to circle back on what I've missed. Have you gotten it to work? You might try running the "Docker Safe New Perms" located under the "Tools" tab and see if that helps at all. You might want to give it a restart as well. If that still doesn't work, we can try to look at your SMB settings. 

Link to comment
1 hour ago, workermaster said:

Thanks! Good to know that the shared drive I have already created is a team drive. That clears up some confusion for me. 

 

While I have created the group, I have trouble adding the service accounts to the group. I hope that this is the last of the problems I encounter, so I can start the upload process. As I mentioned, I need to enable an SDK API, and then run a command to open quickstart.py and give permission to something. But I get an error when I do that. 

image.thumb.png.dc7b5649f9661ae6f86e3852460c433a.png

 

I hope @Kaizac or someone else can help with that error. I get the feeling that this is the last of the problems still in my way. Then again, I am a professional idiot and tend to find all the problems that can be found, so who knows. 

 

You don't have to do the sample part of that website, just enable the API. You should be able to continue on.

Link to comment
12 hours ago, Roudy said:

 

Trying to circle back on what I've missed. Have you gotten it to work? You might try running the "Docker Safe New Perms" located under the "Tools" tab and see if that helps at all. You might want to give it a restart as well. If that still doesn't work, we can try to look at your SMB settings. 

Hi 

 

I did try it, but I think I need to check the SMB config.

 

Any hints on what I should check?

 

 

 

image.thumb.png.b09fcd94f7c1efb77f423ccca92ebf31.png

 

image.thumb.png.463b04ee0afa1d174f200dc41f2d710b.png

image.thumb.png.e02d4b500a5fa4b4be60f6af6a91bb4f.png

 

thx

Link to comment
14 hours ago, Roudy said:

 

You don't have to do the sample part of that website, just enable the API. You should be able to continue on.

Thanks for the help. 

 

I tried running the command again to create the service accounts (I wanted to start a few steps back to make sure that I do everything correctly). I get an error telling me to authenticate again, but I don't know how to do that. 

image.thumb.png.3fe199ed3d50a9cf863104619b0be2f4.png

 

I also tried to continue with the accounts that I already have, but got an error telling me that the index is out of range:

image.png.dc3ed074bd907adcde112f5acf70a8d1.png

 

I realise that the last step is optional. You can also add the accounts manually to the group. Do you know how to do that? 

I made some screenshots in this post asking how to do it: 

Do I need to copy the email addresses from the service accounts and paste them somewhere?

 

 

 

EDIT: I think that I figured it out. I opened up 10 of the SA .json files and copied the email address inside. Then I added that to the group and then added the group to the teamdrive:

image.thumb.png.0e7c376501ebcf4982c54ffaa4acee19.png

 

I have only done 10 for now, but 7.5TB per day is plenty for me. So now I have to figure out how to rename the service accounts, where to save them and how to edit the mount and upload script so they both know to use the teamdrive. 
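Rather than opening each .json file by hand, the address can be pulled out with grep. A small jq-free sketch, assuming only the standard client_email field that Google puts in every service-account key file:

```shell
#!/bin/bash
# Sketch: extract the service-account e-mail address from an SA key file,
# so it can be pasted into the Google Group. Assumes the standard
# "client_email" field found in Google service-account JSON keys.
sa_email() {
  grep -o '"client_email": *"[^"]*"' "$1" | cut -d'"' -f4
}

# Example usage - print every address in the service_accounts folder:
# for f in /mnt/user/appdata/other/rclone/service_accounts/*.json; do
#   sa_email "$f"
# done
```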

Edited by workermaster
Link to comment

So, I am almost there. Turns out that I created about 800 SAs, some with a new project and some not. I don't think that should matter. I have removed a lot of them and only kept 20. Of these 20, I have added 10 to the group and added that group to the teamdrive. 

 

The accounts are saved in appdata and renamed according to the instructions. As far as I can tell, this only leaves the editing of the mount and upload script. I do need a little bit of help here because I can't figure out how to edit them so they point to a teamdrive and not just Google drive. 

 

This is the upload script I have now:

image.thumb.png.7c5d8715ecb99ff9938c4a3cc8f88a8e.png

 

And this is the mount script:

image.thumb.png.e1275922f7423e5a3b8f43c84f6e1e32.png

 

They pointed to the drive at the top but now need to point to the teamdrive:

image.png.02897c3b0295041d7cf01cd8694054c7.png

 

And (hopefully my last question): how do I move the data that is already uploaded in the secure folder to the teamdrive? Someone mentioned using the move command in rclone, but I don't know how to use that. Can I just have 2 mount scripts running at the same time, copy the data already uploaded to my local system, and then later have it moved to the teamdrive?

Edited by workermaster
Link to comment
2 hours ago, workermaster said:

So, I am almost there. Turns out that I created about 800 SAs, some with a new project and some not. I don't think that should matter. I have removed a lot of them and only kept 20. Of these 20, I have added 10 to the group and added that group to the teamdrive. 

I hope they are in a recycling bin somewhere so you can restore. Given the difficulty you had creating them, you could have just put them in a folder for future use, e.g. I'm using about 90 service accounts now across multiple mounts.

 

2 hours ago, workermaster said:

And (hopefully my last question): how do I move the data that is already uploaded in the secure folder to the teamdrive?


As long as you have server-side transfers set up in your rclone config, i.e.

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive_new.json
team_drive = xxxxxxxxxxxxxxxxxxxxxx
server_side_across_configs = true

 

then it's as simple as running:

 

rclone move source_mount: target_mount:

 

you can add in other arguments if you want, e.g. this is how I move files from my main tdrive to one of my movie tdrives as an overnight job (again, this is all covered in this thread several times):
 

rclone move --min-age 30d tdrive:crypt/encrypted_movies_folder_name tdrive_movies_adults:crypt/encrypted_movies_folder_name \
--user-agent="transfer2" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

 

Link to comment
3 minutes ago, DZMM said:

I hope they are in a recycling bin somewhere so you can restore. Given the difficulty you had creating them, you could have just put them in a folder for future use, e.g. I'm using about 90 service accounts now across multiple mounts.

 


As long as you have server-side transfers set up in your rclone config, i.e.

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive_new.json
team_drive = xxxxxxxxxxxxxxxxxxxxxx
server_side_across_configs = true

 

then it's as simple as running:

 

rclone move source_mount: target_mount:

 

you can add in other arguments if you want, e.g. this is how I move files from my main tdrive to one of my movie tdrives as an overnight job (again, this is all covered in this thread several times):
 

rclone move --min-age 30d tdrive:crypt/encrypted_movies_folder_name tdrive_movies_adults:crypt/encrypted_movies_folder_name \
--user-agent="transfer2" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

 

Thanks for the info. I still have a copy of all the accounts saved somewhere, so they are safe. 

 

I tried creating a tdrive and got to the part where it asks me the location of the SA credentials:

image.png.e323dd025539b1570ee4fa19a12113b2.png

 

I assume that I need to put the path of the SA accounts there. That would be:

image.thumb.png.0787fac2b86a964f5da0824f9b87c374.png

I had to save them there according to the renaming instructions. 

 

But when I enter that path, it doesn't work:

image.png.36f0e043033fde2879235aeb85f3efac.png

because it can't find the files in the next step where I say that it is a team drive:

image.thumb.png.bb3bc83c6fc33e4d464f0722e39e6eb5.png

 

I see that it is asking for a file, and not a path, but I thought it needed the path to all 20 accounts? What am I supposed to put there?

Link to comment
1 hour ago, workermaster said:

Thanks for the info. I still have a copy of all the accounts saved somewhere, so they are safe. 

 

I tried creating a tdrive and got to the part where it asks me the location of the SA credentials:

image.png.e323dd025539b1570ee4fa19a12113b2.png

 

I assume that I need to put the path of the SA accounts there. That would be:

image.thumb.png.0787fac2b86a964f5da0824f9b87c374.png

I had to save them there according to the renaming instructions. 

 

But when I enter that path, it doesn't work:

image.png.36f0e043033fde2879235aeb85f3efac.png

because it can't find the files in the next step where I say that it is a team drive:

image.thumb.png.bb3bc83c6fc33e4d464f0722e39e6eb5.png

 

I see that it is asking for a file, and not a path, but I thought it needed the path to all 20 accounts? What am I supposed to put there?


Just add the service accounts directly to your rclone config file via the plugin editing window. When done your tdrive remote "pairs" should look like this:

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/path_to_first_service_account/sa_tdrive_new.json
team_drive = xxxxxxxxxxxxxxxxxxxx
server_side_across_configs = true

[tdrive_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxxxxx


The whole point of the service accounts is that the script automatically rotates the service account in use, so that you can upload 750GB on each run of the script. Read the script notes and it will be clear. E.g. if you tell the script to rotate 10 SAs and your SA files start with sa_tdrive_new, then the script will change the SA used on each run (they must all be in the same location), i.e.:


sa_tdrive_new1.json
sa_tdrive_new2.json
sa_tdrive_new3.json
sa_tdrive_new4.json
sa_tdrive_new5.json
sa_tdrive_new6.json
sa_tdrive_new7.json
sa_tdrive_new8.json
sa_tdrive_new9.json
sa_tdrive_new10.json
 

and on the 11th run, back to 1:

 

sa_tdrive_new1.json
sa_tdrive_new2.json
etc.

 

You need 14-16 SAs to safely max out a gig line.
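The rotation idea can be sketched roughly like this. This is only an illustration, not the actual upload script from the guide; the counter-file path, the SA folder, and the filenames are assumptions you would adapt to your own setup:

```shell
#!/bin/bash
# Minimal sketch of service-account rotation (illustration only).
# SA_DIR, COUNTER_FILE and the filenames are placeholder assumptions.
SA_DIR="${SA_DIR:-/mnt/user/appdata/other/rclone/service_accounts}"
COUNTER_FILE="${COUNTER_FILE:-/tmp/sa_counter}"
SA_TOTAL=10   # number of SAs in the rotation

# Read the SA number used on the last run (0 if this is the first run)
COUNT=$(cat "$COUNTER_FILE" 2>/dev/null || echo 0)

# Advance to the next SA, wrapping back to 1 on the 11th run
COUNT=$(( COUNT % SA_TOTAL + 1 ))
echo "$COUNT" > "$COUNTER_FILE"

SA_FILE="$SA_DIR/sa_tdrive_new${COUNT}.json"
echo "Using service account: $SA_FILE"

# The actual upload would then run with this SA, e.g.:
# rclone move /mnt/user/local/tdrive_vfs tdrive_vfs: \
#   --drive-service-account-file="$SA_FILE" --drive-stop-on-upload-limit
```

Because each run picks the next numbered file, every run gets a fresh 750GB quota, which is what lets a scheduled script keep a gig line saturated.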

Edited by DZMM
Link to comment
6 minutes ago, DZMM said:


Just add the service accounts directly to your rclone config file via the plugin editing window. When done your tdrive remote "pairs" should look like this:

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/path_to_first_service_account/sa_tdrive_new.json
team_drive = xxxxxxxxxxxxxxxxxxxx
server_side_across_configs = true

[tdrive_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxxxxx


The whole point of the service accounts is that the script automatically rotates the service account in use, so that you can upload 750GB on each run of the script. Read the script notes and it will be clear. E.g. if you tell the script to rotate 10 SAs and your SA files start with sa_tdrive_new, then the script will change the SA used on each run (they must all be in the same location), i.e.:


sa_tdrive_new1.json
sa_tdrive_new2.json
sa_tdrive_new3.json
sa_tdrive_new4.json
sa_tdrive_new5.json
sa_tdrive_new6.json
sa_tdrive_new7.json
sa_tdrive_new8.json
sa_tdrive_new9.json
sa_tdrive_new10.json
 

and on the 11th run, back to 1:

 

sa_tdrive_new1.json
sa_tdrive_new2.json
etc.

 

You need 14-16 SAs to safely max out a gig line.

I can't figure out how to add them or which window you mean. I assume this window:

image.png.4a96f7ffe15e015e79bb92b10d279481.png

 

What I did (for testing) is, instead of pointing the SA location to the folder containing all 20 SA accounts as in my previous post, go a level deeper and give it the path to the first SA account in that folder. It now points to:

image.thumb.png.018a1954ee819b54146b5fc1cecb8b67.png

 

I then created new upload and mount scripts, and it has now been uploading for half an hour at full speed. I hope this works and that it will change over to a new service account when it reaches the 750GB mark. 

 

Link to comment
38 minutes ago, workermaster said:

I can't figure out how to add them or which window you mean. I assume this window:

image.png.4a96f7ffe15e015e79bb92b10d279481.png

 

What I did (for testing) is, instead of pointing the SA location to the folder containing all 20 SA accounts as in my previous post, go a level deeper and give it the path to the first SA account in that folder. It now points to:

image.thumb.png.018a1954ee819b54146b5fc1cecb8b67.png

 

I then created new upload and mount scripts, and it has now been uploading for half an hour at full speed. I hope this works and that it will change over to a new service account when it reaches the 750GB mark. 

 

You're misunderstanding the way service accounts work. They function as regular accounts: instead of using your mail account, you use a service account to create your mount. What I did is rename some of the service account files after the mount they represent, like sa_gdrive_media.json for your media mount; you can put them in a different folder. So you have to put in not the path to the folder containing the service accounts, but the path to the exact file, which you did in your last post. By separating this, you will also not use these SAs for your upload script, isolating any potential API bans.
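As an illustration, the separation could look like this. The folder layout and filenames below are placeholders, not the guide's exact paths:

```
# Mount remote -- one dedicated SA file, kept out of the upload rotation:
[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/mount/sa_gdrive_media.json
team_drive = xxxxxxxxxxxxxxxxxxxx

# Upload script -- pointed at a *separate* folder of numbered SAs, e.g.
# /mnt/user/appdata/other/rclone/upload/sa_tdrive_new1.json ... sa_tdrive_new20.json
```

Keeping the mount's SA out of the upload rotation means an upload-side ban never takes your mount offline.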

 

The upload script will pick the first of the service accounts. When that one hits the 750GB limit, the script will stop. That's why you put it on a cron job, so it will start again with service account 2 until that one hits the limit, and so on.

Just so you understand: the script doesn't just keep running through all your service accounts on its own. You have to restart it through a cron job.
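For example, a crontab entry (or an equivalent custom schedule in the User Scripts plugin) that restarts the upload script every hour could look like this. The script path is a placeholder, not your actual path:

```
# Run the upload script at the top of every hour; each run picks the next SA
0 * * * * /boot/config/plugins/user.scripts/scripts/rclone_upload/script
```

If a previous run is still uploading, the guide's scripts use a check file to make the new run exit instead of starting a second transfer.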

 

About moving your current folder to your team drive: it depends on whether the files are encrypted the same way as your new team drive mount, i.e. with the same password and salt. If that is the case, you can just drag the whole folder from your Gdrive to your team drive on the Google Drive website, which saves a lot of time waiting for rclone transfers to finish. You can even do this to transfer to another account. The encryption has to be identical, though.

Link to comment
1 hour ago, Kaizac said:

You're misunderstanding the way service accounts work. They function as regular accounts: instead of using your mail account, you use a service account to create your mount. What I did is rename some of the service account files after the mount they represent, like sa_gdrive_media.json for your media mount; you can put them in a different folder. So you have to put in not the path to the folder containing the service accounts, but the path to the exact file, which you did in your last post. By separating this, you will also not use these SAs for your upload script, isolating any potential API bans.

 

The upload script will pick the first of the service accounts. When that one hits the 750GB limit, the script will stop. That's why you put it on a cron job, so it will start again with service account 2 until that one hits the limit, and so on.

Just so you understand: the script doesn't just keep running through all your service accounts on its own. You have to restart it through a cron job.

 

About moving your current folder to your team drive: it depends on whether the files are encrypted the same way as your new team drive mount, i.e. with the same password and salt. If that is the case, you can just drag the whole folder from your Gdrive to your team drive on the Google Drive website, which saves a lot of time waiting for rclone transfers to finish. You can even do this to transfer to another account. The encryption has to be identical, though.

I think that I might still not fully understand how to set it up, then. I get how it works now (I think), but I'm not sure if I set it up correctly. 

 

I think that I did use a service account to create the mount. I noticed that I didn't have to log in to Google to verify myself when creating these mount points. Is this mount correct?

image.thumb.png.b57770aee3fe3798fd3b9360626908ff.png

The service account file is the first file in the folder:

image.png.33d976fc5ec47389d86117c77752f71e.png

 

I assume that when the upload reaches 750GB, it shuts itself down, and, like you said, the next time it starts from a cron job it will use the second account? Or do I have something set up wrong and need to change things?

 

Sorry for asking so many questions. I am trying but am not that good when it comes to new things like these. 

 

 

 

EDIT:

I realised that I didn't change any parameters in the upload script for the service accounts:

image.thumb.png.25db8875f736116697548b24ad354b5e.png

I guess that I need to change these as well. I will do that now:

image.thumb.png.d5783981d1ac8ca0a95e5ebbdc123c95.png

Edited by workermaster
Link to comment
