Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

I haven't gotten the message yet, but I've decided to migrate everything back to my own storage anyway.
I didn't really use much more storage in the cloud than I have locally, so it was more of a backup than anything. I just have to find a new offsite backup solution for non-replaceable data.
I think the (almost) free ride from Google is finally coming to an end.

Link to comment
1 minute ago, Waseh said:

I haven't gotten the message yet, but I've decided to migrate everything back to my own storage anyway.
I didn't really use much more storage in the cloud than I have locally, so it was more of a backup than anything. I just have to find a new offsite backup solution for non-replaceable data.
I think the (almost) free ride from Google is finally coming to an end.

Look at Backblaze if you just want a backup; it seems to get positive reviews. Their only issue at the moment is apparently that restores are done at the file level - I don't know how that would work with tar snapshots. There are some other alternatives. I think I'll just stick with an offsite backup at a family member's place and sync with their NAS for mutual backup.

Link to comment

So I'm making the jump from Google Drive to a Dropbox plan with 3 users (I've found 3 people who will share it with me).

 

With Gdrive becoming read-only, my plan is the following:

  • Only load new files onto Dropbox, and hire a VPS to transfer the older files from Gdrive to Dropbox
  • Keep the Gdrive read-only on Unraid until said transfer is done

My question is: can I run two instances of the rclone user scripts, and change the rclone upload script to upload to Dropbox? The part I think would be a bit complicated is running the mount_mergerfs folder differently so I can differentiate between the one that's Dropbox and the one that's Gdrive.

 

From what I understand of how things work:

  • isn't the mount_mergerfs folder for current files ready to be uploaded, and so that Plex recognises new files?
  • can I use the mount_rclone folder for Gdrive, since it's read-only and nothing will be uploaded to it?

Am I making sense here?
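
For reference, a minimal sketch of how the two instances might be kept apart, using the variable names from the upload script further down the thread (remote and folder names below are just examples):

# Upload script - one instance is enough, pointed at Dropbox only
RcloneCommand="move"
RcloneRemoteName="dropbox_media_vfs"
RcloneUploadRemoteName="dropbox_media_vfs"
LocalFilesShare="/mnt/user/local"
RcloneMountShare="/mnt/user/mount_rclone"

The read-only Gdrive remote would keep its own mount under the same share (e.g. /mnt/user/mount_rclone/gdrive_media_vfs) with no upload script scheduled against it, and each mount instance would need its own mergerfs folder (e.g. /mnt/user/mount_mergerfs/dropbox_media_vfs and /mnt/user/mount_mergerfs/gdrive_media_vfs) so the two don't overlap.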

Link to comment

Not that it matters much anymore, with unlimited storage coming to an end, but are there any tweaks I can make to mergerfs to make it handle torrent traffic better? I only get about 50 MB/s on my gigabit connection with qBittorrent, while NZBs go at full speed.
I'm assuming this is because mergerfs can't keep up with BitTorrent's random writes?
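
For reference, the mergerfs knob usually pointed at for random-write workloads is cache.files - a minimal sketch, where the branch paths are just examples and the other options are assumed to match a typical mount from this guide (check them against your own mount script):

mergerfs /mnt/user/local/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs /mnt/user/mount_mergerfs/gdrive_media_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,dropcacheonclose=true,cache.files=auto-full

If your mount currently uses cache.files=partial, switching it to auto-full (or full) lets the page cache absorb more of the random I/O. The other common workaround is to point qBittorrent's incomplete-download folder straight at the local branch (/mnt/user/local/...) so the random writes never pass through mergerfs, and only completed files land in the merged folder.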

Link to comment

Hi,

is there any guide to mounting 10 shared drives?

If someone could give an example of the config for, say, 2 shared drives, that would be awesome.

Each of my shared drives is set up like this, for example:

drive 1 name = movie - inside drive = Media/movie

drive 2 name = tv - inside drive = Media/tv

 

rclone config
Current remotes:

Name                 Type
====                 ====
4kmovie              drive
4ktv                 drive
gdrive               drive
movie                drive
ppv                  drive
remux                drive
tv                   drive
tvb                  drive
 

Link to comment
Hi,
is there any guide to mounting 10 shared drives?
If someone could give an example of the config for, say, 2 shared drives, that would be awesome.
Each of my shared drives is set up like this, for example:
drive 1 name = movie - inside drive = Media/movie
drive 2 name = tv - inside drive = Media/tv
 
rclone config
Current remotes:
Name                 Type
====                 ====
4kmovie              drive
4ktv                 drive
gdrive               drive
movie                drive
ppv                  drive
remux                drive
tv                   drive
tvb                  drive
 
If it's all on the same cloud provider, just make one remote and create folders; otherwise you would need to tweak the script a bit, I think.
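
For reference, a minimal sketch of the single-remote approach (the remote name, team drive ID and paths below are just examples, and the token line that rclone config normally adds is omitted):

[media]
type = drive
scope = drive
team_drive = XXXXXXXXXXXXXXXXXXX

With that one remote mounted at, say, /mnt/user/mount_rclone/media, the libraries would simply point at subfolders such as /mnt/user/mount_rclone/media/Media/movie and /mnt/user/mount_rclone/media/Media/tv. Keeping all ten existing remotes instead would mean running ten copies of the mount script, one per remote, each with its own RcloneRemoteName.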

Sent from my Pixel 2 XL using Tapatalk

Link to comment

So, using the upload script, what would I need to change to move files back to my local drives? Google has kindly informed me that I'm over my limit, sadly. I have tried messing with it myself but to no avail. I tried swapping the local and remote paths around, but clearly I'm not doing something correctly. I even asked ChatGPT and tried that, but still can't get it to work.
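
For reference, rather than rewriting the script, the simplest route is probably a one-off reverse job - a minimal sketch, using the remote name and local share from the script below (swap in your own paths, and test with --dry-run first):

rclone move gdrive_media_vfs: /mnt/user/local1/gdrive_media_vfs \
	-vv \
	--transfers 4 \
	--checkers 8 \
	--order-by modtime,ascending \
	--exclude "*fuse_hidden*" \
	--delete-empty-src-dirs

With the source and destination swapped like this, rclone pulls everything from the remote down to the local share; the mount can stay up while it runs.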

 

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local1" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone1" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="1d" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="0"
BWLimit2Time="08:00"
BWLimit2="3000"
BWLimit3Time="16:00"
BWLimit3="3000"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.11.4" # Choose IP to bind upload to.
NetworkAdapter="eth1" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for backup files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
###############################################################################

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
	LocalFilesLocation="$LocalFilesShare"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
	echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
	exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
	CounterNumber=$(find -name 'counter*' | cut -c 11,12)
	CounterCheck="1"
	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		CounterNumber="1"
	fi
	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
	ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	else
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
	fi
else
	RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
	DeleteEmpty="--delete-empty-src-dirs "
else
	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
	DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
	LocalFilesLocation="$LocalFilesShare"
	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
	BackupRemoteLocation=""
	BackupRemoteDeletedLocation=""
	BackupRetention=""
	BackupDir=""
fi

# process files
	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
	--user-agent="$RcloneUploadRemoteName" \
	-vv \
	--buffer-size 512M \
	--drive-chunk-size 512M \
	--tpslimit 8 \
	--checkers 8 \
	--transfers 4 \
	--order-by modtime,$ModSort \
	--min-age $MinimumAge \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--exclude *fuse_hidden* \
	--exclude *_HIDDEN \
	--exclude .recycle** \
	--exclude .Recycle.Bin/** \
	--exclude *.backup~* \
	--exclude *.partial~* \
	--drive-stop-on-upload-limit \
	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
	--bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
	else
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		CounterNumber=$((CounterNumber+1))
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
	fi
else
	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

Link to comment

Is anyone else getting slow upload speeds recently?  My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful
 

2023/06/18 22:37:15 INFO  : 
Transferred:          28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
Checks:                 2 / 3, 67%
Deleted:                1 (files), 0 (dirs)
Transferred:            0 / 1, 0%
Elapsed time:       9m1.3s


It's been so long since I looked at my script I don't even know what to look at first ;-)

Have I missed some rclone / gdrive updates? Thanks

Link to comment
On 6/18/2023 at 10:48 PM, DZMM said:

Is anyone else getting slow upload speeds recently?  My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful
 

2023/06/18 22:37:15 INFO  : 
Transferred:          28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
Checks:                 2 / 3, 67%
Deleted:                1 (files), 0 (dirs)
Transferred:            0 / 1, 0%
Elapsed time:       9m1.3s


It's been so long since I looked at my script I don't even know what to look at first ;-)

Have I missed some rclone / gdrive updates? Thanks

Don't think so, but it's soon over anyway, so we won't be able to upload anymore :(

Link to comment
On 6/18/2023 at 4:48 PM, DZMM said:

Is anyone else getting slow upload speeds recently?  My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful
 

2023/06/18 22:37:15 INFO  : 
Transferred:          28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
Checks:                 2 / 3, 67%
Deleted:                1 (files), 0 (dirs)
Transferred:            0 / 1, 0%
Elapsed time:       9m1.3s


It's been so long since I looked at my script I don't even know what to look at first ;-)

Have I missed some rclone / gdrive updates? Thanks

 

I've only been repatriating my data back to my array, as my deadline for it to go into the read-only state is 7/10. Perhaps your account has hit that state already?

 

My notices just started about a week or so ago, and they say 60 days, but the deadline is more like 30 days... so it's definitely coming!

Link to comment

I’m downloading my data more selectively but just wanted to thank you for years of flawless updates and work @DZMM. I had no idea about how any of this worked until I found this thread and you (and others) were so helpful and patient. 

I’m bummed to be losing the unlimited drive but I’m also oddly excited to be owning my data outright again. Silver linings and all that!

Link to comment

I've just read about 10 pages of posts to try and get up to speed on the "shutdown". Firstly, a big thanks to @Kaizac for patiently supporting everyone while I've been busy with work. I wrote the scripts as a challenge project, as someone who isn't a coder, over a few months - I literally had to Google each step ("what command do I use to do xxx?"), so it's great he's here to help with stuff outside the script, like issues regarding permissions etc., as I wouldn't be able to help!

 

Back to business - can someone share what's happening with the "shutdown" please, as I'm out of the loop? I moved my cheaper Google account to a more expensive one, I think about a year ago, and all was fine until my recent upload problems - but I think those were from my seedbox and unrelated, as I've started uploading from my Unraid server again and everything looks OK.

 

I've read mentions of emails and alerts in the Google dashboard - could someone share their email/screenshots please, and also say which Google account they have?

Link to comment
7 hours ago, DZMM said:

I've just read about 10 pages of posts to try and get up to speed on the "shutdown". Firstly, a big thanks to @Kaizac for patiently supporting everyone while I've been busy with work. I wrote the scripts as a challenge project, as someone who isn't a coder, over a few months - I literally had to Google each step ("what command do I use to do xxx?"), so it's great he's here to help with stuff outside the script, like issues regarding permissions etc., as I wouldn't be able to help!

 

Back to business - can someone share what's happening with the "shutdown" please, as I'm out of the loop? I moved my cheaper Google account to a more expensive one, I think about a year ago, and all was fine until my recent upload problems - but I think those were from my seedbox and unrelated, as I've started uploading from my Unraid server again and everything looks OK.

 

I've read mentions of emails and alerts in the Google dashboard - could someone share their email/screenshots please, and also say which Google account they have?

They already started limiting new Workspace accounts a few months ago to just 5TB per user. But recently they also started limiting older Workspace accounts with fewer than 5 users, sending them a message that they either have to pay for the storage used or the account will go into read-only within 30 days. Paying would be ridiculous, because it would be thousands per month. And even when you purchase more users, so you reach the minimum of 5, there is no guarantee your storage limit will actually be lifted. People have had to chat with support to request 5TB of additional storage, and were even asked to explain the necessity, with the request often refused.

 

So far, I haven't gotten the message on my single-user Enterprise Standard account. Some speculate that only the accounts using massive storage on the personal Gdrive get flagged, and not the ones that only store on team drives. I think it probably has to do with your country and the original offering, and Google might be avoiding European countries because they don't want the legal issues. I don't know where everyone is from though, so that might not be true either.

 

Anyway, when you do get the message, your only realistic alternative is moving to Dropbox or some other, lesser-known providers. It will be pricey either way.

 

 

Link to comment

I personally store everything on a team drive, but I don't have as much data there as most (around 30TB), and everything is still working for me - I haven't received any emails about it. I've added some additional drives to my setup and have been converting video files to H.265 and moving them locally, but I'm still using the mount to stream. I'll continue to use it as long as I can. Once it's gone, I have a friend who is interested in sharing the cost of the unlimited Dropbox with me. Just need one more, haha!

Link to comment
15 hours ago, Kaizac said:

They already started limiting new Workspace accounts a few months ago to just 5TB per user. But recently they also started limiting older Workspace accounts with fewer than 5 users, sending them a message that they either have to pay for the storage used or the account will go into read-only within 30 days. Paying would be ridiculous, because it would be thousands per month. And even when you purchase more users, so you reach the minimum of 5, there is no guarantee your storage limit will actually be lifted. People have had to chat with support to request 5TB of additional storage, and were even asked to explain the necessity, with the request often refused.

 

So far, I haven't gotten the message on my single-user Enterprise Standard account. Some speculate that only the accounts using massive storage on the personal Gdrive get flagged, and not the ones that only store on team drives. I think it probably has to do with your country and the original offering, and Google might be avoiding European countries because they don't want the legal issues. I don't know where everyone is from though, so that might not be true either.

 

Anyway, when you do get the message, your only realistic alternative is moving to Dropbox or some other, lesser-known providers. It will be pricey either way.

 

 

I got that message. Within 30 days my account will be read-only. When I go to the dashboard it says there may be interruptions after that. I came from G Suite to Workspace Enterprise Standard, and I live in Europe. I really don't know what to do.

I have around 50TB on a team drive.

Support said there's nothing I can do; after that date I won't be able to upload anymore. It should still be possible to download or delete data. Maybe I will keep the data for a few months until I get to the point of buying the drives.

What cloud service are you guys migrating to, if any?

@DZMM You said you already migrated a year ago?

Thanks for the great work on this and the support through the years, especially @DZMM and @Kaizac. It's a shame it can't continue :(

 

Link to comment
45 minutes ago, Bjur said:

I got that message. Within 30 days my account will be read-only. When I go to the dashboard it says there may be interruptions after that. I came from G Suite to Workspace Enterprise Standard, and I live in Europe. I really don't know what to do.

I have around 50TB on a team drive.

Support said there's nothing I can do; after that date I won't be able to upload anymore. It should still be possible to download or delete data. Maybe I will keep the data for a few months until I get to the point of buying the drives.

What cloud service are you guys migrating to, if any?

@DZMM You said you already migrated a year ago?

Thanks for the great work on this and the support through the years, especially @DZMM and @Kaizac. It's a shame it can't continue :(

 

I think he means the G Suite to Enterprise Standard switch we all had to do with Google's rebranding.

 

But you do have Enterprise Standard? And if so, when you go to billing or products, what does it say below your product? For me it says unlimited storage.

 

Right now I don't have to migrate, but as soon as I do, I will go fully local. You can join a Dropbox group, but you would need to trust those people. That is too much of a risk for me. So with your storage of "just" 50TB, it would be a no-brainer for me: just get the drives. You will have paid for them within the year. In the end, Dropbox will also end their unlimited plan someday, and it will be the same problem.

 

And look at your use case. Is it just media, or mostly backups? Backups can be done with Backblaze and other offerings that aren't too costly.

Media has alternatives in Plex shares and Debrid services. The latter I'm hugely impressed by. But that also depends on how particular you are about your media.

Link to comment
49 minutes ago, Bjur said:

I got that message. Within 30 days my account will be read-only. When I go to the dashboard it says there may be interruptions after that. I came from G Suite to Workspace Enterprise Standard, and I live in Europe. I really don't know what to do.

I have around 50TB on a team drive.

Support said there's nothing I can do; after that date I won't be able to upload anymore. It should still be possible to download or delete data. Maybe I will keep the data for a few months until I get to the point of buying the drives.

What cloud service are you guys migrating to, if any?

@DZMM You said you already migrated a year ago?

Thanks for the great work on this and the support through the years, especially @DZMM and @Kaizac. It's a shame it can't continue :(

 

I'm on Enterprise Standard.

 

I hope I don't have to move to Dropbox, as I think it will be quite painful to migrate since I have over a PB stored.

 

I have a couple of people I could move with, I think - @Kaizac, just use your own encryption passwords if you're worried about security.

 

I actually think that if I do move, I'll try to encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything, just use the web-based Sonarr etc. It seems silly for all of us to maintain separate libraries when we could have just one.

 

Some of my friends have done that already in a different way - they've stopped their local Plex efforts and just use my Plex server.

Link to comment
1 minute ago, DZMM said:

I'm on Enterprise Standard.

 

I hope I don't have to move to Dropbox, as I think it will be quite painful to migrate since I have over a PB stored.

 

I have a couple of people I could move with, I think - @Kaizac, just use your own encryption passwords if you're worried about security.

 

I actually think that if I do move, I'll try to encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything, just use the web-based Sonarr etc. It seems silly for all of us to maintain separate libraries when we could have just one.

 

Some of my friends have done that already in a different way - they've stopped their local Plex efforts and just use my Plex server.

Security is not my issue; it's continuity. One person will be the admin of the Dropbox account. So if I'm not the admin, someone could just delete my data or my user. That's a big issue for me.

 

But if you have people you trust, then sharing storage and a pooled library is an option. Like I said, though, Dropbox is going to end this as well - maybe in 1 year, maybe in 5. The problem will be the same as the one we have right now.

Link to comment
On 6/25/2023 at 3:51 AM, DZMM said:

I'm on Enterprise Standard.

 

I hope I don't have to move to Dropbox, as I think it will be quite painful to migrate since I have over a PB stored.

 

I have a couple of people I could move with, I think - @Kaizac, just use your own encryption passwords if you're worried about security.

 

I actually think that if I do move, I'll try to encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything, just use the web-based Sonarr etc. It seems silly for all of us to maintain separate libraries when we could have just one.

 

Some of my friends have done that already in a different way - they've stopped their local Plex efforts and just use my Plex server.

 

 

A BIG thank you to you, @DZMM! I had heard from various friends that they were doing this - but never thought about it until I saw your post. I didn't really get to take FULL advantage of it, as I moved house and my internet upload speeds were painfully slow.

 

It was fun while it lasted, but Google is known for killing things. They announced at the start of the year that Education accounts were losing unlimited storage, so it was only a matter of time before they came for us. 

 

I am slowly repatriating my data back to my array. 

 

 

Has anyone else noticed that Google's usage figure is vastly overstated? My usage showed 60TB used when I started downloading. It's dropped to 30TB now, but my local array has only used up 17TB. I ran out and got a bunch of drives thinking I'd need 60TB free on my array. Maybe I don't.

 

Link to comment
On 6/25/2023 at 3:51 AM, DZMM said:

I actually think that if I do move, I'll try to encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything, just use the web-based Sonarr etc. It seems silly for all of us to maintain separate libraries when we could have just one.

 

I actually do this with a friend of mine at the moment. It makes things so easy to have shared cloud storage for both of us to access, since we are geographically separated. We mainly use Overseerr for requests, and we both have access to the Starr apps to fix anything. I have one box that does all the media management and has the R/W service accounts, and he has a few RO service accounts so it doesn't get out of sync. It's been working well for years.

Link to comment

Hey, my friend and I had the mount script working, but after an Unraid update it wouldn't mount. I fixed the "port in use" error, but now it just says poll-interval is not supported, even though it ran fine before the update. I'm not sure what to look for - is there a folder I need to delete, or some other setting for the ports?

(Dropbox)

Link to comment

Dropbox root 'crypt': Failed to get StartCursor: path/not_found/

As you can see in the title, I'm getting the above error message in the log file.

I have changed over to Dropbox, and I'm setting up the crypt for the very first time.

[screenshot: rclone config showing the dropbox and crypt remotes]

As you might see in the picture, is this the correct way to do this? I'm not sure how the remote: part should work in the crypt section of the config!

Link to comment
1 hour ago, Playerz said:

Dropbox root 'crypt': Failed to get StartCursor: path/not_found/

As you can see in the title, I'm getting the above error message in the log file.

I have changed over to Dropbox, and I'm setting up the crypt for the very first time.

[screenshot: rclone config showing the dropbox and crypt remotes]

As you might see in the picture, is this the correct way to do this? I'm not sure how the remote: part should work in the crypt section of the config!

 

Are you using Dropbox business? If so, you need to add a / before your folder.

 

So here's how the crypt remote works: it will use your regular remote, in your case "dropbox". By adding a path after the colon you define which folder the crypt will point to. So let's say you have a remote for your Dropbox root directory. If you then point the crypt remote to dropbox:subfolder, it will create that subfolder in your root directory, and all the files you upload through the crypt will go into that subfolder.

 

In your case, dropbox_media_vfs is the name of your remote. But now you're also using it as the subfolder name, which seems a bit long for practicality. So maybe you want to use something like "archive" or "files", whatever you like.

 

Then your crypt remote configuration would be as follows:

[dropbox_media_vfs]
type = crypt
remote = dropbox:/archive
password = XX
password2 = XX

 

This is assuming you use Dropbox Business, which requires the / in the remote path; this is different from Google, and explains why you ran into the error.
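
For completeness, a minimal sketch of the base remote that the crypt points at (the token line is filled in by rclone config and is elided here):

[dropbox]
type = dropbox
token = XX

The remote = dropbox:/archive line in the crypt section refers to this [dropbox] section by name.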

Link to comment

Thanks @Kaizac for your fast response! Seems like you've got a lot of knowledge on this.

So, if you want to, could you do a little sanity check of my setup and see if I'm messing something up?

The first issue I'm hitting a lot is:

[screenshot: error message on reboot]

This happens each time I need to reboot the system.

Second, my mounts:

[screenshot: mount script settings]

and my upload script:

[screenshot: upload script settings]

Is there anything amiss? It has worked, but I might have done something wrong that I don't know about. Also, all containers are mounted with /dropbox/ -> /mnt/user/

And thanks for the help so far :)

Link to comment
23 minutes ago, Playerz said:

Thanks @Kaizac for your fast response! Seems like you've got a lot of knowledge on this.

So, if you want to, could you do a little sanity check of my setup and see if I'm messing something up?

The first issue I'm hitting a lot is:

[screenshot: error message on reboot]

This happens each time I need to reboot the system.

Second, my mounts:

[screenshot: mount script settings]

and my upload script:

[screenshot: upload script settings]

Is there anything amiss? It has worked, but I might have done something wrong that I don't know about. Also, all containers are mounted with /dropbox/ -> /mnt/user/

And thanks for the help so far :)

That looks good. If you get your remote sorted, these scripts should work fine.

 

Regarding the reboot issue: it's a known problem that shutdowns are often troublesome with rclone, because rclone keeps continuous processes running that Unraid has trouble shutting down.

What does alarm me is that it's trying to remove /mnt/user. Is that something in your unmounting script?

 

So you need a good unmounting script that runs when your array stops. In DZMM's GitHub, linked in the topic start, you can find an issue with an adjusted script in the comments that should help shut things down properly.

 

I use that in combination with the fusermount commands. It's a bit of trial and error. If you open the Unraid console and type "top", you should see all the running processes. Each one has an ID (PID). You can then use the kill command to shut down a process manually. Google the exact command - I think it's kill <PID> (or kill -9 <PID> to force), but I can't check right now.

 

Once that's done, you shouldn't get this error when shutting down. It's not advised to do this regularly, but you can use it now just to rule out other issues causing this error.
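
For reference, a minimal sketch of the kind of manual cleanup described above, run from the Unraid console (the mount paths are just examples - use your own):

# list the running rclone mount processes and note their PIDs
pgrep -fa "rclone mount"
# stop one by PID (replace 1234; add -9 only if a plain kill does nothing)
kill 1234
# lazily unmount the rclone and mergerfs mounts
fusermount -uz /mnt/user/mount_rclone/dropbox_media_vfs
fusermount -uz /mnt/user/mount_mergerfs/dropbox_media_vfs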

Link to comment
