DZMM


Posts posted by DZMM

  1. Oh well, it looks like the Google Drive days are coming to an end. I kept my storage even though I couldn't add new stuff, as it still had years' worth of content. However, I just got this message today saying my account will be cancelled:

     

    Your Google Workspace Enterprise Standard for your account xxxxx.com has been scheduled for suspension and will soon be canceled, and your data will be lost
    
    Hello,
    
    We’ve noticed that your account xxxx.com has been using more storage than currently available to you. For this reason we placed your account in a “read-only” state. Learn more about what happens when you exceed storage limits.
    
    Because you have not taken the necessary steps to free up or get more storage, we will suspend your Google Workspace Enterprise Standard subscription in 15 days on April 20, 2024.
    
    If you take no action your Google Workspace Enterprise Standard subscription will be canceled. You can export all your organization's data before the subscription is canceled. You will be notified prior to your subscription being canceled. Once your subscription has been canceled, you will lose all your data and cannot recover it.
    
    Sincerely,
    
    The Google Workspace Team

     

  2. 49 minutes ago, Bjur said:

    I got that message. Within 30 days my account would be read-only. When I go to the dashboard it says there may be interruptions after. I've come from G Suite to Workspace Enterprise Standard and I live in Europe. I really don't know what to do.

    I have around 50 TB on Teamdrive.

    What support said is there's nothing I can do - after the date I won't be able to upload anymore. It should still be possible to download or delete data. Maybe I will keep the data for a few months until I get to the point of buying the drives.

    What cloud service are you guys migrating to if any?

    @DZMM You said you already migrated a year ago?

    Thanks for the great work on this and support through the years, especially @DZMM and @Kaizac. It's a shame it can't continue :(

     

    I'm on Enterprise Standard.

     

    I hope I don't have to move to Dropbox, as I think migrating will be quite painful - I have over a PB stored.

     

    I have a couple of people I could move with, I think - @Kaizac, just use your own encryption passwords if you're worried about security.

     

    I actually think if I do move, I'll try and encourage people to use the same collection, e.g. just have one pooled library, and if anyone wants to add anything they can use the web-based Sonarr etc. It seems silly for all of us to maintain separate libraries when we could have just one.

     

    Some of my friends have done that already in a different way - stopped their local Plex efforts and just use my Plex server.

  3. I've just read about 10 pages of posts to try and get up to speed on the "shutdown". Firstly, a big thanks to @Kaizac for patiently supporting everyone while I've been busy with work. I wrote the scripts as a challenge project over a few months, as someone who isn't a coder - I literally had to Google each step ("what command do I use to do xxx?"), so it's great he's here to help with stuff outside the script, like issues regarding permissions etc, as I wouldn't be able to!

     

    Back to business - can someone share what's happening with the "shutdown" please, as I'm out of the loop? I moved my cheaper Google account to a more expensive one about a year ago, and all was fine until my recent upload problems - but I think those came from my seedbox and were unrelated, as I've started uploading from my unraid server again and all looks OK.

     

    I've read mentions of emails and alerts in the Google dashboard - could someone share their email/screenshots please and also say what Google account they have?

  4. Is anyone else getting slow upload speeds recently? My script has been performing perfectly for years, but for the last week or so my transfer speed has been awful:
     

    2023/06/18 22:37:15 INFO  : 
    Transferred:          28.538 MiB / 2.524 GiB, 1%, 3 B/s, ETA 22y21w1d
    Checks:                 2 / 3, 67%
    Deleted:                1 (files), 0 (dirs)
    Transferred:            0 / 1, 0%
    Elapsed time:       9m1.3s


    It's been so long since I looked at my script I don't even know what to look at first ;-)

    Have I missed some rclone / gdrive updates? Thanks

  5. 19 hours ago, Viper359 said:

    I have attached my mount script. What should I change to stop hundreds of GB of data being downloaded daily via my google drive?

    mount.txt (attached)

    That's one of the drawbacks of the cache - it caches all reads, e.g. even when Plex, Sonarr etc are doing scans. You could turn off any background scans your apps are doing - I accept it as a necessary evil in return for the amount of storage I'm getting for £11/pm (I think that's what I pay).
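
    If the daily download volume is the main concern, the cache behaviour can also be softened directly. A minimal sketch using standard rclone flags (remote name, mount point and values are illustrative) - the biggest win during scans is usually shrinking --vfs-read-ahead, since apps often read only file headers but rclone pre-fetches the full read-ahead amount:

    # cap read-ahead and the cache so scans can't balloon downloads
    rclone mount \
    	--allow-other \
    	--vfs-cache-mode full \
    	--vfs-read-ahead 64M \
    	--vfs-cache-max-size 100G \
    	--vfs-cache-max-age 24h \
    	gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs &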

  6. On 11/15/2022 at 4:40 PM, robinh said:

     

    In that case they might have solved the issue, since on Sunday the following version was installed: mergerfs 2.33.5-22-g629806e.

    I will reboot my Unraid machine later this week to reinstall 6.11.3. I installed it last weekend but suspected that release was the root cause of the mergerfs issues, so reverted back to 6.11.2.

    I hope so - I think I've been a victim of the bug where my mount would keep disconnecting - and sometimes too fast for my script to fix.

  7. 1 hour ago, Sildenafil said:

    I'm asking for your help configuring unionfs.


    I already use this script to mount gdrives but I don't use the upload script nor unionfs.
    Now I have sonarr installed and cannot use two different root folders.

     

    My drives are currently configured like this

    Tv series locally /mnt/user/media/Tv
    Gdrive archive /mnt/user/mount_rclone/media_archive_gdrive/Tv

    Now I have created a new share which should merge the previous two: /mnt/user/media_unionfs

     

    In my case I happen to have
    /mnt/user/media/Tv/series1/season1
    /mnt/user/media/Tv/series1/season2

    and
    /mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/season3
    /mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/season4

    For Plex there's no problem - it manages to combine the folders by itself - but Sonarr can't.

     

    I set up the script like this
    RcloneRemoteName="media_archive_gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/media/Tv" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="150G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/media_unionfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="Plex-Media-Server" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{""\} # comma separated list of folders to create within the mount


    But this way, in /mnt/user/media_unionfs I only find the files present on /mnt/user/mount_rclone/media_archive_gdrive/Tv, not combined with those present locally.

    Where am I going wrong?

    Change to:

     

    LocalFilesShare="/mnt/user/media"
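
    To spell out why: the script merges LocalFilesShare with the root of the rclone mount, so the local path must sit at the same level as the remote's root - the folder that contains Tv, not Tv itself. With the corrected setting the branches line up (paths from the post above):

    /mnt/user/media/Tv/series1/...                                  <- local branch
    /mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/...      <- gdrive branch
    /mnt/user/media_unionfs/Tv/series1/...                          <- merged view for Sonarr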

  8. 8 hours ago, Kaizac said:

     

    Don't think server-side copy works like that. It just works on the same remote, moving a file/folder within that 1 remote.

     

    So it will probably just end up being a 24/7 sync job for a month. Or maybe get a 10GB VPS and run rclone from there for the one-time sync/migration. The problem mostly is the 10TB (if they didn't lower it by now) download limit per day. I'm not sure about the move yet though. I also use the workspace for e-mail, my business and such, so it has its uses. But them not being clear about the plans and what is and isn't allowed is just annoying and a liability in the long run.

     

     

    Didn't we start with unionfs? And I think we only started to use VFS after we got mergerfs available. So that could be the explanation of your performance issue during your test.

     

    I wonder if there are other ways to combine multiple team drives to make it look like 1 remote and thus hopefully increasing performance for you. I'll have to think about that.

     

    EDIT: Is your RAM not filling up? Or CPU maybe? Does "top" in your unraid terminal show high usage by rclone?

    My load and CPU rarely go over 50%.  I think it's something like fs.inotify.max_user_watches being too low.
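
    If anyone wants to check that theory, it's just a sysctl - a quick sketch (the value below is illustrative, not a recommendation):

    # show the current limit
    sysctl fs.inotify.max_user_watches
    # raise it until the next reboot
    sysctl -w fs.inotify.max_user_watches=524288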

    I think I'm under 10TB per day, but it feels like an uncomfortably low ceiling that I'll break through at some point.

    I'm just going to ditch some of the teamdrives. I've just checked and I've only got an average of 40K items in each. The max is 400K and I think the drives slow down at 150K, so my recent balancing was a bit excessive. The slowdown at 150K is probably only marginal anyway, and I'm a long way from that.

  9. 1 hour ago, Kaizac said:

    So I'm curious what your performance will be when you're finished.

    OK, ditching it again - the performance is too slow. It's been running for an hour and it still won't play anything without buffering. Maybe union doesn't use VFS - dunno.

    I might have to go with dropbox as my problem is definitely from the number of tdrives I have - 10 for my media, 3 for my backup files, and a couple of others.  Unless I can find out if it's e.g. an unraid number of connections issue.

  10. 1 hour ago, Kaizac said:

     

    Interesting. I have like 6 mounts right now, but I did notice that rclone is eating a lot of resources indeed, especially with uploads going on as well. So I'm curious what your performance will be when you're finished. Are you still using the VFS cache on the union as well?

     

    Another thing I'm thinking about is just going with the Dropbox alternative. It is a bit more expensive, but we don't have the bullshit limitations of 400k files/folders per tdrive, no upload and download limits, and just 1 mount to connect everything to. It only has API hits per minute, which we have to account for.

    And I can't shake the feeling of Google sunsetting unlimited any time soon for existing users as well.

     

    Until my recent issues with something in one of my 10 rclone mounts + mergerfs mount dropping, I wasn't tempted to move to Dropbox as my setup was fine. If union works, although the tdrives are there in the background, I'll have just one mount.

    How would you move all your files to Dropbox if you did move - an rclone server-side transfer?

  11. 3 minutes ago, DZMM said:

    (quoted post - the rclone union config and new mount script are reposted in full in item 12 below)

    Playback is a bit slower so far, but I'm doing a lot of scanning - Plex, sonarr, radarr etc to add back all my files.  Will see what it's like when it's finished

  12. 13 hours ago, Kaizac said:

    Thanks for testing it already, didn't have time to do it sooner. Disappointing results - it would have been nice if we could fully rely on rclone. I wonder why the implementation seems so bad?

    @Kaizac Ok, I've had another go at it this morning for a few reasons.

    Firstly, because my mount(s) (not sure which) keep disconnecting every couple of hours or so. The problem started when I added another 5 tdrives to balance out some tdrives that had over 200k files in them. I think it's probably an unraid issue with the number of open connections or memory - no idea how to fix it.

     

    The second reason is what made me decide to have another go: I realised that having >10 tdrives mounted just so they could be combined via mergerfs was using up a lot of resources, whereas with union I only need 1 mount - that must save a lot of resources.

     

    Anyway, here's a tidied-up mount script I just pulled together - I'll add version numbers etc when I upload to github. You can see how much smaller it is - my old script that mounted >10 tdrives was over 3,000 lines; this is now under 200!

     

    rclone config:

     

    [tdrive_union]
    type = union
    upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: 
    action_policy = all
    create_policy = ff
    search_policy = ff

     

    New mount script:
     

    #!/bin/bash
    
    #######  Check if script already running  ##########
    if [[ -f "/mnt/user/appdata/other/scripts/running/fast_check" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Fast check already running"
    	exit
    else
    	mkdir -p /mnt/user/appdata/other/scripts/running
    	touch /mnt/user/appdata/other/scripts/running/fast_check
    fi
    
    ###############################
    #####  Replace Folders  #######
    ###############################
    
    
    mkdir -p /mnt/user/local/tdrive_vfs/{downloads/complete/youtube,downloads/complete/MakeMKV/}
    mkdir -p /mnt/user/local/backup_vfs/duplicat19i
    
    ###############################
    #######  Ping Check  ##########
    ###############################
    
    if [[ -f "/mnt/user/appdata/other/scripts/running/connectivity" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Already passed connectivity test"
    else
    # Ping Check
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
    	ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
    	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    		echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
    # check if mounts need restoring
    	else
    		echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    		rm /mnt/user/appdata/other/scripts/running/fast_check
    		exit
    	fi
    fi
    
    ################################################################
    ###################### mount tdrive_union   ########################
    ################################################################
    
    # REQUIRED SETTINGS
    RcloneRemoteName="tdrive_union"
    RcloneMountLocation="/mnt/user/mount_mergerfs/tdrive_vfs"
    RcloneCacheShare="/mnt/user/mount_rclone/cache"
    RcloneCacheMaxSize="500G"
    DockerStart="bazarr qbittorrentvpn readarr plex radarr_new radarr-uhd sonarr sonarr-uhd"
    
    # OPTIONAL SETTINGS
    
    # Add extra commands or filters
    Command1=""
    Command2="--log-file=/var/log/rclone"
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""
    CreateBindMount="N" 
    RCloneMountIP="192.168.1.77" 
    NetworkAdapter="eth0" 
    VirtualIPNumber="7"
    
    ####### END SETTINGS #######
    
    ####### Preparing mount location variables #######
    
    ####### create directories for rclone mount and mergerfs mounts #######
    mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName #for script files
    mkdir -p $RcloneCacheShare/$RcloneRemoteName #for cache files
    mkdir -p $RcloneMountLocation
    
    #######  Check if script is already running  #######
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    	rm /mnt/user/appdata/other/scripts/running/fast_check
    	exit
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    fi
    
    #######  Create Rclone Mount  #######
    
    # Check If Rclone Mount Already Created
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Success DS check ${RcloneRemoteName} remote is already mounted."
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    	touch mountcheck
    	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    	if [[  $CreateBindMount == 'Y' ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		else
    			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
    			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    		fi
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    	else
    		RCloneMountIP=""
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    	fi
    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time 5000h \
    	--attr-timeout 5000h \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=$RcloneCacheShare/$RcloneRemoteName \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size $RcloneCacheMaxSize \
    	--vfs-cache-max-age 24h \
    	--vfs-read-ahead 1G \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &
    
    # Check if Mount Successful
    	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    	sleep 10
    	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    	else
    		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
    		docker stop $DockerStart
    		fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
    		find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
    		rm /mnt/user/appdata/other/scripts/running/fast_check
    		exit
    	fi
    fi
    
    ####### Starting Dockers That Need Mount To Work Properly #######
    
    if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]] || [[ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]] ; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    	docker start $DockerStart
    
    fi
    
    echo "$(date "+%d.%m.%Y %T") INFO: ${RcloneRemoteName} Script complete"
    
    rm /mnt/user/appdata/other/scripts/running/fast_check
    
    exit


     

     

  13. 3 hours ago, DZMM said:

    (quoted post - the rclone union configs and adjusted mount script are reposted in full in item 14 below)

    Experiment over - it got really slow when lots of scans / file access was going on

  14. @Kaizac @slimshizn and others, I need some help with testing. 

     

    I think I've got rclone union working, i.e. I can remove mergerfs so there are fewer moving parts. Plus, I think that rclone union is faster for our scenario than mergerfs, but let me know what you think.

     

    The problem with including /mnt/user/local in the union was that rclone can't poll changes written directly to /mnt/user/local fast enough... so just don't write to it, and write only to /mnt/user/mount_mergerfs/tdrive_vfs, i.e. like we have already been doing.


    Here are my settings if anyone wants to try them out - basically disable mergerfs by setting MergerfsMountShare="ignore", and then paste in my quick rclone union section - I had to make some quick changes to the rclone mount section that I'll tidy up when I have time.

     

    Here's my rclone config:

    [local_tdrive_union]
    type = smb
    host = localhost
    user = rclone
    pass = xxxxxxxxxxxxxxxxxxx
    
    [tdrive_union]
    type = union
    upstreams = local_tdrive_union:local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: tdrive6_vfs:
    action_policy = all
    create_policy = ff
    search_policy = ff


    For some strange reason I found the settings above were faster than the settings below, with writes to /mnt/user/local appearing instantly, whereas there was a pause with the settings below. I think it works better when writes are "handled" fully by rclone:

     

    [tdrive_union]
    type = union
    upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: tdrive6_vfs:
    action_policy = all
    create_policy = ff
    search_policy = ff


    And my adjusted script:
     

    #######  Create Rclone Mount  #######
    
    # Check If Rclone Mount Already Created
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Success DS check ${RcloneRemoteName} remote is already mounted."
    # ADDED MOUNT RUNNING REMOVAL HERE
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    	touch mountcheck
    	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    	if [[  $CreateBindMount == 'Y' ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		else
    			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
    			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    		fi
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    	else
    		RCloneMountIP=""
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    	fi
    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time 5000h \
    	--attr-timeout 5000h \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size 200G \
    	--vfs-cache-max-age 24h \
    	--vfs-read-ahead 1G \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &
    
    # Check if Mount Successful
    	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    	sleep 10
    	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    	else
    		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
    		docker stop $DockerStart
    		fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
    # ADDED MOUNT RUNNING REMOVAL HERE AND I THINK FAST CHECK REMOVAL
    		find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
    		rm /mnt/user/appdata/other/scripts/running/fast_check
    		exit
    	fi
    fi
    
    # create union mount
    
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of tdrive_union"
    # Check If Rclone Mount Already Created
    if [[ -f "/mnt/user/mount_mergerfs/tdrive_vfs/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Success tdrive_union is already mounted."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Starting mount of tdrive_union."
    	mkdir -p /mnt/user/mount_mergerfs/tdrive_vfs
    	rclone mount \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time 5000h \
    	--attr-timeout 5000h \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=/mnt/user/mount_rclone/cache/tdrive_union \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size 100G \
    	--vfs-cache-max-age 24h \
    	--vfs-read-ahead 1G \
    	tdrive_union: /mnt/user/mount_mergerfs/tdrive_vfs &
    
    # Check if Mount Successful
    	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    	sleep 10
    	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    	if [[ -f "/mnt/user/mount_mergerfs/tdrive_vfs/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of tdrive_union mount."
    	else
    		echo "$(date "+%d.%m.%Y %T") CRITICAL: tdrive_union mount failed - please check for problems."
    		docker stop $DockerStart
    		fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
    		rm /mnt/user/appdata/other/scripts/running/fast_check
    		exit
    	fi
    fi
    
    # MERGERFS SECTION AFTER SHOULD BE OK WITHOUT ANY CHANGES IF 'IGNORE' ADDED EARLIER

     

  15. 1 hour ago, workermaster said:

    Thanks for the info. I still have a copy of all the accounts saved somewhere, so they are safe. 

     

    I tried creating a tdrive and got to the part where it asks me the location of the SA credentials:

    [screenshot: rclone prompt asking for the service account credentials location]

     

    I assume that I need to put the path of the SA accounts there. That would be:

    [screenshot: folder containing the service account .json files]

    I had to save them there according to the renaming instructions. 

     

    But when I enter that path, it doesn't work:

    [screenshot: rclone not accepting the path]

    because it can't find the files in the next step where I say that it is a team drive:

    [screenshot: error at the team drive step]

     

    I see that it is asking for a file, and not a path, but I thought it needed the path to all 20 accounts? What am I supposed to put there?


    Just add the service accounts directly to your rclone config file via the plugin editing window. When done, your tdrive remote "pairs" should look like this:

     

    [tdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/path_to_first_service_account/sa_tdrive_new.json
    team_drive = xxxxxxxxxxxxxxxxxxxx
    server_side_across_configs = true
    
    [tdrive_vfs]
    type = crypt
    remote = tdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    password2 = xxxxxxxxxxxxxxxxxxxxxxxxx


    The whole point of the service accounts is that the script automatically rotates the service account in use, so that you can upload 750GB on each run of the script - read the script notes and it will be clear. E.g. if you tell the script to rotate 10 SAs and your SA files start with sa_tdrive_new, then the script will change the SA used on each run (they must all be in the same location), i.e.:


    sa_tdrive_new1.json
    sa_tdrive_new2.json
    sa_tdrive_new3.json
    sa_tdrive_new4.json
    sa_tdrive_new5.json
    sa_tdrive_new6.json
    sa_tdrive_new7.json
    sa_tdrive_new8.json
    sa_tdrive_new9.json
    sa_tdrive_new10.json
     

    and on the 11th run, back to 1:

     

    sa_tdrive_new1.json
    sa_tdrive_new2.json
    etc etc

     

    You need 14-16 SAs to safely max out a gig line.
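
    For anyone curious how the rotation can work mechanically, here's a minimal sketch - variable names and paths are illustrative, not the script's actual code:

    #!/bin/bash
    # pick this run's service account from a counter file
    SACount=10
    CounterFile="/mnt/user/appdata/other/rclone/counter"
    Counter=$(cat "$CounterFile" 2>/dev/null || echo 1)
    SAFile="/mnt/user/appdata/other/rclone/service_accounts/sa_tdrive_new${Counter}.json"

    # upload with that account; rclone stops itself at the 750GB/day limit
    rclone move /mnt/user/local/tdrive_vfs tdrive_vfs: \
    	--drive-service-account-file="$SAFile" \
    	--drive-stop-on-upload-limit

    # advance the counter: 1..SACount, then back to 1
    echo $((Counter % SACount + 1)) > "$CounterFile"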

  16. 2 hours ago, workermaster said:

    So, I am almost there. Turns out that I created about 800 SAs - some with a new project and some not. I don't think that should matter. I have removed a lot of them and only kept 20. Of these 20, I have added 10 to the group and added that group to the teamdrive.

    I hope they are in a recycle bin somewhere so you can restore them. Given the difficulty you had creating them, you could have just put them in a folder for future use - e.g. I'm using about 90 service accounts now across multiple mounts.

     

    2 hours ago, workermaster said:

    And (hopefully my last question) how do I move the data that is already uploaded in the secure folder, to the teamdrive?


    As long as you have server-side transfers set up in your rclone config, i.e.

     

    [tdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive_new.json
    team_drive = xxxxxxxxxxxxxxxxxxxxxx
    server_side_across_configs = true

     

    then it's as simple as running:

     

    rclone move source_mount: target_mount:

     

    You can add other arguments if you want - e.g. this is how I move files from my main tdrive to one of my movies tdrives as an overnight job (again, this is all covered in this thread several times):
     

    rclone move --min-age 30d tdrive:crypt/encrypted_movies_folder_name tdrive_movies_adults:crypt/encrypted_movies_folder_name \
    --user-agent="transfer2" \
    -vv \
    --buffer-size 512M \
    --drive-chunk-size 512M \
    --tpslimit 8 \
    --checkers 8 \
    --transfers 4 \
    --order-by modtime,ascending \
    --exclude *fuse_hidden* \
    --exclude *_HIDDEN \
    --exclude .recycle** \
    --exclude .Recycle.Bin/** \
    --exclude *.backup~* \
    --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --delete-empty-src-dirs

     

  17. 31 minutes ago, workermaster said:

     

    In the meantime, I was reading up on the next steps. I see that I need to have a teamdrive. When I log in to Google Drive, I only have 2 options:

    [screenshot: Google Drive showing the two drive options]

     

    The top one is where I am currently uploading data. The bottom one is a shared drive. Is a shared drive the same thing as a team drive? 

    Yes - a shared drive is the same thing as a team drive (Google renamed Team Drives to shared drives).

    If you created the Group and added the service accounts as per Step 3, and then added the Group address to the team drive as per Step 4, you have finished setting up your SAs.

    All that's left is to rename the SAs to whatever you want, store them in a folder somewhere, and then use them in the script where it tells you to.

     

    If you're unsure, please search this thread for service accounts where I ran through how to use them - find the first instance and go from there.  Everything you need is in here several times.

     

    Quote

     

    Optional: Create Service Accounts (follow steps 1-4). To mass rename the service accounts use the following steps:

    Place Auto-Generated Service Accounts into /mnt/user/appdata/other/rclone/service_accounts/

    Run the following in terminal/ssh

    Move to directory: cd /mnt/user/appdata/other/rclone/service_accounts/

    Dry Run:

    n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done

    Mass Rename:

    n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done

     

     

  18. 31 minutes ago, workermaster said:

    I had to run the mount script twice to get it running. The first time, it generates a small logfile, and the second time, it does a lot of things and generates a large logfile. The problem is that every time I restart, it fails to start working again. I am going to let it keep running for now and try this again tomorrow (I will share all the logs tomorrow). I have limited the upload speed of the script to 8MB/s so it will keep going and not hit the 750GB limit.

    Sometimes it takes time to mount because rclone does various things like updating the cache. That's why the script is designed to run on a cron job.
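
    For reference, a sketch of the kind of schedule meant here - the User Scripts plugin's custom schedule takes standard cron syntax (the path below is illustrative):

    # re-run the mount script every 10 minutes; it exits early if the mount is already up
    */10 * * * * /boot/config/plugins/user.scripts/scripts/rclone_mount/script > /dev/null 2>&1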

  19. 2 hours ago, Kaizac said:

    Go to your mnt/appdata/other/rclone/remotes/XXyour-remote-nameXX/ folder and you should see a daily_upload_running file there. Delete it and start the script again. It's a checker file like mountcheck, but doesn't get deleted on shutdowns and such. So the script will think it's already running, but with a manual delete it will run again.

    The unmount script tidies this all up at array start

  20. 13 hours ago, 00b5 said:

    I hope this is simple, but;

     

    I only need to start one docker, plex. Script starts it fine, but creates a file to check so it doesn't try to start it again and again (dockers_started) in my google drive root. 

     

    When does this file ever get removed? If I restart the system, for example, this file never gets deleted? (unless I make an unmount script and include removing it as part of it?)

     

    The real issue I want to work around, is that the server is remote to me, and unfortunately seems to lose power at some point monthly (great internet, shitty power). I'm just letting the system reboot at this point, and so the mount happily mounts back up, but the dockers never start (plex). 

     

    Am I just an edge case on this? I can't just delete the file on array start, since I need gdrive mounted first, and if I delete it via the main script, it will just start plex over and over again? 

     

    Got a more elegant way to fix this, something where the script knows it is the first run on this bootup, and can delete the file and allow the dockers to start? 

    There is an unmount script that cleans everything up that most people run at array start.
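
    For anyone who hasn't set one up, a minimal sketch of what that cleanup does at array start - these are the same checker files and paths the mount script above creates (exact contents assumed):

    #!/bin/bash
    # lazily unmount anything left over from before the reboot
    fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
    # clear the checker files so the mount script will run again
    find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
    rm -f /mnt/user/appdata/other/scripts/running/fast_check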

  21. 1 hour ago, workermaster said:

    I have taken precautions to make sure I don't lose it. I now want to configure a shared drive. What do I need to do for that?

    Is the only thing I need to do to change the remote to be a shared drive and add some account somewhere on Google?

    What happens to the data that I have already uploaded? Is that deleted?

    1. create a teamdrive within google drive
    2. follow the instructions on github for setting up service accounts
    3. create a new rclone remote that points to the tdrive - use the same passwords and your new service accounts
    4. (if you used the same passwords!) use the rclone move command to move files from your old remote to your new remote server side - see the sketch below
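
    A sketch of step 4, assuming both crypt remotes use the same passwords and the underlying drive remotes have server_side_across_configs = true (remote names are illustrative):

    rclone move gdrive_vfs: tdrive_vfs: -vv --drive-stop-on-upload-limit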
  22. 40 minutes ago, workermaster said:

    What would you recommend for uploading all the data? Just move all the media I already have into the local folder?

     

    17 hours ago, DZMM said:

    Add them as extra paths to merge into your mergerfs mount, and then upload those paths to your gdrive:

     

    # OPTIONAL SETTINGS
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="/mnt/user/Series"
    LocalFilesShare3="/mnt/user/Movies"

     

    40 minutes ago, workermaster said:

    To add another question. I can upload 750GB per day. What happens if I am at 740Gb already and the script is trying to upload a file of 50GB? Will It just skip it, or upload half of it?

    It will stop at 750GB - I think it sometimes finishes the current file, but I think it hard stops.

    That's why I'd put the extra effort in now, before you get started, to use teamdrives, which allow you to rotate service accounts and upload more than 750GB/day (750GB/day per service account).