DZMM Posted October 16, 2022 Author Share Posted October 16, 2022 13 hours ago, workermaster said: Say that I were to lose the mount points configuration I have, can I still restore access to these files? Or can I never decrypt them? don't lose your passwords! 1 Quote Link to comment
workermaster Posted October 16, 2022 Share Posted October 16, 2022 7 hours ago, DZMM said: don't lose your passwords! I have taken precautions to make sure I don't lose it. I now want to configure a shared drive. What do I need to do for that? Is the only thing I need to do to change the remote to be a shared drive and add some account somewhere on Google? What happens to the data that I have already uploaded? Is that deleted? Quote Link to comment
DZMM Posted October 16, 2022 Author Share Posted October 16, 2022 1 hour ago, workermaster said: I have taken precautions to make sure I don't lose it. I now want to configure a shared drive. What do I need to do for that? Is the only thing I need to do to change the remote to be a shared drive and add some account somewhere on Google? What happens to the data that I have already uploaded? Is that deleted?
- create a teamdrive within google drive
- follow the instructions on github for setting up service accounts
- create a new rclone remote that points to the tdrive - use the same passwords and your new service accounts (if you used the same passwords!)
- use the rclone move command to move files from your old remote to your new remote server side
Quote Link to comment
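To make the last step concrete, here is a minimal sketch of the server-side transfer, assuming the old crypt remote is called gdrive_vfs and the new teamdrive-backed crypt remote (set up with the same passwords) is called tdrive_vfs - substitute your own remote names:

# one-off transfer from the old remote to the new teamdrive remote
# --drive-server-side-across-configs asks Google to copy the data internally
# instead of pulling it down and re-uploading it through your server
rclone move gdrive_vfs: tdrive_vfs: --drive-server-side-across-configs -vv --drive-stop-on-upload-limit

Server-side moves still count against Google's daily transfer quota, so a large library may take several runs.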
workermaster Posted October 17, 2022 Share Posted October 17, 2022 (edited) On 10/16/2022 at 6:43 PM, DZMM said:
- create a teamdrive within google drive
- follow the instructions on github for setting up service accounts
- create a new rclone remote that points to the tdrive - use the same passwords and your new service accounts (if you used the same passwords!)
- use the rclone move command to move files from your old remote to your new remote server side
I made a post before, but deleted it since I managed to get a bit further. It has taken me a lot of stress, but I have managed to do the first few steps. I now want to run the command to actually make the service accounts. I mean this step: But for some reason, the cmd screen says that Python is not installed. I then installed it again from the Microsoft store (even though it was already installed) but now it gives me this error: I do not understand what goes wrong here, since I did run this command multiple times. Do you know what is going wrong here? EDIT: I have given up. I just need something simple and I think this might be too much for me at the moment. I will just deal with the 750GB limit. Edited October 19, 2022 by workermaster Quote Link to comment
00b5 Posted October 19, 2022 Share Posted October 19, 2022 I hope this is simple, but: I only need to start one docker, Plex. The script starts it fine, but creates a checker file so it doesn't try to start it again and again (dockers_started) in my google drive root. When does this file ever get removed? If I restart the system, for example, this file never gets deleted? (unless I make an unmount script and include removing it as part of it?) The real issue I want to work around is that the server is remote to me, and unfortunately seems to lose power at some point monthly (great internet, shitty power). I'm just letting the system reboot at this point, and so the mount happily mounts back up, but the dockers never start (Plex). Am I just an edge case on this? I can't just delete the file on array start, since I need gdrive mounted first, and if I delete it via the main script, it will just start Plex over and over again? Got a more elegant way to fix this, something where the script knows it is the first run on this bootup, and can delete the file and allow the dockers to start? Quote Link to comment
workermaster Posted October 19, 2022 Share Posted October 19, 2022 I have a problem with the upload script. It no longer seems to do anything. This is the script:

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_upload_vfs" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="8M"
BWLimit2Time="08:00"
BWLimit2="8M"
BWLimit3Time="16:00"
BWLimit3="8M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file. Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts. Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
##### DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING #####
###############################################################################

####### Preparing mount location variables #######
if [[ $BackupJob == 'Y' ]]; then
    LocalFilesLocation="$LocalFilesShare"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected. Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected. Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

####### Check if script already running ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

####### check if rclone installed ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
    cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
    CounterNumber=$(find -name 'counter*' | cut -c 11,12)
    CounterCheck="1"
    if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
        echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        CounterNumber="1"
    fi
    ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
    echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
    ServiceAccount=""
fi

####### Upload files ##########

# Check bind option
if [[ $CreateBindMount == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
        echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    else
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
        ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    fi
else
    RCloneMountIP=""
fi

# Remove --delete-empty-src-dirs if rclone sync or copy
if [[ $RcloneCommand == 'move' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
    DeleteEmpty="--delete-empty-src-dirs "
else
    echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
    DeleteEmpty=""
fi

# Check --backup-directory
if [[ $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
    LocalFilesLocation="$LocalFilesShare"
    BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
    BackupRemoteLocation=""
    BackupRemoteDeletedLocation=""
    BackupRetention=""
    BackupDir=""
fi

# process files
rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    --user-agent="$RcloneUploadRemoteName" \
    -vv \
    --buffer-size 512M \
    --drive-chunk-size 512M \
    --tpslimit 8 \
    --checkers 8 \
    --transfers 4 \
    --order-by modtime,$ModSort \
    --min-age $MinimumAge \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --exclude *fuse_hidden* \
    --exclude *_HIDDEN \
    --exclude .recycle** \
    --exclude .Recycle.Bin/** \
    --exclude *.backup~* \
    --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    --bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[ $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
    rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

####### Remove Control Files ##########

# update counter and remove other control files
if [[ $UseServiceAccountUpload == 'Y' ]]; then
    if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
    else
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        CounterNumber=$((CounterNumber+1))
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
        echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
    fi
else
    echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName

echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

No matter what I try, it doesn't want to start. It keeps saying that it is already running, but there are no logs showing that it is running. I tried removing all rclone scripts and copying new ones from the Github page, but even after a reboot, I keep getting the message that it is already running, even though it hasn't been started before. Quote Link to comment
Kaizac Posted October 19, 2022 Share Posted October 19, 2022 21 minutes ago, workermaster said: I have a problem with the upload script. It no longer seems to do anything. This is the script: [the upload script quoted in the post above]
No matter what I try, it doesn't want to start. It keeps saying that it is already running, but there are no logs showing that it is running. Go to your /mnt/user/appdata/other/rclone/remotes/XXyour-remote-nameXX/ folder and you should see an upload_running_daily_upload file there. Delete it and start the script again. It's a checker file like mountcheck, but it doesn't get deleted on shutdowns and such, so the script will think it's already running; after a manual delete it will run again. Quote Link to comment
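For reference, the manual cleanup is a one-liner from the Unraid terminal; with the default names from the script above (remote gdrive_upload_vfs, job name _daily_upload) it would look like this - adjust to your own remote and job name:

# remove the stale checker file left behind by an unclean shutdown
rm /mnt/user/appdata/other/rclone/remotes/gdrive_upload_vfs/upload_running_daily_upload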
DZMM Posted October 19, 2022 Author Share Posted October 19, 2022 13 hours ago, 00b5 said: I hope this is simple, but; I only need to start one docker, plex. Script starts it fine, but creates a file to check so it doesn't try to start it again and again (dockers_started) in my google drive root. When does this file ever get removed? If I restart the system, for example, this file never gets deleted? (unless I make an unmount script and include removing it as part of it?) The real issue I want to work around, is that the server is remote to me, and unfortunately seems to lose power at some point monthly (great internet, shitty power). I'm just letting the system reboot at this point, and so the mount happily mounts back up, but the dockers never start (plex). Am I just an edge case on this? I can't just delete the file on array start, since I need gdrive mounted first, and if I delete it via the main script, it will just start plex over and over again? Got a more elegant way to fix this, something where the script knows it is the first run on this bootup, and can delete the file and allow the dockers to start? There is an unmount script that cleans everything up that most people run at array start. 1 Quote Link to comment
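A stripped-down sketch of what such an array-start cleanup does, using the control-file locations from the scripts in this thread (the exact file names depend on your remote and job names, and the dockers_started path depends on where your mount script creates it):

#!/bin/bash
# run once at array start, before the mount script's first cron run
fusermount -uz /mnt/user/mount_rclone/gdrive_vfs 2>/dev/null  # drop any stale rclone mount
# remove leftover checker files so the mount and upload scripts don't think they are already running
rm -f /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/mount_running*
rm -f /mnt/user/appdata/other/rclone/remotes/gdrive_upload_vfs/upload_running*
# remove the dockers_started checker so the mount script starts your dockers again after a reboot
rm -f /mnt/user/mount_mergerfs/gdrive_vfs/dockers_started
echo "$(date "+%d.%m.%Y %T") INFO: rclone control files removed"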
DZMM Posted October 19, 2022 Author Share Posted October 19, 2022 2 hours ago, Kaizac said: Go to your mnt/appdata/other/rclone/remotes/XXyour-remote-nameXX/ folder and you should see a daily_upload_running file there. Delete it and start the script again. It's a checker file like mountcheck, but doesn't get deleted on shutdowns and such. So the script will think it's already running, but with a manual delete it will run again. The unmount script tidies this all up at array start 1 Quote Link to comment
workermaster Posted October 19, 2022 Share Posted October 19, 2022 (edited) 19 minutes ago, DZMM said: The unmount script tidies this all up at array start I had to run the mount script twice to get it running. The first time, it generates a small logfile, and the second time, it does a lot of things and generates a large logfile. The problem is that every time I restart, it fails to start working again. I am going to let it keep running for now and try this again tomorrow (I will share all the logs tomorrow). I have limited the upload speed of the script to 8 MB/s so it will keep going and not hit the 750GB limit. Edited October 19, 2022 by workermaster Quote Link to comment
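As a sanity check on that throttle: 8 MB/s works out to roughly 8 MB x 86,400 s ≈ 691 GB per day, which stays just under Google's 750 GB/day upload quota, so a continuous upload at that rate should never trip --drive-stop-on-upload-limit.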
DZMM Posted October 19, 2022 Author Share Posted October 19, 2022 (edited) 31 minutes ago, workermaster said: I had to run the mount script twice to get it running. The first time, it generates a small logfile, and the second time, it does a lot of things and generates a large logfile. The problem is, that everytime I restart, it fails to start working again. I am going to let it keep running for now and try this again tomorrow (I will share all the logs tomorrow). I have limited the upload speed of the script to 8MB so it will keep going and not hit the 750GB limit. Sometimes it takes time to mount because rclone does various things like updating the cache. That's why the script is designed to run on a cron job. Edited October 19, 2022 by DZMM Quote Link to comment
axeman Posted October 19, 2022 Share Posted October 19, 2022 So I'm preparing to upgrade from Unraid 6.9.2 to 6.11.1 - given the many posts I saw here with folks having permissions issues with the dockers - just want to see if there's anything I need to do prior to upgrading? Or should I just upgrade and fix whatever after? Thoughts? Thanks! Quote Link to comment
workermaster Posted October 19, 2022 Share Posted October 19, 2022 Quick question. I have given up trying to get the 100 Service Accounts to work, but is it possible to manually create 2 or 3? My whole problem is that I have no idea how to use Python and it keeps giving me weird errors, but if I can manually make a few accounts, then I would be happy. If this is possible, how do I do it? Just add a few of my other Google accounts to the shared drive? Then create the accounts file myself that Python would normally create for me? What would that file look like? Quote Link to comment
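For what it's worth, a handful of service accounts can be created by hand in the Google Cloud console (IAM & Admin > Service Accounts): create each account, download a JSON key for it, and add each account's client_email (found inside the JSON) as a member of the shared drive, e.g. with the Content manager role. The upload script above only cares about the naming convention and the counter, so - as a sketch using the script's default names - the keys would sit like this:

/mnt/user/appdata/other/rclone/service_accounts/
    sa_gdrive_upload1.json   # key file downloaded from the Google Cloud console
    sa_gdrive_upload2.json
    sa_gdrive_upload3.json

and the script settings would become UseServiceAccountUpload="Y", ServiceAccountFile="sa_gdrive_upload" and CountServiceAccounts="3".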
lilfade Posted October 19, 2022 Share Posted October 19, 2022 3 hours ago, workermaster said: Quick question. I have given up trying to get the 100 Service Accounts to work, but is it possible to manually create 2 or 3? My whole problem is that I have no idea how to use Python and it keeps giving me weird errors, but if I can manually make a few accounts, then I would be happy. If this is possible, how do I do it? Just add a few of my other Google accounts to the shared drive? Then create the accounts file myself that Python would normally create for me? What would that file look like? Your issue is that Python hasn't been added to your Windows PATH. Google "add python3 to windows path", do this, and then try again. Until then you'd have to run something like C:\Program Files\python3\python3.exe script_name.py ... etc. Quote Link to comment
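As a quick way to check whether that is the problem (a sketch for a Windows command prompt; the install path below is only an example and will differ per machine):

where python
REM shows which python.exe, if any, Windows finds on the PATH
python --version
REM should print something like "Python 3.10.x" rather than an error
REM until the PATH is fixed, call the interpreter by its full path instead, e.g.:
"C:\Users\<you>\AppData\Local\Programs\Python\Python310\python.exe" gen_sa_accounts.py --quick-setup -1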
workermaster Posted October 19, 2022 Share Posted October 19, 2022 23 minutes ago, lilfade said: Your issue is you haven't installed python to your windows path, Google "add python3 to windows path", do this and then try again. Currently you'd have to run something like C:\Program Files\python3\python3.exe script_name.py ... Etc. I'll try that tomorrow. I hope that solves my problem. Quote Link to comment
00b5 Posted October 20, 2022 Share Posted October 20, 2022 8 hours ago, DZMM said: There is an unmount script that cleans everything up that most people run at array start. Well, SoaB, I even had that script added in and everything, so now it is set to run at array start, and I should be set, thanks! 1 Quote Link to comment
workermaster Posted October 20, 2022 Share Posted October 20, 2022 (edited) 14 hours ago, lilfade said: Your issue is that Python hasn't been added to your Windows PATH. Google "add python3 to windows path", do this, and then try again. Until then you'd have to run something like C:\Program Files\python3\python3.exe script_name.py ... etc. I followed this link to try and add Python to the Windows path. My system looks a little different since I am using Windows 11, but I think that I managed to do what they asked. https://phoenixnap.com/kb/how-to-install-python-3-windows Do I even need to use the Python console? It seems that every command can be run from the CMD. I then went back to the 4 steps to create service accounts. I am at step 2: But when I run that command, I still get this error: I have no idea how to troubleshoot this. Do you, or anyone else, know how to fix this, or help me with creating the file manually? EDIT: Could it be that you need to install rclone for creating the service accounts? One of the first steps mentions that you need to install it, but I skipped that since I have rclone running on Unraid and thought that was enough. Do I need to install rclone on Windows in order to create the service accounts? Edited October 20, 2022 by workermaster Quote Link to comment
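(The step-2 command being run here is python3 gen_sa_accounts.py --quick-setup -1, as mentioned later in the thread; rclone is not involved in generating the accounts themselves.) A "No module named ..." error after installing a package usually means pip installed it into a different Python than the one running the script - something the Microsoft Store install makes easy to do. Running pip through the same interpreter avoids that; a sketch:

python --version
python -m pip --version
REM install the missing packages into *this* interpreter, then retry
python -m pip install google-api-python-client google-auth-oauthlib
python gen_sa_accounts.py --quick-setup -1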
workermaster Posted October 20, 2022 Share Posted October 20, 2022 I have no programming knowledge, so to me, the missing module error I get in the post above looks like it has something to do with the script I am trying to execute, and not with the Python installation. That makes it impossible for me to Google what is going wrong, and since I can't read code, I am stuck. Quote Link to comment
live4ever Posted October 20, 2022 6 minutes ago, workermaster said: I have no programming knowledge, so to me, the missing module error I get in the post above looks like it has something to do with the script I am trying to execute, and not with the Python installation. That makes it impossible for me to Google what is going wrong, and since I can't read code, I am stuck. Try installing google-auth-oauthlib: pip install google-auth-oauthlib Quote Link to comment
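If further modules turn up missing after that one, it may be easier to install the whole Google API client stack that the AutoRclone scripts import in one go (package names as published on PyPI):

pip install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib
REM or, if the AutoRclone checkout ships a requirements.txt, from inside that folder:
pip install -r requirements.txt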
workermaster Posted October 20, 2022 Share Posted October 20, 2022 1 minute ago, live4ever said: Try installing google-auth-oauthlib: pip install google-auth-oauthlib I tried that, but it made no difference. It seems to be installed, but for some reason it is not usable. Quote Link to comment
Kaizac Posted October 20, 2022 Share Posted October 20, 2022 12 minutes ago, workermaster said: I have no programming knowledge, so to me, the missing module error I get in the post above, looks like it has someting to do with the script I am trying to execute, and not with the Python installation. That makes it impossible for me to Google what is going wrong, and since I can't read code, I am stuck. It literally says in step 1 that you need to install rclone..... And did you activate the Drive API and get your credentials.json? Quote Link to comment
workermaster Posted October 20, 2022 Share Posted October 20, 2022 (edited) 28 minutes ago, Kaizac said: It literally says in step 1 that you need to install rclone..... And did you activate the Drive API and get your credentials.json? I downloaded rclone, but it doesn't seem to be a program that gets installed. I see no installer, only a .exe that tells me to access it from a cmd screen. Starting it from a cmd screen also does not give me the option to install it: Do I have the wrong rclone here? I do have a credentials file. I got it from here after enabling the API: Then renamed that file and it is in the project folder: Edited October 20, 2022 by workermaster Quote Link to comment
workermaster Posted October 20, 2022 Share Posted October 20, 2022 In the meantime, I have tried to copy all rclone files into the project folder and run the python3 gen_sa_accounts.py --quick-setup -1 command. It did not work and gave me the same module error. I then tried moving all files in the project folder into the Python installation folder, to rule out any problems with the Windows path. Still the same error. Then I tried to run step 1 and 2 again from github: https://github.com/xyou365/AutoRclone to make sure that I did everything right. I could not find a mistake anywhere. When I run the last confirmation step for the API, it doesn't ask me to log in, but shows me a long string of letters and numbers in the console; the website mentions that you only have to log in the first time. As far as I can tell, everything should be set up correctly. This leaves me with the idea that the problem is with the rclone install, since the rclone I have seems to be a portable one and not one that you need to install. But I did get it from the link in the manual (https://rclone.org/downloads/), so that should be the correct one. Quote Link to comment
Kaizac Posted October 20, 2022 Share Posted October 20, 2022 23 minutes ago, workermaster said: In the meantime, I have tried to copy all rclone files into the project folder and run the python3 gen_sa_accounts.py --quick-setup -1 command. It did not work and gave me the same module error. I then tried moving all files in the project folder into the Python installation folder, to rule out any problems with the Windows path. Still the same error. Then I tried to run step 1 and 2 again from github: https://github.com/xyou365/AutoRclone to make sure that I did everything right. I could not find a mistake anywhere. When I run the last confirmation step for the api, it doesn't ask me to login, but shows me a long string of letters and numbers in de console, but the website mentions that you only have to login the first time. As far as I can tell, everything should be setup correctly. This leaves me with the idea that the problem is with the rclone install, since the rclone I have, seems to be a portible one and not one that you need to install. But I did get it from the link in the manual (https://rclone.org/downloads/), so that should be the correct one. Why don't you just do this from your Unraid box and use the terminal? Windows complicates stuff. You can get python 3 from the community store. Then just get a folder where you dump all the files in and use "cd /path/to/folder" to get to that folder and execute from there. Quote Link to comment
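As a rough sketch of that sequence in the Unraid terminal (the folder path is only an example - use wherever you dumped the AutoRclone files):

cd /mnt/user/appdata/other/autorclone
# install the Google API client libraries the scripts import
python3 -m pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
# then generate the service accounts
python3 gen_sa_accounts.py --quick-setup -1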
workermaster Posted October 20, 2022 Share Posted October 20, 2022 (edited) 1 hour ago, Kaizac said: Why don't you just do this from your Unraid box and use the terminal? Windows complicates stuff. You can get python 3 from the community store. Then just get a folder where you dump all the files in and use "cd /path/to/folder" to get to that folder and execute from there. I didn't know I could do that. I am going to try that in a moment. EDIT: Trying to run this now. I couldn't find a Python download in the community apps, but then remembered that the NerdPack could have it. I enabled and installed it there: I also moved the project from my Windows PC to Disk 1 on the server. But I have no idea how to run the code (I know the directory is not correct in the above screenshot, but it doesn't even find the Python commands). I have never run anything like this on the Unraid console before, so sorry if I make obvious mistakes. Should the above be enough to work, or should I run these commands in Unraid: EDIT2: I have figured out that I can do this to start Python. I am now in the project folder, but can't run the needed commands: I now have to figure out how to get pip working. Edited October 20, 2022 by workermaster Quote Link to comment
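If pip itself is missing from the NerdPack Python package, it can usually be bootstrapped with the ensurepip module that ships with Python (a sketch - whether this works depends on how the package was built, and it will need repeating after a reboot since Unraid's root filesystem lives in RAM):

python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip
python3 -m pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
python3 gen_sa_accounts.py --quick-setup -1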