Kaizac

Members
  • Posts: 470
  • Joined
  • Days Won: 2

Posts posted by Kaizac

  1. 27 minutes ago, workermaster said:

    I ran that command and then ran the requirements again. It worked. The errors were gone. I then looked again at step 2 of creating the service accounts, and since I already have a project (you need to make one to enable the drive api) I thought this command: 

    python3 gen_sa_accounts.py --quick-setup -1

    was the best one to use. I entered that into the console and this time, it told me to go to a link to give access to the script. This is where I get my next problem. 

[screenshot]

     

    The request seems to be invalid:

[screenshots]

    It says that the access is denied and that I should contact the developer for this problem. I tried doing this on a pc that has the Unraid UI open, and also on the server itself (booted into gui mode). I get the same error. 

     

    I also tried running the other commands:

    python3 gen_sa_accounts.py --quick-setup 1 --new-only
    python3 gen_sa_accounts.py --quick-setup 1

    to see if they gave a different result, but nothing helped. Do you know why the request is invalid?

    Look at https://github.com/xyou365/AutoRclone/issues/89

     

You have to edit some code in the script; you can use Notepad++ for that.
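A hedged sketch of the workaround reported in that issue, assuming the script still uses Google's deprecated out-of-band (console) OAuth flow; the exact function name in your copy of gen_sa_accounts.py may differ:

    # Back up the script, then swap the deprecated console flow for a local-server flow.
    # run_local_server() serves the OAuth redirect on a random local port instead.
    cp gen_sa_accounts.py gen_sa_accounts.py.bak
    sed -i 's/run_console()/run_local_server(port=0)/' gen_sa_accounts.py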

  2. 23 minutes ago, workermaster said:

    In the meantime, I have tried to copy all rclone files into the project folder and run the python3 gen_sa_accounts.py --quick-setup -1 command. It did not work and gave me the same module error. 

     

    I then tried moving all files in the project folder into the Python installation folder, to rule out any problems with the Windows path. Still the same error. 

     

Then I tried to run steps 1 and 2 again from GitHub (https://github.com/xyou365/AutoRclone) to make sure that I did everything right. I could not find a mistake anywhere. When I run the last confirmation step for the API, it doesn't ask me to log in but shows me a long string of letters and numbers in the console; the website mentions that you only have to log in the first time. As far as I can tell, everything should be set up correctly. 

     

This leaves me with the idea that the problem is with the rclone install, since the rclone I have seems to be a portable one and not one that you need to install. But I did get it from the link in the manual (https://rclone.org/downloads/), so that should be the correct one. 

Why don't you just do this from your Unraid box using the terminal? Windows complicates things. You can get Python 3 from the Community Applications store. Then put all the files in one folder, use "cd /path/to/folder" to get into that folder, and execute from there.
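A rough sketch of that workflow (the folder path is just an example; use wherever you dumped the AutoRclone files):

    cd /mnt/user/appdata/other/autorclone      # example path; adjust to your folder
    python3 -m pip install -r requirements.txt
    python3 gen_sa_accounts.py --quick-setup -1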

  3. 12 minutes ago, workermaster said:

I have no programming knowledge, so to me, the missing module error I get in the post above looks like it has something to do with the script I am trying to execute, and not with the Python installation. That makes it impossible for me to Google what is going wrong, and since I can't read code, I am stuck. 

    It literally says in step 1 that you need to install rclone..... And did you activate the Drive API and get your credentials.json?
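A couple of quick sanity checks (hedged; run them from the folder holding the AutoRclone files):

    rclone version        # prints a version if rclone is installed and on the PATH
    ls credentials.json   # the OAuth client file you download after enabling the Drive API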

  4. 21 minutes ago, workermaster said:

    I have a problem with the upload script. It no longer seems to do anything. 

     

    This is the script:

    #!/bin/bash
    
    ######################
    ### Upload Script ####
    ######################
    ### Version 0.95.5 ###
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Edit the settings below to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: Add additional commands or filters
    # 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
    # 5. Optional: Use service accounts in your upload remote
    # 6. Optional: Use backup directory for rclone sync jobs
    
    # REQUIRED SETTINGS
    RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
    RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
    RcloneUploadRemoteName="gdrive_upload_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
    LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
    MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
    ModSort="ascending" # "ascending" oldest files first, "descending" newest files first
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
    BWLimit1Time="01:00"
    BWLimit1="8M"
    BWLimit2Time="08:00"
    BWLimit2="8M"
    BWLimit3Time="16:00"
    BWLimit3="8M"
    
    # OPTIONAL SETTINGS
    
    # Add name to upload job
    JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.
    
    # Add extra commands or filters
    Command1="--exclude downloads/**"
    Command2=""
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""
    
    # Bind the mount to an IP address
    CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
    RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
    NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
    VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.
    
    # Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
    UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
    ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
    ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
    CountServiceAccounts="15" # Integer number of service accounts to use.
    
    # Is this a backup job
    BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
    BackupRemoteLocation="backup" # choose location on mount for deleted sync files
    BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
    BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y
    
    ####### END SETTINGS #######
    
    ###############################################################################
    #####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
    ###############################################################################
    
    ####### Preparing mount location variables #######
    if [[  $BackupJob == 'Y' ]]; then
    	LocalFilesLocation="$LocalFilesShare"
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
    else
    	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
    fi
    
    RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount
    
    ####### create directory for script files #######
    mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files
    
    #######  Check if script already running  ##########
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    	exit
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    fi
    
    #######  check if rclone installed  ##########
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    	exit
    fi
    
    ####### Rotating serviceaccount.json file if using Service Accounts #######
    if [[ $UseServiceAccountUpload == 'Y' ]]; then
    	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
    	CounterNumber=$(find -name 'counter*' | cut -c 11,12)
    	CounterCheck="1"
    	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
    		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
    	else
    		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
    		CounterNumber="1"
    	fi
    	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
    	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
    	ServiceAccount=""
    fi
    
    #######  Upload files  ##########
    
    # Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    	else
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
    		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    	fi
    else
    	RCloneMountIP=""
    fi
    
    #  Remove --delete-empty-src-dirs if rclone sync or copy
    if [[  $RcloneCommand == 'move' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
    	DeleteEmpty="--delete-empty-src-dirs "
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
    	DeleteEmpty=""
    fi
    
    #  Check --backup-directory
    if [[  $BackupJob == 'Y' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
    	LocalFilesLocation="$LocalFilesShare"
    	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
    else
    	BackupRemoteLocation=""
    	BackupRemoteDeletedLocation=""
    	BackupRetention=""
    	BackupDir=""
    fi
    
    # process files
    	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    	--user-agent="$RcloneUploadRemoteName" \
    	-vv \
    	--buffer-size 512M \
    	--drive-chunk-size 512M \
    	--tpslimit 8 \
    	--checkers 8 \
    	--transfers 4 \
    	--order-by modtime,$ModSort \
    	--min-age $MinimumAge \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--exclude *fuse_hidden* \
    	--exclude *_HIDDEN \
    	--exclude .recycle** \
    	--exclude .Recycle.Bin/** \
    	--exclude *.backup~* \
    	--exclude *.partial~* \
    	--drive-stop-on-upload-limit \
    	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    	--bind=$RCloneMountIP $DeleteEmpty
    
    # Delete old files from mount
    if [[  $BackupJob == 'Y' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
    	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
    fi
    
    #######  Remove Control Files  ##########
    
    # update counter and remove other control files
    if [[  $UseServiceAccountUpload == 'Y' ]]; then
    	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
    		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
    	else
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
    		CounterNumber=$((CounterNumber+1))
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
    		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
    	fi
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
    fi
    
    # remove dummy file
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
    
    exit

    No matter what I try, it doesn't want to start. It keeps saying that it is already running, but there are no logs showing that it is running.

[screenshot]

     

    I tried removing all rclone scripts and copying new ones from the Github page, but even after a reboot, I keep getting the message that it is already running, even though it hasn't been started before. 

Go to your /mnt/user/appdata/other/rclone/remotes/XXyour-remote-nameXX/ folder and you should see an upload_running_daily_upload file there. Delete it and start the script again. It's a checker file like mountcheck, but it doesn't get cleaned up on shutdowns and such, so the script thinks it's already running; after a manual delete it will run again.
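For example, assuming the default names from the script above (RcloneUploadRemoteName="gdrive_upload_vfs", JobName="_daily_upload"):

    rm /mnt/user/appdata/other/rclone/remotes/gdrive_upload_vfs/upload_running_daily_upload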

  5. 4 minutes ago, deniax said:

Probably I am overlooking something very simple, but for some reason my Google Drive contents are not showing up in my mount.

    In the attached logs, I see all mounts are mounted successfully

     

    I already feel kinda guilty as probably its something simple :)

[attachments: rclone.png, log.txt]

I don't know what you mounted, and what do you mean by your Google Drive contents not showing up? If you're mounting your crypt, you will only see the files in the specific team drive and the crypt folder within that team drive.

  6. 6 minutes ago, workermaster said:

    I suspected that names could be changed, but am not that confident that I got it right. 

     

    These are the 2 remotes I have made:
[screenshots]

     

    This is the mount script:
[screenshot]

     

Could you please tell me if this is set up correctly? I have now tried to keep the names of the mounts and folders the same as in the default mount script and the first post in this thread. Can I safely start the mount script now?

    Rename your crypt to gdrive_vfs through rclone config and you're good to go.
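If you haven't renamed a remote before, it's done in the interactive config menu (menu labels can differ slightly between rclone versions):

    rclone config   # choose "r) Rename remote", pick your crypt, enter gdrive_vfs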

  7. 2 hours ago, workermaster said:

I am going to create the mounts again, since my last ones were a bit weird. I have just one question (for now): you mention that we need 2 mounts, and you have given them both a name here, but in the scripts you call the mount "gdrive_vfs". Is this correct? Should the name of the mount in the script not be the name of the crypt remote if you want all the data going to gdrive to be encrypted?

You can have a folder name different from the remote name; they don't have to be identical. So you just need to create a Google Drive remote like you did, then link a crypt remote to it, like googleworkspace and googleworkspace_crypt. You can then mount it to the folder /mnt/user/mount_mergerfs/gdrive, or whatever name you like (see the example below). I've explained in one of my recent posts how you should look at the mount script and what you need to put in the variables.
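A minimal illustration (the names are examples; the mount folder does not need to match the remote name):

    # mount the crypt remote to an arbitrarily named folder
    rclone mount googleworkspace_crypt: /mnt/user/mount_mergerfs/gdrive &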

  8. 4 hours ago, deniax said:

    How/where would one suggest to have the (local) mounts, looking at Trash's guides?

    To keep hardlinks active across all arrs and downloaders, I should not use the default /user/local/gdrive_media_vfs/ , correct?

     

     

    Trash's hardlink folder guide:

    data
    ├── torrents
    │  ├── movies
    │  ├── music
    │  └── tv
    ├── usenet
    │  ├── movies
    │  ├── music
    │  └── tv
    └── media
        ├── movies
        ├── music
        └── tv

     

You have to create /user pointing to /mnt/user/mount_mergerfs/gdrive_vfs (or whatever naming scheme you use) within your docker templates. Then within the dockers themselves you start all your directories/folders from /user/, for example /user/media/movies.

This way all dockers will treat it as one drive and thus get the fast performance.
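A hedged example of that mapping expressed as a docker run flag (in an Unraid template this is a Path entry; the container name and image are illustrative):

    docker run -d --name=radarr \
        -v /mnt/user/mount_mergerfs/gdrive_vfs:/user \
        lscr.io/linuxserver/radarr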

  9. 1 hour ago, workermaster said:

I am trying to get this set up but am struggling with a few things. You mention that I have to set up 2 remotes, but I don't know how. I found this video from Spaceinvaderone: 

     

but that video is very old. I followed what he did and created 2 remotes. Is that the correct way, or do we need to create them some other way? This is how they look now: [screenshot]

     

    I have made a Google business account and have my own domain. I then upgraded to Enterprise where it said unlimited storage, but when I look in Google Drive, it says that there is only 5TB. Do you know why that is?

    Did you create these through the shell/terminal? Seems you are missing the following for the crypt mount:

    filename_encryption = standard
    directory_name_encryption = true

    Or maybe you don't want those options enabled?

     

    The googleworkspace mount seems fine. You could add:

    server_side_across_configs = true
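A hedged sketch of where those options sit in the config file (the stanza names are examples; passwords omitted):

    [googleworkspace]
    type = drive
    server_side_across_configs = true

    [googleworkspace_crypt]
    type = crypt
    remote = googleworkspace:crypt
    filename_encryption = standard
    directory_name_encryption = true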

     

About the 5TB limit: right now the storage situation for new Google Workspace accounts is a bit unclear. The 5TB is the personal drive limit. It shows this for me as well, but I can just go past it (I use team drives). But people with new accounts have also been reporting that they can't upload more than 5TB; you then need 2 more accounts and have to ask Google for more storage each time with an explanation. You can upload 750GB per day per account (or use service accounts to get 750GB per service account, but that is a bit too complicated for you right now, I think). So you'll just have to test whether you can go past the 5TB of storage.

  10. 12 minutes ago, francrouge said:

#1 Does anyone have hints for getting files to play faster?

     

Trying to play a 1080p 10 Mbit file, and it takes like 1 min to start.

     

    # create rclone mount
        rclone mount \
        $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
        --allow-other \
        --umask 000 \
        --uid 99 \
        --gid 100 \
        --dir-cache-time $RcloneMountDirCacheTime \
        --attr-timeout $RcloneMountDirCacheTime \
        --log-level INFO \
        --poll-interval 10s \
        --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
        --drive-pacer-min-sleep 10ms \
        --drive-pacer-burst 1000 \
        --vfs-cache-mode full \
        --vfs-cache-max-size $RcloneCacheMaxSize \
        --vfs-cache-max-age $RcloneCacheMaxAge \
        --vfs-read-ahead 500m \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &
     

     

#2 Also, do you know if it's possible to direct play through Plex, or is it always converting? 🤔

My mount settings are above in one of my posts. I don't know your download speed, but if it isn't that high, you might be downloading chunks that are too big, or reading too far ahead. And your dir-cache-time can be 9999h.
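For example, hedged values for a slower line (illustrative only; add them to the rclone mount command and tune against your actual bandwidth):

    --dir-cache-time 9999h \
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit 512M \
    --vfs-read-ahead 256M \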

     

Regarding your second question, what do you mean by direct play? Normally, direct play in the Plex context means that your client device (media player) plays the file directly, without any transcoding needed. So that is definitely possible; it just depends on your media player. Playing a 4K file on a 1080p Chromecast will of course lead to a transcode. And a lot of burned-in subtitles also lead to transcoding for the subtitle part, but not the video part.

     

For your Samba issues I would suggest you reboot without anything mounted, no dockers on and such. Then just put a file in your mount_mergerfs folder and see whether you can open and edit that file. 

  11. 7 hours ago, francrouge said:

Hi all, another question

     

    about the upload script.

     

    I'm getting this now 

     

    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 90.241631ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 143.701353ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 222.186098ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 305.125972ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 402.588316ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 499.64329ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 589.545348ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 676.822802ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 680.141577ms
    2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 694.895337ms
    2022/10/06 05:21:58 DEBUG : pacer: Reducing sleep to 307.907209ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 0s
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 78.586386ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 159.649286ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 198.168036ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 245.411694ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 330.517403ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 429.05441ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 523.306138ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 609.645869ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 690.942129ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 681.587878ms
    2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 639.166177ms
    2022/10/06 05:22:00 DEBUG : pacer: Reducing sleep to 66.904708ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 0s
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 87.721382ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 187.616721ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 186.994169ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 285.041735ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 352.336246ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 449.015128ms
    2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 547.412525ms
    2022/10/06 05:22:01 INFO :
    Transferred: 0 B / 0 B, -, 0 B/s, ETA -
    Elapsed time: 1h22m0.4s

     

     

Do I need to worry? I've been seeing this for maybe the last week.

     

     

I put my upload script in the post.

     

    thx all

     

[attachment: upload.txt]

No problem, but I think it shows that you are rate limited, so it will keep retrying until the limitation is gone. I don't understand why you added --max-transfer? You already have --drive-stop-on-upload-limit, which will end the script when you hit the limit. Maybe try removing your --max-transfer flag and see what the script does?
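A hedged sketch of the relevant part of the upload command, letting the daily cap end the run (the paths and remote name are the defaults from the script earlier in this thread):

    rclone move /mnt/user/local/gdrive_vfs gdrive_upload_vfs: \
        --drive-stop-on-upload-limit \
        -vv
    # no --max-transfer: --drive-stop-on-upload-limit already aborts at the 750GB/day cap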

  12. 52 minutes ago, undone said:

    Well, after a few initial difficulties a couple of years ago, it has been running without any problems so far.

    • gsuite:/crypt/media works
    • /cache leads to /mnt/cache and also works
    • the "&" is included, it just was not copied

I added it because of a failure running the script. After a reboot it seems to work without it.

     

    What do you mean with "anything"?

The file is encrypted with the password in the crypt config, and the naming can be anything; nothing to hide behind my 'Merval Endgames 37 8K Premium 20.4 THEx.mp3.bin'.

     

1. That was obviously the problem. When it runs in the background, the mount also appears in the appropriate folder.
    2. With the background process as predefined, it can simply be found at /mnt/user/mount_rclone/gcrypt/ .
    3. With the path from 2. the following also works (fusermount -uz /mnt/user/mount_rclone/gcrypt && fusermount -uz /mnt/user/mount_mergerfs/gcrypt/) and the array can be stopped.

    Thank you very much, I hope that is it for now with my problems.

    Glad it works now!

     

Regarding the encryption, I meant that you don't do file or directory name encryption, and you made clear that's a deliberate choice. I was just surprised by it, knowing that Google has been limiting new accounts and ending unlimited storage, and given the multiple stories of people who got their whole drive deleted because of copyrighted material. I personally don't want to take that risk.

  13. 18 hours ago, francrouge said:

Hi, yes. In Krusader I have no problem, but on Windows with network shares it's not working anymore.

[screenshots]

I can't edit, rename, delete, etc. on the gdrive mount in Windows; my local shares are OK.

[screenshot]

     

     

     

    I will add also my mount and upload script

     

Maybe I'm missing something

     

     

     

Should I try the new permissions feature, do you think?

     

    thx

     

Krusader working is not surprising, since it uses the root account, so it's not limited by permissions. I looked over your scripts quickly and I see nothing strange. So I think it's a Samba issue. Did you upgrade to 6.11? There were changes to Samba in there, maybe check that out? It could explain why it stopped working.

     

    1 hour ago, undone said:

    I finally found the right lines where I needed to insert it, thank you.

     

    The mount script now also runs without error messages, but I cannot use rclone as I did with my old server (A).
    There (A) I can enter the following line and have direct access to the files in the cloud, but this does not work on the new server (B). 

    ls /mnt/gsuite/
    >List of all connected cloud files< 

    In the new server (B), I cannot find the appropriate /mnt/ to access the data.

     

    Here is the comparison of the old- (A) und new- (B) server:

     

     

1. Does the script work as described above if I do not get any errors when I click on "Run script" and it shows "script complete" at the end?
2. How can I access my cloud mount from the command line?
3. How can I stop the mount again (my array cannot be stopped (stop loop) while the mount is still running / after I started the script)?

     

    I don't have any other scripts running.
    The mount is not visible in the array.

     

    Wow, I'm amazed your script for Server A worked....

     

You configured your gcrypt as gsuite:/crypt/media, but the correct form is gsuite:crypt/media (no leading slash after the colon).
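In the crypt remote's config stanza that would look like this (hedged; the stanza name is an example):

    [gcrypt]
    type = crypt
    remote = gsuite:crypt/media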

You also had the cache-dir in your A script set to /cache. Did that work? In Unraid you have to define the actual path; it seems you did that correctly in the new script.

In the A script you didn't end the mount command with an "&"; in the new script this is already fixed by default.

In your new script you put in --allow-non-empty as an extra command. This is very risky to do, so make sure you have thought it through.

     

What I find most worrying is that your crypt doesn't actually encrypt anything. Is that by choice? If you do want to switch to an actually encrypted crypt, you will have to send all your files through the crypt to your storage; it won't automatically encrypt the files already within that mount.

     

    Your specific questions:

    1. Don't use "Run script" in User Scripts. Always use run script in the background when running the script. If you use the run script option it will just stop the script as soon as you close the popup. That might explain why your mount drops right away.

2. The rclone commands are listed here: https://rclone.org/commands/. Other than that, you can for example use "rclone size gcrypt:" to see how big your gcrypt mount is in the cloud.

3. You can unmount with fusermount -uz /path/to/remote. Make sure you don't have any dockers or transfers running with access to that mount, though, because they will start writing into your merger folder, causing problems when you mount again.

  14. 2 hours ago, live4ever said:

So after I upgraded my main uploading unRAID server from the 6.9 series to 6.11, I noticed how permissions could get messed up using the scripts from GitHub.

    My mount script is a bit different - I create a “backup” folder acting as sort of Dropbox for uploads to gdrive at:
    /mnt/user/local/gdrive_vfs/backup

    This folder would get deleted after rclone_upload completes and then get recreated with rclone_mount (with root:root).

    In DZMM rclone_mount the MountFolders bit:

    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount


    In DZMM rclone_upload:

    Command1="--exclude downloads/**"


    So the paths from MountFolders:

    /mnt/user/local/gdrive_vfs/movies
    /mnt/user/local/gdrive_vfs/tv


will get deleted (after uploading) - then the rclone_mount script will create them again with root:root (and dockers trying to use them will give permission errors).

    Is there a way for the rclone_mount script to create the empty MountFolders with nobody:user permissions?

     

    Maybe just add a chown command at the end of the script somewhere before the last exit?

    chown -R nobody:users /mnt/user/local
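Or, if you'd rather limit it to the recreated folders only, a hedged variant (folder names taken from the MountFolders example above):

    for dir in movies tv; do
        chown nobody:users "/mnt/user/local/gdrive_vfs/$dir"
    done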

     

  15. 49 minutes ago, Halvliter said:

     

    This is a new way of using Rclone for me, thank you for helping me! I must not have been paying attention to all the information when I installed it.

     

    1. I have now tried to clean up all my shares, take a look below:

     

    An overview:

    V9WQfNg.png

     

    Specific details of the shares:
    <deleted for readability>

     

    2. Yes, I must have not utilized the local share properly, I see that now!

I use Plex, Radarr and Sonarr. I must make some adjustments so that it is used correctly. Am I thinking correctly when I want to:

     

    1. Radarr sends the torrent to rTorrent.
    2. rTorrent downloads the torrent to /mnt/user/torrent/seeding
    3. Radarr hardlinks it to /mnt/user/local/movies/Movie 1 (2022)/Movie1.mkv
    4. Plex gets access to Movie 1 via /mnt/user/mount_mergerfs/Movie 1 (2022)/Movie1.mkv
    5. The upload script copies Movie1.mkv to crypt:media/movies/Movie 1 (2022)/Movie1.mkv
    6. When the torrent has seeded long enough, the file is deleted from /mnt/user/torrent/seeding/movie.mkv
    7. Plex continues to have access to the movie file

     

    I also have several Kodi-boxes accessing the crypt:media, so I would still like to have the .nfo's, subtitles and posters in the cloud.

This command: are these the folders inside /mnt/user/local that I want mounted in /mnt/user/mount_mergerfs?

     

        MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

     

     

3. Thank you very much; based on your explanation I would like to stop using the rclone cache. How do I disable it?

    Under the required settings, do I remove these? Or set them to "ignore"

     

    RcloneCacheShare="/mnt/user0/rclonecache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="400G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files

     

In the rclone mount settings, do I remove these?

     

        --dir-cache-time $RcloneMountDirCacheTime \
        --attr-timeout $RcloneMountDirCacheTime \
        --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \



After the share is empty, I can then delete the share "rclonecache"?

     

4. Well, this is actually embarrassing. I have not used the upload script at all. I have added a proposal for the script here. Does it look okay?

     

    #!/bin/bash
    
    ######################
    ### Upload Script ####
    ######################
    ### Version 0.95.5 ###
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Edit the settings below to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: Add additional commands or filters
    # 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
    # 5. Optional: Use service accounts in your upload remote
    # 6. Optional: Use backup directory for rclone sync jobs
    
    # REQUIRED SETTINGS
    RcloneCommand="copy" # choose your rclone command e.g. move, copy, sync
    RcloneRemoteName="cryptmedia" # Name of rclone remote mount WITHOUT ':'.
    RcloneUploadRemoteName="cryptmedia" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
    LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
    RcloneMountShare="/mnt/user/cryptmedia" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
    MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
    ModSort="ascending" # "ascending" oldest files first, "descending" newest files first
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
    BWLimit1Time="01:00"
    BWLimit1="off"
    BWLimit2Time="08:00"
    BWLimit2="15M"
    BWLimit3Time="16:00"
    BWLimit3="12M"
    
    # OPTIONAL SETTINGS
    
    # Add name to upload job
    JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.
    
    # Add extra commands or filters
    Command1="--exclude downloads/**"
    Command2="-vv"
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""
    
    # Bind the mount to an IP address
    CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
    RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
    NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
    VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.
    
    # Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
    UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
    ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
    ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
    CountServiceAccounts="15" # Integer number of service accounts to use.
    
    # Is this a backup job
    BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
    BackupRemoteLocation="backup" # choose location on mount for deleted sync files
    BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
    BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y
    
    ####### END SETTINGS #######
    
    ###############################################################################
    #####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
    ###############################################################################
    
    ####### Preparing mount location variables #######
    if [[  $BackupJob == 'Y' ]]; then
    	LocalFilesLocation="$LocalFilesShare"
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
    else
    	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
    fi
    
    RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount
    
    ####### create directory for script files #######
    mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files
    
    #######  Check if script already running  ##########
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    	exit
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    fi
    
    #######  check if rclone installed  ##########
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    	exit
    fi
    
    ####### Rotating serviceaccount.json file if using Service Accounts #######
    if [[ $UseServiceAccountUpload == 'Y' ]]; then
    	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
    	CounterNumber=$(find -name 'counter*' | cut -c 11,12)
    	CounterCheck="1"
    	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
    		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
    	else
    		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
    		CounterNumber="1"
    	fi
    	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
    	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
    	ServiceAccount=""
    fi
    
    #######  Upload files  ##########
    
    # Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    	else
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
    		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    	fi
    else
    	RCloneMountIP=""
    fi
    
    #  Remove --delete-empty-src-dirs if rclone sync or copy
    if [[  $RcloneCommand == 'move' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
    	DeleteEmpty="--delete-empty-src-dirs "
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
    	DeleteEmpty=""
    fi
    
    #  Check --backup-directory
    if [[  $BackupJob == 'Y' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
    	LocalFilesLocation="$LocalFilesShare"
    	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
    else
    	BackupRemoteLocation=""
    	BackupRemoteDeletedLocation=""
    	BackupRetention=""
    	BackupDir=""
    fi
    
    # process files
    	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    	--user-agent="$RcloneUploadRemoteName" \
    	-vv \
    	--buffer-size 512M \
    	--drive-chunk-size 512M \
    	--tpslimit 8 \
    	--checkers 8 \
    	--transfers 4 \
    	--order-by modtime,$ModSort \
    	--min-age $MinimumAge \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--exclude *fuse_hidden* \
    	--exclude *_HIDDEN \
    	--exclude .recycle** \
    	--exclude .Recycle.Bin/** \
    	--exclude *.backup~* \
    	--exclude *.partial~* \
    	--drive-stop-on-upload-limit \
    	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    	--bind=$RCloneMountIP $DeleteEmpty
    
    # Delete old files from mount
    if [[  $BackupJob == 'Y' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
    	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
    fi
    
    #######  Remove Control Files  ##########
    
    # update counter and remove other control files
    if [[  $UseServiceAccountUpload == 'Y' ]]; then
    	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
    		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
    	else
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
    		CounterNumber=$((CounterNumber+1))
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
    		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
    	fi
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
    fi
    
    # remove dummy file
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
    
    exit

     

     

1. I would set all the shares you only use to mount cloud-stored files (so nothing, or not much, stored locally) to Prefer cache. It avoids waking your array disks from sleep, and the read/write switching that is slow on an HDD. Other than that it looks fine.

     

2. Let me state beforehand that I don't use torrents, so this is a part of your workflow I don't use personally. However, I wonder why you want to upload files before seeding? You could decide to add the torrent share to your merged folder, alongside local, the cryptmedia cloud mount, and torrents. But you have to be aware of using the right folder structures. Maybe you can combine local and torrents, so you only have (local/torrents) and (cloud) as the 2 folders that are merged. Plex would still see the file the same way, since it's part of mount_mergerfs. It's up to you though.

     

Regarding the folders in the script: these are folders that will be created in the local folder in case they are not already there. I don't use DZMM's mount script because I have a bit of a different use case, but I also don't create folders within the main folders; I only have local, mount_rclone and mount_mergerfs. But like I said, I don't upload everything like he and you do. So you can try it out with or without that list of folders. If you don't want to use that command, just remove the folder names or put in one dummy folder name (see below). Don't just delete the whole line; that will mess up the script.
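For example, keeping the line but shrinking it to a single dummy folder:

    MountFolders=\{"placeholder"\} # comma separated list of folders to create within the mount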

     

3. Leave the top parameters; delete the flags further down in the script in the list of --xx flags. Keep dir-cache-time and attr-timeout, but remove cache-dir. The dir cache is different from the VFS cache.

Delete:

    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --vfs-read-ahead 1G \

     

    Just FYI, I use these values for my mount script which differ a bit from DZMM:

    --allow-other \
    --umask 002 \
    --buffer-size 256M \
    --dir-cache-time 9999h \
    --drive-chunk-size 512M \
    --attr-timeout 1s \
    --poll-interval 1m \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 500 \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off

     

After the share rclonecache is empty, you can just delete it, yes. Make sure you reboot after all the changes so the cache is actually stopped. Even better: disable your dockers and the mount script, reboot, then make the changes, delete the share, and start everything back up.

     

    4. You use copy instead of move, any reason for that?

    These parameters are a bit ambiguous and can go wrong easily if you have a different setup/naming scheme. So to explain:

     

RcloneRemoteName = this is the folder name where everything is stored. So if your crypt mount name (cryptmedia) is the same as the folder in your local and mount_mergerfs folders, this is fine. But if, for example, your rclone mount is crypt_media: while your folder is actually mount_mergerfs/Crypt-Media, you have to fill in Crypt-Media here.

RcloneUploadRemoteName = use your actual rclone remote name here, not the folder name.

LocalFilesShare = location of the local files; yours is fine like this.

RcloneMountShare = this is the parent directory in which the folder holding the rclone mount is nested. In your case you used "/mnt/user/cryptmedia", but that means the script as you configured it will look for the mount at "/mnt/user/cryptmedia/cryptmedia". If that is correct, no problem; but if your media folder structure starts directly in the share cryptmedia, you have to fill in "/mnt/user" here.

     

Remove the -vv at Command2; it's already part of the default upload script, as you can see further down.
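Pulling the above together, a hedged sketch of those settings (only valid if your crypt really is mounted at /mnt/user/cryptmedia; check against your actual layout):

    RcloneRemoteName="cryptmedia"        # folder name under local/mount_mergerfs
    RcloneUploadRemoteName="cryptmedia"  # actual rclone remote name
    LocalFilesShare="/mnt/user/local"    # uploads come from /mnt/user/local/cryptmedia
    RcloneMountShare="/mnt/user"         # so the mount check looks at /mnt/user/cryptmedia
    Command2=""                          # -vv removed; the script already passes it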

  16. 32 minutes ago, Halvliter said:

     

    Thank you!

     

    1. Shares all over the place - How can I fix this? Can you give me an example, please?

     

2. Fully cloud-based - I'm not entirely sure what you mean here? I am used to the "old" way of mounting rclone: I just had a startup script for mounting, and another rclone copy script for adding new media. 

     

     

3. Using VFS cache on a separate disk - Thank you very much for that tip. I will clean up disk 1 and use that for the VFS cache. How do I ensure that only disk 1 is used for the vfs-cache?

     

In the meantime I will disable it; will it be enough to remove these parts from the mount script?
     

        --vfs-cache-mode full \
        --vfs-cache-max-size $RcloneCacheMaxSize \
        --vfs-cache-max-age $RcloneCacheMaxAge \


    and change

        --vfs-read-ahead 1G \
    

    to perhaps 

        --vfs-read-ahead 512MB \

     

4. Logging upload - I have not seen the progress in the upload script. Where would I be able to view the progress? The only thing I see is this:

     

    02.10.2022 13:59:50 INFO: Script complete
    Script Finished Oct 02, 2022 13:59.50
    
    Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

     

     

    Thank you again.

     

     

     

     

1. I see you have the shares Gsuite, Cryptmedia, Rclonecache, local and mount_mergerfs, and I don't understand the purpose of all of them. Normally you would only have 3, or 4 if you also use the VFS cache. In the traditional setup you have one folder to which you mount your crypt of Google Drive (mount_rclone in the script), then your local files (/mnt/user/local in the script), and the merger folder of those 2 (mount_mergerfs in the script).
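Roughly this layout (names taken from the scripts in this thread):

    /mnt/user/local/gdrive_vfs           # local files waiting to be uploaded
    /mnt/user/mount_rclone/gdrive_vfs    # the rclone (crypt) mount
    /mnt/user/mount_mergerfs/gdrive_vfs  # merged view that the dockers use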

     

2. This is where your share structure also confuses me. You don't use the local share in your mount script; you set it to ignore. So your current setup will just use the mounted crypt rclone mount and the rclone cache; your local files are not included in the merger. This means you are working fully in the cloud and don't have any of your media-associated files locally. I personally store the small files like subtitles, nfo files and artwork locally and merge that share with my cloud mount.

     

3. Like you did just now, the share rclonecache only uses disk 1. Just know that, depending on your setup, WAN speed and number of users, the rclone cache advantage can be pretty small. I used to use it, but I stopped and don't notice any difference, even though it used a lot of storage, and I even stored it on an SSD. Putting a cache on an HDD can work, but I'm not sure you are really getting the advantages you desire.

     

To explain the cache: without cache, you play a media file, the buffer is written to your RAM, and playback happens from there, initiating a new download every time the file is accessed.

With cache: the buffer is written to your VFS cache-dir disk and playback happens from there. If you access that file again later, it plays from your cache instead of being downloaded again.

     

This can be useful when you have multiple people accessing the same media: it prevents having to download the file every time someone starts playback, and it is easier on your RAM. But if you just have a few users, or they have different media interests, it will just download Episode 1 to the disk, then you play Episode 2, then Episode 3, etc. And once the cache is full it will just delete the unused cache files. There won't be any advantage, because nobody is watching Episode 1 again within a short time span. So consider your situation before you take on the hassle of getting a cache working.

     

vfs-read-ahead only works with vfs-cache-mode full, so when you don't use the cache you don't need that flag. A cache-less mount then looks roughly like the sketch below.
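A minimal hedged sketch (the remote name and paths are examples):

    rclone mount gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs \
        --allow-other \
        --dir-cache-time 9999h \
        --attr-timeout 1s \
        --poll-interval 1m &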

     

    4. My upload script has -vv in it and this shows me what's going on when I look at the log within the User Scripts section of Unraid.

  17. 10 minutes ago, Halvliter said:

     

That is true, my post was very hasty, I'm sorry.

     

    I have edited my previous post, I hope that helps.

     

    Thank you.

Your shares are all over the place, to be honest. And your disk 1, which you use for all your shares, is almost completely full, which will just lock up your whole system.

     

Right now your mount config seems to be fully cloud-based, without any local files. Is that correct? But you are using the VFS cache on a disk which you use for all your other media/storage; that's really not advisable. Normally you use an unassigned drive for the cache so it does not interfere with your main storage/processes.

     

So I would say you either need to add storage, or make sure your disk 1 unloads a lot of used storage so you have a big buffer. Practically, I would just disable the VFS cache. And I also don't see any use for the stats-log flag when the upload script already arranges for the log to show which files are being uploaded.

  18. 28 minutes ago, unn4m3d said:

     

    Hey, I am running the script on a cronjob every 10 minutes. 

     

Here is a full quote of my mount script. 

     

    #!/bin/bash
    
    ######################
    #### Mount Script ####
    ######################
    ## Version 0.96.9.3 ##
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Change the name of the rclone remote and shares to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: include custom command and bind mount settings
    # 4. Optional: include extra folders in mergerfs mount
    
    # REQUIRED SETTINGS
    RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="200G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="binhex-nzbhydra2 binhex-jackett binhex-lidarr binhex-radarr sonarr unpackerr nzbget overseerr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,downloads/completed"\} # comma separated list of folders to create within the mount
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # OPTIONAL SETTINGS
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
    LocalFilesShare3="ignore"
    LocalFilesShare4="ignore"
    
    # Add extra commands or filters
    Command1="--rc"
    Command2=""
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""
    
    CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
    RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
    NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
    VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them
    
    ####### END SETTINGS #######
    
    ###############################################################################
    #####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
    ###############################################################################
    
    ####### Preparing mount location variables #######
    RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
    MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location
    
    ####### create directories for rclone mount and mergerfs mounts #######
    mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
    mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
    if [[  $LocalFilesShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
    	LocalFilesLocation="/tmp/$RcloneRemoteName"
    	eval mkdir -p $LocalFilesLocation
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    	eval mkdir -p $LocalFilesLocation/"$MountFolders"
    fi
    mkdir -p $RcloneMountLocation
    
    if [[  $MergerfsMountShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    	mkdir -p $MergerFSMountLocation
    fi
    
    
    #######  Check if script is already running  #######
    echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    	exit
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    fi
    
    ####### Checking have connectivity #######
    
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
    ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
    else
    	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    	exit
    fi
    
    #######  Create Rclone Mount  #######
    
    # Check If Rclone Mount Already Created
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
    else
    	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    	touch mountcheck
    	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    	if [[  $CreateBindMount == 'Y' ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
    		else
    			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
    			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    		fi
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    	else
    		RCloneMountIP=""
    		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    	fi
    # create rclone mount
    	rclone mount \
    	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    	--allow-other \
    	--umask 000 \
    	--dir-cache-time $RcloneMountDirCacheTime \
    	--attr-timeout $RcloneMountDirCacheTime \
    	--log-level INFO \
    	--poll-interval 10s \
    	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    	--drive-pacer-min-sleep 10ms \
    	--drive-pacer-burst 1000 \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size $RcloneCacheMaxSize \
    	--vfs-cache-max-age $RcloneCacheMaxAge \
    	--vfs-read-ahead 1G \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &
    
    # Check if Mount Successful
    	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    	sleep 5
    	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    	else
    		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
    		docker stop $DockerStart
    		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    		exit
    	fi
    fi
    
    ####### Start MergerFS Mount #######
    
    if [[  $MergerfsMountShare == 'ignore' ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
    else
    	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    	else
    # check if mergerfs already installed
    		if [[ -f "/bin/mergerfs" ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
    		else
    # Build mergerfs binary
    			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
    			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
    			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
    			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
    # check if mergerfs install successful
    			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
    			sleep 5
    			if [[ -f "/bin/mergerfs" ]]; then
    				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
    			else
    				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
    				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    				exit
    			fi
    		fi
    # Create mergerfs mount
    		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
    # Extra Mergerfs folders
    		if [[  $LocalFilesShare2 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare2=":$LocalFilesShare2"
    		else
    			LocalFilesShare2=""
    		fi
    		if [[  $LocalFilesShare3 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare3=":$LocalFilesShare3"
    		else
    			LocalFilesShare3=""
    		fi
    		if [[  $LocalFilesShare4 != 'ignore' ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
    			LocalFilesShare4=":$LocalFilesShare4"
    		else
    			LocalFilesShare4=""
    		fi
    # make sure mergerfs mount point is empty
    		mv $MergerFSMountLocation $LocalFilesLocation
    		mkdir -p $MergerFSMountLocation
    # mergerfs mount command
    		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
    # check if mergerfs mount successful
    		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
    		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
    			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
    		else
    			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
    			docker stop $DockerStart
    			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    			exit
    		fi
    	fi
    fi
    
    ####### Starting Dockers That Need Mergerfs Mount To Work Properly #######
    
    # only start dockers once
    if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
    else
    # Check CA Appdata plugin not backing up or restoring
    	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
    		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    	else
    		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    		docker start $DockerStart
    	fi
    fi
    
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
    
    exit

     

    I cannot see any issues configuration-wise, but I would be glad for any help on this. 

    Your script seems fine. You can switch the log level from INFO to DEBUG to maybe catch something; that's a one-flag change in the mount command, sketched below. But I'm thinking it's not an rclone issue, but something with the Unraid system. See:
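     

    For the DEBUG switch, a minimal sketch (the log file path is my assumption, everything else is taken from your script):

    # hedged example: same mount, just with DEBUG logging written to a file
    rclone mount \
      --allow-other \
      --umask 000 \
      --dir-cache-time 720h \
      --log-level DEBUG \
      --log-file /mnt/user/appdata/other/rclone/rclone_mount.log \
      gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs &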

     

    29 minutes ago, undone said:

    Well, that is what I've done, and it works by hand if I run rclone without the script: 

     

     

    So why don't you put that manual command in a separate cron job that executes at system start, as sketched below? And if that overlaps with the rclone scripts, add a wait to those scripts.
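     

    On Unraid the usual way is a User Scripts job scheduled "At Startup of Array". A rough sketch, assuming a gdrive_vfs remote; the mount line is only a stand-in for whatever you run by hand, and the sleep values are arbitrary:

    #!/bin/bash
    # hedged sketch for a startup User Scripts job
    sleep 30   # arbitrary pause so the array and network are fully up
    rclone mount gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs --allow-other &  # stand-in for your manual command

    # and at the top of any script that might overlap with it:
    sleep 60   # crude wait so the boot-time mount can finish first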

  19. On 9/29/2022 at 4:54 PM, unn4m3d said:

    Hi, I used the provided scripts throughout the last couple of years. 

     

    A few days ago I ran into ongoing issues with my crypt mount. After mounting the crypt remote, folders inside the remote are visible and accessible. However, after a few minutes the mount drops and I get the following error: 

    root@Tower:~# ls -la /mnt/user/mount_rclone/gdrive_vfs
    /bin/ls: cannot open directory '/mnt/user/mount_rclone/gdrive_vfs': Transport endpoint is not connected

     

    If I now run the mount script without unmounting, it does not give me any errors. However, I cannot access the crypt mount.

     

    After unmounting with fusermount -uz I can mount and access the crypt folder again. 

     

    Any ideas to fix the issue are welcome. 

     

    Many thanks in advance. 

     

     

    Are you running the mount script on a continuous cron job? Can you share your specific mount script so we can see if you maybe configured something wrong?
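     

    For reference, the recovery sequence you describe can be scripted as a rough sketch like this; the paths assume the default locations from the scripts in this thread:

    # hedged sketch: clean up a dropped rclone mount before remounting
    fusermount -uz /mnt/user/mount_rclone/gdrive_vfs  # lazy-unmount the dead FUSE mount
    rm -f /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/mount_running  # clear the checker file so the mount script will run again
    # then re-run the mount script (or wait for the next cron run)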

     

    22 minutes ago, undone said:

    Hi, I have the rclone.conf file living in a different directory. Is it somehow possible to tell the script where the new config file is located?

    e.g. the new path is "/mnt/disk1/rclone/rclone.conf"

    That is a question for the rclone plugin itself. But I don't think that's possible with Unraid, because all the plugins are stored on the boot drive.
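     

    Rclone itself does accept a --config flag, so one thing you could try (untested, and the plugin's own config will still live on the boot drive) is passing it through one of the Command slots in the mount script:

    # hedged idea: point rclone at your own config via the script's Command variables
    Command2="--config=/mnt/disk1/rclone/rclone.conf"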

     

    33 minutes ago, Halvliter said:

    I have added the following command to the script, because I like to follow the upload process:

     

    Command2="--stats=2m --stats-log-level NOTICE --log-file=/mnt/user/torrent/rclone/log.txt"

     

    and it ran for many weeks without problems. The file size was maybe 30-40 MB.

     

    Suddenly the file grew to a couple of gigabytes, so I deleted the file and rebooted. 

     

    After an hour the log file is now 2.5 GB, filled with these lines (repeated MANY times):

     

    2022/10/02 14:24:23 INFO  : vfs cache purgeClean item.Reset media/TV-serie/Stargate SG-1 (1997)/Season 05/Stargate SG-1 (1997) - S05E04 - The Fifth Man [DVD][8bit][x264][AC3 5.1]-MEECH.mkv: Empty item skipped, freed 0 bytes
    2022/10/02 14:24:23 INFO  : vfs cache purgeClean item.Reset media/TV-serie/Stargate SG-1 (1997)/Season 05/Stargate SG-1 (1997) - S05E04 - The Fifth Man [DVD][8bit][x264][AC3 5.1]-MEECH.mkv: Empty item skipped, freed 0 bytes
    2022/10/02 14:24:23 INFO  : vfs cache purgeClean item.Reset media/TV-serie/Stargate SG-1 (1997)/Season 05/Stargate SG-1 (1997) - S05E04 - The Fifth Man [DVD][8bit][x264][AC3 5.1]-MEECH.mkv: Empty item skipped, freed 0 bytes

     

     

    Sometimes I have the same problems as the user in this thread, but I haven't found any solution to that problem:
     

     

    Any idea on how to fix this?

    You gave no information about your configuration, folders, disks, or sizes, and you didn't attach the scripts you use. That makes it pretty much impossible for us to be of any help.

  20. 14 minutes ago, Logopeden said:

     

    Are you talking about the file and folder limitations? Or have I been reading this wrong?

     

    I just want it to move files to Google Drive, and then have Unraid/Radarr/Sonarr move the files to the crypt, etc.?

     

     

    Do I need to have one mount for each of the 100 users I created?

    As far as I know there is only a file count limitation of around 400k files per team drive, which is a big amount. And if you are smart about using separate team drives for separate purposes, you should be fine.

     

    You can use just one service account for the mount itself, but the uploading is done through multiple service accounts. That setup is meant for continuous use, though, not for a one-time upload.

     

    So let's say you have a backlog of multiple terabytes waiting to be uploaded: you will have to stick to 750 GB per day until you are caught up. Or you create multiple mounts to the same team drive with separate service accounts and find a way to separate the files you upload, per folder or per disk drive for example.
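     

    As an illustration (not taken from the scripts in this thread), a manual upload pinned to a specific service account looks roughly like this; the JSON paths and the per-folder split are assumptions:

    # hedged sketch: two parallel uploads, each using its own service account
    rclone move /mnt/user/local/gdrive_vfs/movies gdrive_vfs:movies \
      --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts/sa_1.json
    rclone move /mnt/user/local/gdrive_vfs/tv gdrive_vfs:tv \
      --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts/sa_2.json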

     

    Once you have a continuous download and upload going, you can use the script from DZMM. It will see the files you downloaded, start uploading, then switch to the next service account, and so on. Keep in mind that your download speed should not outpace your upload speed, or you will run into the upload quota again.
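     

    The rotation itself boils down to something like this sketch; the paths and the count of 15 accounts are assumptions, and the real script handles more edge cases:

    #!/bin/bash
    # hedged sketch of service-account rotation between upload runs
    SADir="/mnt/user/appdata/other/rclone/service_accounts"   # assumed location of sa_*.json files
    CounterFile="/mnt/user/appdata/other/rclone/sa_counter"   # remembers which account was used last
    Count=$(cat "$CounterFile" 2>/dev/null || echo 1)
    rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: \
      --drive-service-account-file="$SADir/sa_$Count.json" \
      --drive-stop-on-upload-limit
    # advance to the next account for the next run, wrapping back to 1 after 15
    Count=$(( Count % 15 + 1 ))
    echo "$Count" > "$CounterFile"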