Raneydazed


Everything posted by Raneydazed

1. Well, I went ahead and overcorrected: I made a bunch of different projects and a bunch of different changes to things, only to go back to the first OAuth client ID I made and get rid of the SAs. I can't figure out where I went wrong here. Right now I'm just getting LONG lines of errors about how nothing can be uploaded because the quota has been met. It's going to take me 64 years to get things uploaded as of now. I am patient, but that's a bit too long for me right now!

2022/02/03 22:06:50 INFO :
Transferred: 5.097 GiB / 642.745 GiB, 1%, 334 B/s, ETA 64y45w2d1h12m45s
Errors: 9740 (retrying may help)
Checks: 9740 / 9744, 100%
Transferred: 0 / 10012, 0%
Elapsed time: 1h3m0.8s
Checking:
Transferring:
 * data/media/tv/Tacoma F… 2.0][h264]-CtrlHD.mkv: 0% /1.723Gi, 0/s, -
 * data/media/tv/Tacoma F… 5.1][h264]-CtrlHD.mkv: 0% /1.780Gi, 0/s, -
 * data/media/tv/Ted Lass…mos 5.1][h264]-NTb.mkv: 0% /2.544Gi, 0/s, -
 * data/media/tv/Steven U…p][EAC3 2.0][x264].mkv: 0% /237.423Mi, 0/s, -

2022/02/03 22:06:50 ERROR : data/media/tv/Tacoma FD (2019) [imdb-tt8026448]/Season 01/Tacoma FD S01E08 [WEBDL-1080p][EAC3 2.0][h264]-CtrlHD.mkv: Failed to copy: googleapi: Error 403: The user's Drive storage quota has been exceeded., storageQuotaExceeded
2022/02/03 22:06:50 ERROR : data/media/tv/Tacoma FD (2019) [imdb-tt8026448]/Season 01/Tacoma FD S01E08 [WEBDL-1080p][EAC3 2.0][h264]-CtrlHD.mkv: Not deleting source as copy failed: googleapi: Error 403: The user's Drive storage quota has been exceeded., storageQuotaExceeded

So, can I use the existing project and the existing OAuth client ID I set up previously? My current OAuth 2.0 client (the client ID and client secret in my rclone config): am I able to use that to create service accounts, since it has the Drive API etc. enabled? I've been using rclone for a few weeks now. I'd like to set up SAs but I'm missing something here. I tried to set some up and ended up putting a hundred in each of my projects, so I had like 1000 of them. Which was whack. Do I need to make a different OAuth client for each of the Python quickstart links, i.e. the Drive API and the Directory API? I have Workspace Enterprise, so unlimited storage. I can't for the life of me understand what's going on right now.
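A storageQuotaExceeded 403 with service accounts in play usually means uploads are landing in a zero-quota "My Drive" (a bare service account has its own empty Drive) rather than a Shared Drive the account can write to. A minimal sketch for testing a single SA key against the remote, assuming a hypothetical key file name sa_test1.json; --drive-service-account-file, rclone about, and rclone lsd are real rclone options:

#!/bin/bash
# Sketch: test one service-account key against the gdrive remote.
# sa_test1.json is a placeholder name; adjust paths to your setup.
SA_JSON="/mnt/user/appdata/other/rclone/service_accounts/sa_test1.json"

# Show the quota/usage rclone sees when authenticating as this SA.
# Uploads must target a Shared Drive (team_drive set in the config) that
# the SA has been granted access to, or they hit storageQuotaExceeded.
rclone about gdrive: --drive-service-account-file="$SA_JSON"

# List the remote as the SA - if this fails, the SA was never added to
# the Shared Drive (or to the Google group that has access).
rclone lsd gdrive: --drive-service-account-file="$SA_JSON"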
2. Set up service accounts and screwed up everything. Output/logs from running the "Mount" script:

03.02.2022 12:48:58 INFO: Creating local folders.
03.02.2022 12:48:58 INFO: Creating MergerFS folders.
03.02.2022 12:48:58 INFO: *** Starting mount of remote gdrive_vfs
03.02.2022 12:48:58 INFO: Checking if this script is already running.
03.02.2022 12:48:58 INFO: Script not running - proceeding.
03.02.2022 12:48:58 INFO: *** Checking if online
03.02.2022 12:48:59 PASSED: *** Internet online
03.02.2022 12:48:59 INFO: Mount not running. Will now mount gdrive_vfs remote.
03.02.2022 12:48:59 INFO: Recreating mountcheck file for gdrive_vfs remote.
2022/02/03 12:48:59 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "gdrive_vfs:" "-vv" "--no-traverse"]
2022/02/03 12:48:59 DEBUG : Creating backend with remote "mountcheck"
2022/02/03 12:48:59 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2022/02/03 12:48:59 DEBUG : fs cache: adding new entry for parent of "mountcheck", "/usr/local/emhttp"
2022/02/03 12:48:59 DEBUG : Creating backend with remote "gdrive_vfs:"
2022/02/03 12:48:59 DEBUG : Creating backend with remote "gdrive:crypt"
2022/02/03 12:48:59 Failed to create file system for "gdrive_vfs:": failed to make remote "gdrive:crypt" to wrap: drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json: no such file or directory
03.02.2022 12:48:59 INFO: *** Checking if IP address 192.168.1.252 already created for remote gdrive_vfs
03.02.2022 12:49:00 INFO: *** IP address 192.168.1.252 already created for remote gdrive_vfs
03.02.2022 12:49:00 INFO: *** Created bind mount 192.168.1.252 for remote gdrive_vfs
03.02.2022 12:49:00 INFO: sleeping for 5 seconds
2022/02/03 12:49:00 Failed to create file system for "gdrive_vfs:": failed to make remote "gdrive:crypt" to wrap: drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json: no such file or directory
03.02.2022 12:49:05 INFO: continuing...
03.02.2022 12:49:05 CRITICAL: gdrive_vfs mount failed - please check for problems. Stopping dockers: plex radarr sonarr binhex-readarr tautulli prowlarr lidarr binhex-readarr-audible binhex-qbittorrentvpn nzbget
Script Finished Feb 03, 2022 12:49.05

My rclone config, if you think it would be helpful:

[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json
team_drive =
server_side_across_configs = true

[gdrive_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
password = some-password
password2 = some-password
directory_name_encryption = true

Went through all the steps as well as I could understand in the "Follow steps 1-4" link. Everything was working fine aside from those errors; then I set up SAs and my damned power went out. After reboot, I get what you see above.
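The failure line says rclone cannot open the service_account_file path named in the [gdrive] section. A minimal sketch for checking that the key file actually exists and that the remote builds cleanly, using only the paths from the config above:

#!/bin/bash
# Sketch: verify the service-account key the config points at is present.
SA_FILE="/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json"

if [ -f "$SA_FILE" ]; then
    echo "Key file present:"
    ls -l "$SA_FILE"
    # Confirm the remote comes up before retrying the full mount script.
    rclone lsd gdrive_vfs: --config /boot/config/plugins/rclone/.rclone.conf
else
    # On unRAID, /mnt/user/appdata is only available once the array is
    # started; a power loss mid-setup can also leave the folder missing.
    echo "Missing: $SA_FILE"
    echo "Recreate the key file, or remove service_account_file from the"
    echo "[gdrive] section to fall back to the OAuth client credentials."
fi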
3. #!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone2" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone2" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="300G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs2" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex radarr sonarr binhex-readarr tautulli prowlarr lidarr binhex-readarr-audible binhex-qbittorrentvpn nzbget" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"data/usenet/completed,data/usenet/intermediate,data/usenet/nzb,data/usenet/queue,data/usenet/tmp,data/usenet/scripts,data/torrent/complete,data/torrent/intermediate,data/torrent/queue,data/media/audible,data/media/books,data/media/movies,data/media/music,data/media/tv,data/media/anime"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="Y" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######
4. Fun new development: NZBGet is now downloading at approx. 15 MB/s on average. For a single file (the current one is 900M), it has been taking between 20 and 45 minutes to unpack. Unpacking has been extremely slow since I started using rclone and mergerfs; downloads were generally solid at 45 MB/s before. I set it up according to the GitHub link, with /user mapped to /mnt/user and hardlinks enabled. I can hardly even download a few episodes! Where should I look to improve this? Not sure where to start. (Internet speed is 800 down and 30 up.)
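Slow unpacking on a setup like this is often the intermediate/unpack directory sitting on the mergerfs (cloud-backed) path instead of fast local storage, so every extraction round-trips through FUSE. A minimal sketch for checking which filesystem NZBGet's working folders actually resolve to; the paths assume the data/usenet layout from the mount script above and may differ on your shares:

#!/bin/bash
# Sketch: check which filesystem the usenet working folders live on.
# Paths are assumptions based on the MountFolders layout in this thread.
for d in /mnt/user/local/gdrive_vfs/data/usenet/intermediate \
         /mnt/user/mount_mergerfs2/gdrive_vfs/data/usenet/intermediate; do
    [ -d "$d" ] && df -hT "$d"
done
# If the intermediate dir reports a fuse filesystem (the mergerfs/rclone
# mount) rather than shfs/the array, unpacking goes through FUSE and the
# rclone cache, which would explain 20-45 minute unpacks of a 900M file.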
5. Oh awesome!! I guess I'll just wait it out then. I keep getting impatient. I'm messing with a Hetzner VPS right now; it's looking better and better. Although, regarding all my encrypted data: am I able to use it from a different server/rclone install? I should be able to, right, with the secret and everything?
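A crypt remote is just the wrapped remote plus its two passwords, so recreating the same config sections on another machine should decrypt the same data. A minimal sketch of what would need to match on the VPS, following the [gdrive]/[gdrive_vfs] sections posted earlier; the angle-bracket values are placeholders, not real values:

# Sketch: recreating the same crypt remote on a second machine (e.g. the VPS).
# The obscured password/password2 values must be copied verbatim from the
# original .rclone.conf (re-entering the plaintext via "rclone config" also
# works). The underlying [gdrive] remote needs its own working credentials.
[gdrive]
type = drive
scope = drive
team_drive = <same shared drive ID, if one is used>

[gdrive_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = <same obscured value as the original config>
password2 = <same obscured value as the original config>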
6. Alright, so I'm thinking that after I went through and adjusted some of the files, possibly right before or even during an upload, I unknowingly changed what was being uploaded and it didn't like it? I'm getting a lot of different 'errors':

2022/01/31 22:58:36 DEBUG : data/usenet/completed/Nora Roberts - High Noon MP3/abook.ws~NaRs-HhNn-2007/High Noon-Part03.mp3: md5 = 2c073e14787f1c2c68453196db6dfcd2 OK
2022/01/31 22:58:36 INFO : data/usenet/completed/Nora Roberts - High Noon MP3/abook.ws~NaRs-HhNn-2007/High Noon-Part03.mp3: Copied (new)
2022/01/31 22:58:36 INFO : data/usenet/completed/Nora Roberts - High Noon MP3/abook.ws~NaRs-HhNn-2007/High Noon-Part03.mp3: Deleted
2022/01/31 22:58:39 DEBUG : data/media/audible/L.E. Modesitt Jr./Scion of Cyador/108 L E Modesitt Jr - Scion of Cyador.mp3: md5 = df57453661bfc2da5ddf427c10b324b0 OK
2022/01/31 22:58:39 ERROR : data/media/audible/L.E. Modesitt Jr./Scion of Cyador/108 L E Modesitt Jr - Scion of Cyador.mp3: corrupted on transfer: sizes differ 3108208 vs 3178975
2022/01/31 22:58:39 INFO : data/media/audible/L.E. Modesitt Jr./Scion of Cyador/108 L E Modesitt Jr - Scion of Cyador.mp3: Removing failed copy
2022/01/31 22:58:39 ERROR : data/media/audible/L.E. Modesitt Jr./Scion of Cyador/108 L E Modesitt Jr - Scion of Cyador.mp3: Not deleting source as copy failed: corrupted on transfer: sizes differ 3108208 vs 3178975
2022/01/31 22:58:40 DEBUG : 5u766ki0eig8vcmvt9aboeu5eg/2ei79mrfro0bvskiuj5lbnlh7o/7m9vp6trdrfg5rd7i17lkrsaj8/ibdk802nmoj4b0hctdphjnsdg925ubmqg4dif0llicigus00n206ba2mn5cke6uev3mhdavd6l1pem3kvnt6b4c9p9v55jhuvgb277o/7gcl8p9vfahvkl42vbs8ueab4mfgknk3dmfsp0lana1mmr2nkli0/u512265ll1hq8v72m7rflqj2b4ich9avf3aeki3eudp1ovhtlchj5frqdoi0caie60n8stlrrtcpu: Sending chunk 0 length 15203613
2022/01/31 22:58:51 INFO :
Transferred: 561.639 MiB / 596.961 GiB, 0%, 1.013 MiB/s, ETA 6d23h30m36s
Checks: 2 / 6, 33%
Deleted: 1 (files), 0 (dirs)
Renamed: 1
Transferred: 1 / 10013, 0%
Elapsed time: 11m0.7s
Checking:

and further down I have the errors:

2022/01/31 23:04:02 INFO :
Transferred: 574.090 GiB / 891.342 GiB, 64%, 800.839 KiB/s, ETA 4d19h23m12s
Errors: 432 (retrying may help)
Checks: 9129 / 9133, 100%
Deleted: 4414 (files), 0 (dirs)
Renamed: 4413
Transferred: 4414 / 14426, 31%
Elapsed time: 56h17m0.6s
Checking:

I'm trying to figure out what I ought to do from here?
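"corrupted on transfer: sizes differ" means a file's size changed between rclone reading it and finishing the upload, which fits files being edited or moved mid-upload. The log also shows rclone removing the failed copy and keeping the source, so retrying is safe. A minimal sketch for verifying the state once things settle, using the real rclone check command and the --min-age filter to skip anything still being written; paths follow this thread's layout:

#!/bin/bash
# Sketch: after the move finishes, compare what's left locally against the
# remote, skipping files modified in the last 15 minutes so in-flight
# writes don't show up as mismatches.
rclone check /mnt/user/local/gdrive_vfs gdrive_vfs: --min-age 15m --one-way -v

# Files that failed with "sizes differ" were removed from the remote and the
# local copy kept, so simply re-running the upload script picks them up.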
7. Hi all, having an odd issue that I haven't been able to find a definite answer to yet. I just thought I'd check on my upload script earlier and saw I was having errors. It said to retry?

2022/01/29 00:48:02 INFO :
Transferred: 2.059 TiB / 8.721 TiB, 24%, 3.370 MiB/s, ETA 3w2d23h48m30s
Errors: 257 (retrying may help)
Checks: 21638 / 21642, 100%
Deleted: 10690 (files), 0 (dirs)
Renamed: 10690
Transferred: 10690 / 20702, 52%
Elapsed time: 190h1m0.8s
Checking:

I just realized that I previously did NOT have my dockers in the mount script. So I ran the cleanup script, stopped docker, added them back into the mount script, restarted docker, ran the mount script, then ran the upload, and got this ^^^. What have I done? Also, how do I use service accounts on unraid with the AutoRclone instructions? I can't get pip or py3 to work at all. Thank you for your time!!
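On the AutoRclone question: it is just a Python 3 script, so one way around unRAID's lack of python/pip is to run it inside a throwaway Python container and keep the generated key files on the host. A minimal sketch, not the official procedure; it assumes Docker is available and uses the appdata path from this thread's scripts:

#!/bin/bash
# Sketch: run AutoRclone from a temporary python container on unRAID.
# The generated SA .json files land on the host via the bind mount.
docker run -it --rm \
  -v /mnt/user/appdata/other/rclone/service_accounts:/work \
  python:3.10-slim bash
# ...then inside the container (credentials.json from your Google Cloud
# project copied into /work first):
#   apt-get update && apt-get install -y git
#   cd /work && git clone https://github.com/xyou365/AutoRclone && cd AutoRclone
#   pip install -r requirements.txt
#   python3 gen_sa_accounts.py --quick-setup 1   # per AutoRclone's README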
8. Anybody have advice on where to look for support on the AutoRclone side of this, i.e. the SA portion? On unRAID I don't think that portion is possible the way it's spelled out; it requires Python 3 or pip3, and I can't figure out a way to get through that bit.
9. So I read through like 60 pages of people complaining and found the answer: run the cleanup script! And it worked like a charm. I hadn't added it back. (This is my second attempt at using rclone and gdrive; I nuked the whole first attempt lol.) So everything is fine now! Question: what's the easiest way to move a good portion of my local files (about 40 TB, give or take) to the gdrive? I'd prefer not to redownload everything if I can avoid it. Copying is taking FOREVER.
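For a one-off seed that size, the binding constraint is Google's 750 GB/day per-account upload cap rather than copy speed, so the usual approach is a long-running move that stops itself at the cap and is simply re-run each day. A minimal sketch using real rclone flags and this thread's paths:

#!/bin/bash
# Sketch: bulk-seed local media to the crypt remote.
# --drive-stop-on-upload-limit quits cleanly when the 750GB/day cap is hit,
# so the job can be re-run daily (e.g. via User Scripts on a cron schedule)
# until the backlog is gone. --min-age leaves files still being written alone.
# ~40TB at 750GB/day on one account is 50+ days; rotating service accounts
# is the usual way past that cap.
rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: \
  --transfers 4 --checkers 8 \
  --drive-stop-on-upload-limit \
  --min-age 15m \
  --log-file /mnt/user/appdata/other/rclone/seed.log -v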
10. Isn't the mount_rclone share empty? Shoot, mine is! I think I must have it configured wrong, because in my mount script I changed it to make /data shares, so /data/media/movies, /data/media/tv, /data/media/music. I'll post my scripts and see if you guys can spot anything unusual?

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone2" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone2" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="300G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs2" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="nzbgetremote binhex-radarr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"data/media/movies,data/media/music,data/media/tv,data/usenet/complete"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1=""
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="y" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

Also, my logs for running the mount script say that the "containers are already running", but they aren't.

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_vfs" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone2" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file. Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1=""
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="y" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts. Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="/mnt/user/mount_mergerfs2/backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="/mnt/user/mount_mergerfs2/backup/deleted_files" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

It seems like I'm having an issue with how rclone and mergerfs see the shares? In my /mnt I have /mnt/user/mount_mergerfs/gdrive_vfs, and it gets weird after that: I then have /mount_mergerfs/gdrive_vfs/gdrive_vfs/data/media/ tv|movies|music, and I also have /mount_mergerfs/gdrive_vfs/data/media/ tv|movies|music. Local has the same, including the /usenet/complete. My rclone mount only has /mount_rclone2/gdrive_vfs/gdrive_vfs/data/media/movies. That's it. What the heck am I missing?
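A doubled gdrive_vfs/gdrive_vfs layer like this usually appears when a path setting already ends in the remote name, because these scripts append RcloneRemoteName to each share themselves. A minimal sketch for inspecting each layer to find where the extra level comes from; the commands are standard rclone/shell and the paths follow the scripts above:

#!/bin/bash
# Sketch: look at each layer to find the duplicated folder level.
# The scripts build paths as <Share>/<RcloneRemoteName>, so a setting that
# already ends in .../gdrive_vfs produces .../gdrive_vfs/gdrive_vfs.
rclone lsd gdrive_vfs: --max-depth 2        # what's actually on the remote
ls /mnt/user/mount_rclone2/gdrive_vfs       # the raw rclone mount
ls /mnt/user/mount_mergerfs2/gdrive_vfs     # the merged view
ls /mnt/user/local/gdrive_vfs               # local, not-yet-uploaded files
# If one listing shows an extra gdrive_vfs level, check whether earlier
# uploads went into that subfolder, or whether one of the *Share settings
# includes the remote name at the end.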
11. Alright, back again with what might be a stupid question, but here goes. When I want something added to my gdrive (cloud storage), I can add it manually to my mergerfs share, right? If I decide to unload my music, for example, I drag it over to /mnt/user/mount_mergerfs/gdrive_vfs/media/music, and it copies over to my local folder (i.e. /mnt/user/local/gdrive_vfs/media/music), which is then uploaded to the cloud storage? I guess my main question is: how the hell do I know if it worked, or if it stays local and isn't getting uploaded? The upload logs after running the script tell me:

11.01.2022 11:41:09 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs,data ***
11.01.2022 11:41:09 INFO: *** Starting rclone_upload script for gdrive_vfs,data ***
11.01.2022 11:41:09 INFO: Exiting as script already running.

How do I know if it actually uploads? It has said the same thing in the logs every hour (it's set to run the 'upload' script hourly) for the last week.
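"Exiting as script already running" for a week straight means either a genuinely long-running move or a stale lock left behind by a crash. A minimal sketch for telling the two apart; the checker-file location is an assumption based on this script family's appdata layout, so treat the find as a search, not a known path:

#!/bin/bash
# Sketch: is an upload really running, or is the lock stale?
ps aux | grep '[r]clone move'   # a live move shows the full rclone command

# The upload script skips itself via a checker file; its exact name/path is
# an assumption here - search appdata for leftovers from a crashed run:
find /mnt/user/appdata/other/rclone -name '*checker*' 2>/dev/null

# Independent truth test: is local data actually shrinking / reaching the remote?
rclone ls gdrive_vfs:media/music --max-depth 1 | head
du -sh /mnt/user/local/gdrive_vfs   # shrinking over time = uploads happening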
12. Hi all. Again, I'm fairly new to doing this type of thing. I think I screwed up the script by not running it in the background. It now says it's running, but the script ended. How would I kill the script and restart it in the background? I looked and it's not showing up in active scripts. My knowledge of Linux commands extends about as far as how to type in nano and a directory/file lol. (Just pointing out my ignorance.)
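Finding and stopping the leftover process is a two-command job from the unRAID terminal. A minimal sketch using standard tools; the script path in the last comment is a placeholder for whatever the User Scripts plugin shows for the job:

#!/bin/bash
# Sketch: find and stop a leftover upload run, then relaunch detached.
ps aux | grep '[r]clone'   # note the PID of the stuck rclone process
kill <PID>                 # replace <PID> with the number from the list above

# Relaunching: in the User Scripts plugin, use "Run in Background".
# From a shell, the rough equivalent would be:
#   nohup /path/to/rclone_upload_script >/dev/null 2>&1 &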
13. I think I was trying out about 150 GB. And I set up the speed however the config files said to on BinsonBuzz's GitHub lol
14. I think I may have figured it out. Question: it is apparently going to take about 21 hours to upload. Does that sound about right to you?
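As a rough sanity check on that ETA, assuming the ~150 GB from the previous post and the 30 Mbps upstream mentioned earlier:

30 Mbit/s / 8 ≈ 3.75 MB/s theoretical maximum upload
150 GB / 3.75 MB/s ≈ 40,000 s ≈ 11 hours at full line rate

The script's daytime bwlimits (12M/15M) sit above a 30 Mbps line, so the link itself is the cap; with protocol overhead, checking, and shared upstream use, an ETA of roughly double the ideal (about 21 hours) is plausible.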
15. This is my "short" log in User Scripts now, after running the upload in the background:

31.12.2021 08:00:11 INFO: *** Rclone move selected. Files will be moved from /mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs for gdrive_upload_vfs ***
31.12.2021 08:00:11 INFO: *** Starting rclone_upload script for gdrive_upload_vfs ***
31.12.2021 08:00:11 INFO: Script not running - proceeding.
31.12.2021 08:00:11 INFO: Checking if rclone installed successfully.
31.12.2021 08:00:11 INFO: rclone installed successfully - proceeding with upload.
31.12.2021 08:00:11 INFO: Uploading using upload remote gdrive_upload_vfs
31.12.2021 08:00:11 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2021/12/31 08:00:11 INFO : Starting bandwidth limiter at 15Mi Byte/s
2021/12/31 08:00:11 INFO : Starting transaction limiter: max 8 transactions/s with burst 1
2021/12/31 08:00:11 DEBUG : --min-age 15m0s to 2021-12-31 07:45:11.561497194 -0800 PST m=-899.989217853
2021/12/31 08:00:11 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs" "gdrive_upload_vfs:" "--user-agent=gdrive_upload_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2021/12/31 08:00:11 DEBUG : Creating backend with remote "/mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs"
2021/12/31 08:00:11 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2021/12/31 08:00:11 DEBUG : Creating backend with remote "gdrive_upload_vfs:"
2021/12/31 08:00:11 Failed to create file system for "gdrive_upload_vfs:": didn't find section in config file
31.12.2021 08:00:11 INFO: Not utilising service accounts.
31.12.2021 08:00:11 INFO: Script complete
Script Finished Dec 31, 2021 08:00.11
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

Something somewhere isn't talking to something lol
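"didn't find section in config file" means the remote the script is calling (gdrive_upload_vfs:) has no matching [gdrive_upload_vfs] section in .rclone.conf. A minimal sketch for checking which remote names actually exist; rclone listremotes and rclone config show are real commands:

#!/bin/bash
# Sketch: confirm the remote names the scripts reference really exist.
rclone listremotes --config /boot/config/plugins/rclone/.rclone.conf
# Output is one "name:" per configured remote. If gdrive_upload_vfs: is not
# listed, either create that remote or set RcloneUploadRemoteName in the
# upload script to an existing one (e.g. gdrive_media_vfs).
rclone config show --config /boot/config/plugins/rclone/.rclone.conf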
16. I did fiddle with both of these quite a bit. In the merger_vfs share, I deleted several of the shares/subfolders and added my own. Does this allow for that? It's set up for downloads/movies, downloads/tv, and a few others; I changed it in Krusader because I was having trouble with radarr and nzbget. (I created 2 new containers to use just for "cloud storage"; would that be up there with why I'm having trouble?)
17. OK, I honestly may have messed up how I set up the mount script as well.

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_upload_vfs" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/mount_mergerfs/gdrive_media_vfs" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited. The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file. Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts. Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

I didn't edit anything below the ####### END SETTINGS ####### line.
18. So I'm new to the forum and forum discussions, please don't judge lol. I tried to set this up yesterday and it did not go as planned. The upload script is giving me some errors. I'm thinking it's due to how I set up the scripts, which was done by me and my lack of knowledge.

2021/12/31 05:47:01 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs" "gdrive_upload_vfs:" "--user-agent=gdrive_upload_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2021/12/31 05:47:01 DEBUG : Creating backend with remote "/mnt/user/mount_mergerfs/gdrive_media_vfs/gdrive_media_vfs"
2021/12/31 05:47:01 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2021/12/31 05:47:01 DEBUG : Creating backend with remote "gdrive_upload_vfs:"
2021/12/31 05:47:01 Failed to create file system for "gdrive_upload_vfs:": didn't find section in config file
31.12.2021 05:47:01 INFO: Not utilising service accounts.
31.12.2021 05:47:01 INFO: Script complete

[edit] Last half of log (for rclone_upload) in User Scripts