DZMM

Everything posted by DZMM

  1. For the last week or so I've been unable to connect to my unRAID SMB shares from VMs or my laptop. It all used to work fine, but access to private shares has gone completely. I'm not sure if this is the root cause, but the problems might have started around the time I tried to upgrade to 6.10rc2; I had to roll back because my VM performance was terrible, taking about 30 mins to boot. I've read through a few posts about SMB issues but nothing has worked. I'm getting a bit desperate now. Help please! highlander-diagnostics-20220226-1410.zip
  2. All looks fine. Did you wait 15 mins after creating the test file?
  3. add this to your upload script in the custom commands. It'll create a log file you can see: --log-file=/home/user/wherever_you_want_$CounterNumber.txt
  4. use the user scripts plugin instead if you want to see logs
  5. I don't think 0 is a good idea: it could mean, for example, that all writes go direct to Google Drive and won't be retried if there's a problem. You'll also miss out on read benefits - e.g. if the same TV episode is accessed again, it will load faster from the cache rather than downloading again, and I think this also helps with seeking. I have my caches currently set to between 10 and 200GB depending on my retention target. After posting earlier I did some quick research, and for a stable rclone experience you should set the cache to something non-zero.
  6. The vfs cache stores reads and writes on a first-in, first-out basis if the file isn't in use. E.g. it will store a Plex stream so that it doesn't need downloading again, for increased responsiveness, and new writes made directly to the mount (rather than to the local folder via mergerfs) go here first if there's space. I keep mine small, as my Plex library scans and Plex usage mean it would need to be massive to get a decent hit rate. I should probably investigate the implications of disabling it, as it causes endless writes; I'd probably be better off just using memory.
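For reference, the cache settings discussed above live on the rclone mount command. This is a minimal sketch only: the remote name, mount path, cache cap and ages are illustrative placeholders, not the poster's actual settings.

```shell
# Hypothetical mount sketch - tdrive_vfs, the paths and the 100G cap
# are placeholders, not the poster's exact configuration.
rclone mount tdrive_vfs: /home/user/mount/tdrive_vfs \
    --vfs-cache-mode full \
    --vfs-cache-max-size 100G \
    --vfs-cache-max-age 336h \
    --dir-cache-time 720h \
    --poll-interval 15s \
    --daemon
```

With --vfs-cache-mode full, both reads and writes pass through the on-disk cache, and rclone evicts cached files once --vfs-cache-max-size is exceeded - which is why the cap directly affects the hit rate discussed above.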
  7. you must have -rc somewhere in the custom commands - you have to move it somewhere else. Re the bind mount - either change it to N, or make sure you enter a different IP for RCloneMountIP
  8. read the message - you are trying to start the remote control twice and bind to the same IP.....
  9. I've managed this weekend to successfully integrate a seedbox into my setup, and I'm sharing how I did it. I purchased a cheap seedbox because my Plex streams were taking up too much bandwidth: I've gone from a 1000/1000 --> 360/180 --> 1000/120 connection, so it's been a pain each day trying to balance the bandwidth and file-space requirements of moving files from /local to the cloud against having enough bandwidth for Plex, backup jobs etc. My setup now is:
     1. Seedbox downloading to /home/user/local/nzbget and /home/user/local/rutorrent
     2. rclone script running each minute to move files from /home/user/local/nzbget --> tdrive_vfs:seedbox/nzbget and sync files from /home/user/local/rutorrent --> tdrive:seedbox/rutorrent (torrent files need to stay for seeding)
     3. Added a remote path to ***arr to look in /user/mount/mergerfs/tdrive_vfs for files in /home/user/local (thanks @Akatsuki)
     It's working perfectly so far, as my local setup hasn't changed, with rclone polling locally for changes that have occurred in the cloud.
Here's my script - I've stripped out all the options as I don't need them:

#!/bin/bash
######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

# REQUIRED SETTINGS
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Add extra commands or filters
Command1="--exclude _unpack/**"
Command2="--fast-list"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# OPTIONAL SETTINGS
CountServiceAccounts="14"

####### END SETTINGS #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting Core Upload Script ***"

####### Create directory for script files #######
mkdir -p /home/user/rclone/remotes/tdrive_vfs

####### Check if script already running ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script ***"
if [[ -f "/home/user/rclone/remotes/tdrive_vfs/upload_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /home/user/rclone/remotes/tdrive_vfs/upload_running
fi

####### Rotating service-account .json file #######
cd /home/user/rclone/remotes/tdrive_vfs/
CounterNumber=$(find -name 'counter*' | cut -c 11,12)
CounterCheck="1"
if [[ "$CounterNumber" -ge "$CounterCheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Counter file found."
else
    echo "$(date "+%d.%m.%Y %T") INFO: No counter file found. Creating counter_1."
    touch /home/user/rclone/remotes/tdrive_vfs/counter_1
    CounterNumber="1"
fi
ServiceAccount="--drive-service-account-file=/home/user/rclone/service_accounts/sa_spare_upload$CounterNumber.json"
echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote to sa_spare_upload${CounterNumber}.json based on counter ${CounterNumber}."

####### Transfer files ##########
# Upload nzbget files
/usr/local/bin/rclone move /home/user/local tdrive_vfs: $ServiceAccount \
    --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv \
    --order-by modtime,$ModSort --min-age 1m \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --drive-chunk-size=128M --transfers=4 --checkers=8 \
    --exclude rutorrent/** --exclude deluge/** \
    --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** \
    --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* \
    --drive-stop-on-upload-limit --delete-empty-src-dirs \
    --log-file=/home/user/rclone/upload_log.txt

# Sync rutorrent files
/usr/local/bin/rclone sync /home/user/local/seedbox/rutorrent tdrive_vfs:seedbox/rutorrent $ServiceAccount \
    --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv \
    --order-by modtime,$ModSort --min-age 1m \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --drive-chunk-size=128M --transfers=4 --checkers=8 \
    --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** \
    --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --log-file=/home/user/rclone/sync_log.txt

####### Remove control files ##########
# Update counter and remove other control files
if [[ "$CounterNumber" == "$CountServiceAccounts" ]]; then
    rm /home/user/rclone/remotes/tdrive_vfs/counter_*
    touch /home/user/rclone/remotes/tdrive_vfs/counter_1
    echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
else
    rm /home/user/rclone/remotes/tdrive_vfs/counter_*
    CounterNumber=$((CounterNumber+1))
    touch /home/user/rclone/remotes/tdrive_vfs/counter_$CounterNumber
    echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
fi

# Remove dummy files and replace directories
rm /home/user/rclone/remotes/tdrive_vfs/upload_running
mkdir -p /home/user/local/seedbox/nzbget/completed
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
exit
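The service-account rotation in the script above boils down to a dummy counter_N file that tracks which sa_spare_uploadN.json to use next, wrapping back to 1 after the last account. Here's that logic reduced to a self-contained sketch; the temp directory, account count of 3, and the next_counter function name are mine for illustration, not part of the original script:

```shell
#!/bin/bash
# Minimal sketch of counter-file rotation. WORKDIR, COUNT and
# next_counter are illustrative placeholders.
WORKDIR=$(mktemp -d)
COUNT=3                     # pretend we have 3 service accounts
touch "$WORKDIR/counter_1"  # seed the loop

next_counter() {
    local current
    # Read the number off the single counter_N marker file
    current=$(find "$WORKDIR" -name 'counter_*' | sed 's/.*counter_//')
    rm "$WORKDIR"/counter_*
    if [[ "$current" -ge "$COUNT" ]]; then
        current=1           # final account used - wrap back to the start
    else
        current=$((current + 1))
    fi
    touch "$WORKDIR/counter_$current"
    echo "$current"
}

next_counter   # -> 2
next_counter   # -> 3
next_counter   # -> 1 (wrapped)
```

Each upload run would then pick /home/user/rclone/service_accounts/sa_spare_upload$CounterNumber.json, spreading transfers across accounts to dodge the per-account daily upload cap.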
  10. I never said there was a problem with the mount! You have to open dockers that access the mount AFTER the mount has successfully been created. That's why the script takes care of this
  11. Create a rclone remote that isn't encrypted and run a second mount and upload script pair to upload to it.
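An unencrypted remote is just a plain drive entry in rclone.conf alongside the existing crypt one. The sketch below is illustrative only - the "gdrive_plain" name, paths and IDs are placeholders, not settings from this thread:

```shell
# Hypothetical rclone.conf fragment (placeholders throughout):
#
#   [gdrive_plain]
#   type = drive
#   scope = drive
#   team_drive = YOUR_TEAMDRIVE_ID
#
# Then run a second mount + upload pair against the new remote,
# mirroring the encrypted pair:
rclone mount gdrive_plain: /home/user/mount/gdrive_plain --daemon
rclone move /home/user/local_plain gdrive_plain: --min-age 1m -vv
```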
  12. do you have any fast drives for your /mnt/user/mount_mergerfs2 share or just HDDs? That could be why you are slow - e.g. I download to a non-mergerfs folder and then unpack to my mergerfs share which is 100% HDDs:
  13. post your mount settings, nzbget docker mappings and a screenshot of your path settings within nzbget.
  14. re-run the upload. Your suspicion was right - you changed files that rclone had in a pending queue to upload, so it didn't upload them for safety.
  15. The errors usually aren't anything to worry about. I suspect what's happened is that when you started the script there was 8.7TB to transfer, but while it was running 257 files were "removed", e.g. renamed or upgraded by radarr, sonarr etc. The script treats this as an error because those files are no longer there when it gets around to trying to move them.
  16. are you opening plex AFTER you've successfully mounted? Plex is saying the file isn't there and that's why it's not playing. You need to launch plex, radarr etc after the mount is successful - that's why the script has a section to do this.
  17. I used to have similar problems where the first attempt would fail, but eventually it would mount as the cron job made further attempts. This seems to have stopped for me since I reduced the size of RcloneCacheMaxSize. My suspicion is that rclone needs a bit of time to compare what's sitting in the cache locally and not yet uploaded against what's in the remote location, before creating the actual mount. With a 3TB cache I think this is the problem. Try dropping to say 20GB and see if this solves the "issue", then decide whether you want the mount to succeed on the first attempt, or you're happy for your cron job to handle it and you want a bigger cache.
  18. I agree. Have you tried browsing the directories? If you go to /disk1/mount_rclone/, /disk2/mount_rclone/ etc. you'll see the files that are actually on your machine, rather than all the files in the mount as you would at /user/mount_rclone. Or, unmount and then look to see what files are left.
  19. If it doesn't work locally just on apple tv it's nothing to do with the script. To verify move the offending file to a non script folder and try and play.
  20. Thanks. I think your find means we can ditch mergerfs and use rclone union instead. I didn't use union before because it doesn't support hardlinks, but I think this workaround fixes that. I'll try and test this week.
  21. This is very interesting, and I think you've unlocked a significant improvement. One question first. So, in your torrent client have you changed the download location from /cloud/downloads/torrents/sonarr to /cloud/local/downloads/torrents/sonarr? Or was it always /cloud/local/downloads/torrents/sonarr?