Everything posted by DZMM

  1. Could be - we're not sure. At the moment my system has been shutting down OK. Don't know - but if /mnt/user/local is working I'd just run with it! #1 I have no idea what's going on there, as I see a negligible performance difference, and I think @watchmeexplode5 has said the same. #2 I got my first API ban in over a year a few weeks ago, but I was doing some big scans in Plex as well as in Radarr, Sonarr etc all at the same time. It sounds like something is wrong with your Radarr mappings, not rclone, which just uploads anything it sees in the local share - Radarr controls where files are moved.
  2. @Tecneo I haven't had a chance to play with union this week - have you made any progress before I start?
  3. ncw just said they should work - let me know how you get on. I'm going to focus on fixing the local polling tonight.
  4. It doesn't appear in your /dockers page - it's a bit of an odd case. It has to be re-installed every time unRAID starts as part of the script. It's why I want to remove it if possible - because unRAID can't support it natively, e.g. via a plugin or a 'normal' docker, it's a bit confusing for unRAID users. Not mergerfs' problem, but it just makes it a bit clunky. Having everything in rclone will be a lot cleaner.
  5. I've just read the full post you linked to and saw Nick's comment - I'm going to try after work removing the --dir-cache-time and having poll set to maybe 1s. Need to read up a bit first/have a refresher on what both are doing.
  6. Nope, but it defaults to 1m so it wouldn't help - I think rclone looks for new changes based on --dir-cache-time. When I had this set to 720h as usual, changes weren't getting picked up - I waited about 5 mins. With --dir-cache-time 1m they got picked up pretty quickly.
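For reference, the two flags discussed above can be combined in a mount sketch like this (both are real rclone flags; the mount point and values here are guesses for testing, not a recommendation):

```shell
# --poll-interval (default 1m) controls how often rclone asks the remote
# for change notifications; --dir-cache-time controls how long directory
# listings are cached. In a union, local-side changes only show up when
# the dir cache expires, which is why a short --dir-cache-time is needed.
rclone mount tdrive_union: /mnt/user/mount_mergerfs/union_test \
  --allow-other \
  --dir-cache-time 1m \
  --poll-interval 30s
```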
  7. @watchmeexplode5 @Kaizac @testdasi and everyone else - I need help testing rclone union please, which landed yesterday as part of rclone 1.52. https://forum.rclone.org/t/rclone-1-52-release/16718 I've created a test union OK and playback seems good - better than mergerfs, although I've only tried a few files. It'd be great if we can get this running, as I think it'll be easier to support than mergerfs, which has been brilliant but must be installed via a docker. We'll also be using just one app. I've encountered one problem so far in that --dir-cache-time applies to local folder changes as well, so a small number is needed to spot any changes made to /mnt/user/local. I've asked if there's a way to have a long cache for just the remotes: https://forum.rclone.org/t/rclone-union-cache-time/16728/1 My settings so far:

     [tdrive_union]
     type = union
     upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive_uhd_vfs: tdrive_t_adults_vfs: gdrive_media_vfs:
     action_policy = all
     create_policy = ff
     search_policy = ff

     rclone mount --allow-other --buffer-size 256M --dir-cache-time 1m --drive-chunk-size 512M --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes tdrive_union: /mnt/user/mount_mergerfs/union_test
  8. @axeman #1 If you delete locally a file that rclone has synced, then rather than delete it on the remote, it is moved to the --backup-dir location for your chosen number of days. #2 It should show your files. Are you using Krusader? If so, restart it, as it has problems. Or try SSH or Windows Explorer.
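A hedged sketch of the --backup-dir behaviour described in #1 (the remote name, dated folder layout, and retention period here are examples, not the pinned upload script):

```shell
# Instead of deleting remote files that vanished locally, rclone move with
# --backup-dir relocates them server-side into a dated "deleted" folder.
# "gdrive_media_vfs:" is a placeholder remote name.
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
  --backup-dir "gdrive_media_vfs:deleted/$(date +%F)" \
  --min-age 15m

# Later, prune backups older than your chosen number of days, e.g. 30:
rclone delete gdrive_media_vfs:deleted --min-age 30d
```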
  9. Or look in your mergerfs location. Or look in /bin
  10. Yes - the upload script doesn't run if the mount isn't running.
  11. And the files you are uploading aren't appearing in the rclone mount? No remote files should be going there - it's only used for the script checker files, so I've no idea how you managed that. Maybe when you were trying to set up you did something odd? My advice is to unmount everything, delete /appdata/other/rclone, delete any residual files and folders out of /mnt/user/mount_rclone and /mnt/user/mount_mergerfs, and start with a fresh run. Lots of people have problems initially when they don't set it up right. I think you know what you are doing now, so a clean start is probably best.
  12. what settings do you have for your mount and upload scripts?
  13. That's weird, as the version 0.9.2 on my machine had it but GitHub had a weird 0.9.1 version. Anyway, thanks for spotting - I've fixed it. Other people with this problem, please update the unmount script to 0.9.2:

     #!/bin/bash
     #######################
     ### Cleanup Script ####
     #######################
     #### Version 0.9.2 ####
     #######################

     echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_cleanup script ***"

     ####### Cleanup Tracking Files #######
     echo "$(date "+%d.%m.%Y %T") INFO: *** Removing Tracking Files ***"
     find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
     find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
     find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete
     echo "$(date "+%d.%m.%Y %T") INFO: *** Finished Cleanup! ***"
     exit
  14. You haven't set up the cleanup script correctly to delete this, or something else went wrong while you were trying to fix everything. Look in /mnt/user/appdata/other/rclone/remotes/whatever_your_upload_remote_is_called and delete whatever file is in there. After you've deleted it, run the upload script again - it should recreate the file once it's started and delete it when it's finished. Double-check to make sure. If you end up rebooting or something while an upload is in progress, setting the cleanup/unmount script to run at array start will make sure all the control files are deleted.
  15. Yes. You haven't set up the unmount script properly to remove all the verification files. The best way to run it imo is at array start. Delete that file manually and the upload will start.
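The control-file logic described in the last two posts can be sketched like this (the /tmp path is an illustrative stand-in for /mnt/user/appdata/other/rclone/remotes, not the real scripts):

```shell
#!/bin/bash
# Sketch of the upload script's checker-file pattern: create a control file
# on start, refuse to run if one already exists, remove it on completion.
control_dir="/tmp/rclone_demo/remotes"   # stand-in for the real appdata path
mkdir -p "$control_dir"

if [[ -f "$control_dir/upload_running" ]]; then
    echo "upload already running or stale control file - exiting"
else
    touch "$control_dir/upload_running"   # created when the upload starts
    echo "upload started"
    # ... rclone move would run here ...
    rm "$control_dir/upload_running"      # deleted when the upload finishes
    echo "upload finished"
fi
```

If the server reboots mid-upload, the `rm` never runs and a stale `upload_running` file is left behind - which is exactly why running the cleanup script at array start matters.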
  16. lol don't exactly have anywhere to go at the moment
  17. You could also add your CCTV share as an extra path to your main mergerfs mount. It's probably a better idea to have only one upload script running, as your upload speed isn't great, i.e. it's easier to control how much upload bandwidth is used.
  18. yes, you could create multiple instances of the script to do backups, moves or syncs. Or, just setup a plain old rclone move on a cron job. All the streaming is done in RAM.
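For the "plain old rclone move on a cron job" option, a minimal sketch might look like this (remote name, schedule, and flag values are assumptions, not the pinned scripts):

```shell
# Illustrative crontab entry (crontab -e, or a User Scripts cron schedule
# on unRAID): hourly rclone move of the local share to the cloud remote.
# "gdrive_media_vfs:" is a placeholder; --min-age skips files still being
# written, --bwlimit caps upload bandwidth.
0 * * * * rclone move /mnt/user/local/gdrive_vfs gdrive_media_vfs: --min-age 15m --bwlimit 8M --delete-empty-src-dirs --log-file /mnt/user/appdata/other/rclone/upload.log
```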
  19. That's one I don't know how to stop - when I've messed stuff up in the past I've rebooted. Or, you could temporarily change the name of the path to force the upload to finish - messy, I know.
  20. If you want to, and if it won't mess up, say, your Plex scans. Or, you can add the other paths to the mergerfs mount as LocalPath2 (I think it is). I honestly think you're approaching this wrong. What I would do is just add /mnt/user/Videos to your mergerfs mount and then exclude the paths you don't want uploading yet, e.g. /mnt/user/Videos/Animation, and remove the exclusion if you change your mind. Otherwise, you'll probably end up wasting a lot of time and creating hassle with rescanning paths. It can't exceed your physical upload speed. Maybe it's buffering the first minute or so of files to RAM, but after a few minutes, if you haven't set a bwlimit, it will drop to less than 5MB/s.
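The exclude-then-upload approach above might look like this with standard rclone filter flags (the remote name and bandwidth cap are placeholders, not the pinned upload script):

```shell
# Upload everything under /mnt/user/Videos except the subtrees you want to
# keep local for now; dropping the --exclude later uploads them too.
# "gdrive_media_vfs:" is a placeholder remote name.
rclone move /mnt/user/Videos gdrive_media_vfs:Videos \
  --exclude "Animation/**" \
  --min-age 15m \
  --bwlimit 8M
```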
  21. @axeman paths not referenced in the script will be fine. Don't really understand the 2nd part. If you move the files you won't get duplicates.
  22. It doesn't like the space in "3D movies", hence the error. My script and I aren't smart enough to account for this - I try not to have paths with spaces to avoid these problems, and always use underscores etc. Change the path and you'll be fine.
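The failure mode can be demonstrated with plain shell quoting (throwaway /tmp paths only; nothing here touches rclone):

```shell
#!/bin/bash
# An unquoted path containing a space is split into two words, so commands
# see "/tmp/demo/3D" and "movies" instead of one directory.
dir="/tmp/demo/3D movies"
mkdir -p "$dir"

ls /tmp/demo/3D movies 2>/dev/null || echo "unquoted path fails"
ls "$dir" > /dev/null && echo "quoted path works"
```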
  23. You've entered your paths wrong. Post your rclone mount settings. Edit: I just reread your post about wanting to mirror but have the cloud copy found first, so I think you messed up the script when you switched the mergerfs paths. Post your whole mount script please.
  24. Never come across that before. Maybe try editing the config file manually to look something like this:

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/cache/appdata/other/rclone/service_accounts/sa_tdrive.json
     team_drive = xxxxxxxxxxxxxxxxxx # look at the URL in gdrive for the ID
     server_side_across_configs = true