Everything posted by DZMM

  1. And the files you are uploading aren't appearing in the rclone mount? No remote files should be going there - it's only used for the script checker files, so I've no idea how you managed that. Maybe when you were trying to set up you did something odd?? My advice is to unmount everything, delete /appdata/other/rclone, delete any residual files and folders out of /mnt/user/mount_rclone and /mnt/user/mount_mergerfs, and start with a fresh run. Lots of people have problems initially when they don't set it up right. I think you know what you are doing now, so a clean start is probably best - something like the sketch below.
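     A rough sketch of what that fresh run looks like, assuming the default script paths and an example remote name of gdrive_vfs (adjust to whatever yours is called):

       # stop the mounts first - lazy unmount in case anything still has them open
       fusermount -uz /mnt/user/mount_rclone/gdrive_vfs
       fusermount -uz /mnt/user/mount_mergerfs/gdrive_vfs
       # remove the script's checker/control files
       rm -rf /mnt/user/appdata/other/rclone
       # see what's left behind before deleting anything - only remove files you recognise
       ls -la /mnt/user/mount_rclone /mnt/user/mount_mergerfs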
  2. what settings do you have for your mount and upload scripts?
  3. That's weird, as the version 0.9.2 on my machine had it but github had a weird 0.9.1 version. Anyway, thanks for spotting - I've fixed it. Other people with this problem, please update the unmount script to 0.9.2:

     #!/bin/bash
     #######################
     ### Cleanup Script ####
     #######################
     #### Version 0.9.2 ####
     #######################

     echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_cleanup script ***"

     ####### Cleanup Tracking Files #######

     echo "$(date "+%d.%m.%Y %T") INFO: *** Removing Tracking Files ***"

     find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
     find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
     find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete

     echo "$(date "+%d.%m.%Y %T") INFO: ***Finished Cleanup! ***"

     exit
  4. You haven't set up the cleanup script properly to delete this, or something else went wrong while you were trying to fix everything. Look in /mnt/user/appdata/other/rclone/remotes/whatever_your_upload_remote_is_called and delete whatever file is in there. After you've deleted it, run the upload script again - it should recreate the file once it's started and delete it when it's finished. Double-check to make sure. If you end up rebooting or something while an upload is in progress, setting the cleanup/unmount script to run at array start will make sure all the control files are deleted.
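     If you're not sure which file it is, something like this will show it and remove it (the remote folder name here is just an example):

       ls /mnt/user/appdata/other/rclone/remotes/tdrive_vfs/
       rm -f /mnt/user/appdata/other/rclone/remotes/tdrive_vfs/upload_running*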
  5. Yes. You haven't set up the unmount script properly to remove all the verification files. The best way to run it imo is at array start. Delete that file manually and the upload will start.
  6. You could also add your CCTV share as an extra path to your main mergerfs mount. Probably a better idea to have only one upload script running as your upload speed isn't great, i.e. it's easier to control how much upload bandwidth is used.
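     If I'm remembering the variable names right (check them against your copy of the mount script), adding the extra branch is just another local share in the mount settings, something like:

       # extra local branches merged into the same mergerfs mount - variable names are from my copy of the script
       LocalFilesShare="/mnt/user/local"
       LocalFilesShare2="/mnt/user/CCTV"
       LocalFilesShare3="ignore"
       LocalFilesShare4="ignore"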
  7. Yes, you could create multiple instances of the script to do backups, moves or syncs. Or just set up a plain old rclone move on a cron job. All the streaming is done in RAM.
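     A plain move on a schedule is just a one-liner, e.g. run hourly via the User Scripts plugin or cron (the remote name, paths and limits below are only examples):

       rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: --min-age 15m --bwlimit 8M --log-file /mnt/user/appdata/other/rclone/rclone_move.log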
  8. That's one I don't know how to stop - when I've messed stuff up in the past I've rebooted. Or you could temporarily change the name of the path to force the upload to finish - messy, I know.
  9. If you want to, and if it won't mess up, say, your Plex scans. Or you can add the other paths to the mergerfs mount as LocalPath2 (I think it is). I honestly think you're approaching this wrong. What I would do is just add /mnt/user/Videos to your mergerfs mount and then exclude the paths you don't want uploading yet, e.g. /mnt/user/Videos/Animation, and remove the exclusion if you change your mind (see the sketch below). Otherwise you'll probably end up wasting a lot of time and creating hassle with rescanning paths. It can't exceed your physical upload speed. Maybe it's buffering the first minute or so of files to RAM, but after a few minutes, if you haven't set a bwlimit, it will drop to less than 5MB/s.
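     The exclusion is just an rclone filter on the upload - roughly like this if you were running the move by hand (paths are examples, and the pattern is relative to the source folder):

       rclone move /mnt/user/Videos gdrive_vfs:Videos --exclude "/Animation/**" --min-age 15m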
  10. @axeman paths not referenced in the script will be fine. Don't really understand the 2nd part. If you move the files you won't get duplicates.
  11. It doesn't like the space in "3D movies", hence the error. My script and I aren't smart enough to account for this - I try not to have paths with spaces to avoid these problems and always use underscores etc. Change the path and you'll be fine.
  12. You've entered your paths wrong. Post your rclone mount settings. Edit: I just reread your post about wanting to mirror but have the cloud copy found first, so I think you messed up the script when you switched the mergerfs paths. Post your whole mount script please.
  13. Never come across that before. Maybe try editing the config file manually to look something like this:

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/cache/appdata/other/rclone/service_accounts/sa_tdrive.json
     team_drive = xxxxxxxxxxxxxxxxxx # look at the url in gdrive for the ID
     server_side_across_configs = true
  14. Backup moves files deleted from the local folder to another folder on gdrive for a chosen number of days, so if you accidentally delete something you can restore it from gdrive (roughly the mechanism sketched below).
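     As far as I remember, under the hood this is just rclone's --backup-dir plus a scheduled prune - roughly like this, with the remote and folder names only as examples:

       # sync mirrors local deletions to gdrive, but --backup-dir moves the deleted files to a backup folder instead of removing them
       rclone sync /mnt/user/local/gdrive_vfs gdrive_vfs: --backup-dir "gdrive_vfs:backup"
       # prune anything that has sat in the backup folder longer than the chosen retention
       rclone delete "gdrive_vfs:backup" --min-age 30d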
  15. You don't have to use teamdrives. But it's recommended you take the extra steps to get them up and running, because after a while most users come across one or more of the following problems:
     - wanting to upload more than 750GB/day by adding more users
     - wanting to share the remote, not just Plex access, or use it on another PC
     - performance issues once a lot of content has been loaded, fixed by splitting into multiple teamdrives that are merged locally
     Read the GitHub post, which describes best how to use SA files.
  16. I don't know how much of a file Plex has to scan to profile it. If you want to experiment, I think reducing --drive-chunk-size might help. This controls how big the first chunk is that Plex requests - 128M in my settings. Try 64M and 256M and share how you get on. I chose 128M as in my testing this was the best chunk size on my setup at the time to get the fastest playback launch times, i.e. I wasn't trying to optimise scans. Once you've done the first scan it gets a lot faster - almost normal speeds. --buffer-size is just how much of each stream is held in memory; it shouldn't affect scanning, e.g. 8 streams would be 8 x 256M max for this script.
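     For reference, the two flags sit in the mount command like this (just an excerpt - the rest of your mount settings stay as they are, and gdrive_vfs is an example remote name):

       # --drive-chunk-size: the value discussed above - 128M here, try 64M and 256M and compare launch/scan times
       # --buffer-size: per-stream read-ahead held in RAM, so 8 streams = 8 x 256M max
       rclone mount gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs \
         --drive-chunk-size 128M \
         --buffer-size 256M \
         --daemon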
  17. You don't have a remote called gdrive_vfs in your rclone config.
  18. Run the command "rclone version" as the plugin might not have installed the latest version of rclone. If it's less than v1.51 then uninstall and reinstall the plugin to get the latest.
  19. Yes. Mergerfs looks at the local location first so if you want to play the cloud copy, you need to delete the local copy. The script already does this - just set the upload script to 'move' and then the MinimumAge to 30d.
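     In the upload script that's just these two settings (variable names as I remember them - check your copy):

       RcloneCommand="move"   # move rather than copy, so the local file is removed once uploaded
       MinimumAge="30d"       # only files older than 30 days get uploaded (and therefore deleted locally)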
  20. Not sure of the context of @watchmeexplode5's advice, but it's best to forget /mnt/user/local exists and focus all file management activity on /mnt/user/mount_mergerfs, as there's less chance of something going wrong.
  21. you're overthinking things. Just treat the mergerfs folder like a normal folder where you add, move, edit etc files and don't worry about what happens in the background and let rclone via the scripts deal with everything.
  22. You have a duplicate directory in your destination, not your source - rclone picks this up. Gdrive allows duplicate directories or files with the same name. If you're worried or anal about file management, you can investigate and manually delete the dups from gdrive, or check out the rclone dedupe commands, which are really good and can clean up server-side dupes (examples below). Again though, dupes on gdrive's side don't affect the mount as they aren't loaded.
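     If you do want to tidy them up, the dedupe commands look something like this (gdrive_vfs is an example remote name - run with --dry-run first to see what it would touch):

       rclone dedupe --dedupe-mode interactive gdrive_vfs:
       rclone dedupe --dedupe-mode newest gdrive_vfs: --dry-run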