Everything posted by DZMM

  1. I think you and @perhansen might be right. I've been getting high memory usage lately and I've purchased another 32GB that's waiting to go in once a big processing job finishes in a day or two. Will report back if all ok afterwards. Thanks
  2. I'm getting some weird behaviour with the GUI highlander-diagnostics-20200221-0134.zip
  3. How did you add in the log files? Worth adding to the main script? I agree you can do some nice stuff with mergerfs. What I've done is create a pool for each of the local shares for my tdrives; I then do one upload job from that pool to one tdrive, and then superfast server-side moves to put the files in the right teamdrives. This means I have one upload script running, rather than multiple ones eating up RAM.
  4. Help please. I'm trying to call another user script from a user script.

```shell
# script1
#!/bin/bash
some code
exit
```

```shell
# script2
#!/bin/bash
source /boot/config/plugins/user.scripts/scripts/script1/script
more code
exit
```

The problem I'm having is that the exit in script1 (the child) is exiting the parent script2 as well. Is there a way to avoid this happening and let the parent resume? I found this on the net https://unix.stackexchange.com/questions/217650/call-a-script-for-another-script-but-dont-exit-the-parent-if-the-child-calls-e/356018 but when I try ./boot/config/plugins/user.scripts/scripts/script1/script I get 'no such file or directory'.
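The child kills the parent because source runs the child's commands in the parent's own shell, so the child's exit is the parent's exit. Running the child in a subshell (or as its own process) confines the exit. The 'no such file or directory' error comes from the leading dot: ./boot/... is a path relative to the current directory, while the script lives at the absolute path /boot/... A minimal sketch, using a temp file to stand in for script1:

```shell
#!/bin/bash
# A temp file stands in for /boot/config/plugins/user.scripts/scripts/script1/script
child=$(mktemp)
printf '%s\n' 'echo "child ran"' 'exit 3' > "$child"

# `source "$child"` here would terminate THIS script at the child's exit,
# because source runs the child's commands in the parent's shell.

# Option 1: source inside a subshell - the exit only ends the subshell
( source "$child" )
child_status=$?   # the child's exit code; the parent keeps going

# Option 2: run it as a separate process (same effect, no shared variables)
bash "$child"

rm -f "$child"
echo "parent resumed (child exited with $child_status)"
```

Option 1 keeps the child's variable assignments visible to the subshell only; option 2 is usually the cleaner choice when the scripts don't need to share state.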
  5. My request is similar. I have a pfsense VM which means my server doesn't have connectivity until the VM has started up. This means I currently run some scripts as */5 * * * * to give the VM 5 mins to startup. What I really want to do is run the script just once xx mins after array start, rather than having it run every 5 mins forever. On reflection, I guess I could add a sleep 300 to my array start script, but it'd be nice if I could control this in my scheduling.
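Until the scheduler supports a delay, the sleep-at-array-start idea can be made a little smarter by polling for connectivity instead of a fixed 5-minute wait. A sketch, with a small hypothetical wait_for helper - the probe target (8.8.8.8) and the script path in the comment are placeholders:

```shell
#!/bin/bash
# Retry a probe command until it succeeds, up to a given number of tries.
# WAIT_INTERVAL (seconds between tries) defaults to 5.
wait_for() {
    local tries=$1; shift
    local interval=${WAIT_INTERVAL:-5}
    local i
    for ((i = 0; i < tries; i++)); do
        "$@" >/dev/null 2>&1 && return 0
        sleep "$interval"
    done
    return 1
}

# e.g. in an "At First Array Start Only" user script: wait up to ~5 mins
# for the pfsense VM to route traffic, then run the real script once
# (both the probe and the path below are illustrative placeholders):
# wait_for 60 ping -c1 -W2 8.8.8.8 && /boot/config/plugins/user.scripts/scripts/myscript/script
```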
  6. Thought of a way to do this. For now, just run fusermount -uz /path/to/rclonemount, and do the same for the mergerfs mount.
  7. @KeyBoardDabbler it mounted to that location because of the settings you put in the config. You make a good point about the cleanup script: it now cleans up at array start rather than actually unmounting, as mounts could now be anywhere thanks to multiple mount instances (I have 4, for example). I do need to find a way for people to unmount without a reboot or doing it manually with my scripts.
  8. @KeyBoardDabbler post your rclone.conf (without the passwords) and the config section at the top of the script, please.
  9. Upload Script 0.95.2. Thanks to @watchmeexplode5 for helping me fix my own failure to use my script properly, which led to him adding another nice simplification: --drive-service-account-file added to the upload remote, removing the need to add a remote that isn't 'used' for the upload job. Before, if using an encrypted remote e.g. gdrive_vfs:, the service account was added to gdrive:. Now, via --drive-service-account-file, the rotating of SAs is done in gdrive_vfs: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_upload
  10. Because the script errored out before, you still have the mount_running file there. Delete it manually with rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running, or run the unmount script first to clean everything up.
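For anyone wondering why the stale file blocks the script: mount_running is used as a simple lock file, created at the start of a run and removed on a clean exit, so a crash mid-run leaves it behind. A minimal sketch of the pattern (a temp dir stands in for the real /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName path):

```shell
#!/bin/bash
# Temp dir stands in for /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName
lockdir=$(mktemp -d)
lockfile="$lockdir/mount_running"

run_mount() {
    if [[ -f "$lockfile" ]]; then
        echo "script already running (or a previous run crashed) - exiting"
        return 1
    fi
    touch "$lockfile"
    # ... mount work happens here; a crash at this point strands the
    # lock file, which is why a failed run then needs a manual rm ...
    rm "$lockfile"
    return 0
}
```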
  11. @KeyBoardDabbler sorry the extra check I added (too) quickly without testing for @bar1 broke the mount script. Please try 0.96.4 https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount
  12. @KeyBoardDabbler one slash too many: LocalFilesShare="//mnt/user/local" Edit: whoops sorry realised that was me - must have slipped in when updating on GitHub. Fixed.
  13. It sounds like it's failing to copy the mountcheck file to the remote, i.e. something is wrong with your rclone config and it doesn't have permission to add files to Google. I've added a new section to the mount script to check whether the mountcheck copy succeeded - try the latest version 0.96.2+:

```shell
# Check if mountcheck was successfully copied
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} mountcheck successfully copied. Proceeding."
else
    echo "$(date "+%d.%m.%Y %T") ERROR: ${RcloneRemoteName} mountcheck copy failed. Please investigate your rclone config and settings."
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi
```

And once again - can you please post the whole log output, formatted, so people can read it? There's a lot of messaging in there that will help people get you up and running.
  14. @bar1 to be clear, that meant remove the passwords. It's also not very helpful that you're not posting the actual script you are running. Looking at your logs, the last line processed before the errors is:

```shell
echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
```

and the next line, which doesn't seem to be working, should be:

```shell
touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
```

My only guess is the script doesn't like whatever your remote 'Bar1-xxx' is called - try underscores rather than dashes.
  15. Nice way to get a 1G connection. Which provider did you go with?
  16. Can you post the full logs from when you run the script, so I can see where it's failing, and your rclone config (remove passwords), please? And can you use code formatting as well when you post, please.
  17. Thanks @Spladge I was hoping there was another way. Not a problem though
  18. I agree! Although I'm sure someone who knows how to code will wonder why on earth I'm doing certain things - I've got no real script or coding experience - I feel like I've progressed a lot in the last two weeks.

Yes, I've basically added in things that I use that I hadn't included before because I was worried they'd make support harder, e.g. bindmounts, bwlimits, multiple remotes etc, as well as making setup easier with fewer potential issues = fewer support questions + encouraging more people to try. It's also set up so that migration from unionfs or any other setup should only take a few minutes now.

Rotating service accounts is a big find for me - I think you were using my mount rotation version, which worked but looks a bit dumb now. One feature you've missed is the latest one allowing extra folders to be added easily to mergerfs - again, I think this will reduce the number of issues on this thread.

Re bindmounts - yes. I give each remote and upload remote a unique IP to stop all the traffic appearing on my unRAID IP in pfsense. This way I can traffic shape to give my streaming mounts top priority, and set various priority levels for the other remotes.

Backup/upload switch - again, added selfishly so the script can support my backup job.

Bwlimit - there's no logic, as it's hardcoded into the rclone move job. It should still work if anyone wanted to delete the config entries - I haven't tried to see what happens, but I doubt anyone would do that.

I went hard-coded as it makes the cleanup script a lot easier to run via user scripts, as the files to be deleted are always in the same place. If it was variable, users would have to enter the location they've chosen (or all the locations, if they are running multiple scripts for multiple remotes) in the cleanup script.
  19. @Spladge Thanks. I wish I'd thought about adding more folders to my main mergerfs mount before I created a new one just last month, when my music folders & files pushed me over the 400K limit - I spent a fair while having to move, rescan and fix tag errors 😞

Anyway, planning for the future, as I will probably need to create another remote at some point, and because I know you're not the only person with more than two folders in their unions, I made some updates last night to the mount script: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount

The scripts should be finished now, except for any bug fixes, as the ability to add custom commands and extra mergerfs folders should cover any future ideas.

What's your folder structure? At the moment for TV I've got the following folders added to Plex:

/mount_mergerfs/tv/kids/tv show (year)
/mount_mergerfs/tv/adults/tv show (year)
/mount_mergerfs/uhd/tv/kids/tv show (year)
/mount_mergerfs/uhd/tv/adults/tv show (year)

with 2x Sonarr dockers - one for non-UHD and one for UHD. If I add another Sonarr instance in a new remote (assuming just HD), I can't see a way around having to add two more folders to Plex:

/mount_mergerfs/new_sonarr/tv/kids
/mount_mergerfs/new_sonarr/tv/adults

Or are you saying you do this with your dockers:

Docker1: /mount_mergerfs/tv/2000/tv show (2001)/
Docker2: /mount_mergerfs/tv/2010/tv show (2012)/
Docker3: /mount_mergerfs/tv/2020/tv show (2020)/

and Plex is OK if you only add /mount_mergerfs/tv to the library, and it can work out the rest?
  20. If all the custom exclusions weren't filled in and used, it errored out. I've just fixed this so they can be empty, and now you can enter custom commands rather than just exclusions - see v0.95.1 https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_upload

```shell
# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# process files
rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $BackupDir \
	--user-agent="$RcloneUploadRemoteName" \
	-vv \
	--buffer-size 512M \
	--drive-chunk-size 512M \
	--tpslimit 8 \
	--checkers 8 \
	--transfers 4 \
	--order-by modtime,ascending \
	--min-age $MinimumAge \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--exclude *fuse_hidden* \
	--exclude *_HIDDEN \
	--exclude .recycle** \
	--exclude .Recycle.Bin/** \
	--exclude *.backup~* \
	--exclude *.partial~* \
	--drive-stop-on-upload-limit \
	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
	--bind=$RCloneMountIP $DeleteEmpty
```
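The reason the empty Command2..Command8 slots are now harmless is that the script expands them unquoted, and an unquoted empty variable disappears from the command line entirely, so rclone never sees a blank argument. A quick demonstration of that shell behaviour (note that downloads/** only stays literal as long as nothing in the working directory matches the glob):

```shell
#!/bin/bash
# Mirrors how the upload script passes $Command1..$Command8 to rclone
Command1="--exclude downloads/**"
Command2=""

# Unquoted expansion: Command1 word-splits into two arguments,
# while the empty Command2 contributes nothing at all
args=($Command1 $Command2)
echo "rclone sees ${#args[@]} extra arguments"
```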
  21. OK, updates finished: https://github.com/BinsonBuzz/unraid_rclone_mount. Tidy-ups to the mount and cleanup scripts. The upload script has some good changes:

1. Configurable --min-age as part of the config section
2. Configurable exclusions as part of the config section - I've added 8, which should be plenty
3. Service account counters work 100% now (I think!)
4. Added the ability to do backup jobs

For 99% of users this should mean the main script doesn't need touching. I added #4 to reduce the amount of edits I have to do to support my own jobs, including my backup job. Now that the main body supports my backup job, future updates will be much faster and there'll be fewer errors.
  22. @Thel1988 I'm going to ditch the cut command tonight as it's not working 100%. Will post an update later