Spladge

Everything posted by Spladge

  1. Has anybody else updated this recently and stopped seeing any of their devices or networks? EDIT: The default site reverted to an older, empty site and the selection menu moved - everything is still there.
  2. Sorry, I didn't see this for some reason. I stopped using it and migrated to a NUC. I was doing some messing around and making changes to unraid, and decided to keep the Home Assistant stuff isolated.
  3. `--vfs-read-ahead` defaults to zero (no read ahead), so if you have not specified a value it will still be 0 - or false, or off; I am not sure, that is the one thing I have not messed with. As far as improving seeking goes, I think whatever you have in `--vfs-read-ahead` is the maximum it can help by: if you want to FFWD to a point 600MB ahead of where you are and the read-ahead value is 400MB, then I think it will make no difference, but it might also make a difference of 400MB depending on other settings. The best thing for me to do is probably to create two different teamdrive mounts with different settings and compare that way (there is a rough mount sketch after this list). I will try to set up a small VPS; the biggest problem may be that my home internet connection is not that fast, and connections from most remote servers to Australia are poor.
  4. I have been testing the VFS settings fairly extensively for the past few months on the betas - not with unraid, just my rclone mount scripts on straight ubuntu. What caching will really help with is jumping around, forwards and backwards in a file, and, as you mentioned, with things like your kids watching the same file over and over. If you were sharing with a few people, the benefit would show up with something like the Game of Thrones linux ISO finale where everyone tunes in to watch on release day. Essentially you are keeping the file local. I keep jumping between full and writes (there is a cache-mode sketch after this list). If there is anything you want tested, maybe I can help.
  5. Ha - I did not even think of that! I'll see if I can modify my ubuntu version somehow. It has all the pieces but currently relies on systemd. Do you remove the mergerfs section and run it in a single script to avoid non-empty errors?
  6. So I am finally sitting down to migrate to mergerfs and the new scripts, and the first thing I am wondering is how people are coping with mounting multiple teamdrives here. Just adding them as manual mounts? Or using variables like RcloneRemoteName01, RcloneRemoteName02, and so on? Or am I missing something obvious? (There is a loop sketch after this list.)
  7. I might finally get a chance to migrate to mergerfs and the new mount scripts
  8. I have not tried but it would be easy enough to set up a test library and just put a few things in it.
  9. Note that I have not checked yet, but is there an option / flag / fork for strictly unencrypted remotes? If not, I might convert once you have settled everything down, and perhaps it could sit in another branch?
  10. Well, it's very simple - I have one sonarr that looks after everything from 1980-1989, another for 1990-1999, another for 2000-2009, and so on. Same for movies. Those are then structured the same way in teamdrives - `teamdrive_name:tv/1980s` - and then it all comes together in a merger under `/mergerdir/tv`.
  11. You just set up mergerfs - you don't really have to do anything else. I use a bunch of different sonarrs and radarrs to manage my splits. Year-based is the easiest because you can allow for that in the series-level folder, and for movies too. If you need some help with moving stuff between team drives I can help you there (there is a server-side move sketch after this list). The uploaders, I guess, can be switched, but movies will not likely ever exceed the limit of a single drive. For TV I currently use four different team drives - five if you count anime.
  12. You may wish to consider the 400k object limit on teamdrives - it includes files, directories, and stuff in the trash (there is a quick way to check the count after this list). An rclone move into a merged My Drive after upload can get you around that, but it's still another remote, and My Drive doesn't really support service accounts properly.
  13. I think you can easily use options other than numbering - numbering works easiest for me, but random or alphanumeric names would probably work too.
  14. Both - but it depends what you are doing. This was made to avoid API bans when doing something like tdarr: if you have a movies drive and a TV drive and so on, splitting them means a block on one remote does not affect the others. Did you look at my simple rgen script for the config file? https://github.com/maximuskowalski/smount/blob/master/rgen.sh I sometimes use a few rclone.conf files and just swap them out - different batches of service accounts from different projects - so `service_account_file = /opt/sa/76.json` becomes `service_account_file = /opt/sa2/76.json` (there is a swap sketch after this list).
  15. Maybe worth having a look at this if you are interested in cycling service accounts on your mount. https://github.com/Visorask/SARotate
  16. I used a different service account generator, but it is fairly easy to rename them all (the .json files) to consecutive numbers, or however you wish to use them if you are cycling (there is a renaming sketch after this list). If you have gsuite admin access you can add the accounts to a group as a CSV file; this tool creates the CSV files as well. https://github.com/88lex/sa-gen
  17. Actually - I do have a script if you want to copy and paste the config; I use it for making a lot of remotes. It's a subscript of my sharedrive mounter. You could populate an existing rclone.conf file pretty easily, but I usually have them all there already (rclone can also create remotes non-interactively - see the sketch after this list). If you have a client ID / secret / token for an existing remote you could use that too. https://github.com/maximuskowalski/smount/wiki/Rclone I should add that I do not use encryption.
  18. Yes - I meant including the variables on the wiki / instructions. The docker would replace the rclone app - what I meant by that is you could supply a pre-filled rclone.conf file (with variables) to match up to the script you have here (there is a template sketch after this list). Just another idea in terms of automating - not suggesting there is anything wrong.
  19. Perhaps mention what those variables are if they need to be used / changed? There is also an rclone docker from hotio available that I have been meaning to try; it may make scripting easier.
  20. NC will allow you to edit / rename / move within the merger system but not create any new files. So new stuff gets made in the right spot and then uploads according to your setup.
  21. Mergerfs reads left to right, so in your example when it tries for a file it will load it from cache first, media_array second, and so on. You can use RO, RW and NC (no create) to determine where files get written in the merger: `mergerfs /mnt/user/media_cache=RW:/mnt/user/media_array=NC:/mnt/disks/media_remote=NC:/mnt/disks/media_team=NC /mnt/disks/media`
  22. DZMM - server-side wasn't available until relatively recently, but it sure is useful.
  23. To monitor the changes you could try a PAS docker - I have a combined Plex/PAS docker but have not set it up - or do what @Stupifier does: a slightly modified version of another script (plex_rcs) that monitors the log file and initiates a Plex scan of that directory via the API (a minimal sketch of that API call follows this list). https://github.com/zhdenny/plex_rcs
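
A minimal sketch of the two-mount read-ahead comparison from the VFS posts above. The remote names (`tdrive_a:`, `tdrive_b:`) and mount paths are placeholders, and the values are just something to compare against each other, not recommendations; note that `--vfs-read-ahead` only takes effect with `--vfs-cache-mode full`.

```bash
# Mount the same teamdrive twice with different read-ahead values,
# then seek around in the same file on each mount and compare.
rclone mount tdrive_a: /mnt/test_a \
  --vfs-cache-mode full \
  --vfs-read-ahead 0 \
  --buffer-size 16M \
  --allow-other --daemon

rclone mount tdrive_b: /mnt/test_b \
  --vfs-cache-mode full \
  --vfs-read-ahead 400M \
  --buffer-size 16M \
  --allow-other --daemon
```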
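
For the full-versus-writes question in the same posts, a rough sketch of a capped `full` cache mount with a placeholder remote and path. With `--vfs-cache-mode writes` only files being uploaded are cached locally; with `full`, read data is kept on disk too, which is what helps repeat viewing and jumping around.

```bash
# Keep read data on local disk, capped by size and age so it doesn't fill the drive.
rclone mount tdrive: /mnt/tdrive \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 48h \
  --dir-cache-time 1h \
  --poll-interval 15s \
  --allow-other --daemon
```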
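
On mounting multiple teamdrives: rather than numbered variables, one option is a plain loop over the remote names. This is only a sketch - it assumes the remotes already exist in rclone.conf, and the local paths shown are made up.

```bash
#!/bin/bash
# Mount each teamdrive remote, then pool them (plus a local branch) with mergerfs.
remotes=(tv_1980s tv_1990s tv_2000s movies)

for r in "${remotes[@]}"; do
  mkdir -p "/mnt/remotes/$r"
  rclone mount "$r:" "/mnt/remotes/$r" \
    --vfs-cache-mode writes --allow-other --daemon
done

# Build the mergerfs branch list: local branch writable, remotes no-create.
branches="/mnt/local=RW"
for r in "${remotes[@]}"; do
  branches="$branches:/mnt/remotes/$r=NC"
done

mkdir -p /mnt/merged
mergerfs "$branches" /mnt/merged -o allow_other,category.create=ff
```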
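
For moving things between team drives in the decade-split setup, a hedged sketch using rclone's server-side support between Google Drive remotes. The remote names are placeholders; `--drive-server-side-across-configs` lets Google do the copy instead of downloading and re-uploading through your own connection.

```bash
# Preview moving the 1990s shows from one teamdrive remote to another, server-side.
rclone move tdrive_old:tv/1990s tdrive_1990s:tv/1990s \
  --drive-server-side-across-configs \
  --dry-run

# Re-run without --dry-run once the listing looks right.
```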
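
A quick way to keep an eye on the 400k object limit is `rclone size`, shown here with a placeholder remote name. It reports file count and total size, so treat the number as a floor - folders and anything sitting in the trash also count toward the limit.

```bash
# File count and total size for one teamdrive remote.
rclone size tdrive_1990s:

# Empty the teamdrive's trash so deleted items stop counting toward the limit.
rclone cleanup tdrive_1990s:
```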
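
The service-account swap described above can be done with a one-liner against rclone.conf, pointing every `service_account_file` line at the other batch of keys. The paths and conf location are assumptions; back the file up first.

```bash
# Switch all remotes from the /opt/sa batch of keys to the /opt/sa2 batch.
cp ~/.config/rclone/rclone.conf ~/.config/rclone/rclone.conf.bak
sed -i 's|service_account_file = /opt/sa/|service_account_file = /opt/sa2/|' \
  ~/.config/rclone/rclone.conf
```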
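
Renaming a directory of generated service-account keys to consecutive numbers, as mentioned in the sa-gen post, can be a short loop. A sketch only - it assumes all the .json files sit in one directory (path made up) and are not already numbered.

```bash
#!/bin/bash
# Rename every .json key in the directory to 1.json, 2.json, 3.json, ...
cd /opt/sa || exit 1
i=1
for f in *.json; do
  mv -n "$f" "$i.json"   # -n avoids clobbering if a numbered file already exists
  i=$((i + 1))
done
```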
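
Rclone can also add remotes non-interactively, which is roughly what a config-generating script wraps. A hedged sketch - the remote name, teamdrive ID and key path are placeholders; `rclone config create` takes the remote name, the backend type, then key/value pairs.

```bash
# Create a Google Drive remote pointed at a teamdrive, using a service account key.
rclone config create tv_1990s drive \
  team_drive TEAMDRIVE_ID \
  service_account_file /opt/sa/1.json \
  scope drive
```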
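
One way to ship a pre-filled rclone.conf with variables, as suggested in the docker post, is to keep a template and render it with `envsubst` (from gettext). Entirely a sketch - the variable names and values are made up for illustration.

```bash
# Template with shell-style placeholders.
cat > rclone.conf.template <<'EOF'
[${REMOTE_NAME}]
type = drive
scope = drive
team_drive = ${TEAMDRIVE_ID}
service_account_file = ${SA_FILE}
EOF

# Fill the placeholders and write the real config.
export REMOTE_NAME=tv_1990s TEAMDRIVE_ID=TEAMDRIVE_ID SA_FILE=/opt/sa/1.json
envsubst < rclone.conf.template > ~/.config/rclone/rclone.conf
```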
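
What plex_rcs and similar tools ultimately do is hit Plex's partial-scan endpoint for the directory that changed. A minimal sketch of that call - the section ID, path and token are placeholders.

```bash
# Ask Plex to rescan just one directory in library section 1.
curl -s -G "http://localhost:32400/library/sections/1/refresh" \
  --data-urlencode "path=/mnt/merged/tv/Show Name/Season 01" \
  --data-urlencode "X-Plex-Token=$PLEX_TOKEN"
```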