Everything posted by DZMM

  1. It looks like you've created /mnt/user/mount_rclone/google_vfs so all should be good. Are you running the script in the background?
  2. Are you using the user scripts plugin to run the mount script?
  3. Thanks. My tdrive_backup_vfs includes my plex appdata backup, but I use the CA backup plugin so it's just one big tar file.
  4. Ahh, didn't realise folders counted. My backup tdrive is further away from the limit than I thought:

     root@Highlander:/mnt/user/public# rclone size tdrive_vfs: --fast-list
     Total objects: 72443
     Total size: 322.704 TBytes (354816683643653 Bytes)

     root@Highlander:/mnt/user/public# rclone size tdrive_backup_vfs: --fast-list
     Total objects: 80245
     Total size: 8.307 TBytes (9133993530852 Bytes)

     Edit: are you sure folders count as objects? My movie structure is /movies/movie_name, so each of my 16k movies has an associated folder, meaning my object count should be over 80k, not 72k.
  5. What performance problems have you seen beyond 200k files? I've just checked and I'm at around 70k items for my plex media library tdrive, but I have another tdrive for backups which is probably over 200k files due to all the little system files.

     06.07.2019 00:50:41 PLEX LIBRARY STATS
     Media items in Libraries
     Library = Movies       Items = 16147
     Library = TV - Adults  Items = 34549
     Library = TV - Kids    Items = 19054
  6. This is for streaming files not backing up. But, yes you could use it for moving your videos to the cloud - I'm not sure about moving photos as I haven't tried that yet.
  7. fixed on github. Sorry about that - not sure how that went missing
  8. I run my mount script on a 5 min cron job */5 * * * * so even if I'm unlucky and the unmount hasn't run yet, it gets fixed on the next run 5 mins later
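     The 5-minute cron approach above can be sketched roughly as below. This is a minimal illustration, not the actual script from this thread: the script path is made up, and the check simply skips mounting when the mount is already live, which is what makes re-running it every 5 minutes harmless.

     ```shell
     # Hypothetical user-scripts custom cron schedule:
     # */5 * * * * /boot/scripts/rclone_mount.sh

     # Inside the mount script: only mount if not already mounted,
     # so the job is a no-op when everything is healthy.
     if ! mountpoint -q /mnt/user/mount_rclone/google_vfs; then
         rclone mount gdrive: /mnt/user/mount_rclone/google_vfs --daemon
     fi
     ```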
  9. Run the unmount script at array start rather than shutdown - that's what I do
  10. I started this way and eventually, as I got more comfortable, I went all in for plex media, and I've also moved lots of other files, to the point where my server is just photos and personal docs. I've gone from around 44TB in my array to only 16TB, of which only around 4TB is permanent. Re seeking, to my mind it's got better since I started using rclone, but that might just be because I've got used to it now that I've gone all in. Have you tried experimenting with --buffer-size? I think a bigger buffer might help with forward seeking (not backward of course)
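      On the --buffer-size point, it's just an extra flag on the mount command. A hedged example (the remote name and 256M value are illustrative, not settings from this thread): each open file gets its own read-ahead buffer, so forward seeks may land in data already buffered, at the cost of RAM per open file.

      ```shell
      # Larger per-file read-ahead buffer; may help forward seeks,
      # costs RAM for every concurrently open file.
      rclone mount gdrive: /mnt/user/mount_rclone/google_vfs \
          --buffer-size 256M \
          --daemon
      ```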
  11. This setup supports that - i.e. plex, sonarr, radarr etc all act as normal. Sonarr 'looks' at the mount_unionfs folder so it functions normally i.e. grab->download->move to media folder - all the rclone script behaviour is invisible to sonarr. The upload script 15m delay ensures that Sonarr has finished any post-processing (renaming etc) before the file is uploaded.
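      The 15-minute safety delay described above maps to rclone's --min-age flag: files younger than the threshold are skipped, so anything Sonarr is still renaming won't be uploaded mid-rename. A sketch with assumed paths and remote name:

      ```shell
      # Only move files older than 15 minutes, so Sonarr/Radarr
      # post-processing (renaming etc.) has finished before upload.
      rclone move /mnt/user/rclone_upload gdrive: --min-age 15m
      ```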
  12. Hi, I'm getting this error in my logs but it still seems to be working:

      nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)
      nginx: [error] lua_load_resty_core failed to load the resty.core module from https://github.com/openresty/lua-resty-core; ensure you are using an OpenResty release from https://openresty.org/en/download.html
      (rc: 2, reason: module 'resty.core' not found:
          no field package.preload['resty.core']
          no file './resty/core.lua'
          no file '/usr/share/luajit-2.1.0-beta3/resty/core.lua'
          no file '/usr/local/share/lua/5.1/resty/core.lua'
          no file '/usr/local/share/lua/5.1/resty/core/init.lua'
          no file '/usr/share/lua/5.1/resty/core.lua'
          no file '/usr/share/lua/5.1/resty/core/init.lua'
          no file '/usr/share/lua/common/resty/core.lua'
          no file '/usr/share/lua/common/resty/core/init.lua'
          no file './resty/core.so'
          no file '/usr/local/lib/lua/5.1/resty/core.so'
          no file '/usr/lib/lua/5.1/resty/core.so'
          no file '/usr/local/lib/lua/5.1/loadall.so'
          no file './resty.so'
          no file '/usr/local/lib/lua/5.1/resty.so'
          no file '/usr/lib/lua/5.1/resty.so'
          no file '/usr/local/lib/lua/5.1/loadall.so')
  13. yes - if you copy a file that already exists in mount_rclone I think unionfs overwrites it.
  14. I've just followed your lead and removed my parity drive as well - can't believe I didn't consider this before, as all my media is in the cloud and my personal files are backed up there as well, so I have limited local storage needs now. Hmm, this is one for me to investigate as I have 3 W10 VMs running 24/7 on i440fx
  15. It got pushed back, but it looks like the necessary changes to rclone union allowing unionfs to be dropped will be in the next release - 1.49 https://github.com/ncw/rclone/milestone/36
  16. Hmm not sure what's going on there. Maybe set the episode to unmonitored in sonarr?
  17. Yes!! If you organise your files in one big folder, when Plex is told there is a change it will scan all files in that folder. Having a folder per movie is more efficient as Plex will only scan that folder for changes
  18. post your logs. I've done full library scans/update metadata and not had problems. The bans only last 24 hours
  19. not sure. I've never done that. I assume if you change the rclone config then it will kill any necessary processes.
  20. it took me a few hours of trial and error to sort the counters. Once you've finished moving your existing content you should be able to upload from just one folder like me, if your upload and download speeds are the same (i.e. content is shifted just as fast as it's added) - you just need enough accounts to ensure no individual account uploads more than 750GB/day and messes up the script for up to 24 hours until the ban lifts. I cap my upload scripts at 70MB/s, so if I uploaded 24/7 I'd do about 6TB/day, so I'd need at a minimum 6000/750=8 users... but I use about double that in case something goes wrong.
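      To double-check the maths (assuming decimal units, 1TB = 1000GB): a constant 70MB/s actually comes out a shade over the round 6TB figure, so the strict ceiling is 9 accounts rather than 8, which is another reason over-provisioning is sensible.

      ```shell
      # 70 MB/s sustained for a day, against the 750 GB/day per-account cap
      upload_mb_per_day=$((70 * 86400))                 # 6,048,000 MB
      upload_gb_per_day=$((upload_mb_per_day / 1000))   # 6048 GB, i.e. ~6 TB
      accounts=$(( (upload_gb_per_day + 749) / 750 ))   # round up
      echo "$upload_gb_per_day GB/day needs at least $accounts accounts"
      ```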
  21. Yes, that'd be the problem. Within gdrive you need to share the teamdrive with each user email, and then when you create the remote, use that account to create the token. A few tips:

      1. Don't use the remote/user account you mount for uploading, to make sure your mount always works for playback.
      2. If you're bulk uploading and you are confident there is no more than 750GB in each of your sub upload folders, I would run your move commands sequentially ONCE A DAY rather than all at the same time, with the bwlimit set at say 80% of your max. Running multiple rclone move commands at the same time uses up more memory. You'll still get the same max transfer per day, with less ram usage.
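      Tip 2 might look something like the following. The remote and folder names are made up for illustration, and the --bwlimit value assumes roughly a 10MB/s line capped at 80%; the point is simply that each move finishes before the next starts, so only one rclone process holds memory at a time.

      ```shell
      # Run once a day; moves are sequential, not parallel.
      rclone move /mnt/user/rclone_upload/user1 tdrive_user1: --bwlimit 8M
      rclone move /mnt/user/rclone_upload/user2 tdrive_user2: --bwlimit 8M
      rclone move /mnt/user/rclone_upload/user3 tdrive_user3: --bwlimit 8M
      ```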