DZMM

Everything posted by DZMM

  1. Are you using the scripts in this thread? It sounds like you're trying to do something different with rclone, which should probably be posted somewhere else.
  2. How stable is your connection? The mount can occasionally drop, and the script is designed to stop Dockers, test, and re-mount (rough sketch of the idea below). I'm fairly sure mine's only done this about 3 times though.
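     For anyone curious, the rough shape of the check - a minimal sketch, not the actual script from this thread, with placeholder paths, test file and container names:
     ```bash
     #!/bin/bash
     # Minimal sketch: look for a small test file inside the mount; if it's
     # missing, assume the mount has dropped, stop the dockers that use it,
     # remount, then start them again. All names and paths are placeholders.

     MOUNT=/mnt/user/mount_rclone/gdrive
     CHECK_FILE="$MOUNT/mountcheck"
     DOCKERS="plex sonarr radarr"

     if [[ -f "$CHECK_FILE" ]]; then
         echo "Mount OK"
     else
         echo "Mount down - stopping dockers and remounting"
         docker stop $DOCKERS
         fusermount -uz "$MOUNT"
         rclone mount gdrive_media_vfs: "$MOUNT" --allow-other --daemon
         docker start $DOCKERS
     fi
     ```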
  3. Sounds dangerous - sharing metadata is a bad idea, and I think they'd need to share the same database, which would be an even worse idea. I'd try to solve the root cause and see what's driving the API ban, as I've only had about 2 in 5 years. E.g. could you maybe schedule Bazarr to only run during certain hours? Or only give it 1 CPU so it runs slowly (rough example below)? I don't use Bazarr, but I think I'm going to start, as sometimes in shows I can't make out the dialogue, or e.g. there's a bit in another language and I need the subtitles - but I've no idea how Plex sees the files. I might send you a few DMs if I get stuck.
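     If you want to try the 1 CPU idea, something like this should work (the container name "bazarr" is a guess - use whatever yours is called):
     ```bash
     # Throttle an existing Bazarr container so its scans can't hammer the API.
     docker update --cpus="1" bazarr

     # Or pin it to a single core instead:
     docker update --cpuset-cpus="0" bazarr
     ```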
  4. IMO you can't go wrong with using /user and /disks - at least for your dockers that have to talk to each other. I think the background to dockers is that they were set up as a good way to ring-fence access and data. However, we want certain dockers to talk to each other easily! Life gets so much easier setting paths within docker web GUIs when your media mappings look like my Sonarr mappings below:
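     Roughly, the idea is something like this - hypothetical paths and images, just to show both containers seeing the media at the same /user path:
     ```bash
     # Hypothetical example: map the same host share into Sonarr and Plex at the
     # same container path, so the path Sonarr imports to is exactly the path Plex sees.
     docker run -d --name sonarr \
       -v /mnt/user/mount_mergerfs/gdrive:/user \
       lscr.io/linuxserver/sonarr

     docker run -d --name plex \
       -v /mnt/user/mount_mergerfs/gdrive:/user \
       lscr.io/linuxserver/plex
     ```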
  5. "disappear" - when and where? I'm guessing that you're not starting the dockers after the mount is active.
  6. Easily fixed in Plex (a bit harder in Sonarr). I've covered it somewhere in this thread in more detail, but the gist of the change is:
     1. Add the new /mnt/user/movies paths to Plex in the app (after adding the path to the docker, of course)
     2. Scan all your libraries to find all your new paths
     3. ONLY when the scan has completed, delete the old paths from the libraries
     4. Empty the trash
     This only works if you don't have the setting to auto-delete missing files turned on.
     Sonarr is a bit of a pain as it tends to crash a lot with big libraries, so do backups!
     1. Add the new paths to the docker and then the app
     2. Turn off download clients so it doesn't try to replace files
     3. Go to manage libraries and change the location of the folders to the new location. Decline "move files to new location"
     4. Do a full scan and it'll scan the new paths and find the existing files
     5. Turn download clients back on
     Works well if your files are all named nicely. https://trash-guides.info/Sonarr/Sonarr-recommended-naming-scheme/
  7. Thanks. I left it going overnight and it seems ok this morning. The VM is working well now and I'm frantically catching up on work
  8. I had a power cut this afternoon and after rebooting my server is running constantly at 90% CPU usage, and my main W11 VM is ultra-slow (probably because of the CPU usage). Is there anything in my diagnostics that indicates what's wrong and how to fix it? All help appreciated, as I need my VM for work and I'm only about 50% as effective on my laptop. highlander-diagnostics-20220830-2223.zip
  9. Yes - seedbox ---> gdrive, and the unraid server organises, renames, deletes files etc. on gdrive - no use of local bandwidth or storage except to play files as normal. One day I might move Plex to a cloud server, but that's one for the future (or maybe sooner than expected if electricity prices keep going up!)
  10. I agree. The only "hard" bits on other systems are installing rclone and mergerfs. But once you've done that, the scripts should work on pretty much any platform if you change the paths and can set up a cron job (rough example below). E.g. I now use a seedbox to do my nzbget and rutorrent downloads and then move completed files to google drive using my rclone scripts, with *arr running locally, sending jobs to the seedbox and then managing the completed files on gdrive - i.e. I don't really have a "local" anymore as no files exist locally, but the scripts can still handle this.
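      As a rough idea of the cron side on a non-Unraid box (script names and paths are made up - point them at wherever you keep the scripts):
      ```
      # crontab -e
      @reboot        /opt/rclone-scripts/rclone_mount.sh
      */10 * * * *   /opt/rclone-scripts/rclone_mount.sh    # periodic mount check
      0 * * * *      /opt/rclone-scripts/rclone_upload.sh   # hourly upload run
      ```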
  11. I'm not sure. I don't think so - I'm investigating
  12. Is anyone who is using rotating service accounts getting slow upload speeds? Mine have dropped to around 100 KiB/s even if I rotate accounts....
  13. Interesting - I've noticed one of my mounts slowing, but not my main ones. But I haven't watched much over the last couple of months as work has been hectic.
  14. I would, however, recommend that if you are going to be uploading a lot, you create multiple tdrives (you are halfway there with the multiple mounts) to avoid running into future performance issues. I've covered doing this before in other posts, but roughly what you do is:
      - create an rclone mount, but not a mergerfs mount, for each tdrive
      - create one mergerfs mount that combines all the rclone mounts
      - create one rclone upload remote that moves all the files to one of your tdrives
      - run another rclone script to move the files server-side from that single tdrive to their right tdrive, e.g. if you uploaded all your files to tdrive_movies, then move tdrive_movies/TV to tdrive_tv/TV (rough sketch after this list)
      Note - you can only do this if all the mounts use the same encryption passwords.
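      Rough sketch of that last server-side step - remote names are placeholders, and it only stays server-side because the tdrives share the same crypt passwords, as noted above:
      ```bash
      # Move misplaced TV from the "dump" tdrive to the TV tdrive without
      # re-downloading or re-uploading anything locally.
      rclone move tdrive_movies_vfs:TV tdrive_tv_vfs:TV \
        --server-side-across-configs \
        --delete-empty-src-dirs
      ```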
  15. 1. If all your mounts are folders in the same team drive, I would have one upload remote that moves everything to the tdrive (rough example below). Because rclone is doing the move, "it knows", so each rclone mount knows straight away that there are new files in the mount, and mergerfs is also happy.
      2. Problem goes away.
      3. Yes, you need unique combos.
      4. They are different.
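      For point 1, the upload job is basically just one move into the shared tdrive - remote and path names are placeholders:
      ```bash
      # Everything waiting locally goes to the single tdrive that all the mounts share.
      rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
        --min-age 15m \
        --delete-empty-src-dirs
      ```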
  16. Has anyone else been having problems with Plex over the last couple of weeks where it crashes overnight, and the docker can't be stopped or killed? I'm having to reboot every couple of days because of this. Or, are there other commands I can use instead of docker stop plex and docker kill plex?
  17. I'm trying to find that page - where do I go? Thanks
  18. My other account, which has my storage on it and just 1 user, got moved to Enterprise Standard for £15.30/mth when the offer period ends, which I can live with, as the effort to move so much data and get everything working again would be massive.
  19. Did anyone else get the new email about the Workspace migration and G Suite legacy continuing to be free? I'm so happy, as it was going to cost me a small fortune (my family all use it) and a massive amount of time to help them all move!!
  20. Manually move whatever files are already in the mountpoint and then run the script again (rough example below).
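      Something along these lines, with placeholder paths - make sure nothing is actually mounted first:
      ```bash
      # Move anything that got written into the (unmounted) mountpoint back into
      # the local folder, then re-run the mount script.
      fusermount -uz /mnt/user/mount_rclone/gdrive 2>/dev/null
      mv /mnt/user/mount_rclone/gdrive/* /mnt/user/local/gdrive_media_vfs/
      ```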
  21. ahh, I'd forgotten this step - solved. Thanks
  22. Thanks for replying. But in Windows it only says about 200GB is used - shouldn't it "recover" the other 26GB?
  23. I'm resurrecting this old post as I'm having problems with thin provisioning on my new W11 VM. The main vdisk is the only thing on my Buzz hdd which is showing only 5.76GB free, even though on the VM tab it says 226GB of 248GB allocated. This is despite Windows saying there is 49GB free. I've used the sparse command above which has always worked for me, but seems to be failing me now. Are there any other things I can try? Thanks in advance. highlander-diagnostics-20220515-1205.zip
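      A rough way to compare the vdisk's apparent size with what it's actually using on disk (the path is a placeholder - adjust to your domains share):
      ```bash
      # First column of ls -ls shows the real blocks used; qemu-img info shows
      # the virtual size vs the actual disk size of the image.
      ls -lsh "/mnt/disks/buzz/domains/Windows 11/vdisk1.img"
      qemu-img info "/mnt/disks/buzz/domains/Windows 11/vdisk1.img"
      ```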
  24. I'm on rc6 and I don't think I'm having any problems??