Everything posted by DZMM

  1. Maybe the error refers to the rclone mount path - is it empty?
  2. @sol there's something in the unionfs mountpoint that shouldn't be there. How are you checking? Have you tried mc?
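     As a quick sanity check (the mount path here is illustrative), list the mountpoint and look for anything unexpected:

        ls -la /mnt/user/mount_unionfs/google_vfs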
  3. Unless they are going to create their own mounts, the simplest solution is to share your Plex server.
  4. Very feasible - there are lots of ways to do it:
     1. Mount the same remote with the same decryption passwords: no issues with reading from multiple mounts. Writing will work too, but there may be a lag before files appear on other mounts until the directory cache expires. If more than one mount tries to overwrite the same file, only one will win, but there's no file corruption.
     2. Create a tdrive, share it with the 2nd gdrive account and then mount with the same decryption passwords: same health warning as above.
     3. Easiest way - share Plex servers! Although this means one machine doing all the lifting, i.e. it would be inefficient for bandwidth.
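     As a rough sketch of option 1 (the remote name and paths are illustrative, not taken from my scripts), a second machine could mount the same encrypted remote read-only to sidestep any overwrite races:

        rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs \
          --allow-other \
          --dir-cache-time 72h \
          --read-only &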
  5. Same here. I've just finished selling 7 HDDs on eBay, including my parity drive, as I have a less real-time backup strategy now (to a separate teamdrive on a cron job). The noise and heat reduction is really noticeable, and of my remaining 16TB I'm only using around 20% - with nearly 350TB in the cloud, which would cost me over £7.5k just for the drives!
  6. Just thumbnail creation, for the reasons you've listed above ;-). I had a big pre-existing library that didn't have thumbnails - creating them would have taken my server forever and I'm not sure they're needed. I don't miss them. Other guides recommend disabling media analysis too, due to potential API bans, but I've never had a problem, even when adding new content to Plex/google continuously at up to 1Gbps for multiple days at a time.
  7. I was just wondering / wanted more info, as it might help other people low on RAM. I haven't messed with my settings for around a year - I just erred on the side of caution when I posted my scripts. I was close to tinkering with them this weekend as I've had some recent buffering, but that turned out to be some bad changes I'd made to my UniFi APs, which I realised last night, and all is OK again.
  8. Nope - rclone move just won't upload anything local that's younger than 15 days. I've set mine to 15 minutes (I think) because I don't want anything local.
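     For reference, the age cut-off is just the --min-age filter on the move command - a minimal sketch, with the local path and remote name being illustrative:

        rclone move /mnt/user/local/google_vfs gdrive_media_vfs: --min-age 15m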
  9. That's interesting - are you playing any high-bitrate movies or 4K?
  10. Have you tried mounting at /mnt/user instead of /mnt/disks? I've had problems with /mnt/disks in the past. Also, what about other dockers - do they work OK?
  11. You are mounting rclone at /mnt/disks - for host path 2, have you selected R/W Slave?
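     In the Unraid docker template that's the Access Mode dropdown for the path; on the command line it would be roughly equivalent to the following (the container path and image are just for illustration):

        docker run -d --name plex \
          -v /mnt/disks/google_vfs:/media:rw,slave \
          plexinc/pms-docker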
  12. You could try reducing the number of transfers and checkers in the upload script to reduce RAM usage. An extra 8GB might do the trick, but you might need to change some of the buffer settings in the mount command if you anticipate having a lot of concurrent streams.
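     A hedged example of the kind of change I mean - the values below are just a conservative starting point, not tested recommendations:

        # fewer parallel transfers/checks = less RAM used by the upload
        rclone move /mnt/user/local/google_vfs gdrive_media_vfs: \
          --transfers 2 --checkers 2

     and in the mount command, something like --buffer-size 64M caps the RAM each open file/stream can consume.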
  13. Have you tried restarting Sonarr? I change the paths of shows/movies frequently without any problems, both from one mount folder to another and, in the past, from local to mount and vice versa.
  14. Have you looked in the rclone_mount folder? There's a small delay between files being uploaded and appearing there. To be honest, though, it sounds like you've set up your unionfs folder wrong somewhere if you're not seeing any cloud files.
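     For comparison, a typical unionfs-fuse mount overlays the local folder read-write on top of the rclone mount read-only (the paths are illustrative and may not match your setup exactly):

        unionfs -o cow,allow_other \
          /mnt/user/local/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
          /mnt/user/mount_unionfs/google_vfs

     If the RO branch points at the wrong place, you'd see exactly this symptom - local files only, no cloud files.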
  15. No backup strategy - it's been working so well for me for over a year now that if I wasn't posting in this thread, I would honestly have forgotten all about it and wouldn't even remember how I set it up! One thing I considered with one person was backing up our tdrives with each other, so that as long as we both didn't get booted at the same time, we'd have access to our content - we might get around to sorting this one day. I guess I'm assuming that if Google ever pulled the plug, I'd have a few days to download what I want and to decide if I want to build a new server with 350TB+ of storage to hold everything....
  16. Yes, everything is streamed - nothing is preserved/cached
  17. Must be something wrong with your internet connection, as my gdrive connection never drops. The mount script, if run on a cron job, auto-remounts the rclone and unionfs mounts, so maybe increase the frequency.
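     e.g. with the User Scripts plugin, a custom cron schedule like the one below re-runs the mount script every 10 minutes (the interval is just a suggestion):

        */10 * * * *

     The script auto-remounts only when a mount has dropped, so re-running it frequently should be harmless.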
  18. I'm OK as I'm way under the 400k limit - maybe use a backup utility to create a zip and then upload that?
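     A rough sketch of the zip-then-upload idea (the paths, archive name and remote are all illustrative):

        # many small files become one object against the 400k limit
        zip -r /mnt/user/local/backup.zip /mnt/user/appdata_backup
        rclone move /mnt/user/local/backup.zip gdrive_media_vfs:backups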
  19. Looks right. Re the space, that's why I always use dashes or underscores in names, to make my life easier.
  20. I'm not sure why you'd want to do this. If you want to test first, just manually copy a TV show or a movie or two to see what happens. If you really want to test the full 40GB, then in the upload script just change rclone move to rclone sync.
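     i.e. something like the sketch below (paths illustrative) - sync transfers everything but leaves the local copies in place, whereas move deletes them after upload. Note that sync also deletes files on the destination that aren't in the source, so only point it at a test folder:

        rclone sync /mnt/user/local/google_vfs gdrive_media_vfs: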
  21. Do you have a /mnt/user/mount_rclone/google_vfs folder? If not, create one - I think this will solve your problem. I've added:

         mkdir -p /mnt/user/mount_rclone/google_vfs

      to the mount script. I think there was a reason it wasn't there, but I can't remember why, so I'm adding it until someone tells me it causes a problem.
  22. It could be Plex - I would definitely turn off thumbnail creation if you have that on in your library settings. Maybe also turn off 'extensive media analysis' in scheduled tasks - others suggest this, although I've had no problems leaving it on and actually think turning it off is a bad idea. If you look in status/alerts in Plex you'll see what it's doing - it's probably analysing files or creating thumbnails.
  23. No. If you can, reboot your server and run the script in the background using the User Scripts plugin.