Everything posted by DZMM

  1. All your problems are because your mount script should have:

     LocalFilesShare="/mnt/user/local"

     not:

     LocalFilesShare="ignore"

     "ignore" is saying you don't want a mergerfs mount at all. So what's happening is that nzbget is writing directly to the rclone mount i.e. straight to gdrive, so I imagine it will be very slow and cause problems! LocalFilesShare2, LocalFilesShare3 and LocalFilesShare4 are the ones you want set to "ignore" if you don't want to add extra paths to your mergerfs mount.

     Edit: Your docker mappings are correct i.e. to /mnt/user and then within the docker to mount_mergerfs. You just need to fix the mount script. The easiest way is to correct the settings, stop all the dockers that are adding or moving files or accessing the mount e.g. plex, then re-run the script and then start the dockers.
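     A minimal sketch of how that part of the mount script settings should look, assuming the thread's usual share names - the remote name and the MergerfsMountShare line are my assumptions, adjust everything to your own setup:

       RcloneRemoteName="gdrive_media_vfs"   # assumption - whatever your remote is called in rclone config
       LocalFilesShare="/mnt/user/local"     # local staging share - must NOT be "ignore"
       LocalFilesShare2="ignore"             # extra mergerfs branches - "ignore" if unused
       LocalFilesShare3="ignore"
       LocalFilesShare4="ignore"
       MergerfsMountShare="/mnt/user/mount_mergerfs"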
  2. --buffer-size in the mount script. What client are you using, and is it the same for all? I've read in the past that iOS keeps opening and closing files, which can cause problems. Yes, so I don't think mergerfs was the culprit. Browsing the mounts is perceptibly faster, so I think the fault was Google's or rclone's.
  3. Oh, and also make sure that your dockers' paths are all pointing to paths inside the mergerfs mount, not the local path. You should only ever have to use the local path in special circumstances.
  4. The trick is to ensure that sab/nzb/deluge etc are all in alignment with radarr/sonarr etc to get hardlink support. You need wherever your download clients are downloading to, to appear to be on the same drive when radarr etc look at it i.e. it's the docker mappings that count. To make this as easy and foolproof as possible, ALL of my dockers have the same mappings:

     - /user --> /mnt/user
     - /disks --> /mnt/disks

     Just these - nothing else. That way, no matter what paths I use within dockers, they will always appear as being on the same 'drive' for inter-docker moves, and I get the maximum file transfer benefit. I.e. for you, within Sab you would download to /user/downloads/sabnzb via Sab's settings. Radarr would look at /user/downloads/sabnzb (same /user mapping) and then move the files to /user/media/movies/whatever i.e. everything 'stays' on the 'user' drive.
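     To make that concrete, the container mappings look roughly like this - image names are placeholders, the point is that every container gets the identical pair of mappings:

       docker run -d --name sabnzbd -v /mnt/user:/user -v /mnt/disks:/disks <sabnzbd-image>
       docker run -d --name radarr  -v /mnt/user:/user -v /mnt/disks:/disks <radarr-image>

       # Inside the containers:
       #   Sab downloads to    /user/downloads/sabnzb
       #   Radarr imports from /user/downloads/sabnzb and moves to /user/media/movies/...
       # Both paths sit under the same /user mapping, so moves are instant and hardlinks work.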
  5. The mount script is supposed to create the file - hopefully you won't have this problem again.
  6. Is initial playback faster? You could try increasing the buffer size.
  7. The script is looking for the mountcheck file and failing. Can you post your mount and upload settings please?
  8. Ok, I've finished splitting my movie & TV teamdrive into 3 and I can testify that performance is better, with launch times back to around 3-4 seconds, whereas before they were at times 10+ seconds. I don't know where the tipping point is for creating extra teamdrives - I'm sharing my teamdrive stats to see if we can figure it out. I've got 3 now:

     - Main:
       rclone size tdrive_vfs: --fast-list
       Total objects: 66914
       Total size: 258.232 TBytes (283928878628609 Bytes)

     - Adult TV:
       rclone size tdrive_t_adults: --fast-list
       Total objects: 55550
       Total size: 118.501 TBytes (130292861122569 Bytes)

     - UHD (TV & Movies):
       rclone size tdrive_uhd: --fast-list
       Total objects: 4706
       Total size: 69.986 TBytes (76950733274565 Bytes)

     and an extra rclone mount for music that isn't in a teamdrive:

       rclone size gdrive: --fast-list
       Total objects: 95393
       Total size: 4.418 TBytes (4857512752388 Bytes)

     I've only got a total of 223K objects (451TB), but I think my experience proves it's not worth getting anywhere near the 400k object limit if you want good performance. I might create a 4th teamdrive and move my kids' movies and TV shows to it to see if that knocks another second or two off launch times.
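     If you want to keep an eye on object counts against the 400k teamdrive limit, a quick loop does the job - remote names here are just the ones from this post, swap in your own:

       for remote in tdrive_vfs tdrive_t_adults tdrive_uhd gdrive; do
           echo "== ${remote} =="
           rclone size "${remote}:" --fast-list
       done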
  9. gdrive has no 400k limit so I'm using it for my music. I've just set up 2 extra teamdrives - one for UHD content and one for my grown-up TV (not kids). The UHD server-side move went ok, although the larger TV one is a bit worrying - I can't see the files on Google, but the move has happened on the mount and Plex is still playing everything (although files are appearing in both the source and destination)! It's a fair few TBs so I assume Google will catch up soon....
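     For reference, a server-side move between teamdrives looks something like this - the remote and folder names are illustrative, not my actual config:

       # Move UHD content from the main teamdrive to the new UHD teamdrive
       # without downloading/re-uploading anything
       rclone move tdrive_vfs:uhd tdrive_uhd: --drive-server-side-across-configs --fast-list -v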
  10. Ok, I'm going to split out my 4K content to start with (easiest) and then some of my movies to see if that helps.
  11. @Bjur I recommend you have just one mergerfs mount and add the second rclone remote as an additional local folder - I think I proposed this earlier (can't remember). I do however take a slightly different approach for my music, which would push my tdrive over 400K because of all the tracks and folders - I add the files to my main tdrive, but then do an overnight rclone move to gdrive, where there's no object limit.

      Now that is interesting! I've noticed that navigating Plex and launching files has been slow of late and I'm wondering if this is the cause. At the moment I have all my movies and TV shows in one tdrive - I think the time has come to create a couple more. Do you still aggregate your remotes into one mergerfs mount, or do you have multiple mergerfs mounts?
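      The overnight music move is just a scheduled rclone move from the teamdrive to gdrive - a sketch of the User Scripts job, with remote and folder names assumed rather than taken from my actual setup:

        #!/bin/bash
        # Nightly: shift music off the teamdrive to gdrive, where there's no 400k object limit
        rclone move tdrive_vfs:music gdrive:music --drive-server-side-across-configs --min-age 15m --fast-list -v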
  12. My suspicion is it's a pending upload, but again I've never bothered to investigate.
  13. @Bjur if you want to upload to different locations you need multiple instances of the script.
  14. The latest version of the script supports move, copy and sync. I'm not really understanding what you want to do. If you want to point two encrypted remotes at the same folder on gdrive, just do that in rclone config i.e.

      [remote1]
      type = crypt
      remote = gdrive:crypt

      [remote2]
      type = crypt
      remote = gdrive:crypt

      Bear in mind both crypt remotes need the same passwords if they're meant to read each other's files.
  15. RcloneCommand="sync" (better) or RcloneCommand="copy"

      Another script.

      You could get API bans for other reasons, so it's a good idea to isolate your remotes.
  16. To achieve this I would add the local folder you want to appear in the mergerfs mount but not be uploaded to gdrive as LocalFilesShare2="path to folder you don't want uploading". If you change your mind, just move the files to LocalFilesShare - nothing will 'move' in the mergerfs mount, but the files will get uploaded.
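      In script terms that looks roughly like this - the paths are made up, point them at your own shares:

        LocalFilesShare="/mnt/user/local"             # files here DO get picked up by the upload script
        LocalFilesShare2="/mnt/user/local_no_upload"  # extra mergerfs branch - visible in the mount, never uploaded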
  17. The old unmount script used to actually unmount but it can't now since I added the ability for the mount to be anywhere.
  18. Yes, you need to create a share folder e.g. mount_rclone like the default in the script and then mount the remote there. If you don't use the share for anything but mounting, the share settings don't matter as nothing should get stored there.
  19. Where are you mounting? Directly in /mnt/user, or in a share under /mnt/user? If it's the former, that's why you're getting errors.
  20. Try experimenting with different buffer sizes (usually higher) and the vfs chunk size in the mount script. VFS chunk size changes will affect startup times - higher values will slow launch, but usually fix buffering problems. I'm surprised you're having problems at 200Mbps though - what average Mbps is Plex/Tautulli reporting for the file? Anyone else got any ideas, as buffering problems are very rare?
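      The flags in question sit on the rclone mount line of the mount script - typical values to experiment with look like this (the numbers are a starting point rather than a recommendation, and the remote/path are the thread's defaults):

        rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
            --allow-other \
            --buffer-size 256M \
            --vfs-read-chunk-size 128M \
            --vfs-read-chunk-size-limit off \
            --dir-cache-time 720h &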
  21. @drogg does the 4K play ok if the file is local? Assuming you've got enough bandwidth, there shouldn't be a problem if you can play a local version. I'm on 360Mbps and I have tonnes of concurrent activity when I'm playing 4K.