Everything posted by DZMM

  1. What's your objective? - Main tdrive getting too big? Create a new rclone mount with a new tdrive, and disable the mergerfs share in that new script. Then, in your original script, add the new rclone mount as an extra local folder. - Want another mergerfs mount for Plex or similar with just your TV shows? I would manually create a 2nd mergerfs mount that merges the rclone remote that's already mounted and the corresponding local folder. That way you've only got 1 rclone mount of the remote.
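For the second scenario, a minimal sketch of what that manual 2nd mergerfs mount could look like. All paths, the remote name, and the option string are assumptions based on the main script's style, not your config; the command is echoed for review rather than executed, and a temp dir stands in for the real target:

```shell
#!/bin/bash
# Merge the TV folder of the already-mounted rclone remote with the
# matching local folder into a second, TV-only mergerfs mount.
# LOCAL/REMOTE are example paths - adjust to your setup.
LOCAL="/mnt/user/local/gdrive_vfs/tv"
REMOTE="/mnt/user/mount_rclone/gdrive_vfs/tv"   # mounted by the main script
TARGET="$(mktemp -d)"   # stand-in for e.g. /mnt/user/mount_tv

# Echoed rather than run so you can review it first; requires mergerfs.
CMD="mergerfs $LOCAL:$REMOTE $TARGET -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true"
echo "$CMD"
```

Because it reuses the remote that's already mounted, no extra rclone mount (or API quota) is involved - only mergerfs sees a new view.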
  2. It's in the script settings: MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
  3. When you say your download tanks, how much are we talking? I suspect your download is being impacted by the need to send acknowledgement (ACK) packets - basically, when you download something you tell the sender "I received the last packet ok, please send more". If you're hammering the upload at the same time, you might be limiting your download potential if you don't prioritise your ACK packets. I prioritise mine with my pfSense VM and you wouldn't believe the difference it can make to even an average connection! Not one for this post, but definitely worth investigating. Or, just limit your upload to whatever speed doesn't kill your download and leaves you enough bandwidth to do other things e.g. when I had a gig connection I capped both at 800Mbps.
  4. I have had the same problem with all builds greater than beta25, and I've been unable to post diagnostics as my system completely freezes.
  5. - mount_rclone and mount_mergerfs are virtual folders, so it doesn't matter. I've set mine to 'no' though. - /local - user choice whether to use a faster cache or pool drive, or the array. I've set mine to 'no' as I don't need fast access and files don't tend to hang around long before being uploaded. I do use a separate /downloads share for my nzbget intermediate files, saved on a pool drive, with completed files moved to the array. You need to remount, as the cache size is set when you do the mount. I've gone for 400GB as that works well with the size of my array - I've got about 7 mounts, so it's 7x400GB = 2.8TB of cached files in total out of my 16TB of storage. My two array drives are spun up pretty much 24x7 and I don't have a parity drive to slow them down, so I don't think I'd benefit from an SSD or NVMe for the rclone cache. Remember these files are separate from the Plex metadata files, which are small and numerous, so those do benefit from a fast drive. If I were you, I'd just use a normal HDD outside of your array so your parity setup doesn't slow the drive down.
  6. Not sure if this is related https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta35-available-r1125/page/8/?tab=comments#comment-11646
  7. Can you file a report in the beta35 thread please. I don't know if it's related, but beta30 and beta35 cause my machine to completely freeze/crash and I have to do a hard reset. If anyone else is successfully using beta35 then please shout out!
  8. FYI Important Security update to fix potentially vulnerable passwords: https://forum.rclone.org/t/rclone-1-53-3-release/20569
  9. https://github.com/BinsonBuzz/unraid_rclone_mount/commits/latest---mergerfs-support/rclone_mount
  10. Roll back. I can't explain the behaviour you're seeing, as I've been running the latest rclone version for a while without any problems.
  11. Are you sure it's downloading everything - maybe Plex is analysing files as part of scheduled maintenance? If you want to reduce the size of the cache, reduce the size of RcloneCacheMaxSize="400G"
  12. I managed to run beta35 for about 5 mins before it totally locked me out/crashed - the VM stopped working, and when I tried to connect to the server via the LAN, the dashboard wouldn't load. No joy with SSH either. I had the same problem with the other betas between 25 and 35, although I don't think the lockup occurred as fast with the sub-35 builds. Beta25 has been running fine for me. I've attached the diagnostics I captured after I rebooted into safe mode to roll back, to see if they shed any light on why I can't run the builds beyond beta25. highlander-diagnostics-20201116-1418.zip
  13. Have you looked at the actual path to see if there are two files there? I'm not sure if it's an rclone/script or sonarr/radarr issue, but this happens to me sometimes as well e.g. the same file like in your scenario, or two versions of the same show/movie. If I spot them and can be bothered, I tidy up, but I have so much content now that if it plays I don't do anything. The one thing I am anal about is fixing Plex posters, as I hate the text-heavy ones it seems to default to! Also movie ratings, as I like to have mine consistent because I use them to filter my kids' libraries e.g. they can only see GB/U, GB/PG and GB/12.
  14. Not sure. Sometimes it stops working (rarely) and I have to do a quick tidy-up e.g. my dockers might not have stopped in time and have managed to physically add files to /mount_mergerfs, which I then have to move manually to /local so I can re-mount.
  15. Thanks for the new beta. The write-up went right over my head as someone who's not very technical e.g. I can follow the instructions in the GPU Driver section on how to configure the files... but I don't know why I would do this, or when it might or might not be beneficial. What does it potentially allow me to do? Thanks. I'm keen to give this one a go as I'm still stuck on beta25, since the releases between it and this one kept causing my machine to hang and lock me out.
  16. I've just updated the mount script to support local file caching. In my experience this has vastly improved playback and reduced transfer, and is definitely worth the upgrade. To utilise it you need to be on rclone v1.53+. The new toggles to set are in the REQUIRED SETTINGS block:

     RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
     RcloneCacheMaxSize="400G" # maximum size of rclone cache
     RcloneCacheMaxAge="336h" # maximum age of cache files

     I use /user0 as my location as I have 7 teamdrives mounted, so I don't have enough space on my SSD. Choose wherever works for you. https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount
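Roughly, those toggles feed rclone's VFS cache flags at mount time. A sketch of the mapping (the remote name and mount path are examples, not your config; the command is echoed for review rather than run):

```shell
#!/bin/bash
# The three toggles from the script, with the values from the post.
RcloneCacheShare="/mnt/user0/mount_rclone"
RcloneCacheMaxSize="400G"
RcloneCacheMaxAge="336h"

# How they roughly end up in the mount command; "gdrive_vfs" is an
# example remote name. Echoed rather than executed.
CMD="rclone mount gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs \
  --vfs-cache-mode full \
  --cache-dir=$RcloneCacheShare/cache \
  --vfs-cache-max-size $RcloneCacheMaxSize \
  --vfs-cache-max-age $RcloneCacheMaxAge"
echo "$CMD"
```

`--vfs-cache-mode full` (new in rclone 1.53) is what makes the on-disk file cache possible; the max-size/max-age flags keep it pruned.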
  17. Is there a way to check in a script if an appdata backup or restore is occurring? I have another script running that restarts dockers if they aren't running - I just realised I need it to NOT restart dockers IF the appdata backup script is running. Thanks in advance.
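One workaround, if the plugin doesn't expose a status itself, is a lock file that the backup hooks touch on start and remove on finish; the path and the whole mechanism here are assumptions, a sketch only:

```shell
#!/bin/bash
# Guard the docker-restart logic behind a lock file (path is an
# assumption) that a backup pre-script touches and a post-script removes.
LOCK="${TMPDIR:-/tmp}/appdata_backup.lock"
if [ -f "$LOCK" ]; then
  STATUS="backup in progress - skipping docker restarts"
else
  STATUS="no backup running - safe to restart stopped dockers"
fi
echo "$STATUS"
```

If you can't hook the plugin, another option is `pgrep -f` against the backup process's command line - run `ps aux` while a backup is in progress to find the real name to match on your system.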
  18. @Lucka glad you got them all working without any hiccups. When they run smoothly in the background, it's really good if you have enough bandwidth. It's saved me thousands of pounds in storage and a fair chunk in electricity costs from fewer HDDs spinning. Using traktarr is a good addition.
  19. Just the upload script, if it's upload speed you're adjusting. Whatever you pass on the command line i.e. in the scripts, overrides the settings in the rclone config file.
  20. The upload script checks for the presence of the mountcheck file, created by the mount script, in the right place. That check is failing - check that you've mounted correctly and/or that your remote names match.
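A rough sketch of that check, with a temp dir standing in for the real mount point (folder names are examples):

```shell
#!/bin/bash
# Simulate the mount script creating "mountcheck" inside the mounted
# remote, then the upload script's test for it. mktemp stands in for
# the real mount point, e.g. /mnt/user/mount_rclone/gdrive_vfs.
MOUNT="$(mktemp -d)"
touch "$MOUNT/mountcheck"   # done by the mount script once the mount is up

if [ -f "$MOUNT/mountcheck" ]; then
  echo "mountcheck found - safe to upload"
else
  echo "mountcheck missing - check the mount and that remote names match"
fi
```

If the remote name in the upload script doesn't match the one in the mount script, it looks in the wrong folder and the check fails even though the mount is fine.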
  21. This is a very old method and has been replaced with the use of Service Accounts
  22. If you use the mergerfs version of the scripts, which supports hardlinks, this is all taken care of i.e. the files stay local until removed from your torrent client.
  23. Good work! In the scripts you can set BWLimits and schedules to fit your connection/usage. If you've only got 4.375MB/s, I would recommend scheduling that speed for overnight only. You can try playing around with --drive-chunk-size etc to see if that helps if you're really trying to squeeze out a few more MB/s.
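rclone's `--bwlimit` flag also accepts a timetable, so an overnight-only schedule can live in a single flag rather than multiple cron entries. Times, speeds, paths and the remote name below are examples only; the command is echoed for review rather than run:

```shell
#!/bin/bash
# Timetable form of --bwlimit: ~4MB/s during the day, unlimited from
# 23:00. "gdrive_vfs" and the local path are placeholders.
SCHEDULE="08:00,4M 23:00,off"
CMD="rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: --bwlimit \"$SCHEDULE\""
echo "$CMD"
```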