Everything posted by DZMM

  1. It's slower for me as well. I did a few tests (copying a big tar file over SSH) to my 'local' mergerfs mount:
     Array-2-Array (Disk 2 --> Disk 1): 108MB/s
     Array-2-SSD (Disk 2 --> MX500): 188MB/s
     Array-2-Mergerfs (Disk 2 --> Disk 1): 84MB/s
     #1 and #3 should be roughly the same. Separately, I need to work out what's wrong with my mergerfs command, as #3 was supposed to write to my SSD, not my array:
     mergerfs /mnt/disks/ud_mx500/local:/mnt/user/local /mnt/user/mount_mergerfs/local -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=lus,cache.files=partial,dropcacheonclose=true,moveonenospc=true,minfreespace=150G
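     To see which branch a write actually lands on, mergerfs exposes per-file xattrs through the mount - a quick check, assuming getfattr is available on your system and your mergerfs build has the xattr runtime interface enabled:
     dd if=/dev/zero of=/mnt/user/mount_mergerfs/local/testfile bs=1M count=100   # write a test file through the pooled mount
     getfattr -n user.mergerfs.basepath /mnt/user/mount_mergerfs/local/testfile   # prints the branch (SSD or array) that actually received it
     rm /mnt/user/mount_mergerfs/local/testfile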
  2. @jrdnlc I would add a log file as a custom command https://rclone.org/docs/#log-file-file to see what's going on. As far as I can see, rclone should be deleting the files.
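     As a rough example, the extra flags could look something like this in the upload script's custom command/flags setting (the log path and level are just placeholders - adjust to suit):
     --log-file=/mnt/user/appdata/other/rclone/upload.log --log-level=INFO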
  3. Are those the folders you want to see deleted? They will get re-created every 10 mins. If not, then something else is recreating your folders, or your folders aren't set up correctly, as rclone deletes source folders after uploading.
  4. What do you have set for MountFolders= in the mount script and do you have it running on a cron job? If so, it'll recreate those folders. Otherwise, I'm out of ideas.
  5. The upload script is uploading files from /mnt/cache/mount_upload
  6. GT710 - can be found very cheap on eBay, as it's only £30 or so new.
  7. Can you give an example and post your script options, as it should be deleting source folders.
  8. @neow here are my Plex mappings. Use similar ones for all dockers
  9. 1. Created a 2TB UD pool /mnt/disks/ud_mx500 out of 2x1TB SSDs.
     2. Before my rclone and mergerfs mounts are built etc, I create an extra mergerfs mount of #1 and my array-only share /mnt/user/local:
     mergerfs /mnt/disks/ud_mx500/local:/mnt/user/local /mnt/user/mount_mergerfs/local -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=lus,cache.files=partial,dropcacheonclose=true,moveonenospc=true,minfreespace=150G
     The combination of category.create=lus (least used space - the SSD always wins as it's smaller than the array/drive - not sure which is used in the calc, but it works) and minfreespace=150G (don't store on the SSD if it has less than 150G free) seems to work the way I want, with new files going onto the SSD and only onto the array if the SSD is full. The SSD sometimes dips to around 70-80GB free, but never lower. It keeps files off my faster cache nvme drive, as the SSDs are fast enough to max out my linespeed both up and down.
     3. Then I add /mnt/user/mount_mergerfs/local as my local location in my scripts:
     # REQUIRED SETTINGS
     RcloneRemoteName="tdrive_vfs"
     LocalFilesShare="/mnt/user/mount_mergerfs/local"
     RcloneMountShare="/mnt/user/mount_rclone"
     MergerfsMountShare="/mnt/user/mount_mergerfs"
     DockerStart="duplicati nzbget qbittorrentvpn lazylibrarian radarr radarr-uhd radarr-collections sonarr sonarr-uhd plex ombi tautulli LDAPforPlex letsencrypt organizrv2"
     MountFolders=\{"downloads/complete,downloads/seeds,documentaries/kids,documentaries/adults,movies_adults_gd,movies_kids_gd,tv_adults_gd,tv_kids_gd,uhd/tv_adults_gd,uhd/tv_kids_gd,uhd/documentaries/kids,uhd/documentaries/adults"\}
     # OPTIONAL SETTINGS
     LocalFilesShare2="/mnt/user/mount_rclone/gdrive_media_vfs"
     LocalFilesShare3=""
     LocalFilesShare4=""
     /mnt/user/mount_rclone/gdrive_media_vfs is my rclone mount for my music and photos. I don't add these to the tdrive I use for plex media, as combined it pushes me over the 400k object limit.
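     For reference, a minimal sketch of how that extra mount could be created near the top of the mount script, before the rclone/mergerfs mounts are built (the mountpoint guard is my own addition, not part of the original scripts):
     # only create the pooled 'local' mount if it isn't already there
     if ! mountpoint -q /mnt/user/mount_mergerfs/local; then
         mkdir -p /mnt/user/mount_mergerfs/local
         mergerfs /mnt/disks/ud_mx500/local:/mnt/user/local /mnt/user/mount_mergerfs/local -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=lus,cache.files=partial,dropcacheonclose=true,moveonenospc=true,minfreespace=150G
     fi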
  10. I had problems back in the day with /mnt/disks - I use /mnt/user/mount_rclone and /mnt/user/mount_mergerfs. The limits are 750GB/user/day upload and I think 10TB/day download. I think you're overthinking things - rclone doesn't add any extra considerations to your local setup, other than bandwidth and enough storage for local files that are pending upload. For my setup I have made the following choices:
     1. Plex appdata on an unassigned nvme - probably overkill, but I want my library browsing to be as fast as possible and the drive was on sale.
     2. A mergerfs union of 2 old unassigned 1TB SSDs in a pool and /mnt/user/local - if the SSD pool is full then new nzbget/qbittorrent files get added to the array instead, i.e. like a 2nd cache pool. I do this to avoid my new nvme cache drive, to try and avoid 'noisy' writes to a HDD, and because I need an SSD to keep up with my download speed.
     Not sure what's going on there.
  11. @Switchblade Home Assistant Core is the new name for Home Assistant, and Hass.io is now called Home Assistant. https://www.home-assistant.io/blog/2020/01/29/changing-the-home-assistant-brand/
  12. You can just copy the rclone.conf for that system to /boot/config/plugins/rclone - assuming you are using the unraid rclone plugin.
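     A minimal example from the console (the source path is a placeholder for wherever your existing config lives; if the plugin has already created a config file in that folder, keep its filename):
     cp /path/to/old-system/rclone.conf /boot/config/plugins/rclone/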
  13. Thanks @trapexit. I think the message is getting out now and those unlucky enough to have installed the build with issues have probably updated by now.
  14. @TechnicallyConfused you've got me confused... you linked to my scripts in the first post. If you're using them, you just need to manually delete the 'bad' build of mergerfs. Read the last couple of posts in the thread you linked to.
  15. @Urya see a few posts up about the mergerfs build issues and how to fix them.
  16. Which makes sense: there's now a real copy of the file on gdrive because the local hardlinked 'version' has gone, so there's no need for a local hardlink anymore!
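     If you want to confirm whether a local file still has other hardlinks pointing at it, the link count is easy to check (a quick sketch - the path is just an example):
     stat -c '%h %n' /mnt/user/local/tv/example.mkv   # first number is the hardlink count; 1 means no other links remain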
  17. Mergerfs is behaving as expected. LocalFilesShare2/complete & MergerfsMountShare are seen as two different 'drives', so you get a slow CoW, whereas the last two are moving files within MergerfsMountShare, so it hardlinks. ALL dockers have to use paths within the mergerfs mount to avoid issues like this, i.e. Deluge needs to download to MergerfsMountShare/Complete - not a local path. Don't use the local paths for anything - my advice is: for day-to-day usage, forget they're there.
  18. @teh0wner Not sure - looks right. I have to ask - do you have 'use hardlinks instead of copy' selected? Maybe you have to go up a level. I use /user -> /mnt/user/ for all my mappings and it seems to work all the time.
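     For illustration, here's what that mapping style boils down to on two containers (the docker run form and image names are just examples - on Unraid you'd set the same host path -> container path mapping in each container's template):
     # both containers see all of /mnt/user at /user, so a move from a download folder to a library folder
     # inside the mergerfs mount stays on one filesystem and can be hardlinked instead of copied
     docker run -d --name=qbittorrent -v /mnt/user:/user linuxserver/qbittorrent
     docker run -d --name=sonarr -v /mnt/user:/user linuxserver/sonarr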
  19. Out of interest, what have you got for your Plex delete settings? I can't remember the exact wording, but did you disable the 'delete automatically if missing' (or words to that effect) option?
  20. No, disable autostart and let the scripts launch the dockers when the mounts are ready
  21. Read the last couple of posts as there was a problem with a mergerfs build that the author resolved
  22. Thanks - not quite, but it pointed me in the right direction, i.e. actually reading all the text and adding the line to /config/www/nextcloud/config/config.php as stated. I set up nextcloud a while ago, so this must be a 'recent' addition.
  23. Does anyone know how to fix this error: "The reverse proxy header configuration is incorrect, or you are accessing Nextcloud from a trusted proxy. If not, this is a security issue and can allow an attacker to spoof their IP address as visible to the Nextcloud. Further information can be found in the documentation." I'm using the example config from LE:
     # make sure that your dns has a cname set for nextcloud
     # assuming this container is called "letsencrypt", edit your nextcloud container's config
     # located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
     #   'trusted_proxies' => ['letsencrypt'],
     #   'overwrite.cli.url' => 'https://nextcloud.my-domain.com/',
     #   'overwritehost' => 'nextcloud.my-domain.com',
     #   'overwriteprotocol' => 'https',
     #
     # Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
     #   array (
     #     0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
     #     1 => 'nextcloud.your-domain.com',
     #   ),
     server {
         listen 443 ssl;
         listen [::]:443 ssl;
         server_name nextcloud.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
     #    location ~ /auth-(.*) {
     #        internal;
     #        proxy_pass http://192.168.50.17:80/api/?v1/auth&group=$1;
     #        proxy_set_header Content-Length "";
     #    }
         location / {
     #        auth_request /auth-4; #=User
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             proxy_max_temp_file_size 2048m;
             proxy_pass https://192.168.50.85:443;
         }
     }
     Thanks in advance.
  24. What command did you use to get the list? I just tried docker ps, and it didn't include mergerfs.
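     In case it helps, a couple of ways to check from the Unraid console rather than docker (assuming the mergerfs binary is on the PATH):
     mergerfs -V                 # prints the installed mergerfs build/version
     findmnt -t fuse.mergerfs    # lists the active mergerfs mounts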