Everything posted by DZMM

  1. @TechnicallyConfused you've got me confused... you linked to my scripts in the first post. If you're using them, you just need to manually delete the 'bad' build of mergerfs. Read the last couple of posts in the thread you linked to.
  2. @Urya see a few posts up about the mergerfs build issues and how to fix them
  3. Which makes sense: there's now a real copy of the file on gdrive because the local hardlinked 'version' has gone, so there's no need for a local hardlink anymore!
  4. Mergerfs is behaving as expected. LocalFileShare2/complete & MergerfsMountShare are seen as two different 'drives', so you get a slow CoW, whereas the last two are moving files within MergerfsMountShare, so it hardlinks. ALL dockers have to use paths within the mergerfs mount to avoid issues like this, i.e. Deluge needs to download to MergerfsMountShare/Complete - not a local path (rough sketch below). Don't use the local paths for anything - my advice is, for day-to-day usage, forget they're there.
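     For illustration, a minimal sketch of that mapping for Deluge, assuming the mergerfs mount sits at /mnt/user/mount_mergerfs/gdrive (hypothetical paths - substitute your own):

        # map the mergerfs mount into the container, not the local folder
        docker run -d --name='deluge' \
            -v '/mnt/user/mount_mergerfs/gdrive':'/MergerfsMountShare':'rw' \
            'linuxserver/deluge'

        # then point Deluge's download folder at /MergerfsMountShare/Complete,
        # so completed moves stay on one 'drive' and hardlinks keep working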
  5. @teh0wner Not sure - looks right. I have to ask - do you have 'use hardlinks instead of copy' selected? Maybe you have to go up a level. I use /user -> /mnt/user/ for all my mappings and it seems to work all the time.
  6. Out of interest, what have you got for your Plex delete settings? I can't remember the exact wording, but did you disable the 'delete automatically if missing' (or words to that effect) option?
  7. No, disable autostart and let the scripts launch the dockers when the mounts are ready
  8. Read the last couple of posts as there was a problem with a mergerfs build that the author resolved
  9. Thanks - not quite, but it pointed me in the right direction, i.e. actually reading all the text and adding the line to /config/www/nextcloud/config/config.php as stated (see below). I set up nextcloud a while ago, so this must be a 'recent' addition
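     For reference, the kind of line meant - it's the one from the LE example quoted in the next post, added before the closing ");" in config.php (the 'letsencrypt' value assumes your proxy container has that name):

        'trusted_proxies' => ['letsencrypt'],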
  10. Does anyone know how to fix this error:

        The reverse proxy header configuration is incorrect, or you are accessing Nextcloud from a trusted proxy. If not, this is a security issue and can allow an attacker to spoof their IP address as visible to the Nextcloud. Further information can be found in the documentation.

     I'm using the example config from LE:

        # make sure that your dns has a cname set for nextcloud
        # assuming this container is called "letsencrypt", edit your nextcloud container's config
        # located at /config/www/nextcloud/config/config.php and add the following lines before the ");":
        #   'trusted_proxies' => ['letsencrypt'],
        #   'overwrite.cli.url' => 'https://nextcloud.my-domain.com/',
        #   'overwritehost' => 'nextcloud.my-domain.com',
        #   'overwriteprotocol' => 'https',
        #
        # Also don't forget to add your domain name to the trusted domains array. It should look somewhat like this:
        #   array (
        #     0 => '192.168.0.1:444', # This line may look different on your setup, don't modify it.
        #     1 => 'nextcloud.your-domain.com',
        #   ),

        server {
            listen 443 ssl;
            listen [::]:443 ssl;

            server_name nextcloud.*;

            include /config/nginx/ssl.conf;

            client_max_body_size 0;

            # location ~ /auth-(.*) {
            #     internal;
            #     proxy_pass http://192.168.50.17:80/api/?v1/auth&group=$1;
            #     proxy_set_header Content-Length "";
            # }

            location / {
                # auth_request /auth-4; #=User
                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                proxy_max_temp_file_size 2048m;
                proxy_pass https://192.168.50.85:443;
            }
        }

     Thanks in advance.
  11. What command did you use to get the list? I just tried docker ps, and it didn't include mergerfs
  12. @teh0wner @Spatial Disorder I don't know what to do. The script's working, but mergerfs seems wonky for you. I built mine 3 days ago and I'm on:

        root@Highlander:/mnt/user/public/mergerfs# mergerfs --version
        mergerfs version: 2.29.0-5-gada90ea
        FUSE library version: 2.9.7-mergerfs_2.29.0
        fusermount3 version: 3.9.0
        using FUSE kernel interface version 7.31

     I'd ping @trapexit or raise an issue on github to see if he can shed any more light. Hopefully it can be resolved, whatever the problem is. Until then, I guess I won't reboot! In the meantime you could switch back to unionfs for a bit - everything will still work, you just won't get the file management benefits. Just use fusermount to unmount whatever mergerfs created, and then create a unionfs mount (rough sketch below).
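     For illustration, a minimal sketch of that fallback, assuming the mergerfs mount lives at /mnt/user/mount_mergerfs/gdrive with local files at /mnt/user/local/gdrive and the rclone mount at /mnt/user/mount_rclone/gdrive (hypothetical paths - substitute your own):

        # lazily unmount the existing mergerfs mount
        fusermount -uz /mnt/user/mount_mergerfs/gdrive

        # recreate it with unionfs: local branch writable, rclone mount read-only
        unionfs -o cow,allow_other \
            /mnt/user/local/gdrive=RW:/mnt/user/mount_rclone/gdrive=RO \
            /mnt/user/mount_mergerfs/gdrive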
  13. Are you sure? It looks like it didn't install mergerfs, hence there's nothing to mv. There should be pages and pages of text after the docker run command. Also, the test build I did is in /mnt/user/appdata/other/test/mergerfs, so you need to do:

        mv /mnt/user/appdata/other/test/mergerfs /bin
  14. @teh0wner all looks ok. I just did a test install of mergerfs:

        mkdir -p /mnt/user/appdata/other/test/mergerfs
        docker run -v /mnt/user/appdata/other/test/mergerfs:/build --rm trapexit/mergerfs-static-build

     and all was ok:

        mergerfs version: 2.29.0-5-gada90ea
        FUSE library version: 2.9.7-mergerfs_2.29.0
        using FUSE kernel interface version 7.31
        'build/mergerfs' -> '/build/mergerfs'

     I'd ping @trapexit or raise an issue on the mergerfs site as all is well for me.
  15. @teh0wner can you post your chosen mount options, as I think you've got something wrong in there. @Spatial Disorder have you tried installing mergerfs again since the change?
  16. If I understand what you're asking correctly, I think you'll have to create a union remote with your ro remotes, and then add them to another union with a ff or similar policy after your rw remotes (rough sketch below). It's not quite as functional as mergerfs yet, but maybe post your use case on the thread now while they're building it, as I think ncw did a shout-out for use cases.
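     For illustration, a minimal sketch of that nesting in rclone.conf, assuming two read-only remotes ro1 and ro2 and a writable remote rw1 (all hypothetical names), using the multiwrite union options from the beta linked in a later post:

        # inner union bundling the read-only remotes
        [union_ro]
        type = union
        upstreams = ro1: ro2:

        # outer union: writable remote listed first, read-only bundle behind it,
        # so the ff (first found) create policy sends new files to rw1
        [union_all]
        type = union
        upstreams = rw1: union_ro:
        action_policy = all
        create_policy = ff
        search_policy = ff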
  17. @trapexit thanks for building the docker and taking the time to register to let us know about the update. Appreciate it! To be honest I haven't noticed any problems, but will never say no to more security!
  18. Luckily, the next release of rclone looks like it will include rclone union, so we'll have an all-in-one solution:

        https://forum.rclone.org/t/multiwrite-union-test-beta/14458/43?u=binsonbuzz
        https://github.com/rclone/rclone/blob/pr-3782-union/docs/content/union.md

     Current mergerfs command is:

        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

     Union uses most of the same commands, so the equivalent that the script can create (not sure what happens with 'rclone config update' if the remote doesn't exist yet):

        rclone config update $RcloneUnionRemoteName --union-upstreams $LocalFilesLocation $RcloneMountLocation --union-action-policy all --union-create-policy ff --union-search-policy ff --union-cache-time ????

     I'm not sure what the best union-cache-time will be yet - I've asked on the rclone forums. Once the remote is made, it will just need mounting:

        rclone mount $RcloneUnionRemoteName: $RcloneMountLocation &

     I'm not sure whether things like dir-cache-time, drive-chunk-size, vfs-read-chunk-size etc will be needed on the union remote mount, or whether setting them on the child rclone mount will be enough - I think it will be, as the union is just accessing files via the upstreams.
  19. I just tried the latest version again and I need to roll back to 0.6.5-ls50 (didn't try other builds, just picked this one first) for the UI to load. I get this error in the logs with the latest version:

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "/app/calibre-web/cps.py", line 34, in <module>
            from cps import create_app
          File "/app/calibre-web/cps/__init__.py", line 68, in <module>
            config = config_sql.load_configuration(ub.session)
          File "/app/calibre-web/cps/config_sql.py", line 348, in load_configuration
            update({"restricted_tags": conf.config_mature_content_tags}, synchronize_session=False)
          File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/query.py", line 3910, in update
            update_op.exec_()
          File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1692, in exec_
            self._do_exec()
          File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1873, in _do_exec
            values = self._resolved_values
          File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/persistence.py", line 1839, in _resolved_values
            desc = _entity_descriptor(self.mapper, k)
          File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/base.py", line 402, in _entity_descriptor
            "Entity '%s' has no property '%s'" % (description, key)
        sqlalchemy.exc.InvalidRequestError: Entity '<class 'cps.ub.User'>' has no property 'restricted_tags'

        Traceback (most recent call last):
          File "/usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/base.py", line 399, in _entity_descriptor
            return getattr(entity, key)
        AttributeError: type object 'User' has no attribute 'restricted_tags'

     My docker run command:

        root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='calibre-web' --net='br0.33' --ip='192.168.30.72' --cpuset-cpus='8,9,13,24,25,29' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8083'='8083' -e 'DOCKER_MODS'='linuxserver/calibre-web:calibre' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/':'/user':'rw' -v '/mnt/user/media/other_media/books/':'/books':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/mnt/user/media/other_media/books/':'/library':'rw' -v '/mnt/cache/appdata/dockers/calibre-web':'/config':'rw' 'linuxserver/calibre-web'
  20. Does anyone have a custom script they are running from Sonarr? Mine used to work, but they fail the Sonarr Test. I've read that Sonarr sends a test event and looks for an exit code of 0, but even with the simple script below it says the exit code is 255. Does anyone know what I'm doing wrong? Thanks

        #!/bin/bash
        exit 0
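     For illustration, a quick way to rule out permissions and check the exit code outside Sonarr, assuming the script lives at /scripts/test.sh (a hypothetical path). An exit code of 255 can mean the process failed before your script ran (e.g. permissions or a missing interpreter) rather than the script itself returning non-zero:

        # make the script executable by the user Sonarr runs as
        chmod +x /scripts/test.sh

        # run it by hand and print the exit code - should print 0
        /scripts/test.sh; echo $?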
  21. Feature Request: Can we add paths to pre- and post-processing scripts from User Scripts in the future please, e.g. /boot/config/plugins/user.scripts/scripts/CA-Backup_Stop/script, which doesn't work at the moment. Thanks
  22. I'm impressed you uploaded that much content so quickly - or did you just stress test using your own kit? In my experience, if you have enough bandwidth then the only limit on Plex is the same as if the files existed locally, i.e. have you got enough CPU power to handle the streams (and RAM for rclone), because as far as Plex is concerned it's just opening normal media files.
  23. @teh0wner mergerfs doesn't have anything to do with moving files from local to the cloud - the upload script does that.
     - If you don't want files from local to be uploaded, then don't run the upload script.
     - If you want to add 2 local folders to mergerfs for Plex etc and not have one uploaded, then you'll have to do some mergerfs tinkering. It's not hard - you just have to read up a bit on mergerfs.
  24. New files added to the mergerfs mount get added to the local folder and then moved to the cloud via the upload script (rough sketch below). Changes to files already in the cloud happen in the cloud, without downloading and re-uploading (a la unionfs). You can safely add files to local if you want, but adding them directly to the rclone mount isn't advised, as writing directly to the mount outside rclone move isn't 100% reliable. mergerfs isn't a physical folder, so files can't be 'moved' from it - files are moved to the cloud from the real local location. Correct, although it's the folder you should be using for all activities, i.e. sonarr, plex etc, as it lets them see all available files regardless of whether they're in the cloud or local waiting to be uploaded.
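     For illustration, a minimal sketch of what the upload step does, assuming the local folder is /mnt/user/local/gdrive and the remote is called gdrive: (hypothetical names - the real upload script does much more, e.g. lock handling and scheduling):

        # move settled local files to the cloud; mergerfs keeps showing
        # them in the merged mount throughout
        rclone move /mnt/user/local/gdrive gdrive: \
            --min-age 15m \
            --delete-empty-src-dirs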