ChuskyX

Members
  • Posts: 8
  1. I'm using the "latest" tag, so it could be expected to carry the latest Borg version. Thinking about it, Borg has a lot of compatibility issues between versions: you need to convert repositories, change scripts, etc. Maybe the "latest" tag points to the legacy version on purpose, so that only users aware of the implications of an upgrade use the 1.2 tag. Most users don't read the changelogs before upgrading containers, and that might be necessary to keep backups reliable. (See the sketch for post 1 after this list.)
  2. I think the problem is your bind mount. You must remove the "clients" part and leave only "/mnt/user/borg/sshkeys/". You still put your keys in the clients folder, but the mapped path must point to the parent. (See the sketch for post 2 after this list.)
  3. Thanks for the contribution, it's working fine! A bit of trouble at first with the SSH keys in the script, but nothing you can't fix with a couple of BORG variables 😄 (see the sketch for post 3 after this list). Any plans for an upgrade? The Borg version in the container is 1.1.16, which is unsupported. Could you upgrade to the latest stable, 1.2.7? Thanks
  4. For a hybrid solution, you still have the btrfs option. You get snapshots, and compression (I think). So, if you are not going to create a pool, why use ZFS? (See the sketch for post 4 after this list.)
  5. Hi! I tried the upgrade today from Unraid 6.9.2 to 6.10. The first thing I noticed is that it now takes forever to start the array: a couple of minutes versus about 10 seconds on 6.9.2. The second thing is write problems with a few containers. The cause is that those containers don't use the standard uid 99 / gid 100, so they can't write to the host folders. Containers like "nextcloudpi" and "poste.io". It shouldn't be a problem, because the shared folder's uid and gid match the container's. It works fine on 6.9.2, so why not on 6.10? I'm using Syncthing to do remote backups. Syncthing uses the standard 99:100, but now it has problems accessing the folders of the containers above. (See the sketch for post 5 after this list.) Of course, the uid:gid used by those containers was the developers' choice, not mine. Can it be fixed? Thanks
  6. When you use a reverse proxy to access your web apps, only your proxy makes the actual connection to the backend, so you need to pass the client's real IP through the HTTP headers. Google which headers your web app expects. The most common ones for that are:
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     You have to put them in the Advanced tab of your host.
  7. Yeah, I could do that. I guess I only need to copy the contents of docker.img to /var/lib/docker. I don't really understand why an image is used in the first place. But beyond that, I would like to find the problem; it's important to me to know what is wrong and how to fix it. The thing is that I used commands like "find . -type f -printf '%s %p\n' | sort -nr | head -10" inside the npm container and directly in the container's path, and I couldn't find any new big file. Neither with "sudo du -a . | sort -n -r | head -n 20". If I check the size of the containers using dockerman, the size of the npm container doesn't grow. If I check the size of /var/lib/docker/containers with the commands above, I can't find any big files involving npm 🤷‍♂️ (See the sketch for post 7 after this list.)
  8. Hi! I have NPM running as a reverse proxy for Nextcloud and some other services. It's running fine, but I have problems with the docker.img file filling up. I identified NPM as the culprit because of the proxy buffer feature: disabling the buffer with "proxy_buffering off" fixes the problem, but then NPM becomes a bottleneck, so I need the buffer on. I tried to mount /tmp and /var/tmp from NPM to Unraid's /tmp folder for diagnostic purposes, but docker.img still fills up to 100%, so the buffers are not stored there. I also tried the proxy_temp_path directive to force the use of the /tmp folder, but it doesn't work; it still writes inside /var/lib/docker. So, how can I force NPM to buffer outside the container? Which path do I need to mount? (See the sketch for post 8 after this list.) My config for Nextcloud in NPM is:
     {
       "id": 1,
       "created_on": "2022-05-03 19:42:40",
       "modified_on": "2022-05-07 18:31:07",
       "owner_user_id": 1,
       "domain_names": [ "cloud.mydomain.net" ],
       "forward_host": "192.168.10.182",
       "forward_port": 7880,
       "access_list_id": 0,
       "certificate_id": 1,
       "ssl_forced": true,
       "caching_enabled": true,
       "block_exploits": false,
       "advanced_config": "proxy_http_version 1.1;\r\n proxy_set_header Upgrade $http_upgrade;\r\n proxy_set_header Connection \"Upgrade\";\r\n proxy_set_header Host $host;\r\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\r\n#proxy_request_buffering off;\r\n#proxy_buffering off;\r\nproxy_buffering on;\r\nproxy_buffers 32 4k;\r\nproxy_max_temp_file_size 2048m;\r\nproxy_temp_file_write_size 32k;\r\nproxy_temp_path /tmp;",
       "meta": { "letsencrypt_agree": false, "dns_challenge": false },
       "allow_websocket_upgrade": false,
       "http2_support": true,
       "forward_scheme": "http",
       "enabled": 1,
       "locations": [
         {
           "path": "/.well-known/carddav",
           "advanced_config": "",
           "forward_scheme": "http",
           "forward_host": "192.168.10.182/remote.php/dav",
           "forward_port": 7880
         },
         {
           "path": "/.well-known/caldav",
           "advanced_config": "",
           "forward_scheme": "http",
           "forward_host": "192.168.10.182/remote.php/dav",
           "forward_port": 7880
         }
       ],
       "hsts_enabled": true,
       "hsts_subdomains": false
     }
     Thanks 🙂
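
Sketch for post 1: a minimal example of pinning the container to an explicit tag instead of "latest", so an upgrade never happens by surprise. The image name is only a placeholder, not the template's real one.

     # hypothetical image name -- use whatever the Unraid template actually pulls
     docker pull example/borg-container:1.2      # explicit version, predictable behaviour
     docker pull example/borg-container:latest   # whatever the maintainer last pushed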
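
Sketch for post 2: what I mean by pointing the mapping at the parent folder. The container-side path is whatever the template expects; I only show the host side here.

     # host side: the parent folder; the keys themselves stay in .../sshkeys/clients/
     -v /mnt/user/borg/sshkeys/:<container-keys-path>
     # not the clients subfolder itself:
     # -v /mnt/user/borg/sshkeys/clients/:<container-keys-path>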
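
Sketch for post 3: the "couple of BORG variables" are the standard Borg environment variables; BORG_RSH is the one that lets the script use a specific SSH key. The key path and repository URL below are examples only.

     # point borg at a specific key (example path)
     export BORG_RSH="ssh -i /path/to/private_key -o StrictHostKeyChecking=accept-new"
     export BORG_PASSPHRASE="..."   # only if the repository is encrypted
     borg list ssh://user@backupserver/./repo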
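
Sketch for post 4: what snapshots and transparent compression look like on btrfs (device and paths are examples).

     # mount a btrfs filesystem with zstd compression
     mount -o compress=zstd /dev/sdX1 /mnt/data
     # take a read-only snapshot of a subvolume
     btrfs subvolume snapshot -r /mnt/data/appdata /mnt/data/.snapshots/appdata-daily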
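
Sketch for post 5: how to check which uid:gid the data and the container actually use, assuming the usual /mnt/user/appdata layout (container name taken from the post).

     # numeric owner of the files on the host
     ls -ln /mnt/user/appdata/nextcloudpi | head
     # uid/gid the container's default user runs as
     docker exec nextcloudpi id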
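
Sketch for post 7: the CLI equivalents of the dockerman size check, in case someone wants to reproduce it.

     # per-container writable-layer size
     docker ps -s
     # detailed image / container / volume usage
     docker system df -v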
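
Sketch for post 8: the kind of mapping I'm asking about. The proxy temp directory inside the NPM image depends on how its nginx/OpenResty was built, so it has to be discovered first; the container name and paths below are assumptions.

     # inside the NPM container: show the compiled-in proxy temp path
     # (the binary may be called openresty instead of nginx in this image)
     docker exec npm nginx -V 2>&1 | tr ' ' '\n' | grep temp-path
     # then map that directory to a host folder, e.g. in the container template:
     # -v /tmp/npm-proxy-temp/:/var/lib/nginx/proxy/   (container path is only an example)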