Tuumke

Everything posted by Tuumke

  1. Nice, I see it's working now. Am I correct in understanding that I do not need to boot into safe mode first? I can stop the array and then start it in maintenance mode?
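     Roughly what that would look like from the command line once the array is started in maintenance mode, if I understand it right (just a sketch; md1 here is only an example, use the md device of the disk that needs checking):

     # Dry run first: -n reports problems but makes no changes
     xfs_repair -n /dev/md1

     # If the output looks sane, run it again without -n to actually repair
     xfs_repair /dev/md1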
  2. This was without -n:

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
     bad CRC for inode 12458118
     bad CRC for inode 12458118, will rewrite
     Bad atime nsec 2173239295 on inode 12458118, resetting to zero
     cleared inode 12458118
             - agno = 1
     bad CRC for inode 2147768638
     bad CRC for inode 2147768638, will rewrite
     Bad atime nsec 2173239295 on inode 2147768638, resetting to zero
     cleared inode 2147768638
             - agno = 2
             - agno = 3
     bad CRC for inode 6447424246
     bad CRC for inode 6447424246, will rewrite
     Bad atime nsec 2173239295 on inode 6447424246, resetting to zero
     cleared inode 6447424246
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 0
             - agno = 2
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     -edit- That's it, I guess? No need for -L?
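     From what I can tell, -L only comes into play when xfs_repair refuses to run because it cannot replay a dirty log; since the run above completed without complaining about that, something like this would only be a last resort (again just a sketch, assuming /dev/md1):

     # Last resort only: -L zeroes a dirty log and can lose the most recent metadata changes
     xfs_repair -L /dev/md1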
  3. Will do that when my kids go to bed; they're watching a movie now.
  4. I don't understand why the file is not working =/ Do you accept OneDrive links? https://1drv.ms/u/s!ApN1fHOdf6Jjgr0UdOV7WEOcb_NMmw?e=cWaHVD
  5. nas-diagnostics-20210816-1846.zip Tried again.
  6. nas-diagnostics-20210816-1840.zip Here you go. Was still running SMART quick checks on the disks, though.
  7. I saw that my organizr docker had some issues updating. Apparently there is something wrong with this folder:

     root@NAS:/mnt/user/dockers/organizrv2/www/organizr/plugins/bower_components/ace/snippets# ls -ahlp
     /bin/ls: reading directory '.': Structure needs cleaning
     total 0

     Reading up on the interwebs, 'structure needs cleaning' has something to do with a disk acting up? Any help on how to know which disk this is and how to fix it?
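     One way I could probably narrow it down (just a sketch, assuming the usual Unraid layout where /mnt/user is the union of /mnt/disk*; the path is the one from above):

     # See which array disk actually holds the affected folder
     ls -d /mnt/disk*/dockers/organizrv2/www/organizr/plugins/bower_components/ace/snippets 2>/dev/null

     # Look for XFS corruption messages that name the underlying device
     dmesg | grep -iE 'xfs|corrupt' | tail -n 20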
  8. Hi guys, a question about unpackerr. Do you need to disable file management in Sonarr/Radarr?
  9. I just had a hang as well. Think this is the 2nd one in a week's time after upgrading to 6.9.1. Will keep an eye out to check if this happens more often. The thing with my logs is that they start from the moment I hard reset my NAS.
  10. There should be 2 parts: one for the server block, the other in the location block. I've got config/nginx/auth.conf, which has:

     include /config/nginx/proxy-confs/organizr-auth.subfolder.conf;
     auth_request /auth-0;

     and a file called auth-location.conf (but on my external VPS instead of the same host) in the same folder:

     location ~ ^/auth-(.*) {
         resolver 127.0.0.11 valid=30s;
         set $upstream_app organizrv2;
         set $upstream_port 5076;
         set $upstream_proto http;
         proxy_pass $upstream_proto://$upstream_app:$upstream_port/api/v2/auth&group=$1;
         proxy_pass_request_body off;
         proxy_set_header Content-Length "";
     }

     Then my nzbhydra.subdomain.conf looks like this:

     server {
         listen 443 ssl;
         listen [::]:443 ssl;
         server_name nzbhydra.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
         include /config/nginx/auth-location.conf;

         location / {
             include /config/nginx/auth.conf;
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app nzbhydra2;
             set $upstream_port 5076;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }

         location ~ (/nzbhydra)?/api {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app nzbhydra2;
             set $upstream_port 5076;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }

         location ~ (/nzbhydra)?/getnzb {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app nzbhydra2;
             set $upstream_port 5076;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }

         location ~ (/nzbhydra)?/gettorrent {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app nzbhydra2;
             set $upstream_port 5076;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }

         location ~ (/nzbhydra)?/rss {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app nzbhydra2;
             set $upstream_port 5076;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }

         location ~ (/nzbhydra)?/torznab/api {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app nzbhydra2;
             set $upstream_port 5076;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }
     }
  11. Probably the links to the API. How do you use it? There should be an include of: /config/nginx/proxy-confs/organizr-auth.subfolder.conf; and make sure the proxy_pass points to /api/v2/auth?group=$1. That fixed it for me.
  12. Lol, so much data on it... I can't just wipe it clean...
  13. Anyone else having this problem: https://github.com/binhex/arch-qbittorrentvpn/issues/58
     where the docker just won't fully start? It's just stuck at:

     2020-10-29 14:12:13,463 DEBG 'start-script' stdout output:
     [info] Starting OpenVPN (non daemonised)...

     2020-10-29 14:12:13,555 DEBG 'start-script' stdout output:
     Thu Oct 29 14:12:13 2020 WARNING: file 'credentials.conf' is group or others accessible
     Thu Oct 29 14:12:13 2020 OpenVPN 2.4.9 [git:makepkg/9b0dafca6c50b8bb+] x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Apr 20 2020

     2020-10-29 14:12:13,555 DEBG 'start-script' stdout output:
     Thu Oct 29 14:12:13 2020 library versions: OpenSSL 1.1.1g 21 Apr 2020, LZO 2.10

     2020-10-29 14:12:13,556 DEBG 'start-script' stdout output:
     Thu Oct 29 14:12:13 2020 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts

     2020-10-29 14:12:13,560 DEBG 'start-script' stdout output:
     Thu Oct 29 14:12:13 2020 CRL: loaded 1 CRLs from file [[INLINE]]

     2020-10-29 14:12:13,562 DEBG 'start-script' stdout output:
     Thu Oct 29 14:12:13 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]156.146.62.193:1198
     Thu Oct 29 14:12:13 2020 UDP link local: (not bound)
     Thu Oct 29 14:12:13 2020 UDP link remote: [AF_INET]156.146.62.193:1198

     -edit- Seems like switching to a next-gen OpenVPN file fixes this. But it says port forwarding is not enabled. Will this give me any issues?
  14. Indirectly. Had port 22 forwarded (as well as 80 and 443).
  15. I was running AdGuard and also have a UDM Pro, when I noticed that stuff was getting blocked from my NAS. I immediately closed port 22, then saw this in the syslog:

     Oct 29 09:42:54 NAS sshd[9909]: error: connect_to payy.co.com port 80: failed.
     Oct 29 09:42:54 NAS sshd[9909]: channel_by_id: 0: bad id: channel free
     Oct 29 09:42:54 NAS sshd[9909]: Disconnecting user adm 89.39.104.123 port 4746: oclose packet referred to nonexistent channel 0
     Oct 29 09:42:54 NAS sshd[9909]: Connection reset by user adm 89.39.104.123 port 4746
     Oct 29 09:44:19 NAS sshd[24421]: error: connect_to t.paypal.com: unknown host (Name or service not known)
     Oct 29 09:44:19 NAS sshd[24421]: error: connect_to b.stats.paypal.com: unknown host (Name or service not known)
     Oct 29 09:44:20 NAS sshd[24421]: error: connect_to t.paypal.com: unknown host (Name or service not known)
     Oct 29 09:44:32 NAS sshd[24421]: error: connect_to t.paypal.com: unknown host (Name or service not known)
     Oct 29 09:44:51 NAS sshd[24421]: error: connect_to t.paypal.com: unknown host (Name or service not known)
     Oct 29 09:46:23 NAS webGUI: Successful login user root from 192.168.2.1

     Uh... should I be worried? And how do I further check my NAS for compromises?

     -edit- Saw some more things, and I thought: it should be running under user adm then, right?

     root      776  7449  0 09:18 ?      00:00:00 sshd: adm [priv]
     adm       778   776  0 09:18 ?      00:00:00 sshd: adm
     root     7645  7449  0 08:32 ?      00:00:00 sshd: adm [priv]
     adm      7647  7645  0 08:32 ?      00:00:15 sshd: adm
     root    10553  7449  0 09:40 ?      00:00:00 sshd: adm [priv]
     adm     10555 10553  0 09:40 ?      00:00:00 sshd: adm
     root    19024  8802  0 10:00 pts/0  00:00:00 grep adm
     root    23428  7449  0 Oct25 ?      00:00:00 sshd: adm [priv]
     adm     23430 23428  0 Oct25 ?      00:00:00 sshd: adm
     root    26296  7449  0 09:10 ?      00:00:00 sshd: adm [priv]
     adm     26310 26296  0 09:10 ?      00:00:00 sshd: adm
     root    30985  7449  0 Oct28 ?      00:00:00 sshd: adm [priv]
     adm     30988 30985  0 Oct28 ?      00:00:01 sshd: adm
     root    31687  7449  0 Oct26 ?      00:00:00 sshd: adm [priv]
     adm     31689 31687  0 Oct26 ?      00:00:07 sshd: adm

     I'm rebooting it right now just to be safe.
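     Some quick checks I could run before the reboot (just a sketch, not a proper forensic procedure; assumes the usual Linux tools are available on the box):

     # Who is logged in right now, and recent logins
     who
     last | head -n 20

     # Established connections belonging to sshd
     netstat -tnp | grep sshd

     # Kill a suspicious session using a PID from the ps output above
     # kill -9 <PID>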
  16. Argh, I'm getting a lot of stalls recently. I switched to strict_port_forward => No today because of the PIA issues. Now I'm seeing this in the logs:

     2020-09-11 14:40:27,365 DEBG 'start-script' stdout output:
     Fri Sep 11 14:40:27 2020 [943625b3bd94d7c42705f8e0c9d3651e] Inactivity timeout (--ping-restart), restarting

     Is there any way around that? Probably something to do with Q17 on https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
     It does come back up, so I'm guessing it's just a temporary timeout or something.
  17. The test works for me, but I also have Telegram enabled, and now I only get Telegram messages.
  18. Hm, I have the docker update set to check every 30 minutes. Even though it says it's updated through notifications, it's not updating my dockers?!
  19. I just installed this plugin. Thanks for the work! Is there any way to have it notify a Slack (or Discord with /slack) channel like Watchtower? Just noticed the agent notification settings in the notification settings... doh.
  20. Is it possible to have your user script log to its own logfile?
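     Something like this at the top of the script would probably do it (just a sketch; the log path is only an example, not something the User Scripts plugin provides by itself):

     #!/bin/bash
     # Example log location -- adjust to wherever you want the file to live
     LOGFILE=/mnt/user/appdata/scripts/myscript.log
     mkdir -p "$(dirname "$LOGFILE")"

     # Send everything this script prints (stdout and stderr) to its own log file
     exec >> "$LOGFILE" 2>&1

     echo "$(date) - script started"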
  21. Hey Binhex! First off... thanks so much for all the dockers you create. I switched from Transmission to Deluge, back to Transmission, and now to qBittorrent. I did so for the support with Organizr. But when I add qBittorrent to Organizr (v2), Firefox gives me a mixed content blocking warning and I can't load the page. Any idea? -edit- Never mind, had to disable the clickjacking protection.
  22. I also created a docker-compose file for use with Traefik (old Traefik) on my VPS (not an Unraid system). Note: I have environment variables set in /etc/environment for ${USERDIR} and ${DOMAINNAME}; you might want to replace those.

       nessus:
         image: jbreed/nessus
         container_name: nessus
         hostname: nessus
         restart: unless-stopped
         networks:
           - traefik_proxy
         volumes:
           - ${USERDIR}/docker/nessus:/config
           - "/etc/localtime:/etc/localtime:ro"
           - ${USERDIR}/docker/shared:/shared
         environment:
           PUID: ${PUID}
           PGID: ${PGID}
           TZ: ${TZ}
         labels:
           traefik.enable: "true"
           traefik.backend: nessus
           traefik.protocol: https
           traefik.port: 8834
           traefik.frontend.rule: Host:nessus.${DOMAINNAME}
           traefik.frontend.headers.SSLHost: nessus.${DOMAINNAME}
           traefik.docker.network: traefik_proxy
           traefik.frontend.passHostHeader: "true"
           traefik.frontend.headers.SSLForceHost: "true"
           traefik.frontend.headers.SSLRedirect: "true"
           traefik.frontend.headers.browserXSSFilter: "true"
           traefik.frontend.headers.contentTypeNosniff: "true"
           traefik.frontend.headers.forceSTSHeader: "true"
           traefik.frontend.headers.STSSeconds: 315360000
           traefik.frontend.headers.STSIncludeSubdomains: "true"
           traefik.frontend.headers.STSPreload: "true"
           traefik.frontend.headers.customResponseHeaders: X-Robots-Tag:noindex,nofollow,nosnippet,noarchive,notranslate,noimageindex
           traefik.frontend.headers.frameDeny: "true"
           traefik.frontend.headers.customFrameOptionsValue: 'allow-from https:${DOMAINNAME}'
         depends_on:
           - traefik
  23. Don't have DNS anymore. Updated my docker this morning. Not sure if it's because of installing the Pi-hole docker or the update of the container. Have tried several things like:
     - --dns=ipaddressofpihole
     - adding a variable in the template (Key 3, DNS1, ipaddressofpihole)
     I can't use apt-get update, no DNS resolving. Also no ping or nslookup command available in the container itself...
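     Two things that can still be checked from inside the container even without ping or nslookup (just a sketch; 'mycontainer' is only a placeholder for the container name):

     # Which resolver did the container actually get?
     docker exec -it mycontainer cat /etc/resolv.conf

     # Test name resolution without ping/nslookup (getent ships with glibc-based images)
     docker exec -it mycontainer getent hosts google.com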
  24. This is just the Unifi controller right? I thought there was a standard config file for it in the letsencrypt docker from ls.io? I checked my docker, it has this file:

     user@TOWER:/mnt/user/dockers/letsencrypt/nginx/proxy-confs# cat unifi.subdomain.conf.sample
     # make sure that your dns has a cname set for unifi and that your unifi container is not using a base url

     server {
         listen 443 ssl;
         listen [::]:443 ssl;

         server_name unifi.*;

         include /config/nginx/ssl.conf;

         client_max_body_size 0;

         # enable for ldap auth, fill in ldap details in ldap.conf
         #include /config/nginx/ldap.conf;

         location / {
             # enable the next two lines for http auth
             #auth_basic "Restricted";
             #auth_basic_user_file /config/nginx/.htpasswd;

             # enable the next two lines for ldap auth
             #auth_request /auth;
             #error_page 401 =200 /login;

             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_unifi unifi;
             proxy_pass https://$upstream_unifi:8443;
         }

         location /wss {
             # enable the next two lines for http auth
             #auth_basic "Restricted";
             #auth_basic_user_file /config/nginx/.htpasswd;

             # enable the next two lines for ldap auth
             #auth_request /auth;
             #error_page 401 =200 /login;

             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_unifi unifi;
             proxy_pass https://$upstream_unifi:8443;
             proxy_buffering off;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "Upgrade";
             proxy_ssl_verify off;
         }
     }
  25. I also have a letsencrypt reverse proxy configured for the subdomain, nessus.subdomain.conf. Note 1: include /config/nginx/auth.conf points towards my Organizr setup; you might not want to use this.

     server {
         listen 443 ssl;
         listen [::]:443 ssl;

         server_name nessus.*;

         include /config/nginx/ssl.conf;

         client_max_body_size 0;

         include /config/nginx/auth-location.conf;

         location / {
             include /config/nginx/auth.conf;
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_nessus w.x.y.z; ## Change to IP of HOST
             proxy_pass https://$upstream_nessus:8834;
         }
     }