SimplifyAndAddCoffee

Everything posted by SimplifyAndAddCoffee

  1. OK, so I installed it, but an important question: where can I set the SNMP community string?
  2. Currently not showing as installable in CA with 6.9.0 RC2 and Nerd Tools 2021-01-08. Is this intended?
  3. I've already tried both of those... I also finally turned on 2FA on Gmail and assigned app tokens to my SMTP clients. Once again, Bitwarden is working with it, but Unraid is not.
  4. I'm using literally the same settings as I am using in Bitwarden, and Bitwarden is working fine. I am using an account without 2FA, with the "allow insecure apps" option enabled in Gmail. All I am getting is: Test result: Authorization failed.
  5. I can't get Google Authenticator to work; it won't let me enroll and says I'm putting in the wrong code. Also, the admin account keeps resetting itself to username admin / password password, and I can't disable login to it without also locking my other admin account out (when I disable login for user admin, my other user suddenly gets blocked from accessing the admin panel), which makes it... not really usable.

     I also found out that turning on the Google Authenticator MFA affects clients logging on from the LAN as well, but doesn't affect the default admin account (fortunately, in this one case, because otherwise I wouldn't have been able to get back into the admin panel after turning it on, to turn it off again). So yeah: I need to get MFA working, but only require it for users accessing from outside the LAN, and I also need to not let literally anyone on the internet admin my VPN server.
  6. What do you mean, port 80? Specifying a port is required, but you should be able to change it, unless you mean the inbound port past the container NAT, in which case it's not exposed on the outside, so it shouldn't matter.
  7. OK, I'm gonna nitpick here and say that's moving it. Functionally the data has moved and is no longer at the path where applications are looking for it. What... the fuck? Why? Appdata is, or should be, by its nature assigned a static path by the applications that use it. You don't want appdata on the array, and with this arrangement, how do you even control where it goes if you actually point something at the share? This seems like a really poor design choice from the perspective of safe data storage. It explains perfectly why I had this issue and why I was confounded by it: doing it that way seems insane to me, so it never crossed my mind. Sorry, that's just frustrating.

     OK. Again, I still don't understand why you would ever want to point them to the share if you can't effectively control what device pool the data ends up on. The UI intuitively makes it sound as if the data will go to the pool location you specify, but I'm not entirely convinced that's the case. Can I trust it to do that? I explicitly want the data to exist on the docker pool and only on the docker pool. That is where my apps are already pointing at this point, and it's where I need SMB access to pull/put files, such as for nginx.

     So this brings me to a new question: is there no way to share an existing directory in Unraid? Will I fuck something up if I try to call samba directly to do it? I would, for example, really like to make a share mapping to a directory that already exists and is full of files I want to access without moving them, e.g. /mnt/docker/appdata/nginx/html/www/, so that I can give someone access to tamper with the website without giving them access to the whole drive.
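     A minimal sketch of what sharing an existing directory by calling samba directly could look like, assuming Unraid's "Samba extra configuration" box under Settings > SMB (which persists to /boot/config/smb-extra.conf) is used; "webeditor" stands in for a real samba user:

        # added to /boot/config/smb-extra.conf:
        [www]
            # share an existing directory in place; no data is moved
            path = /mnt/docker/appdata/nginx/html/www
            valid users = webeditor
            read only = no
            browseable = yes

     After saving, samba can be told to reload without a restart:

        smbcontrol all reload-config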
  8. I am using this and it works, although as of Unraid 6.9.0 RC2 it appears that the alias no longer works when called with the exec command in shell scripts.
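     A plausible explanation, sketched as a guess rather than a confirmed cause: bash only expands aliases in interactive shells by default, and even then only on the first word of a simple command, so "exec myalias" never expands. A minimal illustration with a hypothetical alias:

        #!/bin/bash
        # non-interactive scripts skip alias expansion unless it's enabled:
        shopt -s expand_aliases
        alias mytool='/usr/local/bin/mytool --flag'   # hypothetical alias

        # 'exec mytool' would NOT expand 'mytool', because alias expansion
        # only applies to the first word of the command. Aliasing exec with
        # a trailing space makes bash also check the word that follows it:
        alias exec='exec '
        exec mytool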
  9. Here's what I did:

     - Create a new pool, name it 'docker'.
     - Add a single disk to the new pool.
     - Unpack the contents of my old docker unassigned-device disk to the new docker pool. The volume contains a directory named 'appdata'.
     - Try to share the pool in Shares: select cache:only, select pool:docker, and name the share 'data'.
     - Try to go to \\server\data, find the folder empty.
     - Go to the console, and see that it has created a new directory, 'data', in /mnt/docker/ and shared that.
     - I need to share the 'appdata' folder in the docker volume, so I rename the 'appdata' share that I'm not using for apps (cache:preferred, pool:cache) to 'appcache', and rename the 'data' share to 'appdata'.
     - All my dockers break.
     - Look in the \\server\appdata share, and see that it is empty!
     - In the console, see that /mnt/docker/ now has an 'appcache' directory that contains all of the files that used to be in the 'appdata' directory.
     - Check on the 'appcache' share, and see that it is still using pool:cache? Change it to cache:only, but it is still connected to the data on the docker pool.
     - Rename 'appdata' to 'appdata2' and 'appcache' to 'appdata'. Now the data is back in the /mnt/docker/appdata directory where it belongs, but I can't unlink it from the 'appdata' share, and I can't link a new share to it.

     Even after restarting the docker service and all my dockers, I can no longer access any of them from my browser, despite seeing no changes in the config. How boned am I? mewcaster-diagnostics-20210209-1915.zip
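     One detail that may explain the behavior above (my reading of how Unraid user shares work, not an official statement): a user share is the union of identically named top-level directories across the array and every pool, so a single share name can surface files from several devices at once:

        # every device-level directory with the same top-level name is
        # merged into one user share under /mnt/user/:
        ls -d /mnt/*/appdata
        # e.g. /mnt/cache/appdata and /mnt/docker/appdata would both
        # appear as /mnt/user/appdata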
  10. OK, problem... I backed up my old 60GB unassigned device /mnt/disks/device and now want to restore to my new 240GB pool /mnt/docker... How do I do this? It doesn't seem like I can change the destination from the old disk. Is it adequate to just untar it?

        tar -C /mnt/docker -xvf /mnt/user/backup/docker/date/ca_backup.tar
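     A minimal sketch of how that restore could be sanity-checked first (using the archive path from the post, with 'date' standing in for the actual backup folder name):

        # list the archive contents first to confirm the paths stored inside:
        tar -tvf /mnt/user/backup/docker/date/ca_backup.tar | head
        # then extract into the new pool, preserving permissions; stopping
        # the Docker service first avoids extracting over files in use:
        tar -C /mnt/docker -xvpf /mnt/user/backup/docker/date/ca_backup.tar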
  11. My dockers combined are using maybe 1GB of my total 16GB of memory according to the Docker panel, but according to the dashboard it's 99% full. No log files in my containers are over 16MB. I tried stopping some containers, but that just turned them into orphan images, and I'd rather not break *all* of my dockers.
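     One thing worth ruling out here (a guess, not a diagnosis): Linux counts page cache toward memory in use, and some dashboards report it that way; the 'available' column is the one that reflects real pressure:

        # if 'available' is still large, the 99% figure is mostly cache and
        # buffers that the kernel releases on demand, not leaked memory:
        free -h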
  12. Need help configuring the SWAG nginx reverse proxy to work with a docker container that uses websockets. Hoping someone here can help me with this. I'm using SWAG for WAN access to my web applications and recently tried to set up a docker solution to host Taiga.io (kanban/scrum Trello alternative). The official Taiga docker config builds its own virtual network on which it runs 8 docker containers for the different services, including the front end, back end, database, events handler, and its own nginx reverse proxy.

     The problem I'm encountering is that out of the box, Taiga isn't configured for SSL. If you connect with HTTPS through SWAG, the page will refuse to load, because Chrome won't let you load an HTTPS web page that includes an insecure websocket connection. I can get the page to load by changing the configuration variable in Taiga's docker-compose file to use wss: instead of ws: for the websocket connection URL. However, the websocket connection fails to connect, and the application won't function properly. I've tried playing around with the subdomain.conf and I haven't been able to get it to complete the handshake, and my browser console is filling up with the following errors:

        app.js:3370 WebSocket connection to 'wss://taiga.******.***/events' failed: WebSocket is closed before the connection is established.
        app.js:3354 WebSocket connection to 'wss://taiga.******.***/events' failed: Error during WebSocket handshake: Unexpected response code: 200

     Here's my taiga.subdomain.conf:

        ## Version 2020/12/09
        # custom for taiga to proxy?

        server {
            listen 443 ssl;
            listen [::]:443 ssl;

            server_name taiga.*;

            include /config/nginx/ssl.conf;

            # restrict access to authenticated users
            #auth_basic "Restricted";
            #auth_basic_user_file /config/etc/htpasswd/.htpasswd;

            #client_max_body_size 0;

            # enable for ldap auth, fill in ldap details in ldap.conf
            #include /config/nginx/ldap.conf;

            # enable for Authelia
            #include /config/nginx/authelia-server.conf;

            location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
                proxy_pass http://10.0.0.10:9000/;
            }

            # Events
            location /events {
                proxy_pass http://10.0.0.10:9000/;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_set_header Host $host;
                proxy_connect_timeout 7d;
                proxy_send_timeout 7d;
                proxy_read_timeout 7d;
            }
        }

     and the taiga.conf that Taiga's nginx instance is using:

        server {
            listen 80 default_server;

            client_max_body_size 100M;
            charset utf-8;

            # Frontend
            location / {
                proxy_pass http://taiga-front/;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
            }

            # Api
            location /api {
                proxy_pass http://taiga-back:8000/api;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
            }

            # Admin
            location /admin {
                proxy_pass http://taiga-back:8000/admin;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
            }

            # Static
            location /static {
                root /taiga;
            }

            # Media
            location /_protected {
                internal;
                alias /taiga/media/;
                add_header Content-disposition "attachment";
            }

            # Unprotected section
            location /media/exports {
                alias /taiga/media/exports/;
                add_header Content-disposition "attachment";
            }

            location /media {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://taiga-protected:8003/;
                proxy_redirect off;
            }

            # Events
            location /events {
                proxy_pass http://taiga-events:8888/events;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";
                proxy_set_header Host $host;
                proxy_connect_timeout 7d;
                proxy_send_timeout 7d;
                proxy_read_timeout 7d;
            }
        }
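     One observation on the SWAG config above, offered as a possible lead rather than a confirmed fix: in the /events location, the trailing slash in "proxy_pass http://10.0.0.10:9000/;" makes nginx replace the matched /events prefix with /, so Taiga's bundled nginx receives the upgrade request at / and routes it to the front end, which answers a plain 200 instead of the 101 upgrade, consistent with the handshake error above. Dropping the URI part preserves the original path:

        # Events
        location /events {
            # no trailing slash here, so /events is forwarded unchanged
            # to Taiga's own nginx, which routes it to taiga-events:
            proxy_pass http://10.0.0.10:9000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
        }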
  13. Tempting but sounds risky. Is there even a way to pool the disk without losing the data that is currently on it? I have a ton of dockers and a VM all dependent on that disk.
  14. Similar issue to the above poster: I last ran this back in November and it worked fine, but now it fails to update plugins or run scans. Log from the container:

        Setting user permissions...
        Modifying ID for nobody...
        Modifying ID for the users group...
        Adding nameservers to /etc/resolv.conf...
        Extracting packaged nessus debian package: Nessus 8.10.0...
        Changing owner and group of configuration files...
        Creating symbolic links...
        Cleaning up...
        Starting Nessus : .
        [Thu Feb 4 11:00:52 2021][30.1][op=_qdb_map][name=services-udp.db][fd=-1][map_sz=38575]: complete
        [Thu Feb 4 11:00:52 2021][30.1][op=_qdb_map][name=services-tcp.db][fd=-1][map_sz=40899]: complete
        [Thu Feb 4 11:00:52 2021][30.1][op=_qdb_map][name=services-tcp.db][fd=-1][map_sz=40899]: complete
        [Thu Feb 4 11:00:52 2021][30.1][op=qdb_sync][name=upgrades.db][fd=5][map_sz=0][file_size=55]: complete
        [Thu Feb 4 11:01:02 2021][30.23][sched=100][pid=53][plugin=nessusd_www_server6.nbin][instr=0xd779] : Error: Could not find function 0xf000020c
        call stack:
        -----------
        [0d779:Socket.accept+36] call, addr(0xf000020c), -, # ???? () [C func]
        [07bb2:Mug::Connection.on_client+46] refcall, fp(2), string#707, # from: fp(2)
        [22d1e:!anon5+4] refcall, fp(-2), string#4934, # from: fp(-2)
        [2eb54:main+10121] eop, -, -,
        [Thu Feb 4 11:01:14 2021][30.1][op=qdb_sync][name=plugins-desc.db][fd=19][map_sz=0][file_size=187756524]: complete
        [Thu Feb 4 11:01:14 2021][30.1][op=qdb_sync][name=plugins-code.db][fd=7][map_sz=0][file_size=2967368547]: complete
        [Thu Feb 4 11:02:05 2021][30.1][op=_qdb_map_lowmem][name=plugins-code.db.16124364741952786213][fd=7][map_sz=0][file_size=2967368547]: complete
        [Thu Feb 4 11:02:08 2021][30.1][op=_qdb_map_lowmem][name=plugins-desc.db.1612436525562718452][fd=19][map_sz=0][file_size=187756524]: complete
  15. I have a UD disk that I am running my dockers on. I want to know if there is a way to create custom mount points and shares on the UD disk and control access via SMB users. When I share the UD disk, it creates a public share at \\server\Patriot_Blaze_DE3F07580DC602009379\. I do not want it public, and I also don't want it named Patriot_Blaze_DE3F07580DC602009379. There are a couple of things I am hoping to do with this:

     - I want to be able to run an rsync cron job to back up the contents of the UD disk to /mnt/user/backup/docker.
     - I would like to rename the disk, if possible.
     - I want to be able to mount and share arbitrary directories within the UD disk, for example /mnt/disks/Patriot_Blaze_DE3F07580DC602009379/appdata/swag/www as \\server\www, visible only to specific samba users.
     - I want to be able to mount shares to arbitrary directories within the UD disk; for example, I want to mount /mnt/user/media (or \\server\media) to /mnt/disks/Patriot_Blaze_DE3F07580DC602009379/appdata/swag/www/media.
     - I want to be able to persist all of this.

     What would be the way to go about this?
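     A minimal sketch of the backup half of this, using the paths from the post (scheduling left to cron or the User Scripts plugin, whichever is in use):

        # mirror the UD disk into the array backup share; --delete makes the
        # destination an exact mirror, so a --dry-run pass first is prudent:
        rsync -a --delete /mnt/disks/Patriot_Blaze_DE3F07580DC602009379/ /mnt/user/backup/docker/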
  16. Yes, and I can reach it locally using http://server:8086. I am using HTTP validation, but I could theoretically change that if I have to. My local DNS is managed at the router. For configuring nginx, would that be done in the subdomain.subdomain.conf file? Or is there a way to manage specific upstream hosts elsewhere in nginx? Any information on how to do that would be helpful, since I'm new to nginx (and Docker in general). EDIT: I got it sorted. Didn't realize the container and app used different ports.
  17. I'm trying to use this with SWAG/nginx and there are 2 problems I'm having trouble solving:

     1. I get a 502 bad gateway using the subdomain to try and hit the bitwardenrs docker on 8086. Here's my config:

        ## Version 2020/12/09
        # make sure that your dns has a cname set for bitwarden and that your bitwarden container is not using a base url
        # make sure your bitwarden container is named "bitwarden"
        # set the environment variable WEBSOCKET_ENABLED=true on your bitwarden container

        server {
            listen 443 ssl;
            listen [::]:443 ssl;

            server_name bitwarden.*;

            include /config/nginx/ssl.conf;

            client_max_body_size 128M;

            # enable for ldap auth, fill in ldap details in ldap.conf
            #include /config/nginx/ldap.conf;

            # enable for Authelia
            #include /config/nginx/authelia-server.conf;

            location / {
                # enable the next two lines for http auth
                #auth_basic "Restricted";
                #auth_basic_user_file /config/nginx/.htpasswd;

                # enable the next two lines for ldap auth
                #auth_request /auth;
                #error_page 401 =200 /ldaplogin;

                # enable for Authelia
                #include /config/nginx/authelia-location.conf;

                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app bitwardenrs;
                set $upstream_port 8086;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }

            location /admin {
                # enable the next two lines for http auth
                #auth_basic "Restricted";
                #auth_basic_user_file /config/nginx/.htpasswd;

                # enable the next two lines for ldap auth
                #auth_request /auth;
                #error_page 401 =200 /ldaplogin;

                # enable for Authelia
                #include /config/nginx/authelia-location.conf;

                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app bitwardenrs;
                set $upstream_port 8086;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }

            location /notifications/hub {
                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app bitwardenrs;
                set $upstream_port 3012;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }

            location /notifications/hub/negotiate {
                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app bitwardenrs;
                set $upstream_port 8086;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }
        }

     2. I don't want Bitwarden exposed to the internet, despite the fact that I am also using the reverse proxy to handle internet traffic to other dockers. Is there any way to configure this so that BitwardenRS gets SSL on the LAN but can't be reached from the WAN?
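     On question 2, a minimal sketch of one common approach (10.0.0.0/24 is a placeholder for the real LAN range): nginx's allow/deny directives can restrict the whole server block to LAN clients while SSL still terminates normally:

        server {
            listen 443 ssl;
            server_name bitwarden.*;

            # answer LAN clients only; everyone else gets a 403:
            allow 10.0.0.0/24;   # placeholder LAN subnet
            deny all;

            # ...rest of the bitwarden config as above...
        }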
  18. Forgive me if similar has been asked already; I'm new to Unraid, and I googled the issue but wasn't able to find relevant threads. Everything was working fine when I set things up about 2 weeks ago. Then I couldn't access the web GUI of one of my dockers (deluge), so I decided to reboot Unraid. Now my VMs won't load, and when I go to the VM tab I get "libvirt service failed to start". No other information. Please advise, thanks.
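     A couple of hedged starting points for that error, with paths assumed from a stock Unraid install:

        # the libvirt daemon log usually records why startup failed:
        cat /var/log/libvirt/libvirtd.log
        # Unraid keeps libvirt state in a loopback image; if the share or
        # the image is missing or corrupt, the service won't start:
        ls -lh /mnt/user/system/libvirt/libvirt.img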
  19. Long story short, I had a Windows Server VM which had been installed on my cache SSD. I had to create a new config because I swapped SAS cards and built a new array. The SSD data is left intact, but I had to manually re-map the VM to the vdisk image on the SSD, since the shares were gone and it was now an unassigned device. However, when I try to start the VM now, this is all I get:

        2020-11-28 06:41:45.737+0000: shutting down, reason=failed

     These are my mount points:

        /mnt/disks/Patriot_Blaze_DE3F07580DC602006676/domains/WinServ/vdisk1.img  VirtIO  30G/16G
        /dev/disk/by-id/ata-WDC_WD40PURZ-85AKKY0_WD-WX12D4071DN7  SATA  4T/4T
        /mnt/user/isos/virtio-win-0.1.185.iso  (backed up and copied back to the array)

     Is there any way I can get my server VM back without rebuilding it? EDIT: Nevermind, it just fixed itself....