demonspork

Members • Posts: 11


  1. Ok, I implemented it fully. Here's how: I created a version of the primary nginx server block that uses my oauth2 authentication and runs on an alternate port. It sets auth_request /validate; (which holds the auth code and redirects to the auth flow), replacing the global http-level auth_request set in emhttp-servers.conf for requests to the alternate port. Then in my go file I added: cp /boot/config/oauth-emhttp.conf /etc/nginx/conf.d/oauth-emhttp.conf followed by nginx -s reload. My reverse proxy points to the alternate port specified in my alternative nginx conf, so I can load https://unraid.example.com and it uses oauth2 then lets me in. I can still load http://10.0.0.200:8080 and it takes me to the new fancy login form. This is important because part of the login flow runs in a docker container. Eventually that login flow and the entire nginx reverse proxy will move to a Raspberry Pi to remove the complications caused by docker containers on br0 being unable to talk to the host unraid server. That said, all of this work could be replaced by a single line in the reverse proxy if I could just natively enable basic auth on the unraid management page to restore the way it has behaved for years and years.
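A minimal sketch of the alternate-port server block described above. The port numbers, upstream addresses, and oauth2-proxy endpoint are illustrative assumptions, not the actual config:

```nginx
# /boot/config/oauth-emhttp.conf -- illustrative sketch, not the real file.
# Alternate-port copy of the emhttp server block that gates every request
# behind an oauth2 auth_request subrequest instead of the stock login form.
server {
    listen 8443 ssl;                       # assumed alternate port
    server_name unraid.example.com;

    auth_request /validate;                # every request must pass validation

    location = /validate {
        internal;
        proxy_pass http://127.0.0.1:4180;  # assumed oauth2-proxy address
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }

    location / {
        proxy_pass http://127.0.0.1:80;    # hand validated requests to emhttp
        proxy_set_header Host $host;
    }
}
```

Because the go file copies this into /etc/nginx/conf.d/ and reloads nginx, it survives the stock config being regenerated.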
  2. It is a change in /etc/nginx/conf.d/emhttp-servers.conf. They did write a login form (auth_request.php), but they use nginx to implement it by adding: deny all; auth_request /auth_request.php; Then in certain locations like /login they have an allow all; so that you don't have to be authed through the auth_request directive to access them. I don't have a copy of the nginx config from 6.7 at the moment; if someone could show me the authentication section that would be great, or I can revert back to find it. Also, do the nginx conf files revert on reboot, or will I need to handle my modifications in the go file? Not super familiar with modifying unraid yet. My current plan is to work through setting up basic auth directly in the nginx config on unraid and just run a copy of it on an alternate port: nginx.conf includes /etc/nginx/conf.d/*.conf, so any .conf file I put in the conf.d folder should be loaded. This way I can keep the normal login form and my reverse proxy at the same time, just running on different ports. Still rooting for a toggle in the unraid settings that turns basic auth on and off. The ultimate solution would be for auth_request.php to handle the proxy_set_header Authorization "Basic dn7jaDczpA9yeyM9NYT="; that my reverse proxy is sending for previous versions to authorize the user. (Password has been scrambled randomly.)
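A sketch of what the conf.d drop-in for restoring basic auth could look like. The file name, port, and htpasswd path are assumptions for illustration:

```nginx
# /etc/nginx/conf.d/basic-auth-alt.conf -- hypothetical drop-in, loaded
# automatically because nginx.conf includes /etc/nginx/conf.d/*.conf
server {
    listen 8081;                                # assumed alternate port
    auth_basic "unraid";                        # classic basic-auth prompt
    auth_basic_user_file /boot/config/htpasswd; # assumed credentials file
    location / {
        proxy_pass http://127.0.0.1:80;         # forward to stock emhttp
        proxy_set_header Host $host;
    }
}
```

This keeps the stock login form untouched on the normal port while the reverse proxy talks basic auth to the alternate one.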
  3. You can comment out the auth_request directive in /etc/nginx/*.conf, but that doesn't re-enable basic auth. I can look at the version from 6.7 to see how basic auth was implemented. This change has broken the way I use unraid, so it is definitely a bug, or at the very least a feature that is only half implemented.
  4. I am trying 6.8-rc7 at the moment and I need to disable the new login form. Is this possible? I need my basic HTTP auth back. I am using a reverse proxy with an HTTP Authorization header to sign into unraid. The reverse proxy uses an oauth2 flow to authorize me, then passes the Authorization header with the base64-encoded user:password to unraid so I don't have to also type in those details.
  5. I am using a reverse proxy with an HTTP Authorization header to sign into unraid. Will there be a way to emulate this once the new login is live? I just updated, and I don't see anywhere to turn it off and go back to basic HTTP auth anymore. Will I have to make another feature request to be able to turn this off? I handle all of my auth in the reverse proxy at the moment.
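For anyone unfamiliar with what the proxy is actually sending: the Basic Authorization header is just the base64 of user:password. A quick illustration (the credentials here are placeholders, not real ones):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header a reverse proxy forwards."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# "root"/"secret" are placeholder credentials for illustration only.
header = basic_auth_header("root", "secret")
# Decoding the token recovers the original user:password pair.
assert base64.b64decode(header.split(" ", 1)[1]) == b"root:secret"
```

This is why the old behavior was so convenient: any client that can set one header can authenticate.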
  6. I read back further in the thread, and there is something wrong with mongodb and unraid user shares through docker. If I reference a disk/cache directly, it works just fine.
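The workaround above amounts to binding the container's data path to the cache disk instead of the /mnt/user fuse share. A hypothetical example of what that mapping could look like (image name and paths are assumptions):

```shell
# Hypothetical: recreate the container with mongodb's data dir bound to
# /mnt/cache/... directly, bypassing the /mnt/user fuse layer.
docker run -d --name unifi \
  -v /mnt/cache/appdata/unifi:/usr/lib/unifi/data \
  linuxserver/unifi
```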
  7. I am having trouble with it. Just installed it today and I get some repeating errors. So far I have removed the container and image, deleted the entire appdata/unifi folder, and re-installed it clean. Any idea what is going on here?
     server.log:
     [2017-02-19 12:00:35,086] <db-server> INFO db - DbServer stopped
     [2017-02-19 12:00:35,086] <db-server> WARN db - DbServer not shutdown cleanly and need repairing on next startup
     [2017-02-19 12:00:39,216] <db-server> ERROR system - [exec] error, rc=14
     [2017-02-19 12:00:39,216] <db-server> INFO db - DbServer stopped
     [2017-02-19 12:00:39,216] <db-server> WARN db - DbServer not shutdown cleanly and need repairing on next startup
     and my mongodb.log:
     2017-02-19T12:02:40.401-0600 [initandlisten] SEVERE: Got signal: 6 (Aborted). Backtrace:0xedb3e9 0xeda3a5 0x2ba0c07604b0 0x2ba0c0760428 0x2ba0c076202a 0xe4a213 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2ba0c074b830 0x61a2d9 bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9] bin/mongod() [0xeda3a5] /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x2ba0c07604b0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38) [0x2ba0c0760428] /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x2ba0c076202a] bin/mongod(_ZN5mongo13fassertFailedEi+0xc3) [0xe4a213] bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b] bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa] bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a] bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86] bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184] bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f] bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903] bin/mongod(main+0x23c) [0x5e943c] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2ba0c074b830] bin/mongod(_start+0x29) [0x61a2d9]
     2017-02-19T12:02:44.418-0600 ***** SERVER RESTARTED *****
     2017-02-19T12:02:44.420-0600 [initandlisten] MongoDB starting : pid=975 port=27117 dbpath=/usr/lib/unifi/data/db 64-bit host=06fa76df8022
     2017-02-19T12:02:44.420-0600 [initandlisten] db version v2.6.10
     2017-02-19T12:02:44.420-0600 [initandlisten] git version: nogitversion
     2017-02-19T12:02:44.420-0600 [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
     2017-02-19T12:02:44.420-0600 [initandlisten] build info: Linux lgw01-12 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 BOOST_LIB_VERSION=1_58
     2017-02-19T12:02:44.420-0600 [initandlisten] allocator: tcmalloc
     2017-02-19T12:02:44.420-0600 [initandlisten] options: { net: { http: { enabled: false }, port: 27117 }, repair: true, storage: { dbPath: "/usr/lib/unifi/data/db" }, systemLog: { destination: "file", logAppend: true, path: "logs/mongod.log" } }
     2017-02-19T12:02:44.510-0600 [initandlisten] finished checking dbs
     2017-02-19T12:02:44.510-0600 [initandlisten] dbexit:
     2017-02-19T12:02:44.510-0600 [initandlisten] shutdown: going to close listening sockets...
     2017-02-19T12:02:44.510-0600 [initandlisten] shutdown: going to flush diaglog...
     2017-02-19T12:02:44.510-0600 [initandlisten] shutdown: going to close sockets...
     2017-02-19T12:02:44.510-0600 [initandlisten] shutdown: waiting for fs preallocator...
     2017-02-19T12:02:44.510-0600 [initandlisten] shutdown: closing all files...
     2017-02-19T12:02:44.510-0600 [initandlisten] closeAllFiles() finished
     2017-02-19T12:02:44.510-0600 [initandlisten] shutdown: removing fs lock...
     2017-02-19T12:02:44.511-0600 [initandlisten] dbexit: really exiting now
     2017-02-19T12:02:44.524-0600 ***** SERVER RESTARTED *****
     2017-02-19T12:02:44.526-0600 [initandlisten] MongoDB starting : pid=981 port=27117 dbpath=/usr/lib/unifi/data/db 64-bit host=06fa76df8022
     2017-02-19T12:02:44.526-0600 [initandlisten] db version v2.6.10
     2017-02-19T12:02:44.526-0600 [initandlisten] git version: nogitversion
     2017-02-19T12:02:44.526-0600 [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
     2017-02-19T12:02:44.526-0600 [initandlisten] build info: Linux lgw01-12 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64 BOOST_LIB_VERSION=1_58
     2017-02-19T12:02:44.526-0600 [initandlisten] allocator: tcmalloc
     2017-02-19T12:02:44.526-0600 [initandlisten] options: { net: { bindIp: "127.0.0.1", http: { enabled: false }, port: 27117 }, storage: { dbPath: "/usr/lib/unifi/data/db" }, systemLog: { destination: "file", logAppend: true, path: "logs/mongod.log" } }
     2017-02-19T12:02:44.591-0600 [initandlisten] journal dir=/usr/lib/unifi/data/db/journal
     2017-02-19T12:02:44.591-0600 [initandlisten] recover : no journal files present, no recovery needed
     2017-02-19T12:02:44.591-0600 [initandlisten] LogFile::synchronousAppend failed with 8192 bytes unwritten out of 8192 bytes; b=0x384a000 errno:22 Invalid argument
     2017-02-19T12:02:44.591-0600 [initandlisten] Fatal Assertion 13515
     2017-02-19T12:02:44.594-0600 [initandlisten] 0xedb3e9 0xe6fb3f 0xe4a1c1 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2b1d21bbb830 0x61a2d9 bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9] bin/mongod(_ZN5mongo10logContextEPKc+0x21f) [0xe6fb3f] bin/mongod(_ZN5mongo13fassertFailedEi+0x71) [0xe4a1c1] bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b] bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa] bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a] bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86] bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184] bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f] bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903] bin/mongod(main+0x23c) [0x5e943c] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2b1d21bbb830] bin/mongod(_start+0x29) [0x61a2d9]
     2017-02-19T12:02:44.594-0600 [initandlisten] ***aborting after fassert() failure
     2017-02-19T12:02:44.596-0600 [initandlisten] SEVERE: Got signal: 6 (Aborted). Backtrace:0xedb3e9 0xeda3a5 0x2b1d21bd04b0 0x2b1d21bd0428 0x2b1d21bd202a 0xe4a213 0xe7039b 0x8869fa 0x886f3a 0x88ea86 0x87d184 0x61f92f 0x620903 0x5e943c 0x2b1d21bbb830 0x61a2d9 bin/mongod(_ZN5mongo15printStackTraceERSo+0x39) [0xedb3e9] bin/mongod() [0xeda3a5] /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x2b1d21bd04b0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x38) [0x2b1d21bd0428] /lib/x86_64-linux-gnu/libc.so.6(abort+0x16a) [0x2b1d21bd202a] bin/mongod(_ZN5mongo13fassertFailedEi+0xc3) [0xe4a213] bin/mongod(_ZN5mongo7LogFile17synchronousAppendEPKvm+0x29b) [0xe7039b] bin/mongod(_ZN5mongo3dur20_preallocateIsFasterEv+0x22a) [0x8869fa] bin/mongod(_ZN5mongo3dur19preallocateIsFasterEv+0x2a) [0x886f3a] bin/mongod(_ZN5mongo3dur16preallocateFilesEv+0x966) [0x88ea86] bin/mongod(_ZN5mongo3dur7startupEv+0x74) [0x87d184] bin/mongod(_ZN5mongo14_initAndListenEi+0x76f) [0x61f92f] bin/mongod(_ZN5mongo13initAndListenEi+0x23) [0x620903] bin/mongod(main+0x23c) [0x5e943c] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x2b1d21bbb830] bin/mongod(_start+0x29) [0x61a2d9]
  8. This issue is resolved in the community.home-assistant.io forums. https://community.home-assistant.io/t/docker-zwave-secure-add-device/5622 Check this out!
  9. Here is my setup:
     • Windows Home Server 2011 installed on a 1TB HDD
     • 6x 1.5TB HDDs in Windows software RAID 5 - almost completely full
     • 3x 4TB drives - newly acquired, no data on them
     My goal is to migrate this system to unraid. The problem is that unraid can't read the Windows software RAID 5, and Windows can't write to reiserfs drives. This was my plan to migrate:
     • Run unraid in a virtual machine (VirtualBox) with direct access to the 3 new drives. It formats the drives, and I transfer the data through unraid onto the 3 reiserfs drives.
     • Boot the system on unraid directly, then verify access to the 3 new drives and the data integrity.
     • Finally, wipe the 6 old drives and format them into the unraid setup.
     My mistake: the first thing I did after getting unraid to work virtually and format the 3 drives was a dry run. I rebooted off of my unraid flash drive, manually selected my 3 new drives as part of the array (it didn't auto-detect them) and started the array. OH BALLS, I JUST SCREWED UP. I had accidentally selected 2 of the 1.5TB drives as part of the array. As such, it listed them as "unformatted". I quickly stopped the array **without clicking format**, assigned the correct 3 drives, and it saw the test files perfectly. When I rebooted, though, my Windows software RAID was missing. Disk Management shows the array as Failed, and the 2 affected 1.5TB drives just show up as Healthy (Active); Windows doesn't recognize how they are formatted anymore. So far I have used a tool to get all of the RAID parameters, then used ZAR to attempt reconstruction of the array, but it reported about 49% integrity after scanning for around 20 hours. I honestly don't know what I am doing, and there is about 2GB of data on the array that would be difficult to replace. If anyone could give me any advice on how to either 1) repair the array to a functional state without losing data, or 2) recover the data from the array using recovery software