
phil1c's Achievements

Newbie (1/14)
  1. OK. I didn't change anything else except updating from PuTTY 0.73 to 0.74, and now it works with no issue from my Win10 machine. I'm on Unraid 6.9.0-beta35.
  2. Yeah, this is where I was (and still am). It works with no issue from my MacBook, but PuTTY on my Win10 desktop refuses to work.
  3. Thanks, but I already tried that. No help.
  4. I'm having that exact issue all of a sudden (the one described by grantbey above) and have already tried:

cd /
chown root:root .
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

I've also verified that my public key (which I've used for over a year) is in the authorized_keys file. When the key login fails, I can still log in with a password. I'm also using the "built-in" functionality of unRAID (I forget which version introduced it) where a folder named "root" placed in /boot/config/ssh/ automatically gets symlinked from ~/.ssh/, so I don't use a script to copy an updated sshd_config. I'm on the nvidia-plugin version of 6.9.0-beta30. Any other ideas what I could try?
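For anyone comparing notes: sshd's StrictModes check silently rejects keys when the directory or file permissions are too loose. The same chmod steps from the post above can be sketched safely against a scratch directory (the path is a stand-in for the real /root/.ssh):

```shell
# Recreate the permission layout sshd expects, using a scratch
# directory instead of the real /root/.ssh (path is a stand-in).
ssh_dir="$(mktemp -d)/.ssh"
mkdir -p "$ssh_dir"
touch "$ssh_dir/authorized_keys"

chmod 700 "$ssh_dir"                  # directory: owner-only access
chmod 600 "$ssh_dir/authorized_keys"  # key file: owner read/write only

# sshd refuses keys if these are group- or world-accessible
dir_mode=$(stat -c '%a' "$ssh_dir")
key_mode=$(stat -c '%a' "$ssh_dir/authorized_keys")
echo "$dir_mode $key_mode"   # 700 600
```

Note this only covers permissions; ownership (root:root up the path) matters to StrictModes as well, which is what the `chown root:root .` step in the post addresses.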
  5. Does the shutdown order work in reverse of the startup order? As in, if I set dockers to start up in the order "1 -> 2 -> 3", does it shut them down in the order "3 -> 2 -> 1"?
  6. So, another person to throw their hat into this ring: I'm having the same issue, but I happened to see something possibly interesting (or maybe entirely irrelevant) before the issue started.

Unraid version: 6.8.3 (NVIDIA build)
Drive: LG WH16NS60, mounted internally via SATA; came with firmware 1.02.

BEFORE PATCHING: Upon first installing the drive in my Unraid machine, I installed and fired up the binhex-makemkv docker, and the drive was visible in MakeMKV with no changes to the template or extra parameters (privileged mode). Great! Time to flash a new firmware. I fired up a Win10 VM, passed the drive through, and flashed a patched version of the 1.02 firmware (1.02-MK) so I can back up Blu-rays. I tested it in the VM, ripped a basic Blu-ray, no issues.

AFTER PATCHING: OK, turn off the VM. Reboot the server, just because. Fire up the MakeMKV docker (privileged mode), and poof, no drive. I edited the docker to set PUID and PGID to 0 each, restarted the docker, and bam, the drive is visible.

One more strange note: in the Win10 VM, under LibreDrive Information, all of the statuses below "firmware version" show "Yes", but in the MakeMKV docker, "Unrestricted read speed" shows "possible, not yet enabled". There at least seems to be a difference in performance: the read speed seems limited to a steady 15 MB/s from the docker, but manages almost 30 MB/s from Win10.

Already have a correction: I re-ran the test in the Win10 VM to make sure I was ripping the exact same stream from the disc, and the speeds are nearly identical (16 MB/s) to what the docker was able to read (15.4 MB/s), so I guess I originally ripped a different stream further toward the edge of the disc.

Maybe there is something strange with the firmware causing the drive not to be recognized by the docker with the default 99/100 PUID/PGID? I have no idea how that would be, but that's the only thing that changed between the first boot-up and using the patched version.
  7. Hey! I was having the same problem and eventually gave up. If you get this sorted, let me know; I will gladly try to help track this down again, because it drives me nuts. For reference, below are a few links to my own troubleshooting from months ago, both from this post and from the UBNT forums. Do note that my setup has changed since some of those posts: I now have an EdgeSwitch 8 instead of the Asus router in AP mode referenced in some of them. That change had no effect. If it lends itself to some other connection: I also cannot browse to my domain website (Ombi) from within my network. The UBNT rep suggested a static host map, but that brings me to an "ERR_CONNECTION_REFUSED" page when attempting to go through my domain. Initial Post: Follow-up: UBNT and exchange with UBNT support
  8. I, too, am having issues getting the UNMS docker to work, though mine arise when I try to use an external domain and reverse proxy to reach UNMS. First, you can find some more information on my issue here (crossposting, as I'm not strictly sure where the issue is arising from): link

Short version: I have the LSIO Let's Encrypt docker running and working (serving other dockers externally); I tried it with the NGINX Proxy Manager one as well. I also have the oznu/unms docker installed and running. I can add my ER4 to UNMS if I use tower.local:6443 (internal) as the UNMS server name in the host key, but when I try to use the external address, UNMS connects and passes credentials to the ER4, yet the ER4 never connects.

Setup note: I have the UNMS docker set up with PUBLIC_HTTPS_PORT and PUBLIC_WS_PORT set to 6443, as that is the port assigned for SSL to the docker container. So, internally, I use tower.local:6443, and this is what NGINX routes to. However, if I remove the two PUBLIC assignments, none of the troubleshooting results below change. Also, both work to access the UNMS webUI externally, but it fails to connect, nor is it reachable by ping.

New troubleshooting:

1. In the unms.log on my ER4, this is repeated over and over:
2019-03-29 21:23:00 ERROR connection error ( HS: ws upgrade response not 101
- This is returned whether I use NGINX Proxy Manager with "Websockets Support" enabled or go through LE.
- When using tower.local, I receive the following and the ER4 connects to UNMS fine:
2019-03-30 17:08:23 INFO unms: connecting to tower.local:6443
2019-03-30 17:08:23 INFO connection established
2019-03-30 17:08:25 INFO got unmsSetup

2. I found a page on support for the official UBNT UNMS docker and tried the tests in the "Is it possible to ping UNMS from the device?" and "Does the connection upgrade to WebSocket?" sections. Results from my ER4, internal and external, are as follows:

Is it possible to ping UNMS from the device?
Internal:
- "ping tower.local": completes, no error
- "traceroute tower.local" and "traceroute": completes, one hop
- "curl --insecure": version 0.13.3 is returned, which is accurate
External:
- "ping": completes, no error
- "traceroute": completes, one hop
- "curl --insecure": I receive the HTML for a 404 error page. BUT, if I go to that URL in my browser, I receive a page that only contains '{"version":"0.13.3"}', the same as when I run the command on the local address. I have no idea why this is.

Does the connection upgrade to WebSocket?
Internal:
curl --insecure --include --no-buffer --header "Connection: Upgrade" --header "Upgrade: websocket" --header "Host:" --header "Origin:" --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" --header "Sec-WebSocket-Version: 13" https://tower.local:6443/
Instead of the header that is supposed to appear per the help page, I get the following 502 Bad Gateway header, followed by the HTML of the "UNMS is starting" landing page:
HTTP/1.1 502 Bad Gateway
Server: nginx
Date: Sat, 30 Mar 2019 17:37:03 GMT
Content-Type: text/html
Content-Length: 2325
...
Keep in mind, connecting locally WORKS with no issue.
External:
"curl ...." Now I receive the following header, followed by the HTML of the UNMS login page:
HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 1; mode=block
Content-Length: 8473
...

So, internal access gives me a bad gateway error when using the above curl command (yet the ER4 can connect), and external access at least returns the login page with no error but does not perform the websocket upgrade, so the ER4 fails to connect (which at least explains the "ws upgrade response not 101" error from #1). Sadly, the referenced UBNT support page offers no additional help on what to do next.

The cry for help: is there something I am missing? Is there something additional I need to set up in the UNMS docker? I'm continuing to search and learn, but I have been smashing against this wall for a couple of days now.
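A side note on interpreting those curl results: a successful WebSocket handshake is identified purely by the status line, which must read "101 Switching Protocols"; any other status (200, 404, 502, ...) means the proxy answered as plain HTTP and never upgraded the connection. A small sketch for checking a captured response header (the sample response below is invented):

```shell
# Sample response header like the ones captured above (contents invented);
# a real run would capture this from `curl --include ...`.
resp='HTTP/1.1 200 OK
X-Frame-Options: SAMEORIGIN'

# A WebSocket handshake succeeds only on "101 Switching Protocols"
status=$(printf '%s\n' "$resp" | head -n 1 | awk '{print $2}')
if [ "$status" = "101" ]; then
    result="upgraded"
else
    result="no upgrade (got $status)"
fi
echo "$result"
```

Applied to the outputs in the post: the external 200 means the proxy served the login page over plain HTTPS instead of upgrading, which is consistent with the ER4's "ws upgrade response not 101" log line.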
  9. So, I'm beating my head against my keyboard trying to get UNMS to work with the LetsEncrypt docker, and I can't seem to figure it out. I am posting here and in a few other places, as I'm not sure whether this is a failure in the setup of LE, my UNMS docker, or something else, so sorry if this doesn't fully belong. For the record: I think I'm only just moving out of noob territory when it comes to unRaid/Linux usage, so apologies if I fail to include something or miss something obvious.

Issue: I cannot get my ER4 to connect to UNMS, even though, if I go through the Discovery Manager, the UNMS key *will* be automatically populated into my ER4 system settings.

Setup (hopefully in a concise order):
FQDN for UNMS access = (UMD)
LetsEncrypt docker (LED) setup:
- port forward on router = WAN:443 --> tower:643 / WAN:80 --> tower:280
- LetsEncrypt ports = tower:643 --> LED:443 / tower:280 --> LED:80
- LED is on a custom Docker network "proxynet", along with the Ombi and UNMS dockers
- config for UMD:

# make sure that your dns has a cname set for unifi and that your
# unifi-controller container is not using a base url
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name
    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;
        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver valid=30s;
        set $upstream_unifi unms;
        proxy_pass https://$upstream_unifi:8443;
    }

    location /wss {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;
        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        include /config/nginx/proxy.conf;
        resolver valid=30s;
        set $upstream_unifi unms;
        proxy_pass https://$upstream_unifi:8443;
        proxy_buffering off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_ssl_verify off;
    }
}

UNMS docker (UD) setup (oznu/unms docker template added through Community Apps):
- UD port mapping: tower:643 --> UD:443 / tower:6080 --> UD:80
- LetsEncrypt within UD is currently disabled

Currently working:
1) The LetsEncrypt docker: I can access Ombi from outside the network, as can my Plex users.
2) The UNMS docker from within the local network: I can access the webUI, and I have set up UNMS.
3) LED --> UD: if I change the port in unms.subdomain.conf from :8443 to :443 (EDIT: wrong port #), I can access the webUI from the internet with no problem.

The failure: when I go into UD and use the Discovery Manager to add my ER4, the ER4 never completes the connection to UNMS (the connection times out). What is confusing to me is that, when I initiate the connection from the Discovery Manager, the credentials are clearly correct, along with some portion of the forwarding/connections/etc., because the UNMS key is then populated in my ER4. However, it always hangs on "device connecting". (EDIT, added from more testing:) Additionally, if I manually change the FQDN in the UNMS key to tower.lan:6443, the ER4 connects to UNMS fine.

Is there anything glaring in my configuration that anyone can see as to why connecting to the ER4 isn't working? Is there something about port forwarding 443 to LED that's messing with the UNMS connection? I'm just not versed enough in how the actual UNMS --> device connection occurs to troubleshoot it better.
  10. Woot! Thanks. Guess I need to read up on file permissions. Cheers
  11. Hello all, A week or so ago, my Windows 10 PC updated with the Fall Creators Update (build 1709). Like a few others, I had the issue where I couldn't access my unRaid server because the update uninstalled SMB1. The best fix I found to regain access to the server was adding credentials to the Windows Credential Manager. It was all going well until now... When Sickrage/SAB downloads episodes, I can't delete them using my PC: every file I try to delete says I need permissions from "TOWER/nobody". To get around this, I run Docker Safe Permissions, and then I can delete the files. Is there any way to avoid running Docker Safe Permissions every time I need to delete old files? Cheers
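Context for the workaround above (my understanding, not stated in the thread): the Unraid permissions tool re-owns share files to nobody:users and widens the mode bits so any user can remove them; deletion rights actually live on the *directory*, not the file. The underlying idea can be sketched against a scratch directory (which stands in for the real share; a real fix would also chown to nobody:users, skipped here because chown needs root):

```shell
# Loosen permissions on a completed-downloads folder so any user can
# delete its files (scratch dir stands in for the real share path).
dl="$(mktemp -d)/completed"
mkdir -p "$dl"
touch "$dl/episode.mkv"

chmod 777 "$dl"              # directory: anyone may add/remove entries
chmod 666 "$dl/episode.mkv"  # file: anyone may read/write

echo "$(stat -c '%a' "$dl") $(stat -c '%a' "$dl/episode.mkv")"   # 777 666
```

A longer-term alternative is configuring the downloader itself (SAB has umask-style permission settings) so new files land group-writable in the first place, avoiding the after-the-fact fix.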
  12. Hey! Trying to run the LS.IO docker for PlexRequests. It installs no problem, and I'm able to connect CP to it with no issue, but for some reason SickRage keeps showing an error whenever I click "Test Server" (both exist on the same unRaid server). The logs don't show anything of use. I've tried:
1) Verifying the IP address and server port
2) Changing the SR server port
3) Changing the SR API key
4) Restarting SR
5) Restarting PlexRequests
6) Connecting to SR without and with SSL
7) Adding a "SickRage Sub-Directory" of "/sickrage", "/", "/sickrage/", "/home"
and nothing has helped. Any clue as to what is going on? What additional information would you like to see? Cheers, Philip
  13. OK, I got it. I changed the preg_match pattern to "/Temperature.+\-\s+(\d\d).?/im", which just looks for the first pair of digits after a dash and some whitespace, and leaves the option for additional characters after; that relies entirely on WHEN_FAILED always being a dash, so perhaps it's not the most robust method. To test, I ran it against bottom outputs similar to yours and received the correct temperatures. Now I have it running in the hddtemp script, and all of my drives are present and reporting. Let me know if that change works for you as well.
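To illustrate what that pattern keys on, here is a rough shell equivalent run against an invented smartctl-style line (the real script uses PHP's preg_match; the attribute values below are made up):

```shell
# Invented `smartctl -A` style line (values are made up), showing what the
# pattern matches: the first two digits after the WHEN_FAILED dash.
smart_line='194 Temperature_Celsius 0x0022   112   091   000    Old_age   Always       -       35'

# Greedy .* walks to the last dash, then the capture grabs the two digits
temp=$(printf '%s\n' "$smart_line" | sed -E 's/.*-[[:space:]]+([0-9]{2}).*/\1/')
echo "$temp"   # 35
```

This is also why the old "$"-anchored pattern missed drives whose raw value has trailing text such as "35 (Min/Max 25/41)": anchoring on the dash instead of the end of line tolerates those suffixes.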
  14. Alright, I have the hddtemp script MOSTLY working (thanks for showing me those), which is why I'm back. The hddtemp script is not returning SOME of my hdd temps; I updated $tagsArray in the script to include all 7 drives, but 3 are not returning values. Then I ran smartctl against all of my drives, including the ones that are missing, and they show 194 Temperature_Celsius, and some show 190 Airflow_Temperature_Cel. Do all of your drives show up appropriately? Did you make any other edits to your script besides what you posted in the thread you listed? My script is as follows:

#!/usr/bin/php
<?php
$tagsArray = array(
    "/dev/sdb",
    "/dev/sdc",
    "/dev/sdd",
    "/dev/sde",
    "/dev/sdf",
    "/dev/sdg",
    "/dev/sdh",
    "/dev/sdi",
);

// do system call and parse output for tag and value
foreach ($tagsArray as $tag) {
    $call = "smartctl -A ".$tag;
    $output = shell_exec($call);
    preg_match("/Temperature.+(\d\d)$/im", $output, $match);
    // send measurement, tag and value to influx
    sendDB($match[1], $tag);
}
// end system call

// send to influxdb - you will need to change the parameters
// (influxserverIP, Tower, us-west) in the $curl to your setup; optionally
// change telegraf to another database, but you must create the database in
// influxdb first. telegraf will already exist if you have set up the
// telegraf agent docker.
function sendDB($val, $tagname)
{
    $curl = "curl -i -XPOST '' --data-binary 'HDTemp,host=Tower,region=us-east ".$tagname."=".$val."'";
    $execsr = exec($curl);
}
?>
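For reference on what sendDB() actually posts: InfluxDB's line protocol is just "measurement,tag=... field=value". A sketch of the payload as the script builds it (tag and value invented; the real script fills them from smartctl):

```shell
# Build the same line-protocol payload sendDB() posts to InfluxDB
# (values invented; the real script fills them from smartctl output).
tag="/dev/sdb"
val=35
payload="HDTemp,host=Tower,region=us-east ${tag}=${val}"
echo "$payload"   # HDTemp,host=Tower,region=us-east /dev/sdb=35
```

Here host and region are InfluxDB tags, while the device path is used as the field key, so each drive becomes its own field on the HDTemp measurement.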