phil1c


Posts posted by phil1c

  1. I would like to use the name of the docker container in the "WebUI" field instead of [IP] or [PORT:xxx] so that I could write a webui value of "https://[NAME].local".  I have tried [name], [NAME], and [Name], and none have worked.  Is something like the container name an available option for variables in the WebUI field?  And are there other variables available?

  2. Just installed Unmanic for the first time and I'm trying to get everything set up.  For reasons I don't know, there are no plugins available, and "Refresh Repositories" displays a message that it completed the refresh, but still nothing.  I've tried wiping appdata and deleting and reinstalling the docker container, but no change.  Any advice on how to get the plugins to show up?

  3. On 11/16/2020 at 1:07 AM, Can0nfan said:

    I have noticed since installing 6.9 Beta 35 I am getting server refused our key on both of my servers it was working on Beta 29

    I compared the authorized_keys file on both machines they match the keys file i have on my linux, Mac's and Putty setups i have in my home and no changes to the go file with the chmods were done

    anyone have an idea what might be causing this?

    Yeah, this is where I was/am at.  It works with no issue from my MacBook, but PuTTY on my Win10 desktop refuses to work.

  4. I'm having that exact issue all of a sudden (the one described by grantbey above) and already tried:

    cd /
    chown root:root .

    and

    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys

    and I've verified that my public key (that I've used for over a year) is in the authorized_keys file.  When the key log in fails, I can log in with password.

     

    I'm also using the "built-in" functionality of unRAID (I forget which version it was introduced in) where a folder titled "root" placed in /boot/config/ssh/ automatically gets symlinked from ~/.ssh/, so I don't use a script to copy an updated sshd_config.
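
    For anyone else poking at this, a quick sanity check of that symlink setup might look like the following sketch (paths per the stock layout described above):

    # verify the symlink and the permissions sshd's StrictModes check cares about
    ls -ld ~/.ssh                    # should point at /boot/config/ssh/root
    readlink -f ~/.ssh               # confirms the actual target
    ls -l ~/.ssh/authorized_keys     # should be 600, owned by root
    sshd -T | grep -i strictmodes    # shows whether sshd enforces those perms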

     

    I'm on the nvidia-plugin version of 6.9.0beta30.

     

    Any other ideas what I could try?

  5. So, another person to throw their hat into this ring, as I am having the same issue, but I happened to see something possibly interesting (or maybe entirely irrelevant) before the issue started.

     

    Unraid Version 6.8.3 (NVIDIA build)

    Drive: LG WH16NS60, mounted internally via SATA; it came with firmware 1.02.

     

    BEFORE PATCHING:

    Upon first installation of the drive into my Unraid machine, I installed and fired up the binhex-makemkv docker, and the drive was visible in MakeMKV with no changes to the template or extra parameters (privileged mode).

     

    Great! Time to flash a new firmware.

    I fired up a Win10 VM, passed the drive through, and flashed it with a patched version of the 1.02 firmware (1.02-MK) so I can back up Blu-rays.  I tested it in the VM, ripped a basic Blu-ray, no issues.

     

    AFTER PATCHING:

    Ok, turn off the VM.  Reboot the server, just because.  Fire up the MakeMKV docker (privileged mode), and poof, no drive.  I edited the docker to set PUID and PGID to 0 each, restarted the docker, and bam, the drive is visible.

     

    One more strange note: in the Win10 VM, under LibreDrive Information, all of the statuses below "firmware version" show "Yes", but in the MakeMKV docker, "Unrestricted read speed" shows "possible, not yet enabled".  There at least seemed to be a difference in performance, with the read speed limited to a steady 15MB/s from the docker but almost 30MB/s from Win10.  Already have a correction: I re-ran the test in the Win10 VM to make sure I was ripping the exact same stream from the disc, and the speeds are nearly identical (16MB/s vs. the 15.4MB/s the docker was able to read), so I guess I originally ripped a different stream further toward the edge of the disc.

     

    Maybe there is something strange with the firmware causing the drive not to be recognized by the docker with the default 99/100 PUID/PGID?  I have no idea how that would be, but that's the only thing that changed between the first boot-up and then using the patched version.
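
    For reference, the workaround boils down to something like the sketch below (an illustrative docker invocation; the real template has more mappings and the image tag may differ):

    # run MakeMKV as root (PUID/PGID 0) instead of the default 99/100;
    # this is what made the drive visible again after the firmware flash
    docker run -d --name=binhex-makemkv \
      --privileged \
      -e PUID=0 -e PGID=0 \
      -v /mnt/user/appdata/binhex-makemkv:/config \
      binhex/arch-makemkv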

  6. On 10/8/2019 at 3:06 PM, dandiodati said:

    Anyone else have luck setting up letsencrypt and unms ? I have both services running in docker containers. If I send a websocket request (curl --insecure --include --no-buffer --header "Connection: Upgrade" --header "Upgrade: websocket" --header "Host: example.com:80" --header "Origin: http://example.com:80" --header "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" --header "Sec-WebSocket-Version: 13" https://192.168.x.x:443/)  then the nginx service within letsencrypt container just redirects me to its default home page instead of the unms container. If I use a regular https request then I do get redirected to the unms container(The login page). So something is weird with trying to connect as a websocket container which is needed for discovery. I tried the setting above too but no luck.  

    Hey!  I was having the same problem and eventually gave up.  If you get this sorted, let me know.  I will gladly try and help track this down again, because it drives me nuts.  For reference, below are a few links to my own troubleshooting from months ago, both from this post and from the UBNT forums.  Do note that my setup has changed since some of those posts: I now have an EdgeSwitch 8 instead of the Asus router in AP mode referenced in some of them.  That change had no effect.

     

    If it lends itself to some other connection, I also cannot browse to my domain website (Ombi) from within my network.  The UBNT rep suggested a static host map, but that brings me to an "ERR_CONNECTION_REFUSED" page when attempting to go through my domain.

     

    Initial Post: 

    Follow-up: 

     

    UBNT and exchange with UBNT support

    https://community.ui.com/questions/UNMS-WSS-key-using-WAN-IP-device-connection-times-out/7ea01845-1b3d-41a9-9555-172e8ecbd4b0

  7. I, too, am having issues getting the UNMS docker to work, though mine occur when I try to use an external domain and reverse proxy to reach UNMS:

     

    First, you can find some more information on my issue here (trying to crosspost as I'm not strictly sure of where the issue is arising from): link

     

    Short version is: I have the LSIO Let's Encrypt docker (I tried the NGINX Proxy Manager one as well) running and working (serving other dockers externally).  I also have the oznu/unms docker installed and running.  I can add my ER4 to UNMS if I use tower.local:6443 (internal) as the UNMS server name in the host key, but when I try to use unms.my.domain (external), UNMS connects and passes credentials to the ER4, yet the ER4 never connects.

     

    Setup Note

    I have the UNMS docker set up with PUBLIC_HTTPS_PORT and PUBLIC_WS_PORT set to 6443, as that is the port assigned for SSL on the docker container.  So, internally, I use tower.local:6443, and this is what NGINX routes to for unms.my.domain.  However, if I remove the two PUBLIC assignments, none of the below troubleshooting results change.

     

    Also, unms.my.domain and unms.my.domain:443 both work to access the UNMS webUI externally, but unms.my.domain:6443 fails to connect and is not reachable by ping.

     

    New Troubleshooting:

    1. In the unms.log on my er4, this is repeated over and over:

    2019-03-29 21:23:00 ERROR connection error (unms.my.domain:443): HS: ws upgrade response not 101

      - This is returned whether I use NGINX Proxy Manager with "Websockets Support" enabled or if done through LE.

      - When using tower.local, I receive the following and the er4 connects to UNMS fine:

    2019-03-30 17:08:23 INFO  unms: connecting to tower.local:6443
    2019-03-30 17:08:23 INFO  connection established
    2019-03-30 17:08:25 INFO  got unmsSetup

    2. I found this page on support for the official UBNT UNMS docker and tried out the tests in the "Is it possible to ping UNMS from the device?" and "Does the connection upgrade to WebSocket?" sections.  Results from my ER4, internal and external, are as follows:

    Is it possible to ping UNMS from the device?

    • Internal:
      • "ping tower.local": completes, no error
      • "traceroute tower.local" and "traceroute 192.168.1.30": completes, one hop
      • "curl --insecure https://192.168.1.30:6443/v2.1/nms/version": version 0.13.3 is returned, which is accurate
    • External
      • "ping unms.my.domain": completes, no error
      • "traceroute unms.my.domain": completes, one hop
      • "curl --insecure https://unms.my.domain:443/v2.1/nms/version": I receive the html for a 404 error page.  BUT, if I go to that URL in my browser, I receive a page that only contains '{"version":"0.13.3"}', same as when i run the command command on the local address.
        • I have no idea why this is.

    Does the connection upgrade to WebSocket?

    • Internal
      • "curl --insecure --include --no-buffer --header "Connection: Upgrade" --header "Upgrade: websocket" --header "Host: example.com:80" --header "Origin: http://example.com:80" --header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" --header "Sec-WebSocket-Version: 13" https://tower.local:6443/"
        • Instead of the header that is supposed to happen as per the help page, I get the following 502 Bad Gateway header followed by the HTML of the "UNMS is starting" landing page:
    HTTP/1.1 502 Bad Gateway
    Server: nginx
    Date: Sat, 30 Mar 2019 17:37:03 GMT
    Content-Type: text/html
    Content-Length: 2325
    ...

        Keep in mind, connecting locally WORKS with no issue.

    • External
      • "curl .... https://unms.my.domain:443"
        • Now, I receive the following header, followed by the HTML of the UNMS login page.
    HTTP/1.1 200 OK
    X-Frame-Options: SAMEORIGIN
    X-Xss-Protection: 1; mode=block
    Content-Length: 8473
    ...

    So: internal access gives me a bad gateway error when using the above curl command, yet the ER4 can connect; external access at least serves the login page with no error, but the websocket upgrade never happens and the ER4 fails to connect (which at least explains the "ws upgrade response not 101" error from #1).  Sadly, the referenced UBNT support page provides no additional guidance on what to do next.
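
    (For what it's worth, my understanding of the standard nginx recipe for getting that 101 upgrade is roughly the sketch below; the server and upstream names are placeholders:)

    location / {
        proxy_pass https://unms:443;
        proxy_http_version 1.1;                   # websockets require HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;   # forward the client's Upgrade header
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_buffering off;
        proxy_ssl_verify off;                     # skip cert verification on the backend
    }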

     

    The Cry for Help

    Is there something that I am missing?  Is there something additional that I need to set up in the UNMS docker?  I'm continuing to search and learn but I have been smashing against this wall for a couple days now.

  8. So, I'm beating my head against my keyboard trying to get UNMS to work with the LetsEncrypt docker, and I can't seem to figure it out.  I am posting here and in a few other places, as I'm not sure whether this is a failure of my LE setup, my UNMS docker, or something else, so sorry if this doesn't fully belong.  For the record: I think I'm only just moving out of noob territory when it comes to unRaid/Linux usage, so apologies if I fail to include something or miss something obvious.

     

    Issue:

    I cannot get my ER4 to connect to UNMS, even though, if I go through the Discovery Manager, the UNMS key *will* be automatically populated into my ER4 system settings.

     

    Setup (hopefully in a concise order):

    FQDN for unms access = unms.my.domain (UMD)

    LetsEncrypt Docker (LED) setup =

    - port forward on router = WAN:443 --> tower:643 / WAN:80 --> tower:280

    - LetsEncrypt ports = tower:643 --> LED:443 / tower:280 --> LED:80

    - LED is on a custom Docker network "proxynet", along with Ombi and UNMS docker

    - config for UMD =

    # make sure that your dns has a cname set for unifi and that your unifi-controller container is not using a base url
    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name unms.my.domain;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
    
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /login;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_unifi unms;
            proxy_pass https://$upstream_unifi:8443;
        }
    
        location /wss {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /login;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_unifi unms;
            proxy_pass https://$upstream_unifi:8443;
            proxy_buffering off;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_ssl_verify off;
        }
    
    }

    UNMS docker (UD) setup (oznu/unms docker template added through Community Apps):

    - UD port mapping: tower:643 --> UD:443 / tower:6080 --> UD:80

    - LetsEncrypt within UD is currently disabled

     

    Currently Working:

    1) Lets Encrypt docker is currently working:

      - I can access ombi from outside the network, as can my Plex users.

     

    2) UNMS docker is working from within the local network:

      - I can access the WebUI, I have set up UNMS.

     

    3) LED --> UD:

      - If I change the port in unms.subdomain.conf from :8443 to :443 (EDIT: wrong port #), I can access the WebUI from the internet at unms.my.domain with no problem.

     

    The Failure:

    When I go into UD and use the Discovery Manager to add my ER4, the ER4 never completes the connection to UNMS (the connection times out).

    What is confusing to me is that, when I initiate the connection from the Discovery Manager, the credentials are clearly correct, along with at least some portion of the forwarding/connections/etc., because the UNMS key does get populated in my ER4.  However, it always hangs on "device connecting".

     

    (EDIT: added this line from more testing) Additionally, if I manually change the FQDN in the UNMS key to tower.lan:6443, the ER4 connects to UNMS fine.

     

    Is there anything glaring in my configuration that anyone can see as to why connecting to the ER4 isn't working?  Is there something about port-forwarding 443 to LED that's messing with the UNMS connection?  I'm just not versed enough in how the actual UNMS --> device connection occurs to troubleshoot it better.
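
    (For reference, the canonical websocket snippet in the nginx docs uses a map so the Connection header degrades cleanly when no upgrade is requested; my /wss block above hardcodes "Upgrade" instead.  A sketch, with placeholder names:)

    # the map lives at the http level, e.g. in nginx.conf
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    location /wss {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_pass https://unms:443;
    }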

  9. Hello All,

     

    A week or so ago, my Windows 10 PC updated with the Fall Creators Update (build 1709).  Like a few others, I had the issue where I couldn't access my unRaid server due to SMB1 being uninstalled by the update.  The best fix I found to allow me access to the server was adding credentials to the Windows Credential Manager.  It was all going well until now...

     

    When I have Sickrage/SAB download episodes, I can't delete the episodes using my PC.  Every file I try to delete says I need permission from "TOWER/nobody".  To get around this, I run Docker Safe Permissions, and then I can delete the files.  Is there any way to avoid running Docker Safe Permissions every time I need to delete old files?
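
    (One workaround I'm considering, sketched below: a SABnzbd post-processing script that relaxes permissions on each completed job.  As far as I know, SAB passes the job's final directory as the script's first argument; treat the whole thing as hypothetical.)

    #!/bin/bash
    # open up permissions on the completed download so Windows clients can
    # delete it over SMB; a+rwX adds execute only where needed (directories)
    chmod -R a+rwX "$1"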

     

    Cheers

  10. Hey!

     

    Trying to run the LS.IO docker for PlexRequests.  Installs no problem, I'm able to connect CP to it with no issue, but for some reason SickRage keeps showing an error whenever I click "Test Server" (both exist on the same unRaid server).  The logs don't show anything of use.  I've tried:

    1) Verifying IP address and server port

    2) Changing SR server port

    3) Changing SR API key

    4) Restarting SR

    5) Restarting PlexRequests

    6) Connecting to SR without and with SSL

    7) Adding "SickRage Sub-Directory" of "/sickrage", "/", "/sickrage/", "/home"

     

    and nothing has helped.  Any clue as to what is going on?  What additional information would you like to see?
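
    (One more check I haven't tried yet: hitting the SickRage API directly, which should rule SR itself in or out.  IP, port, and key below are placeholders:)

    # expects a small JSON reply with "result": "success" if the API is reachable
    curl "http://192.168.1.30:8081/api/YOUR_SR_API_KEY/?cmd=sb.ping"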

     

    Cheers,

    Philip

  11. The only change I could see is the way I have disk=$tagname and value=$val in the last function (it makes it so you only need one query in Grafana to get all the data).  This is mine:

    function sendDB($val, $tagname) {
        $curl = "curl -i -XPOST 'http://tygra.tribe.home:8086/write?db=telegraf' --data-binary 'disktemp,host=unraid,region=us-west,disk=".$tagname." "
            ."value=".$val."'";
        $execsr = exec($curl);
    }
    ?>

     

    However, I ran the code that it runs on my box, and the temperature output is different for different hard drives.  The script is actually only working for 3 of my drives.  Those all have smartctl -A /dev/sdX output that looks like:

    194 Temperature_Celsius     0x0022   118   093   000    Old_age   Always       -       25

     

    The drives that aren't displaying temps in Grafana have output like this:

    194 Temperature_Celsius     0x0002   250   250   000    Old_age   Always       -       24 (Min/Max 10/43)

    or

    194 Temperature_Celsius     0x0022   026   040   000    Old_age   Always       -       26 (0 15 0 0 0)

     

    What I'm pretty sure is happening is that the line

    preg_match("/Temperature.+(\d\d)$/im", $output, $match);

    doesn't match the two bottom patterns.

     

    It's late and I'm going to bed, but either you can figure out the proper match or I'll try tomorrow.

    Thanks for helping me find a problem with my plots.

     

    Ok, I got it.  I changed the preg_match pattern to

    "/Temperature.+\-\s+(\d\d).?/im"

    which just looks for the first pair of digits after a dash and some whitespace, and leaves the option for additional characters after.  That relies entirely on WHEN_FAILED always being a dash, so perhaps it's not the most robust method.  To test, I ran it against outputs similar to your bottom two examples and received the correct temperatures.  Now I have it running in the hddtemp script, and all of my drives are present and reporting.  Let me know if that change works for you as well.
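
    (If the dash assumption ever bites, a possibly more robust alternative would be to anchor on the attribute ID itself and skip the seven fixed columns before RAW_VALUE.  A sketch, only checked against the sample lines above:)

    // use 19[04] instead of 194 if a drive only reports 190 Airflow_Temperature_Cel
    preg_match('/^\s*194\s+Temperature_Celsius(?:\s+\S+){7}\s+(\d+)/m', $output, $match);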

  12.  

    For HDD Temps I use the scripts in this forum post: https://lime-technology.com/forum/index.php?topic=52220

    As for your error I'm not sure. Did you modify it?

     

    Alright, I have the hddtemp script MOSTLY working (thanks for showing me those), which is why I'm back.  The hddtemp script is not returning SOME of my hdd temps; I updated the $tagsArray in the script to include all 7 drives, but 3 are not returning values.  Then I ran smartctl on all of my drives, including the ones that are missing, and they show 194 Temperature_Celsius, and some show 190 Airflow_Temperature_Cel.  Do all of your drives show up appropriately?  Did you make any other edits to your script besides what you posted in the thread you listed?
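
    (A loop like the sketch below is a quick way to eyeball what each drive reports; the device list is illustrative:)

    # print just the temperature attribute lines (194 and 190) for each drive
    for d in /dev/sd{b..i}; do
        echo "== $d =="
        smartctl -A "$d" | grep -E '^ *19[04] '
    done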

     

    My hddtemp.sh script is as follows:

    #!/usr/bin/php
    <?php

    $tagsArray = array(
        "/dev/sdb",
        "/dev/sdc",
        "/dev/sdd",
        "/dev/sde",
        "/dev/sdf",
        "/dev/sdg",
        "/dev/sdh",
        "/dev/sdi",
    );

    // do system call and parse output for tag and value
    foreach ($tagsArray as $tag) {
        $call = "smartctl -A ".$tag;
        $output = shell_exec($call);
        preg_match("/Temperature.+(\d\d)$/im", $output, $match);

        // send measurement, tag and value to influx
        sendDB($match[1], $tag);
    }
    // end system call

    // send to influxdb - you will need to change the parameters (influxserverIP, Tower, us-west) in the $curl to your setup; optionally change telegraf to another database, but you must create the database in influxdb first. telegraf will already exist if you have set up the telegraf agent docker.
    function sendDB($val, $tagname) {
        $curl = "curl -i -XPOST 'http://192.168.1.253:8086/write?db=telegraf2' --data-binary 'HDTemp,host=Tower,region=us-east "
            .$tagname."=".$val."'";
        $execsr = exec($curl);
    }

    ?>

  13. Sweet, thanks for finding that.  But that leads me to ask: what happens in the next release when I need to make updates?  While I've had unRAID for a while now, I'm a novice on a great day when it comes to any form of CLI commands/Linux/querying.

     

    You can use something like Grafana to be the front end. Or you can run a query against the web api. I tried this query in my browser and it worked:

    http://tygra.tribe.home:8086/query?pretty=true&db=telegraf&q=SELECT%20*%20FROM%20cpu%20WHERE%20time%20%3E%20now()%20-%201h

     

    Where tygra.tribe.home points to the ip of my unRAID box.

     

    Got it!  So, a lot has happened in the last few hours.  I have influxdb, grafana, and untelegraf running successfully, even with some pretty graphs (all thanks to you and your awesome dockers).  Perhaps you have another set of answers in you: I want to monitor hdd temps, but untelegraf doesn't have a way for me to activate the hddtemp plugin (according to the Docker Hub page).  So, I tried to set up the telegraf docker.  I created the conf file, pointed the host path to it, and mimicked my other keys/paths from the untelegraf docker; the docker install command completes successfully, BUT the telegraf docker never shows as "started".  When I check the log, I see the following:

     

    2017/01/11 03:56:09 Using config file: /etc/telegraf/telegraf.conf
    2017/01/11 03:56:09 Could not parse [agent] config
    Error parsing /etc/telegraf/telegraf.conf, line 70: field corresponding to `logfile' is not defined in `*config.AgentConfig'
    

     

    Any idea how I can fix that?  Or how I can get untelegraf (simply because I already have it working) to record hddtemps?  Or, if it already is running, where is it hiding for graphing in Grafana?  I checked every value I could see in there but nothing related to HD temps was available.
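
    (My best guess from the error text: this telegraf build predates the `logfile' option, so the [agent] section of telegraf.conf needs that line commented out before it will parse.  A sketch of the relevant section:)

    [agent]
      interval = "10s"
      ## not recognized by older telegraf builds; comment out to avoid the
      ## "field corresponding to `logfile' is not defined" parse error
      # logfile = "/var/log/telegraf/telegraf.log"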

     

    Cheers

  14. I am still having an issue with the WebUI for InfluxDB, any idea as to why this will not load?

     

    Dangit.  Ok, well, at least it's not just me.  I just learned about influxdb/telegraf/grafana and was all excited, and I'm sitting here smashing my head against my desk over why I can't even get InfluxDB to load.  Any ideas from anyone still on here?

     

    I finally figured this out. I started looking (albeit lazily) when wreave posted and wanted to find an answer before posting. Well, I found the answer. The influxdb admin interface has been deprecated.

     

    To enable the admin interface you can add an environment variable, as shown in the attached image.  However, as the changelog says, the admin interface will be completely removed in the next release.
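
    (Since the attached image isn't preserved here: for the official influxdb 1.x image the variable is, as far as I know, INFLUXDB_ADMIN_ENABLED.  A minimal sketch:)

    # re-enable the deprecated admin UI on port 8083 (influxdb 1.x only)
    docker run -d --name influxdb \
      -p 8086:8086 -p 8083:8083 \
      -e INFLUXDB_ADMIN_ENABLED=true \
      influxdb:1.2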

     

    Sweet, thanks for finding that.  But that leads me to ask: what happens in the next release when I need to make updates?  While I've had unRAID for a while now, I'm a novice on a great day when it comes to any form of CLI commands/Linux/querying.

  15. I am still having an issue with the WebUI for InfluxDB, any idea as to why this will not load?

     

    Dangit.  Ok, well, at least it's not just me.  I just learned about influxdb/telegraf/grafana and was all excited, and I'm sitting here smashing my head against my desk over why I can't even get InfluxDB to load.  Any ideas from anyone still on here?

  16. I ran the command and it looks like the owner is either root or nobody.

     

    phil1c/airbillion,

     

    Yes the files are copied first, then deleted when the diskmv command has completed.

     

    The rsync errors are most likely due to mismatch in ownership.

     

    unBALANCE runs as user nobody so if it can't set the time on the destination, it probably won't be able to delete the files on the source (that's what I'm thinking right now).

     

    Can you check who owns the files on the source ?

     

    Something like $ ls -al /mnt/disk5/Movies/

  17. I'm running unBALANCE for the first time to move free space around on my drives and even things out a bit.  It's still running (about 16% done at this point) but, as I watch the array under the unRAID WebGUI Main tab, I see that the destination drives are increasing in space utilized while the "from" disk has not had any drop in data utilized.  I am receiving constant rsync errors while it runs, as seen below; *Movie Name* is a placeholder, and every single file being moved gives this error.  Are the lack of deletion of the original files and the rsync errors related?  Will the original files be deleted AFTER the entire move is completed?  Why am I getting these rsync errors?  I ask before it finishes because, if this plugin is just giving me duplicate files that I don't know how to delete, I want to end it before it gets further.

     

    EDIT: I am using unBALANCE v1.3.4-482.221ac38 on unRAID 6.1.8 Plus.  All of my dockers are offline so as not to interfere in anyway with this move.

     

    MOVE: rsync: failed to set times on "/mnt/disk5/.": Operation not permitted (1)
    MOVE: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
    MOVE: rsync: failed to set times on "/mnt/disk5/.": Operation not permitted (1)
    MOVE: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
    MOVE: rsync: failed to set times on "/mnt/disk5/.": Operation not permitted (1)
    MOVE: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
    MOVE: rsync: failed to set times on "/mnt/disk5/.": Operation not permitted (1)
    MOVE: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
    MOVE: rsync: failed to set times on "/mnt/disk5/.": Operation not permitted (1)
    MOVE: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1165) [sender=3.1.0]
    MOVE: /usr/local/emhttp/plugins/unbalance/diskmv -f "Movies/*Movie name* /mnt/disk1 /mnt/disk5
    MOVE: Moving /mnt/disk1/Movies/21 Jump Street (2012) into /mnt/disk5/Movies/*Movie name*
    MOVE: ./Movies/*Movie name*/*Movie name*-fanart.jpg
    MOVE: ./Movies/*Movie name*/*Movie name*-poster.jpg
    MOVE: ./Movies/*Movie name*/*Movie name*.mkv
    MOVE: ./Movies/*Movie name*/*Movie name*.nfo
    MOVE: ./Movies/*Movie name*/Thumbs.db
    MOVE: diskmv finished
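
    (In case it helps: since unBALANCE runs as nobody, a hedged workaround would be to reset ownership on the affected shares before moving, which is essentially what Unraid's New Permissions tool does.  A sketch, with illustrative paths:)

    # give everything back to nobody:users so a mover running as nobody can
    # set times and delete sources
    chown -R nobody:users /mnt/disk1/Movies /mnt/disk5/Movies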
    

  18. Hello All,

     

    I am new to the wonderful world of unRaid (6.1.7 Plus) here (everything was up and running starting a week ago) and I'm experiencing inconsistent fan speeds with my new server.  My motherboard is a SuperMicro X9SRA (with the latest BIOS), which has five (5) PWM fan headers, none of which are labeled "CPU", only "Fan" and a number.  I have the CPU fan plugged into Fan2 (fans are plugged into ports based on the distance each cable will reach) and my four (4) case fans in the other headers, all of which are PWM fans of varied 120mm and 140mm sizes.

     

    Now, to the issue: I noticed that all of my fans are constantly cycling between minimum and maximum speed.  They do so in roughly 6-second cycles (2 seconds at full speed, 4 seconds at minimum).  While they do this, my server is under no load (no Plex streams, no streaming over my local network); my CPU temp is 36C, and my hard drives (5 in total) range from 31C to 38C.  Right now there are a few Plex streams encoding, and this cycling is continuing.

     

    As for attempted fixes, I have installed Dynamix Auto Fan Control, and while it detected some of my fan inputs, applying it did nothing to change the behavior.  I also tried the unraid-fan-speed shell script that's floating around here; it did run, picked a temp to base the PWM speed on, and set a PWM value, but, again, no change in speeds.

     

    What I'm looking for is some way to stop my fans from constantly spinning at full speed then back down, just because of the noise (this server is near my home theater).  I would love for the fans to run based on HDD and CPU temp with the CPU fan controlled independently.

     

    Additional notes: as I'm writing this, a couple more plex streams have been opened (transcoding), my CPU temps are up at 45C now, two of my HDDs are up at 41C now, and my fans are now running at constant speeds (~65% full speed).

     

    So, in short: at idle, my fans are all over the place; under load, the speeds are mostly constant.  From what I've witnessed, my temps have never neared a critical level.  I'm just looking for some quiet consistency.
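
    (One more test I can think of, sketched below: pin a header to a fixed duty cycle through sysfs to see whether the board honors manual PWM at all.  The hwmon index and pwm number vary by board, so these paths are illustrative.)

    # 1 = manual PWM control; 0-255 = duty cycle (~65% here)
    echo 1   > /sys/class/hwmon/hwmon0/pwm2_enable
    echo 165 > /sys/class/hwmon/hwmon0/pwm2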

     

    Any help you have would be great.

     

    Cheers,

    Philip