Posts posted by axipher

  1. Thanks for the note on needing to set up 2-factor to be able to access App Passwords.

     

    Google finally disabled "less secure apps" on my account, so I had to add 2-factor before I could add an app password and get email notifications working again.  I also took that opportunity to set up the Discord notifications, since those are pretty easy to set up now.

  2. 3 hours ago, unraid-user said:

     

    oh wow.  That's been a massive help. Thank you so much. I'll have a play this afternoon.  I thought I was going mad not being able to find the option.  :)

     

    The change to V2 sent me through a few loops as well; since this is on my hobby side, I was more or less blind-sided by it and was playing catch-up, which made it feel harder than it was.

     

    Good luck on your adventure.

  3. 3 hours ago, unraid-user said:

    ok embarrassing question. But HOW do I create a database with the Influx CLI.  I'm dropping into the console for the InfluxDB container. using influx just gives me lots of options but nothing obvious to connect to then be able to create a database.   I have created an initial user (I'm assuming this will be my admin user), org and bucket(s) plus access token for the bucket I have created. I've created a bucket for home assistant as I want all my sensor data to go there (which I have confirmed working).   But when I go to add a data store in grafana to pull data from influx it asks for a database name too.

     

    This is the howto I have been following. It specifies a database called home_assistant.  I can't seem to be able to create one.  I MUST be doing something really dumb somewhere along the line.

    https://www.paolotagliaferri.com/home-assistant-data-persistence-and-visualization-with-grafana-and-influxdb/

     

    I had some similar issues when my InfluxDB docker updated to v2: my home automation and sensors were all written against 1.8 endpoints, with Grafana dashboards already built on top of them.

     

     

    I managed to get some of my data working after using this page: https://docs.influxdata.com/influxdb/v2.1/query-data/influxql/


    But in the end, I found it much simpler to just run two InfluxDB instances instead: one back on 1.8 for all my home automation, Telegraf, other dockers and basic data, and another on the newer 2.1 for the external web-enabled things I handle myself, where I can update the write endpoints and the matching Grafana dashboards.  It does mean a little extra tweaking in Grafana depending on which version you are accessing, and remembering the differences in the queries, but it was much easier to run two InfluxDB versions than to convert all my older data, which isn't publicly accessible anyway.
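    If you'd rather stay on 2.x, the piece Grafana's InfluxQL datasource is asking about is a DBRP mapping: it tells a 2.x instance which bucket answers to the old database name.  A sketch with the 2.x CLI, where the bucket ID, org and token are placeholders you'd fill in from your own setup:

```shell
# Map the legacy database name "home_assistant" onto an existing 2.x bucket.
# <bucket-id>, <org> and <token> are placeholders; find the bucket ID with:
#   influx bucket list
influx v1 dbrp create \
  --db home_assistant \
  --rp autogen \
  --bucket-id <bucket-id> \
  --default \
  --org <org> \
  --token <token>
```

    After that, Grafana should accept "home_assistant" as the database name when pointed at the 2.x instance's InfluxQL-compatible endpoint.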

     

    • Like 1
  4. 4 minutes ago, screwbox said:

    I startet yesterday recieving my very first Unifi AP AC Lite and spun up this Docker pretty quickly. Adopted the AP and everything is running fine.

    While exploring the functionalitys of this glorious piece of hardware and software (sorry just had pretty shitty router and APs in the Past) i stumbled upon the "Traffic Stats" and at first i thought "no statistics data" is okay because there have to be some data generated at first.

    But today, some hours later there are again no statistics to display. So what is going on? Did i miss something?

    Tried to search a bit outside of this community and found out that there is some problem with messed up mongodb or something. But all i found out is misleading because it is not meant for the docker implementation.

    Didn't find anything regarding my problem here also. 

    Did nothing special to get the container running. Touched nothing, only changed the network from "Bridge" to "custom: br0" and gave the container an IP adress because i like my services on dedicated IPs.

    Is there anything i forgot?

     

    I believe traffic stats require you to also be running a Unifi-based router/gateway, not just a switch or access point.

  5. 14 minutes ago, Shantarius said:

    Hello there,

     

    i have a solution for the problem with the access to /var/run/docker.sock. For me it works with telegraf:alpine. You just add the following in the Extra Parameters Value:

     

    --user telegraf:$(stat -c '%g' /var/run/docker.sock)

     

    Then Telegraf has access to the docker.sock and i have some date from all dockers in Grafana (CPU Usage, RAM Usage, Network Usage). Its pretty nice 🙂

     

    For the smartctl Problem i have no solution. If i add to Post Arguments

     

    /bin/sh -c 'apk update && apk upgrade && apk add smartmontools && telegraf'

     

    the telegraf docker doesn't starts up and in the docker log i found the error

     

    ERROR: Unable to lock database: Permission denied
    ERROR: Failed to open apk database: Permission denied

     

    Has anyone a solution for this?

     

    🙂

     

    Thanks for the Docker solution, I'll keep that in mind in case I try going back to the current Telegraf docker once we find a solution for smartmontools.
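    For anyone curious what that Extra Parameters trick actually expands to: `$(stat -c '%g' ...)` prints the numeric group ID that owns the socket file, so the container user gets added to that group.  A quick demonstration on a stand-in file, since the real /var/run/docker.sock only exists where the Docker daemon runs:

```shell
# stat -c '%g' prints a file's owning group ID (GNU coreutils).
# Using a stand-in file here instead of the real docker.sock:
touch /tmp/example.sock
gid=$(stat -c '%g' /tmp/example.sock)
echo "--user telegraf:$gid"
```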

  6. 3 hours ago, EDalcin said:

    Hi folks, I installed InfluxDB a few days ago and created a few boards and charts. Last night the docker updates and I missed everything! It starts all over, asking me to create a new account. What I did wrong? How can I avoid it?

    Thanks in advance.

    Eduardo

     

    Which docker updated?

     



    InfluxDB has some changes from 1.8 to 2.0 that can break other dockers that assume basic authentication (from my understanding)

    I've locked mine to: 'influxdb:1.8'

     

     

     

    Grafana has some changes from v7 to v8 that can break some dashboards and plugins that use many of the options, thresholds and value mappings

    I've locked mine to: 'grafana/grafana:8.0.2'


     

     

    Telegraf's most recent update also breaks collection of SMART and Docker statistics.

    I've locked mine to: 'telegraf:1.20.2-alpine'

     

     

    By locking the versions, I know updates won't break those connections.  It does mean you might miss out on important updates later on, so it's still a good idea to keep tabs on those dockers' development pages, or on here.

    • Like 1
    • Thanks 1
  7. I'm running on the 'telegraf:alpine' default tag and had to make the following changes for it to work, this resulted in running 'Telegraf 1.20.3' per the docker logs.

     

    1) Remove the following 'Post Arguments' under 'Advanced View'.  This gets rid of the 'apk error', but also removes 'smartmontools', which means you will lose some disk stats for certain dashboards in Grafana.

    /bin/sh -c 'apk update && apk upgrade && apk add smartmontools && telegraf'

     

    2) Open up '/mnt/user/appdata/telegraf/telegraf.conf' and comment out the following two lines.  This gets rid of the 'missing smartctl' errors and the constant error about not being able to access 'docker.sock'.  There might be a fix for the docker problem if someone can share it; I find it really useful to monitor each docker's memory usage from Grafana, but for now I've had to give that up on the current version of Telegraf.

    #[[inputs.smart]]
    #[[inputs.docker]]
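    If editing by hand gets tedious, the same change can be scripted.  This is just a sketch that assumes the appdata path above (adjust if your share lives elsewhere) and keeps a backup first:

```shell
# Comment out the smart and docker inputs in telegraf.conf, keeping a backup.
# Path is the Unraid appdata location mentioned above; adjust if yours differs.
CONF=/mnt/user/appdata/telegraf/telegraf.conf
if [ -f "$CONF" ]; then
    cp "$CONF" "$CONF.bak"
    sed -i -e 's/^\[\[inputs\.smart\]\]/#[[inputs.smart]]/' \
           -e 's/^\[\[inputs\.docker\]\]/#[[inputs.docker]]/' "$CONF"
fi
```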

     

     

     

    As others stated, reverting to an older version works as well, and would be the recommended route if the new Telegraf provides nothing new for your use case; I just wanted to document the couple of things I had to change to get the latest Telegraf docker running again alongside InfluxDB 1.8 (tag: influxdb:1.8) and Grafana v8 (tag: grafana/grafana:8.0.2).

     

    At this point I will probably spend next weekend locking most of my dockers to current versions and setting up a trial Unraid box on an old Dell Optiplex for testing the latest dockers before upgrading my main system.

  8. 4 minutes ago, Jonwork88 said:

    I have also been on 6.4.54 for a few weeks and everything is fine. 

     

    However, for the first time I can remember, Unraid is telling me there is a docker update ready - I thought I wouldn't receive these if I set a tag?  Or is it possible this is an update to 6.5.54?

     

    Here is the tag I am using on my docker:  linuxserver/unifi-controller:version-6.4.54

     

    Thanks in advance for any feedback.

    Capture.JPG

     

    There can be updates within the 6.4.54 branch in terms of the docker itself; I'm not sure what they entail exactly without looking at the release notes for each one:

    image.png.82921cdb04191fcb7077a9771fd3e3c8.png

  9. 46 minutes ago, PeteAsking said:

    Thats great. Thats 5 unraid users verified on linuxserver/unifi-controller:version-6.4.54 without any issues then, who took the time to post.

     

    I just migrated to a UDM-Pro over the weekend, but previous to that I was using 6.4.54 via UnRaid.

     

    I had no problems with the container itself, just issues with the actual Unifi software and its new dashboard.  One thing to keep in mind is that I still had DPI turned off, as that was the cause of my Unifi container ballooning in memory on previous releases, and I never turned it back on.

     

    Otherwise the 6.4.54 container itself was running with no issues for me previously, if anyone else was looking for a few more positive reports before they upgraded.

     

     

    Also, the Nginx Proxy container worked great for hiding my Unifi Controller behind a sub-domain of mine before I migrated to the UDM-Pro, in case anyone is looking for a way to get a somewhat-HTTPS connection while not at home; or use Wireguard to access your network and reach the container from there.

    • Like 1
  10. On 3/14/2021 at 6:21 AM, reyo said:

    Just view it as XML, find the clock section and change the  <clock offset='localtime'> to <clock offset='utc'> . Seems like Windows wants UTC as base and adds the timezone to that. Thats why the time goes wrong: it gets localtime from host (which is already +3h e.g) like 14:00, adds offset +3 and you get 17:00. Which is 3 hours ahead.

     

    Thanks for this; it was driving me nuts that every time my Windows VM that runs some scripts reboots, the time is off by my timezone and I need to toggle the auto-sync time option off and on.

     

    This change in the VM XML fixed that issue.
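    For reference, the relevant fragment after the edit (reachable via the XML view in the Unraid VM manager, or `virsh edit <vm-name>`) looks something like this; the timer lines are illustrative and should be left as whatever your VM already has:

```xml
<clock offset='utc'>
  <timer name='rtc' tickpolicy='catchup'/>
  <timer name='pit' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
</clock>
```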

  11. Something weird has been going on with Grafana.

     

    A short time ago, it updated to V8.0 which broke a bunch of my value mappings.  I managed to fix those under new templates.

     

    Today there was another update for Grafana Docker and now it's back at version 7.5.8 which isn't even on their release page: https://grafana.com/docs/grafana/latest/release-notes/

     

    This change back from 8.0 broke some of my dashboard panels again.

     

     

    For now I've put in "grafana/grafana:8.0.2" as my repository to force the version, since the default doesn't seem consistent about which version it grabs, and v7 to v8 changes a lot of the panel configs.

     

     

    image.thumb.png.1426c6c1e356b71206ed4c122f0b0803.png

     

     

  12. 1 hour ago, guyonphone said:

     

    Thank You, I ended up rolling back to a previous version and its working now. Must be an issue with the release.

     

    This might be related to a recent Plex release that moves secure connections to Let's Encrypt certificates, which might not be playing nicely with some browsers.  I just tried and it's working for me in Firefox, Edge (Chromium) and Chrome.  I'm not sure if that is fully implemented in the latest docker release, or if Plex is forcing a change to the secure connection settings on certain installs.

     

    https://support.plex.tv/articles/206225077-how-to-use-secure-server-connections/?utm_source=Plex&utm_medium=email&utm_content=network_security_button&utm_campaign=Newsletter_Mar_24_2021_RoW_PP

     

    I do clear my cache/cookies once a month or so in all my browsers (CTRL + SHIFT + DELETE brings up the dialog in most of them).  Maybe that will clear some things up for you as well if you are using the secure access mode.

  13. 1 minute ago, jademonkee said:

    Any idea why my Unifi Controller is using so much RAM (4.887GiB)?

    I don't know how much it usually uses, but my server almost never goes above 6GB RAM usage but is now currently up at 10GB, so I'm guessing Unifi is the culprit.

    See attached.

    unifiRAM.png

     

    I believe for most people it has been traced back to having DPI turned on.  Turning off DPI has brought my memory usage under 1 GB; previous to that, it would climb to 4-6 GB each week before the weekly CA Auto Backup kicked in and restarted the docker.

     

    There were some flags people were adding to limit memory usage, but for me, I realized I didn't need DPI for home use and gave it up until I can afford a Dream Machine Pro to slot into my rack and replace the Unifi docker.

    • Thanks 1
  14. On 3/6/2021 at 6:57 PM, Frank1940 said:

    Personally, I would pick up another card.  Those SASLP cards have proven to be troublesome in Unraid servers.  It is not that they absolutely refuse to work.  It is that they seem to be working and then barf.  

     

    Thanks for the reply.  Looking at the thread below, I think I'm leaning towards an LSI 9211-8i, mostly because it's just for connecting some spinning 12 TB hard drives and I already have the SFF-8087-to-SATA fan-out cables.

     

     

    Would that be a good choice?

     

     

     

  15. I'm running Unraid 6.8.3 right now with plans to upgrade to 6.9.0 once I rebuild a drive that died.  I'm on AMD, specifically a 3600x on X570.

     

    I came to this thread last year and ended up picking up an AOC-SASLP-MV8 card off eBay to add some more drives, but I'm seeing a few threads and posts recommending against them.  I just need to add 4 SATA ports and had already picked up some SFF-8087 breakout cables.

     

    Should I go ahead with the AOC-SASLP-MV8 or find something else?  Also, the Firmware_3.1.0.15n.zip in the first post is not working.

  16. 9 hours ago, PeteAsking said:

    @axipher my unifi switch runs at around 60 degrees celsius. Seems nuts to me. What temperature does your one run at?

     

    My US-48-G1 runs at 60 C in my rack, alongside a mining rig and my UnRaid rig in 4U cases.  I have a 90 mm fan running at low speed just blowing air across the top of the switch and my USG-3.  Without the fan running, it gets up to almost 70 C and hovers around there.

     

    It is hot to the touch at that level, and if it does get to 75 C the internal fan comes on for 10 minutes or so, as the fan is only configured for full-on or full-off at 75 C.  The internal fan is pretty loud though; people have come up with commands to turn it on, but there is no speed control, and it turns itself off after the temp has been below 75 C for a certain amount of time.

     

    There is a large discussion thread about it on the Unifi Forum; I don't have the link handy though.  The general consensus was that people aren't happy with the hot temperatures of their 48-port non-PoE switches.  I'm not sure what the safe temperatures of the chips in the switch are, or whether running at 60-70 C is actually detrimental to long-term reliability.

     

     

    Personally, I prefer my electronics to be at most warm to the touch, in case a cable sits against the case or my hand grazes it while working on something else, so I installed the extra fan.

     

    Once the warranty runs out on my switch, I'll likely open it up and swap in a couple Noctua fans with a low noise adapter spliced in to some constant 12 V in the switch somewhere.  I'll try to remember to post a link to that here if I do it.

    • Like 1
  17. 21 hours ago, jonathanm said:

    Yep. container ID != container name. Reference them by name and you will be fine. The container ID will change regularly, the name will not.

     

    To build on this reply, I have a User Script set up like the following, which I was using when DPI in my Unifi-Controller was causing some really high memory usage over time.  I had this script set to run once a week:

     

     

    #!/bin/bash
    #arrayStarted=true

    # List the names of all running containers (match by name, not ID)
    docsrunning=$(docker ps --format '{{.Names}}')

    echo "These Dockers are running:"
    echo "$docsrunning"

    echo "Is 'unifi-controller' running?"

    if [[ "$docsrunning" == *"unifi-controller"* ]]; then
        echo "Looks like it is, triggering a restart to reclaim memory"
        echo ""
        docker restart unifi-controller
    fi

     

     

    It could be cleaned up to include the docker name as a variable and such, but it's something someone can use as a reference for restarting dockers.
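    As a rough sketch of that cleanup, with the container name pulled out into a function argument (treat it as a starting point, not a polished tool):

```shell
#!/bin/bash
# Sketch: restart a container by name if it is currently running.
# --format '{{.Names}}' lists one container name per line.

restart_if_running() {
    local name="$1"
    if docker ps --format '{{.Names}}' | grep -qx "$name"; then
        echo "Looks like $name is running, triggering a restart to reclaim memory"
        docker restart "$name"
    fi
}

restart_if_running "unifi-controller"
```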

  18. 9 hours ago, bschaeff18 said:

    How are you doing the vlans? I'm trying to configure a U6-Lite with a 16 port poe lite switch with this docker as the controller and a pfsense router vm running on the server where I define the vlans. But this is not working as the vlan wifi does not get internet.

     

    I know the recent controller version 6 broke some things.  The setup I have now has a Network with a VLAN associated with it, and then a wireless SSID configured to use that network.  I have my Guest network shown for the wireless example, but "3_Guest" is set up similarly to "5_IoT_Cloud_Access".  If I connect my phone to my "Cayde-Guest" SSID, which uses network "3_Guest" on VLAN 10, I only get internet access and can't reach other devices or my Chromecasts on the network.  Similarly, if I configure a port to carry only the "3_Guest" or "5_IoT_Cloud_Access" network and plug my laptop into that port, I also get internet access only.

     

    image.thumb.png.7ed7373f023fc2f4c9137d216d76eddb.png

     

    image.png.19f94bd4d228da9247e3981756c1b3d7.png

     

     

    As for wired devices, I have the switch ports that IoT devices plug into configured to carry only the IoT network, with its VLAN tagged on that port.

     

    So my Philips Hue Bridge, for example, is plugged into a port that only has my IoT network turned on:

     

    image.png.de90bc22ea1d2efa3179485e8217c771.png

    • Like 1
  19. I was taking my monthly look at my server to see where my SSDs are at in terms of TB written (Data units written), since most SSDs have a limited warranty of something like 5 years or 320 TB written in the case of my SX8200 Pro.

     

    There is currently a Warranty Period field where you can enter a warranty, limited to 5 years:

    - Unknown
    - 6 Months
    - 12 Months
    - 24 Months
    - 36 Months
    - 48 Months
    - 60 Months

     

     

    It would be nice to expand that list since some drives have 6+ years or allow a custom entry with just a number field and a dropdown for months or years.

     

    It would also be nice to have a secondary warranty condition that can be linked to the Data units written so I can enter in a value in TBW for an SSD.

     

    Finally, it would be great to have this added to the Alerter/Notifier and its settings, so it can trigger alerts and emails when a drive is nearing its warranty, say 10% away from either of the defined warranty periods, and again once it has surpassed either defined warranty period.
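    In the meantime, the raw counter can be checked by hand with smartmontools; the device path here is just an example for an NVMe drive, so substitute your own:

```shell
# Prints the drive's lifetime "Data Units Written" (NVMe reports these in
# thousands of 512-byte units, so 1 unit = 512,000 bytes written).
# /dev/nvme0n1 is an example path; substitute your own device.
smartctl -a /dev/nvme0n1 | grep -i 'Data Units Written'
```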

    • Like 3
  20. Just Export your site and Import it into a version of the Beta controller running on a Windows Server 2016 Essentials VM (free for 180 days, up to 5 times, for evaluation).

     

    Then, once that version of the software becomes mainstream and lands in the Docker, do the Export/Import again.

     

    That's how I moved my Unifi Setup when I was building a new UnRaid box but needed controller access in between.

     

     

     

    Sadly that is the downside to running early access and/or alpha or beta stuff: you won't get much support outside of the manufacturer themselves, through the method they recommend.  We can do what we can to help you out here though, just don't expect any miracles.

  21. WS-Discovery is a welcome convenience on my network and I would prefer to leave it running.  Is it possible to lock it to a specific core so it plays more nicely with Docker and VM CPU pinning settings?

     

    As it stands, I use CPU pinning to give VMs and Docker containers access to certain cores to help with CPU-dependent things like a game server or Plex container, so it would be nice to just throw WS-Discovery on specific cores that aren't shared with that CPU-dependent stuff.
