avluis

Members
  • Posts: 39

Everything posted by avluis

  1. Looking to make use of a Python-based application that interacts with Home Assistant and an Elgato Stream Deck (runs on the unRAID host, not in Docker), which requires this library: https://github.com/libusb/hidapi The Slackware build makes use of https://github.com/signal11/hidapi, which is outdated and no longer supported. On Debian, these are the packages I would install: libudev-dev, libusb-1.0-0-dev, libhidapi-libusb0. Since Fix Common Problems now warns about packages installed via the extras (and because this would be a great utility to have in NerdPack), I'd like to request that hidapi be included.
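Not part of the original request, but a quick way to see which hidapi backend (if any) a given box already has is to ask the dynamic loader cache; this is a generic Linux sketch, and the library name pattern is the usual one (libhidapi-libusb / libhidapi-hidraw):

```shell
#!/bin/bash
# Sketch: report whether any hidapi shared library is visible to the
# dynamic loader. Adjust the grep pattern for your distro's packaging.
if ldconfig -p 2>/dev/null | grep -q 'libhidapi'; then
  status="hidapi present"
else
  status="hidapi missing"
fi
echo "$status"
```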
  2. I've had a few Python scripts that run in the background which I needed to start/stop alongside the array; e.g.:

    #!/bin/bash
    #description=Starts Stream Deck Home Assistant Client. Req: python3
    #arrayStarted=true
    #name=start stream deck HASS client

    # Change to app directory
    cd /mnt/user/appdata/homeassistant-streamdeck/

    # Source python venv
    source venv/bin/activate

    # Change to app src directory
    cd /mnt/user/appdata/homeassistant-streamdeck/src

    # Check if already running and stop
    if pgrep -f "python3 HassClient.py" &>/dev/null; then
        echo "app is already running, killing"
        pkill -9 -f "python3 HassClient.py"
        sleep 5
    fi

    # Start app
    python3 HassClient.py &
    pid=$!

    # User Scripts background fix
    echo $pid > /tmp/user.scripts/running/start-streamdeck-ha
    wait $pid

I've been able to get the intended behavior by saving the child PID manually along with a wait command. Haven't looked too deeply into exec.php for a better workaround, but this is working for the moment.
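The trick in that script -- write the child's PID where User Scripts expects it, then wait on the child -- reduces to a minimal sketch (mktemp stands in for the real /tmp/user.scripts/running/&lt;script-name&gt; path):

```shell
#!/bin/bash
# Minimal sketch of the background-script pattern above.
sleep 1 &                 # stand-in for the real long-running python3 process
pid=$!
pidfile=$(mktemp)         # stand-in for /tmp/user.scripts/running/<script-name>
echo "$pid" > "$pidfile"  # lets User Scripts see and track the child PID
wait "$pid"               # keeps this wrapper alive until the child exits
echo "child $pid exited"
```

Without the `wait`, the wrapper exits immediately and User Scripts loses track of the still-running child; the PID file plus `wait` is what makes start/stop from the WebUI behave.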
  3. When uninstalling this plugin, is it supposed to remove pigz/unpigz? After uninstalling, I was not able to update my Docker containers until I reinstalled the plugin (didn't try restarting). Note that when I went to reinstall, I got the following message, so the plugin installs its packages but is not shown as installed: plugin: run failed: /bin/bash retval: 1
  4. System locked up today when updating a few containers --- going to have to stop making use of macvlan for a while
  5. I've been reading/researching call traces related to macvlan with containers that experience high traffic (Plex, Pi-hole, etc.). This seems to be an issue with macvlan not being able to handle certain broadcasts (which can happen often even on a 'small' network). Attached is a diagnostics file for those who wish to delve a bit deeper. I'm looking to avoid making use of VLANs to mitigate the issue, but any tips to resolve this are greatly appreciated. avnet-un-diagnostics-20190410-0743.zip
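One mitigation that comes up often in these macvlan threads (my assumption, not something taken from the diagnostics) is giving the host its own macvlan "shim" interface, so host-to-container traffic rides the same macvlan bridge instead of the parent link; the interface name, subnet, and container IP below are placeholders:

```shell
#!/bin/bash
# Hypothetical macvlan shim -- br0, 192.168.1.250 and the container IP
# 192.168.1.200 are placeholders for your parent interface and addressing.
if [ "$(id -u)" -eq 0 ] && ip link show br0 >/dev/null 2>&1; then
  ip link add shim link br0 type macvlan mode bridge
  ip addr add 192.168.1.250/32 dev shim
  ip link set shim up
  ip route add 192.168.1.200/32 dev shim   # route to one container's macvlan IP
else
  echo "skipping shim setup (needs root and an existing br0)"
fi
```

Whether this helps with the call traces themselves is a separate question -- it mainly addresses host/container isolation -- so treat it as something to test, not a known fix.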
  6. If time permits, I would love some attention to be given to Docker and its macvlan issues referenced here: I'm having similar issues and would love to avoid setting up VLANs for my containers just yet -- at the least, I would love to know whether the issue is with Docker or with unRAID. avnet-un-diagnostics-20190410-0743.zip
  7. Make sure to update python to 2.7.15 via Nerd Pack and you'll be good to go
  8. All file operations should take place in the background -- WebUI updates itself as things progress -- kind of how things have shifted with the latest updates -- love the WebUI experience so much more now! For those that are unaware of solutions from QNAP and Synology, give their online demo a try -- should give a few ideas: https://www.qnap.com/en-us/live-demo/ https://demo.synology.com/en-us
  9. Just what I wanted to hear -- thank you very much for confirming!!
  10. I've been forgetting to ask, as I've been out of the loop for a good week or so and with a server needing replacement. In regards to the lovely new dashboard: I make use of the NUT plugin for UPS support; is it possible for the new dashboard to pull data from there, or will it only pull data from the built-in UPS? And is it possible to disable this field entirely?
  11. Just verified that all plugins have been updated, as well as giving the server a reboot -- unfortunately I'm still seeing the 'cat: write error' message. Gonna keep checking my config directory, hopefully I'll find something.
  12. I was dealing with a few thumb drive failures a few days ago, which got me diagnosing a few unrelated issues. I noticed that I had a few xz unpack errors -- checking over my installed plugins and replacing/re-installing got that issue fixed. But there are two outstanding issues that need to be addressed. Referring to the attached image, I'm seeing 'cat: write error: Broken pipe' as well as two warnings for 'br0' and 'br2'. Any hints as to where I need to start looking to address these? avnet-un-diagnostics-20181021-0204.zip
  13. Bumping this as I'm heavily interested in making use of the Docker Engine API as well under unRAID.
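For anyone landing here: the Engine API is already reachable locally over Docker's unix socket, no extra setup needed. This is a generic sketch -- the v1.37 version prefix is my assumption, so match it to what `docker version` reports on your host:

```shell
#!/bin/bash
# Sketch: list running containers via the Docker Engine API over the
# local unix socket. Requires curl and a running Docker daemon.
list_containers() {
  curl --silent --unix-socket /var/run/docker.sock \
       http://localhost/v1.37/containers/json
}
if [ -S /var/run/docker.sock ]; then
  list_containers
else
  echo "no docker socket at /var/run/docker.sock; skipping"
fi
```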
  14. Just wanted to drop in and say my thanks for taking on this plugin. I'm finally able to share my UPS from Synology --> unRAID. Thank you.
  15. Have you tried the getting started guide already? You'll need to scroll down a bit and expand the Mac OS X section. As for actually booting from it, you will need to (after a reboot) press and hold the Option (⌥) key immediately upon hearing the startup chime.
  16. @Frank1940 Are you referring to the Ransomware Protection plugin? To beef up my post (and to intrude into this conversation) -- my current strategy has been to limit access to shares hosted on all of my servers down to a single management system (you could potentially have this under a VLAN, with no internet access). As for accessing my data -- well, I'm leaving that up to the applications that need access (I'm also limiting write access), e.g. my movie collection can only be accessed by end users via Plex. If a particular piece of software needs to manipulate files, then that software is usually run via a container (directly on the host system, so no need for a share -- a side benefit here is version control). If I need to have two servers talking, then this is done over NFS -- I lock this down to their individual IPs as well as a dedicated account for their own use. This is less secure if the server itself is compromised -- say, a Windows server (so I simply don't have my Windows Server set up to talk over NFS). What of ransomware targeting Linux-based systems? Simple really: limited access. Segregated services (Docker is great for this -- the same can be said about VMs). While we lose the convenience of easily accessible shares, there should be no need to go full tinhat.
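The NFS lockdown described above can be expressed as a single exports entry; the share path, peer IP, and uid/gid below are made-up placeholders, not my real config:

```
# /etc/exports -- hypothetical entry: one share, one permitted peer,
# all access squashed to a dedicated service account (uid/gid 1100 assumed)
/mnt/user/backups  10.0.1.20(rw,sync,all_squash,anonuid=1100,anongid=1100)
```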
  17. Redundancy -- simple as that.
  18. Yep -- that's what I'm trying to confirm as it does look like it is an issue with the switch under 10GBASE-T rather than with unRAID. Thanks for confirming that SFP+ is working without issue as I do have an adapter that I can install for temporary use (and it is dual port so I can actually try bonding as well if needed).
  19. Moving this here as I've not had any luck in its current thread: Currently troubleshooting an issue with an Ubiquiti ES-16-XG and a Supermicro 5028D-TN4T (reports: X552/X557-AT 10GBASE-T). I'm wondering if I'm being optimistic with the current driver (unRAID reports ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 4.4.0-k) My issue; Anytime that I make use of any type of bonding (though it has happened without bonding) -- that is either mode=1 (Active backup) or mode=4 (802.3ad) -- I would either have the switch flip the link on/off until it falls back to 1G or I get it to link at 10G if I disconnect one of the links (but getting 10G upon connecting this link would be hit or miss). As for non-bonded links, I have a higher success rate at linking at 10G -- though there are instances where DHCP will fail (as if there was no routing, only for it to work after going from dhcp, to static and back to dhcp), but most often, after rebooting the switch or by reconnecting the link. I'm attempting to figure out if this is a driver issue or an issue with 10GBASE-T (RJ45/Copper) support on this switch so any info will be appreciated~ Please let me know what you need from me, I'm an open book~
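For what it's worth, when I'm chasing link flaps like this I check what the kernel thinks it negotiated; the interface and bond names below are placeholders for whatever unRAID assigned:

```shell
#!/bin/bash
# Diagnostic sketch: print negotiated speed per NIC and basic bond state.
# eth0/eth1/bond0 are placeholders -- substitute your actual interfaces.
for ifc in eth0 eth1; do
  if [ -r "/sys/class/net/$ifc/speed" ]; then
    echo "$ifc: $(cat "/sys/class/net/$ifc/speed" 2>/dev/null) Mb/s"
  fi
done
if [ -r /proc/net/bonding/bond0 ]; then
  grep -E 'Bonding Mode|MII Status' /proc/net/bonding/bond0
fi
echo "link check done"
```

A NIC stuck at 1000 here while the switch claims 10G (or vice versa) at least tells you which end gave up on the negotiation.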
  20. I've got the SNMP plugin working -- and it exhibits the same behavior as your plugin after a shutdown, so that does support what you are saying here. Would love to see that NUT plugin taken under someone's care
  21. Not necessarily -- but some may require installing the Low-Noise Adaptor (NA-RC7). Also, I would love to have that exact same setup, but for the D-1541.

Yep -- that's what I saw as well. That NUT plugin is currently unmaintained, so I may have to fork and go from there. But thanks for confirming that; I wanted a second set of eyes on it to dismiss any issues with your plugin.

It's not often, but I have seen BIOS updates change the behavior of system accessories (fans, temp probes, etc.) -- but I'm not going to sit here and tell you that's the issue, since I have not worked with your particular board. What I would look into is that max RPM -- the system is, more than likely, simply pushing more than what the fan needs. I took a look at the Amazon listing (https://www.amazon.com/Nanoxia-Silence-140mm-Cooling-NDS140PWM-1400/dp/B00CHW8QD2) for additional info and, in all honesty, unless they are running loud (which they shouldn't be), I wouldn't worry about it.

What I would give another try is playing with these options: IPMI-IP -> Configuration -> Fan Mode. I have seen a few SM systems get stuck at Full Speed while changing settings over IPMI -- so you'll need to get on the web interface and change the setting a few times (between Full Speed and another mode) to get it to apply.
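The fan mode can also be toggled from the CLI instead of the web interface; the raw bytes below are the commonly documented Supermicro ones for X10-era boards (my assumption -- verify against your exact board's documentation before sending raw IPMI commands):

```shell
#!/bin/bash
# Hedged sketch: read the Supermicro fan mode over IPMI. The 0x30 0x45 raw
# command and mode values (0=standard, 1=full, 3=optimal) are the commonly
# documented X10-era ones; confirm for your board first.
if command -v ipmitool >/dev/null 2>&1; then
  ipmitool raw 0x30 0x45 0x00 || echo "no BMC reachable"  # read current mode
  # ipmitool raw 0x30 0x45 0x01 0x03  # set to optimal (left commented on purpose)
else
  echo "ipmitool not installed; skipping"
fi
```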
  22. Had this happen again. This time around, I happened to notice a few things in the log that should make you happy~ From my quick look, it seems as if some packages are forcing dependency 'upgrades' even when the dependency is already installed -- so we end up with an older version of a dependency (in my case, freeipmi-1.4.11-x86_64-3.txz vs freeipmi-1.5.3-x86_64-1.txz). avnet-unraid-syslog-20170428-0218.zip
  23. Your unRAID license is bound to the flash drive, so wherever that goes, your license goes with it. As for your P410 -- I would fire up a new unRAID thumb drive and simply test that it can see the drives behind your controller. From there, it's just a matter of making sure you add the drives in the same order as in your current setup -- it is essentially this procedure: https://lime-technology.com/wiki/index.php/Replacing_the_Motherboard_in_Your_unRAID_Server Note that the guide there is for v4 and v5 -- so the steps are different for v6. I'm hoping to find what's needed for v6 and I'll post it here. Edit: This thread should have what you need (in regards to moving over to a new system):
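Before physically moving the drives, it also helps to snapshot which drive serial maps to which device, so assignments can be double-checked on the new board; /dev/disk/by-id is standard Linux and should be present on unRAID as well:

```shell
#!/bin/bash
# Sketch: record drive identity -> device mapping before a hardware swap.
if [ -d /dev/disk/by-id ]; then
  # Skip partition entries; print "id-name -> target" from the symlinks.
  ls -l /dev/disk/by-id/ | grep -v -- '-part' | awk 'NF>=11 {print $9, "->", $11}'
else
  echo "/dev/disk/by-id not available here"
fi
echo "drive inventory done"
```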
  24. Yep~ Keep those ideas coming in -- I'm still picking the most suitable way to do this. My limitation is what I have available to do this with -- with the VMs and containers off, it has to be done via a webhook (when a build job is fired off) -- said hook can be consumed by the utility I mentioned above (running on the unRAID box -- on it or via Docker, to be decided). The above utility can call off scripts -- these scripts will in turn power up the VMs and/or Docker containers I need. I thought I did -- was probably way too tired:

- GitLab container receives a push for a CI-enabled branch
- GitLab fires off a webhook due to the above
- The webhook (the Go utility) consumes this and, in turn, calls off the associated script
- The script processes a few variables to determine what needs to start up, and does so
- The runner comes online, fetches the job and processes away -- it powers back off on its own after some time
- In the case of a container (SonarQube), a cron task is fired off to take care of turning it back off after some time

That's exactly what I wanted to know -- that's perfect! I'm still trying to wrap my head around the concept of a script that's running in a container calling up commands on the host, but I'm simply overthinking things. I could go about it by establishing a connection to unRAID (SSH?) and running my commands that way -- but I'd like to keep it local and slim. Yeah -- I need to stop overthinking this and go have some fun with my next container~
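The middle step -- the script the Go webhook utility calls off -- could be as small as this; the VM and container names (gitlab-runner-vm, sonarqube) are invented for the example:

```shell
#!/bin/bash
# Hypothetical hook target: bring up the build resources for one CI job.
# Names (gitlab-runner-vm, sonarqube) are placeholders.
start_build_env() {
  if command -v docker >/dev/null 2>&1; then
    docker start sonarqube 2>/dev/null || echo "would start container: sonarqube"
  else
    echo "would start container: sonarqube"
  fi
  if command -v virsh >/dev/null 2>&1; then
    virsh start gitlab-runner-vm 2>/dev/null || echo "would start VM: gitlab-runner-vm"
  else
    echo "would start VM: gitlab-runner-vm"
  fi
}
start_build_env
```

Since both `docker` and `virsh` live on the unRAID host, keeping this script on the host (and having the webhook utility run there too) avoids the container-calling-host problem entirely.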