b0m541

Members
  • Content Count: 11
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About b0m541

  • Rank: Member

  1. Running LibreELEC on 6.8.2; it works fine with the i3 CPU. I replaced the i3 with a Xeon E3 today and the DVB tuner cards (DVBSky T9580) are no longer detected. Any ideas?
  2. This plugin hasn't been updated in a while and is reported as outdated by "Fix Common Problems". Has its functionality been superseded by or incorporated into unRAID 6.8? If yes, how? If not, do you plan to update the plugin? Thanks!
  3. It would be great if the YubiKey could be used to:
     • unlock LUKS drives by providing a keyfile (a sketch of this idea follows below)
     • authenticate the user on various login channels, such as console login, SSH, and the web UI dashboard (no more HTTP basic auth!)
     • authenticate the user to Docker containers (SSH, web proxy with U2F, ...)
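     A minimal sketch of the LUKS part, assuming a YubiKey with HMAC-SHA1 challenge-response configured in slot 2 and the ykchalresp tool from yubikey-personalization (device path, mapper name, and challenge string are placeholders):

         import subprocess

         CHALLENGE = "unraid-array-key"   # placeholder challenge string
         DEVICE = "/dev/sdX1"             # placeholder LUKS partition
         NAME = "array_disk"              # placeholder /dev/mapper name

         # ykchalresp prints the YubiKey's HMAC-SHA1 response as hex on stdout.
         resp = subprocess.run(["ykchalresp", "-2", CHALLENGE],
                               capture_output=True, text=True, check=True)
         key = resp.stdout.strip().encode()

         # cryptsetup reads the key material from stdin when --key-file=- is used.
         subprocess.run(["cryptsetup", "luksOpen", "--key-file=-", DEVICE, NAME],
                        input=key, check=True)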
  4. Note that this post is of a more theoretical nature; you might not be interested. Since newly available drives are always bigger than the year before, I was wondering: should we plan on upgrading the array with every drive replacement? If a drive needs to be replaced, e.g. because it shows a defect or because more storage space is needed, one could use the following strategy:
     • Buy one drive of the economically biggest available size.
     • If a parity drive has a defect, replace the defective parity drive; otherwise replace the smaller parity drive with the new big drive.
     • If the replaced parity drive has no defect: if a data drive has defects, replace it with the bigger old parity drive; otherwise replace the smallest data drive.
     The general idea is to keep the biggest drives as parity drives and to always upgrade the smallest data drives with the replaced parity drives. This way the array always grows, though surely much more slowly than drive technology develops, because you always replace the parity drive(s) first. If one has 2 parity drives, one could also replace both every time, so the growth is faster, but of course more expensive. If one wanted drastically more space, one would have to replace all parity drives and at least one data drive, which might be a huge cost, depending on one's budget. The only downside I can currently see is that the parity drives are already aged when they get re-purposed as data drives, but I am not sure whether this is a serious problem. I am sure this is not a novel idea, but I just had it and could not find a similar post here (though I did not spend much effort searching). If this strategy has already been discussed, or is well known and documented with pros and cons somewhere, please provide references. I would be happy if you shared thoughts and experience relevant to the described replacement strategy. Most importantly, please also point out any problems that you see with it. (A small sketch of the decision rules follows below.)
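     A minimal sketch of those decision rules in plain Python (slot names and sizes are made-up examples; this is not an unRAID API, just the strategy expressed as a function):

         def plan_replacement(parity, data, new_size, defective=None):
             """Plan swaps for one newly bought drive of size new_size.

             parity, data: dicts mapping slot -> drive size (e.g. in TB).
             defective: slot of a failed drive, or None for a pure upgrade.
             Returns a list of (slot, old_size, new_size) tuples.
             """
             if defective in parity:
                 # A failed parity drive is simply replaced by the new drive.
                 return [(defective, parity[defective], new_size)]
             # Otherwise the new drive replaces the smallest parity drive...
             p_slot = min(parity, key=parity.get)
             swaps = [(p_slot, parity[p_slot], new_size)]
             # ...and the healthy old parity drive cascades into the data array:
             # into the defective data slot if there is one, else the smallest.
             d_slot = defective if defective in data else min(data, key=data.get)
             swaps.append((d_slot, data[d_slot], parity[p_slot]))
             return swaps

         # Example: two 10 TB parity drives, 8/6/4 TB data drives, buying 16 TB.
         print(plan_replacement({"P1": 10, "P2": 10},
                                {"D1": 8, "D2": 6, "D3": 4}, 16))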
  5. Ironwolf and Ironwolf Pro drives have an advanced health management feature through which the NAS can start a health check. This health check seems to be different from the SMART checks, but I do not know for sure. Is it? This feature is supported e.g. by Synology DSM, where you can schedule runs of these health tests as you like. I am wondering whether unRAID makes use of this feature in any way. Can someone please comment? The background of the question: the Seagate Exos drives (enterprise segment, longer lifetime and warranty, faster) are currently cheaper than the Ironwolf drives. However, the Exos drives do not ship with the health check feature. So everything seems to be in favor of the Exos drives at this point in time; I am just holding back because of the health check feature.
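     For comparison, the standard SMART self-test that smartmontools can trigger on any drive looks like the sketch below (Python driving smartctl; /dev/sdX is a placeholder). The Ironwolf health checks themselves would, as far as I know, need Seagate-specific tooling and are separate from this mechanism:

         import subprocess

         DEVICE = "/dev/sdX"  # placeholder device path

         # Start a short self-test; the drive runs it internally in the background.
         subprocess.run(["smartctl", "-t", "short", DEVICE], check=True)

         # Later, read the self-test log to see the result.
         log = subprocess.run(["smartctl", "-l", "selftest", DEVICE],
                              capture_output=True, text=True, check=True)
         print(log.stdout)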
  6. The issue seems to have been fixed. If I check for updates, containers now (as expected) show as being up to date if there is no update available.
  7. Updating them one by one does not help on my end. Even if I update just one of the LSIO containers, it will show an update available again immediately after updating and checking for updates.
  8. I haven't had this problem before; it occurred yesterday for the first time, and there has been no update of unRAID OS in quite some time. Running 6.7.2. Docker says that most of the containers have updates available. However, when I click "apply update", Docker tells me that the container is already up to date and nothing is downloaded. So basically the update notifications are false alarms. Now it is hard to know which containers really need updating without applying updates to all of them, which restarts them all - not what I want. Any ideas what's going on and how to fix it? Thanks for spending your time on helping out! (A small workaround sketch follows below.)
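     One way to see which images really changed, sketched in Python (the image list is a placeholder; docker pull does not restart running containers, and its "Image is up to date" status line identifies the false alarms):

         import subprocess

         images = ["linuxserver/sabnzbd", "linuxserver/nzbhydra2"]  # placeholders

         for image in images:
             # Pulling only refreshes the local image; the container keeps running.
             out = subprocess.run(["docker", "pull", image],
                                  capture_output=True, text=True, check=True)
             status = ("up to date" if "Image is up to date" in out.stdout
                       else "newer image pulled")
             print(f"{image}: {status}")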
  9. Summary: Fetching an NZB from a URL works only with the internal Docker container IP, not with the real host IP.

     Description: Some content-management application M only sends references to NZBs to the downloader as a URL, so the NZB must be fetched by the downloader itself. Imagine that the NZB URL points to NZBhydra2 as meta-indexer X, running in Docker container A on the same host H where SABnzbd is also running, in Docker container B. As a result, M (running in Docker container C on H) wants SABnzbd in B to fetch the NZB from X in A. The containers A, B, C run in their vanilla settings as downloaded (bridge mode); just the paths were set properly. Container A is the linuxserver container with NZBhydra2. Container B is the binhex container with SABnzbd, OpenVPN and Privoxy.

     Observation: Depending on how X (NZBhydra2) is referenced in the URL, SABnzbd will either be able to fetch the NZB or it will time out, and this can be verified using curl in a terminal session inside the container.

     What works:
     (1) internal IP of container A running NZBhydra2: curl 172.17.0.x:5076 works

     What does not work:
     (2) external IP of container A running NZBhydra2, i.e. the real IP of host H: curl "local IP":5076 -> connection refused
     (3) local DNS name of H: curl "local dns name":5076 -> Could not resolve host

     Thoughts:
     (1) should NOT work, since this IP may change and applications external to the container should always use the real IP address or DNS name of H. Could it be that OpenVPN treats the container-internal IPs as being on the same network and therefore not to be forwarded through the VPN tunnel?
     (2) should actually work. Could this have to do with OpenVPN sending traffic for the IP of H through the VPN tunnel, with nothing answering there? Why would it consider its own host's real IP address to be on the internet and not on its own end of the tunnel?
     (3) reason: when using the binhex SABnzbd-VPN container, local DNS names cannot be resolved inside the container because all DNS traffic is sent to the DNS server on the other end of the VPN tunnel.

     My questions (in decreasing priority):
     - How can I make (3) work? Probably some proper OpenVPN setting for name resolution? Which one?
     - Why does (2) not work, and how can I make it work?
     - Why does (1) work, and how can I prevent container-external processes from accessing container-internal IPs?

     Thank you for your time and effort! (A small connectivity-test sketch follows below.)
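     For completeness, the three curl tests can be reproduced from inside container B with a small script like this (Python; the container IP, host IP, and DNS name are placeholders for the real values on H):

         import socket

         targets = [
             ("(1) internal IP of container A", "172.17.0.2", 5076),
             ("(2) real IP of host H", "192.168.1.10", 5076),
             ("(3) local DNS name of H", "tower.local", 5076),
         ]

         for label, host, port in targets:
             try:
                 # A plain TCP connect is enough to tell the failure modes apart.
                 with socket.create_connection((host, port), timeout=5):
                     print(f"{label}: connected")
             except OSError as exc:  # covers refused, timeout, and DNS failure
                 print(f"{label}: FAILED ({exc})")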