Everything posted by jaylo123

  1. Great. I fat-fingered my login because my password locker wasn't available at the time. This isn't seeing the forest for the trees. The web UI wouldn't be a vector of attack. SSH is already open - that's where attackers would focus their efforts in a serious security breach. Well, maybe the web UI could be used for a 'Bobby Tables' (SQL injection) type of situation. Sigh. I guess it would be a vector of attack... (yes, I just literally talked myself out of my own argument)
  2. Yea. I can see both sides certainly!
  3. Well, this may come back to bite ya. Yes, there could be reasons to 'balance the load out'. I know this is 3 years old, but I was looking up another issue while clearing out my cache disk and wiping/reformatting it from BTRFS to XFS, and while that's happening this comment caught my eye. I could sit here and say the same thing, in a sense: "I have never heard any good argument for *not* 'balancing the load out'". I suppose on a technical level, without much understanding of how the Unraid FUSE filesystem works under the hood, sure - maybe it's fine to front-load a bunch of drives with data and default to a high-water setup. But from an end-user perspective (read: optics), it gives a sense of comfort in knowing that your disks are being used efficiently, even if you and I know it doesn't mean that on the technical side.
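For anyone else landing here, the high-water method I mentioned can be sketched roughly like this. This is just my simplified mental model, not Unraid's actual code - the real allocator also honors split levels and minimum free space settings:

```python
# Rough sketch of the "high-water" idea: start the water mark at half the
# largest disk's size, write to the first disk with more free space than
# the mark, and halve the mark once no disk qualifies. Approximation only,
# not Unraid's actual implementation.

def high_water_pick(disks):
    """disks: ordered mapping of name -> (size_tb, free_tb)."""
    mark = max(size for size, _free in disks.values()) / 2
    while mark >= 0.001:
        for name, (_size, free) in disks.items():
            if free > mark:
                return name
        mark /= 2
    return None

disks = {"disk1": (8, 1.5), "disk2": (8, 6.0), "disk3": (4, 3.9)}
# mark starts at 4 TB; disk2 is the first disk with free space above it
```

The upshot is exactly the "optics" complaint: writes cluster on one disk at a time until it crosses the mark, so the array looks unbalanced even though it's behaving as designed.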
  4. And your Plex container logs aren't being flooded with errors or anything? Not the Plex Server logs - the Plex container log itself (click on the Plex docker -> Logs). That's where I'd start. Jackett and GluTunVPN look suspiciously large too.
  5. Suggestion - or if there's already a way to do this and I didn't see it, let me know: a flag (checkbox in the UI) to skip sending alerts when there are docker updates available. Netdata, for example, has updates just about every day, and I auto-update my containers overnight. Yet, because of the discrepancy between when this plugin runs its scans (or detects available Docker updates) and when the auto-update runs, I get beeped every morning around 3 AM or so from my phone. 95% of the time it's from this plugin, and it's because some container has an update available - which will be auto-updated within 24 hours anyway.
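The logic I'm asking for is basically this (hypothetical names, purely to illustrate - this is not the plugin's actual API):

```python
# Hypothetical sketch of the requested flag: drop "update available"
# notifications for containers that are on the overnight auto-update
# schedule, and let everything else through.

def filter_alerts(alerts, auto_updated):
    return [
        alert for alert in alerts
        if not (alert["type"] == "docker_update"
                and alert["container"] in auto_updated)
    ]

alerts = [
    {"type": "docker_update", "container": "netdata"},
    {"type": "disk_warning", "container": "disk2"},
]
# With netdata auto-updating nightly, only the disk warning should beep me
kept = filter_alerts(alerts, auto_updated={"netdata"})
```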
  6. Admins, can you remove this application? The dockerhub link is gone, the container ships with a username/password already configured and the only way to obtain the original username/password for configuration is in the dead dockerhub url.
  7. Got ya. Yea, the containers used in Unraid are built upon projects mostly out of GitHub or elsewhere. The container itself, when clicked on in the "Docker" screen, should have a link or two that can direct you to the source for the container, or sometimes even the source for the project the container was built upon. The author of the container grabs the GitHub (or whatever source) code and packages it as a container for use in Unraid. That's it, in a nutshell (a VERY high-level overview). So if the application inside the container has a potential issue, it's best to chase it upstream directly to the developer of the application. The application maintainer resolves the issue, then the container owner creates a new build around it, and boom - you'll see a notification to update your downstream Unraid container in your web GUI. All the container creator (in this case, linuxserver.io) does in that process is take any updates to the application, wrap them in a docker container, and present it in an easily digestible format for Unraid to eat, preferably like a pizza with plenty of pepperoni. So - issue with the container? That's an issue for the container owner (linuxserver.io in this case). Issue with the underlying app? You gotta go further upstream to the actual app developer. Even more confusing - sometimes they're the same people
  8. I'll readily admit that I haven't read this entire thread. But I've been on 6.9.2 since it was released and I haven't experienced this issue at all. My disks spin down just fine. Hardware specs are in my signature. --------------------------------------------------------- One question I do have, though, for QEMU/KVM: I heard from a birdie about 5-6 months ago - an oVirt dev, really - that QEMU/KVM had an update allowing console access to a Windows VM that had a GPU passed through to it. In the past, if you passed a GPU through to a VM you were left with a blank screen on the console, so this enhancement added a feature that Citrix/VMware have had for quite a while. Is that a thing here? Or am I possibly making something up in my own head? I've checked Phoronix and other upstream sites but haven't found anything to link here to back up my question with documentation (I know, I'm the worst...), but I figured I'd ask anyway. I swear I read it somewhere; I'm going to keep searching, but I can't find it now. Maybe I had too many IPAs when I heard it. /shrug
  9. Heya @jbartlett - I looked for a GitHub repo or other source and did a very cursory search through a few pages here but didn't see an answer to my question, so I'll just ask: what are you using to test? IOzone? I suppose I could just check the container itself, but I'm lazy.
  10. You would probably want to ask that question on their GitHub page, not here: https://github.com/Tautulli/Tautulli That said, I did just recursively check for any Java or log4j installation in THIS container and I did not get any returns. So, no, not impacted.
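For the curious, the check I ran was essentially a recursive filename search inside the container. In Python terms it boils down to roughly this (illustrative only - in practice it was a shell `find` from the container's root):

```python
# Roughly what "recursively check for any log4j installation" means:
# walk a directory tree and collect anything that looks like a log4j jar.
import os

def find_log4j(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if "log4j" in name.lower() and name.endswith(".jar"):
                hits.append(os.path.join(dirpath, name))
    return hits

# Inside the container you'd point this at "/"; zero hits means the image
# ships no log4j jar and isn't exposed to Log4Shell via this container.
```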
  11. It's fine, just wanted to provide some feedback! I'll keep an eye on this project, certainly. Looks interesting. I can explore it with Edge for now.
  12. Oh - and yea, I can logout / login just fine with Edge. I cannot on Firefox. The only extension I have installed is my 1Password extension on Firefox. I also have it installed on Edge. Oh well, got in
  13. It's an empty file. Now, I did just try it in Edge/Chromium, and it worked that time! So maybe it's a Firefox thing /shrug
  14. Darn. Nope. Wiped the db, recreated from scratch, still same blue screen "419 page expired" message.
  15. Let me try just wiping the database on MySQL and see if a re-instantiation fixes it?
  16. Yep. I was about to post a message here because initially it was complaining about an invalid cipher. Then I realized I only input 31. Changed to 32, got the Network Manager container to start, got to the register page automatically, and now I'm stuck here.
  17. Restarted, launched the web GUI, filled out the form (~10 seconds) after the page came up - same issue. I also changed the db account used for the database to the db root account instead of mine - still didn't help. The table is populated, but no users are being created in the database. Here are the logs, if they help; I didn't see anything jump out:
  18. Creating a new user account generated a blue "419 | Page Expired" message. Can't proceed. Nothing in the logs on either the MySQL server or Network Manager containers.
  19. Just wanted to say thanks - this worked. I'm still leery of the whole "chmod 777" step but yea, it works. Hopefully UnRAID includes this as a toggle in the UI in the next release. Seems simple enough on the surface, although I'm sure it really isn't that simple.
  20. Update: This will, I assume, be fixed soon. But since I needed this sooner for my discussions with my ISP, I made the following change to the "Repository" field for the container in Unraid: modified henrywhitaker3/speedtest-tracker to henrywhitaker3/speedtest-tracker:dev (appended :dev to the end), saved, and restarted the container. It grabbed the dev branch and, confirmed, the graphs work again. Once the main branch is updated with the graph fix, I'll switch the repository back. As expected, all historical data is still there. *Not relevant to the docker issue, but a use case for this docker:* as it turns out, I've been losing ~7% of my download bandwidth about every 3 days since August 26th. Tech support is sending someone out to check my line and the main fiber panel at the end of my street on Tuesday, October 5th, and a large part of that was due to this information. I didn't notice it before because it was dropping so gradually. 45-day graph: I'm supposed to be getting ~950Mb/s down; I'm NOW getting ~36Mb/s down! Yikes! We're both wondering if I'll be at dial-up speeds by Tuesday... Time to bust out an old US Robotics 56.6...
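In case anyone wonders why appending `:dev` works: a Docker image reference is just `repository[:tag]`, and an omitted tag defaults to `latest`. A naive split (deliberately ignoring registry hosts with ports and digests) looks like:

```python
# Minimal illustration of repository:tag parsing. Real image references
# can also carry a registry host (possibly with a port) and a digest,
# which this sketch deliberately ignores.

def split_image_ref(ref):
    repo, sep, tag = ref.rpartition(":")
    if not sep:          # no colon at all -> implicit "latest" tag
        return ref, "latest"
    return repo, tag

print(split_image_ref("henrywhitaker3/speedtest-tracker"))      # defaults to latest
print(split_image_ref("henrywhitaker3/speedtest-tracker:dev"))  # explicit dev tag
```

So editing the Repository field just points Unraid's `docker pull` at a different tag of the same image; the container's config and data volumes are untouched, which is why the historical data survived.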
  21. Hi. The speedtest tracker container running v1.12.0 currently has a broken feature: the graphs don't change to anything other than the default 7-day view. I recently discovered that my 1Gbps line is performing fine for uploads (980Mbps), but downloads are ~90 to ~200Mbps. I wanted to look up my history to see when this started so I can call my ISP. The historical data is there, as I can see a median result over the lifetime of running the container; the graphs just don't adjust. This is a known bug with a fix in 1.12.2, but the container itself is still running 1.12.0. I couldn't tell from the GitHub ticket whether it's actually been released anywhere other than the dev branch, though.
  22. Ok no problem. I was just pulling my hair out this past week trying to see why it wasn't working. Totally fine with the bookmark method, that's what I was thinking I had to do. Cheers
  23. Hi. I believe I have finally deciphered how to make this work. However, I cannot access the web GUI directly from Unraid's docker menu. Once I remove the network from, say, Sonarr and attach it to the binhex-delugevpn network, the "WebUI" option on the Docker screen in Unraid disappears for Sonarr. If I visit the IP:port directly in a web browser, it works. The containers do seem to communicate fine - Sonarr tests fine when I hit the "Test" button for the download clients. I'm still working through getting the indexers working, as they don't respond at all from Sonarr, but I suspect that's another issue; I just wanted to ask about the missing WebUI link in Unraid itself for now. Is it intended behavior for the "WebUI" option to disappear from Unraid's docker menu for any container passed directly to the binhex-delugevpn network? If so, may I suggest adding this caveat to Q25/A25 in the official documentation? (Screenshots attached: overview, binhex-delugevpn configuration, binhex-sonarr configuration, missing Unraid Docker WebUI.)
  24. Hi. Have a curious issue. I haven't dug into the CLI on the docker yet or anything, just checked the logs from Unraid's web GUI for the container. When it starts, I get this spewing over and over and the container never comes up. Any ideas? I haven't rebooted the container since the last update, though I rebooted my whole Unraid server just last night. Since the container auto-updates, I'm not quite sure when it was actually last restarted, so if this is an existing issue with a known workaround, please point me in that direction!

      ```
      ::: Starting docker specific checks & setup for docker pihole/pihole
      [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
      [s6-init] ensuring user provided files have correct perms...exited 0.
      [fix-attrs.d] applying ownership & permissions fixes...
      [fix-attrs.d] 01-resolver-resolv: applying...
      [fix-attrs.d] 01-resolver-resolv: exited 0.
      [fix-attrs.d] done.
      [cont-init.d] executing container initialization scripts...
      [cont-init.d] 20-start.sh: executing...
      ::: Starting docker specific checks & setup for docker pihole/pihole
      Error: Unable to update package cache. Please try "apt-get update"
      [cont-init.d] 20-start.sh: exited 1.
      [cont-finish.d] executing container finish scripts...
      [cont-finish.d] done.
      [s6-finish] waiting for services.
      [s6-finish] sending all processes the TERM signal.
      [s6-finish] sending all processes the KILL signal and exiting.
      ```

      Edit: Nevermind, seems the maintainer isn't 'maintaining' anymore. I'll try this route. Cheers for the tip @hoodust. Final edit - yep, that fixed it. All I did was: shut down this container, go to Apps, search for "pihole dot doh", install FlippinTurt's fork (I used the same IP and password as my current container on the container configuration screen; everything else defaulted correctly), and start FlippinTurt's container. In a Borat voice: "Great success." Easy peasy.
  25. Ooh, thanks. That probably just saved me 3 hours of hand-wringing and headbanging!