Xaero

Members
  • Posts: 400
  • Joined
  • Last visited
  • Days Won: 2

Xaero last won the day on July 19 2019

Xaero had the most liked content!


Xaero's Achievements

Enthusiast (6/14)

Reputation: 116

Community Answers: 1

  1. This is worth replying to, and I noticed that nobody had yet. The concern isn't exactly that your password would be insecure against an attacker; rather, the Unraid WebUI does not undergo regular penetration testing and security auditing, and as such should not be considered hardened against other kinds of attacks. Those attacks could bypass the need for a password entirely, which is a much bigger concern. 2FA, when implemented correctly, would prevent password-based attacks, but still would not make it safe to expose the WebUI directly to the internet. Since it isn't audited and hardened, and has endpoints that directly interact with the OS, it's likely that an attacker could find a surface that allows them read/write access to the filesystem as the root user and the ability to remotely execute arbitrary code, including opening a reverse SSH tunnel to their local machine, giving them full terminal access to your server without ever having to know a username or password. As for the effort required: it will vary greatly, but for many of these types of vulnerabilities, attackers have written automated toolkits that scan for and exploit them with no interaction required. TL;DR: Don't expose your WebUI to the internet. This has been stressed heavily by both Limetech and knowledgeable members of the community for a reason. Extend this further: NEVER expose a system whose ONLY account is an administrator- or system-level account to the internet. P.S. If I am wrong about the regular security auditing, please do let me know and I will remove that claim from this post; but as far as I am aware, and as Limetech has made public knowledge, no such testing is done, which is fine for a system that does not get exposed to the internet.
  2. I was able to provide screenshots via PM - updating this thread just so it's public knowledge until it can be fixed properly upstream. The root cause of the problem is that I am using a non-standard port for my Unraid WebUI so that another service can use 80 and 443. The script inside the page for auto-connecting concatenates the port with a ':'; since my URL already has a ':' present, this results in an invalid URL. As a temporary workaround I edited /usr/local/emhttp/plugins/dynamix.vm.manager/spice.html line 115 from this: uri = scheme + host + ":" + port; to this: uri = scheme + host + port; Note that this won't persist through reboots, though you can add a simple `sed` line to make the change at boot. Thanks to @SimonF for the help!
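Since /usr is rebuilt from the flash image on every Unraid boot, the edit has to be re-applied each time. A minimal sketch of such a `sed` line, suitable for something like the boot-time go script, assuming the file path and original line quoted above:

```shell
# Re-apply the SPICE URL workaround after each boot.
SPICE=/usr/local/emhttp/plugins/dynamix.vm.manager/spice.html
if [ -f "$SPICE" ]; then
  # Replace: uri = scheme + host + ":" + port;  ->  uri = scheme + host + port;
  sed -i 's|scheme + host + ":" + port|scheme + host + port|' "$SPICE"
fi
```

The guard means it silently does nothing if a future update moves or fixes the file.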
  3. VNC works fine for the VMs that aren't using SPICE.
  4. That's what the MyServers/SSL setup automatically entered - I didn't add anything to it. Previously it was only the hash; at one point the certificate for that URL lapsed, and both the server's auto-redirect and the certificate changed to this URL. Settings from above are here: P.S. Thanks for pointing out that I missed the full hash in the snippet; corrected that. Also, I notice one very important difference between your setup and mine: I am running the WebUI on port 5000 for HTTP and 5001 for HTTPS, because I have NGINX Proxy Manager running on 80 and 443.
  5. Those ports are only exposed if you expose them to the internet - and depending on how you configure things, connecting to WireGuard doesn't expose the WebUI. That said, WireGuard is also silent by design: it won't reply to packets that don't authenticate, so there's no listening service that responds to a probe. This is, of course, a form of security via obscurity, but it also means most attackers won't even learn you use WireGuard through traditional port scanning, and even if they did, they'd need the correct keys to authenticate (WireGuard uses public-key cryptography, not passwords). Barring a pretty egregious error on WireGuard's part security-wise, it'd be an incredibly poor attack vector even for a skilled attacker.
  6. I didn't even notice this was added, but I do use a SPICE VM which is UEFI Secure Boot with BitLocker authentication on startup. I went to start the VM, noticed the Start With Console (SPICE) option, and was shocked to see both the start-with-console option and the SPICE support! Cool stuff! That said, the URL the SPICE web client is trying to use for the websocket connection is invalid: it's adding a `:` between the host address and the path, presumably because the client expects a port instead of a path there. I appreciate the effort in getting better SPICE support, since dynamic resolution scaling being native to the protocol helps a lot. This should be a (hopefully) easy fix, since it should just require deleting that `:` from the URL structure. I'm going to try testing this at some point.
  7. ELF headers belong on Linux executables and linkable libraries, though - and the file is not going to be human-readable; it's binary data. If you open it in a text editor it will look like gibberish, because a text editor is not how it's meant to be processed.
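You don't need a text editor to confirm an ELF header anyway; a quick sketch using standard tools (any Linux binary works here, /bin/ls is just a handy example):

```shell
# Every ELF file begins with the 4-byte magic 0x7f 'E' 'L' 'F'.
# Dump the first four bytes of a known Linux executable as hex:
head -c 4 /bin/ls | od -An -tx1
# On a Linux system this prints: 7f 45 4c 46
```

Anything else in those first four bytes means the file is not an ELF object at all.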
  8. There's so much misinformation/disinformation in your post that it's hard to pick a place to start, and I'd almost think it was satire. For example, chmod, chroot, and b2sum are all part of the stock Unraid release; nothing additional was downloaded for those. tmux was the first and only tool you listed that isn't part of stock Unraid, and it could easily have been installed while following tutorials that use it. Additionally, hugepages are simply either supported by the kernel or not; they don't really get "enabled or disabled" per se, though the pool size can be adjusted. A Hugepagesize of 2048 kB is the default, and the current Unraid release uses a kernel that supports hugepages. "/" is the root directory; "/root" is the root user's home directory. People mix this up all the time because we use the term root and the username root a lot. To separate them: root is the superuser ("su"), and the root directory is the base directory of the filesystem ("/"). /bin and /sbin are distinct directories; unless one was linked on top of the other, there probably wasn't much cause for concern. Certain objects in /sbin are just symlinks to /bin/executablename, and that is by design. Without detailed information on the symptoms, the changes being made, and log output, it's difficult to say whether you actually had a compromised system or just had something configured incorrectly.
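You can see exactly what the kernel reports for hugepages yourself; a quick sketch (the 2048 kB value is the common x86_64 default, not a guarantee):

```shell
# Hugepage support is baked into the kernel; only the pool size is tunable.
# Hugepagesize below is the default huge page size (commonly 2048 kB):
grep -i '^huge' /proc/meminfo
# To adjust the pool (not "enable" it), root can write a page count, e.g.:
#   echo 128 > /proc/sys/vm/nr_hugepages
```

If those lines are missing entirely, the running kernel was built without HugeTLB support.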
  9. It doesn't kill the Unraid GUI, but it does latch the power state at P0 indefinitely as soon as Steam starts within the docker. The only way I can get it back to a lower state is to kill everything using the card and then start a new X server (either in Unraid or the docker), or to start persistenced.
  10. I've been toying around with the steam-headless docker and the Nvidia driver package for a bit. Without nvidia-persistenced started, as soon as `steam` runs within the docker the power state goes to P0 and never leaves it. Even after shutting the docker down, the power state remains at P0. Killing all handles with `fuser -kv /dev/dri/card0` also kills the Unraid X server and leaves nothing to manage the power state, so it's still at P0 - at least until I `startx`, which grabs the card again and shifts it back to P8. With the daemon running, Steam doesn't lock it to P0; instead the frequency scales dynamically as you'd expect, initially going to P5 pulling 10 W (instead of the 20 W at P0 idle) and then eventually settling back down to P8 and pulling 5 W or less. Before starting nvidia-persistenced: After starting nvidia-persistenced: The effect is immediate, as can be seen above. Killing the persistence daemon leaves the power state locked wherever it was until the next application requests a higher power state. EDIT: Turns out there is one more caveat to this, and maybe an undesired effect. nvidia-persistenced keeps the driver loaded even with zero processes using the GPU, which keeps power consumption higher than if no driver were loaded at all. I don't know why this also allows the power state to drop for Steam; it's just what happens in practice. I'm sure others can test and either validate or invalidate my findings.
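For anyone wanting the daemon running from boot rather than started by hand, a sketch of what "starting persistenced" looks like as a config fragment for Unraid's boot script; the binary path is an assumption from the Nvidia driver plugin layout, so verify it on your install:

```shell
# Hypothetical addition to /boot/config/go (runs once at boot):
# start the NVIDIA persistence daemon so the driver stays loaded and
# power-state management works even when no client holds the GPU.
/usr/bin/nvidia-persistenced &
```

Note the trade-off from the EDIT above: the daemon keeps the driver resident, so truly-idle power draw is slightly higher than with no driver loaded at all.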
  11. You would be correct that I learned something new. I never would have guessed that the ".local" suffix specifically gets different handling via zeroconf/avahi/multicast. Neat! Yeah, I don't really feel I should break zeroconf by manually adding those entries to DNS; what I've got going for now works. Next I have to tackle taking my ISP's gateway out of the picture, because it explicitly does not support certain things (changing the DNS and NAT loopback being the important ones for me).
  12. That's how it came set up out of the box - and it was unable to resolve hostnames. Even specifying a DNS server with the --dns flag in docker, it would not resolve hostnames. I believe something is missing in the br0 config for this to work, beyond DNS, because `curl <DNS IP>` (an address on my local subnet) results in "no route to host" - which is odd, since the br0 network and my local subnet are the same scope (192.168.1.x). Either way, host networking works, and all my services are reachable how I want them to be again.
  13. The OctoPi instance is running on a Raspberry Pi and is reachable on my network as "octopi.local". I want to allow remote access to it through Nginx Proxy Manager, along with the other services on my network that aren't all on my server. I ended up switching to HOST networking, moving the Unraid WebUI to ports 5000/5001, and then pointing my router at my Unraid box's hostname. After switching to host network mode, hostname resolution works properly, and I have now set it up so that I can access all of my services. I also added an entry for my local Unraid hostname and IP address to redirect to the Unraid WebUI, so I can use those transparently like I always have.
  14. Trying to switch to this docker from the unofficial one after many moons. I'm not changing any of the default options, so it is running on the custom (br0) network and has an IP on my local subnet. I am able to access the WebUI and set up a couple of test domains; both of these error with 502. Looking at the error log: 2022/08/10 19:41:24 [error] 810#810: *104 octopi.local could not be resolved (3: Host not found). I can't resolve any address on my local network; trying to curl octopi.local results in "could not resolve host: octopi.local". Using curl on the IP works fine. Additionally, I receive "no route to host" when trying to reach the Unraid server, so I can't see/forward to any dockers hosted on it. (For example, I have a service running at 192.168.1.72:8443, and I get "no route to host" for 192.168.1.72 - which is my Unraid box.) Not sure how best to approach this.
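When triaging this kind of failure, it helps to separate the two different problems: ".local" names are mDNS (avahi/nss-mdns), which most containers can't resolve even when ordinary DNS works, while "no route to host" is a layer-3 issue unrelated to name resolution. A quick triage sketch (octopi.local is the hostname from my network; substitute your own):

```shell
# 1) Is the resolver library working at all? Should print a loopback address.
getent hosts localhost
# 2) .local names go through multicast DNS, not your DNS server - a container
#    without avahi/nss-mdns fails here even when ordinary DNS is fine:
#      getent hosts octopi.local
# 3) "no route to host" toward the Docker host itself is expected on a
#    macvlan-style network (br0): by design the host and its macvlan
#    containers cannot talk to each other directly.
```

Point 3 is why switching to host networking (as described later in this thread) sidesteps both symptoms at once.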
  15. So I've been having an issue since installing a new SAS HBA (9305-24i in IT mode): one of the ports is flaky, and two disks connected to that port (the 1st and 4th disks on the breakout) keep ticking up UDMA CRC errors. I confirmed it wasn't the cable by replacing it with a new one. I confirmed it's not the disks by swapping that entire branch with another. The issue follows the port on the HBA. This has left me in a precarious situation where one of my parity disks is disabled and one data disk is disabled. The issue initially resolved after swapping the ports on the HBA, so I chalked it up to "maybe it was just a loose seat" - and now I'm nowhere near the server to do any maintenance, and there's nobody else who can do it for me. I plan on powering off the server until a new HBA arrives to replace this one, but until I am able to migrate some services around I can't really do that. Since I have dual parity, the array is still emulated, but if I lose one more disk I'm kind of SOL on recovery, correct? So far this is the only indicated hardware issue I have.