Xaero


Posts posted by Xaero

  1. Looking for options here. I've been browsing the net for a while, and figured if any community had a nice concrete answer it would be this one. Authelia falls back on Duo for the push side of things, so it's out. Most of the self-hosted options are TOTP-only.

    Typing in a TOTP code constantly gets old, and I've come to like the Microsoft Authenticator I've been using at work recently, but I don't want to outsource auth for my private network.
     

    It's also nice because if I get a push notification when I'm not trying to log in, I know something's awry.

  2. In theory, anything added to the smb-override.conf from my example should override anything already set in smb.conf, since it's under a [global] header.
    `testparm` would verify that. I personally only use it to override settings within shares, not parent samba settings. You shouldn't need to restart samba with the change being made in the `go` file, since the samba service isn't started until after array start, and the smb.conf file is created long before the disks are mounted, even with an unencrypted array.
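
    As a quick sanity check (just a sketch): `testparm` ships with samba and prints the merged configuration it actually loads, warning about anything it doesn't understand:

    testparm -s /etc/samba/smb.conf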

     

    Either way, I'm glad this helped you find a solution that works for you, and I'm glad you shared the result of your efforts. More options. More open. More flexibility!

  3. 1 hour ago, bmartino1 said:

     

    I would like to request a major overhaul to Samba. I would like to have Lime Tech / Unraid implement a way to disable Unraid's default smb.conf.

    This can be accomplished by having another line in smb.conf that includes an unraid-smb.conf located under the config folder.

     

    The smb.conf would then only have include lines for the extra conf, unraid-smb.conf, and shares.conf.

     

    This way we can add a GUI option that would # out unraid-smb.conf, with a warning that it's for advanced users only. Then advanced users can add their own [global] options to the extra config in the web GUI.

     

    I would just delete the smb.conf under /boot, but it's a persistent file. Otherwise, as other forum members have said, the extra config should by default trump Unraid's default options.

     

     

    I ended up implementing this my own way inside of Unraid since I needed the additional functionality this brings. It's pretty easy to do, actually.

    In /boot/config/go, I added this line:
     

    until [ -f /etc/samba/smb.conf ]; do sleep 1; done && echo -e "\tinclude = /boot/config/smb-override.conf" >> /etc/samba/smb.conf

     

    Then I added my custom samba changes to /boot/config/smb-override.conf.

     

    What the above does is:

    1. Wait for Unraid's smb.conf to be present in the ramdisk (realistically it should already be there by the time this executes, but we check and wait to be certain).
    2. Append a line that includes /boot/config/smb-override.conf after the include for smb-shares.conf, which is generated by Unraid.

    Samba processes these config files from top to bottom, and includes are processed inline. This means anything declared in the first include happens there, then the second, then the third, and each of those respective files is processed from top to bottom completely when samba stitches the configurations together. This is good for us, since we can now override any per-share settings we want to, or create additional shares outside of what Unraid provides.
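
    For illustration, /boot/config/smb-override.conf could then contain something like the following (the share names and paths here are hypothetical, and this leans on the override behavior described above):

    [isos]
        # Tighten a setting on a share Unraid already generated:
        read only = yes

    [scratch]
        # Or declare an extra share Unraid doesn't manage at all:
        path = /mnt/user/scratch
        browseable = yes
        read only = no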

     

  4. On 7/25/2023 at 2:27 PM, Smooth Beaver said:

    @Xaero Thank you!!!


    I'm silly and forget things sometimes, but you could also use the script feature of the Unassigned Devices plugin to handle starting the Plex container: it has a section where you can define a script to run when a path is mounted, which you could use to start Plex once the share is available (rough sketch below). Both ways work, and each has its advantages and disadvantages.
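
    Something along these lines - note this is from memory of the plugin's script template, so treat the $ACTION values and variable names as assumptions and compare against the template Unassigned Devices generates for you:

    #!/bin/bash
    #Called by Unassigned Devices with ACTION describing the event (assumed names).
    case "$ACTION" in
        'ADD')
            #The share was just mounted; safe to bring Plex up.
            docker start plex
            ;;
        'UNMOUNT'|'REMOVE')
            #The share is going away; stop Plex so it isn't pointing at a dead path.
            docker stop plex
            ;;
    esac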

  5. I'll put the TL;DR up front:

    I need OpenGL and preferably at least h.264 transcoding acceleration in a Windows 10 VM. I need to be able to remotely access this VM and still have that acceleration. I cannot use RDP or VNC for this. I may be able to use Looking Glass - but I'm not sure as they block lots of remote access tools where I work.


    And the more lengthy description afterward:
     

    I've been running numerous Windows and Linux VMs on my unraid box, but haven't needed actual hardware-accelerated graphics for any of them. Presently my server's configuration is this:
    2x Xeon E5-2658 v3

    2x 1TB NVMe cache pool (VM storage is here as well)
    128GB RAM
    NVIDIA Quadro P2000 for transcoding

    My VMs are currently:
    - Windows 10 remote access VM

    - Home Assistant

    - OPNSense (not actually using for my home network yet, just toying with the idea)

    - My work VM which runs Windows 10 21H2 Enterprise provided by my company, and whose software is "fairly restricted"

     

    The last VM is the one I suddenly need acceleration on. Until now QXL has been "sufficient", since most of my day-to-day is office-style productivity - at least in terms of video acceleration. I now need to be able to run h.264 transcodes and handle OpenGL acceleration in my work VM, because I'm additionally going to be recording and editing video content in the VM using hardware capture and desktop screen recording. Two of the applications I normally use to do that on my laptop at the office won't even open because I don't have OpenGL support.

    My server is in another room. My desktop is what I game/work on, and currently I use Spice to connect to the work VM, because it allows VNC/RDP-like functionality such as dynamic resolution and clipboard sharing without me needing to run RDP (explicitly blocked by policy) or VNC (also explicitly blocked by policy). I then place this VM on one of my 3 monitors and I'm off to the races for the day. I also use USB redirection from Spice to get some meeting peripherals mapped to the VM, but that can be handled another way. Spice cannot forward the video from a secondary graphics device, since it doesn't have access to the framebuffer. There's a guest streaming agent available, but most of what I read states it is both experimental and Linux-only.

    So my options kind of seem like:
    - Pass through an entire GPU, try to set up Spice streaming or Looking Glass, and see if my company fights me on it.

    - Figure out GPU partitioning and pass through resources from a GPU to the guest with a hybrid driver on the host, so I can continue to use the P2000 for Plex transcoding as well. (This seems like the best option, but I might need to upgrade from the P2000 to something beefier.)

     

    Any advice here? I'm not sure how to approach this since I both need "remote" (it's just in another room) access and hardware acceleration.

     

  6. On 7/5/2023 at 4:05 PM, loukaniko85 said:

    Also, the default position of "X shouldn't be exposed to the internet" is invalid. It's all contextual. Of course, don't simply port forward your unraid UI/port externally. As everyone has stated, unraid wasn't designed to be directly, externally visible. Keep in mind, the unraid team have implemented their own plugin to allow 2FA remote access to the dashboard.


    It is perfectly valid and reasonable to advise that neither the Unraid WebUI nor the underlying Unraid OS be exposed directly to the internet. It is not hardened or pen-tested against that environment and should not be exposed to it. Even if they add 2FA, this does not change the fact that the rest of the OS and the WebUI have not been properly audited for exposure to the internet. Couple that with the OS itself only having a root user, and it's just a bad idea to put it on the internet.

    If you want remote access to the WebUI, use a VPN. VPNs are designed from the ground up with a focus on providing secure remote access to machines and networks. They support 2FA. They provide peace of mind knowing that some 0-day exploit for the Unraid WebUI, or for one of the packages running on Unraid, isn't going to compromise your storage solution.

    The official unraid plugin is also a good option, since it's protected with 2FA, though I cannot personally comment on its usage.

  7. 3 hours ago, Smooth Beaver said:

    Hi, 

    I currently only run Plex on my server; it is mapped to another server via SMB. How can I get the Plex container to wait to start until the SMB share is mounted? I tried the wait option via advanced settings, but it appears to control the wait period after that container has started. How do I put a wait period on the first and only container?

     

    Thank you.


    There isn't a built-in way to do this. I would personally handle it with a script via the User Scripts plugin.

    The script itself would be quite simple:
     

    #!/bin/bash
    
    #We'll use a couple of helper functions to keep things easy to read.
    isMounted(){
        findmnt "$1" > /dev/null
    }
    
    isRunning(){
        #docker inspect prints "true" or "false"; compare the output so this
        #function's exit status actually reflects the running state.
        [ "$(docker inspect --format '{{.State.Running}}' "$1" 2>/dev/null)" = "true" ]
    }
    
    #We don't want to try and start an already running container.
    if ! isRunning "plex"; then
        #Plex isn't running; until the path is mounted, we sleep.
        until isMounted "/mnt/remotes/MySambaShareName"; do
            sleep 5
        done
        #Path is mounted, time to wake up and start Plex:
        docker start plex
    fi

     

    You would set that script to run at array start, and you would turn off autostart for the plex container.

    You'll obviously want to change the name of the container and the mount path to fit your environment.

  8. On 4/7/2023 at 8:23 PM, sannitig said:

    wtf is the point of a server then? Don't have users with R/W, don't expose it to the internet.

     

    Why not just say, don't build an unraid server. There, problem solved.

    The problem is that any permissions you allow increase the attack surface. If a user with full R/W access to a share has their PC compromised and a ransomware application uses that R/W access to encrypt the files, the vulnerability wasn't the fault of the file sharing, but of the permissions given to the user.

    Properly segregating user access to shares is not very complicated, and it's totally possible to do so while sufficiently mitigating the attack surface. A bunch of static install media used for making bootable media or PXE boot environments? It doesn't exactly need R/W access. But what about the logging, you ask? Excellent question: by using dynamic mapping you can create a per-client read/write directory automatically (a rough samba sketch is below). If ransomware wishes to encrypt some installer logs, fine; we can just delete those and move on with our day.
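
    For example, something like this in the samba config keeps the media itself read-only while giving every client machine its own writable log directory (the share names and paths here are hypothetical; %m is samba's substitution for the connecting client's NetBIOS name):

    [installmedia]
        path = /mnt/user/installmedia
        read only = yes

    [installlogs]
        # %m expands per connecting machine, so each client gets its own directory.
        path = /mnt/user/installlogs/%m
        read only = no
        root preexec = /bin/mkdir -p /mnt/user/installlogs/%m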

    Similarly, you can set up per-user shares fairly easily as well, and have those be read/write. Taking active snapshots of user directories has much less impact than taking active snapshots of the entire filesystem. For example, let's take a look at some of my data sets: I have an archive of some "gold" images for specific hardware. Each of these images is quite large, and I do have them backed up elsewhere. The total size of these images is around 10TB. By having that entire share read-only for everyone except local root, I've removed the need/want to ever snapshot those images. I've got an off-site backup should I need it. On the flip side, I have samba set up to point each WireGuard user at a different "Personal Folder" using the IP information for the WireGuard client. There's less than 100GB of data across all users in these read/write personal folders. I can afford to take incremental snapshots of these directories and rapidly recover should any of the clients get compromised. Not to mention, since the folder is per-user, only one user would be affected by such an attack.

  9. Kind of a trash attitude, man; he's just pointing you in the direction where you're likely to get a response. While there are employees on the forums, and they do help, this isn't exactly a direct inbox. The form on the website, on the other hand, is. I'm sure they'll get you sorted out, but it's probably going to require patience; rectifying mistakes usually does. Accidents happen, no need to fly off the handle when someone is trying to be helpful.

  10. 14 hours ago, JamieV said:

    Why on earth are you implementing archaic complexity requirements? Do you not follow the best practices for Identity - password complexity DOES NOT improve security, in fact there is strong evidence to prove it weakens security. 

     

    Please, please, please, follow best practices:

    Use a minimum password length of at least 12 and maximum at least 64

    Drop complexity requirements

    Check passwords against a compromised password list (e.g. haveibeenpwned.com)

    Encourage the use of pass phrases. "robot banana gunmetal" is a lot more secure than "Pa$$w0rd1" which meets your complexity rules - https://haveibeenpwned.com/Passwords

     

    Don't just take my word for it though:

    https://letmegooglethat.com/?q=modern+password+requirements

     

    These best practices do come with a caveat: using brute force alone, a 12-character password consisting of only upper- and lower-case English alphabet characters would take just over two years to crack on a single RTX 6000 (rough math below). That time scales down more or less linearly with the number of GPUs you add. And that's just brute force. Rainbow-table-based dictionary attacks can throw the entire English dictionary and all documented first, last, middle, and pet names at the problem in a fraction of that time, and then start concatenating them together for additional attempts.

    The best practices assume a well-designed validation model. Per-user time-delay login lockouts ("Too many failed attempts! Try again in 15 minutes!") increase the time to crack exponentially. 2FA also practically eliminates brute force as an attack vector.

    P.S. @ljm42 the site you linked would seem to suggest that cracking even just a 12-character upper/lowercase password is sufficiently complex for most users. It's using some pretty outdated data that doesn't take GPU compute or rainbow tables into account, though.
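
    For anyone who wants to sanity-check that two-year figure, the back-of-the-envelope version looks like this (the guesses-per-second rate is purely an assumption for illustration; real rates depend entirely on the hash in use):

    # 52 possible characters (a-z, A-Z) across 12 positions:
    echo "52^12" | bc                                        # ~3.9 * 10^20 combinations
    # Assume ~6 * 10^12 guesses/second for a fast, unsalted hash on one high-end GPU:
    echo "52^12 / (6*10^12) / 60 / 60 / 24 / 365" | bc -l    # ~2 years to exhaust the keyspace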

  11. Just to throw my idea in the ring, I would use responsive layout to show/hide breadcrumbs depending on screen width.
    The easiest way to approach it is making some educated assumptions as to the "most useful" breadcrumbs to display. In general that's going to be:

    Top Level > Direct Parent > Topic

    This can be expanded dynamically to include each parent nest above the direct parent until we reach the top level. In the example of these multi-language subforums the "minimum" breadcrumb ribbon would be:

    HOME > GERMAN > TOPIC

    Unfortunately, I don't think this can be achieved with pure CSS, and it would likely require jQuery.

  12. On 5/29/2022 at 3:57 PM, Dreeas said:

    Humor me, with a reverse proxy (NGINX and cloudflare) and a long complex password generated from a password manager, how much trouble would a hacker go through just to get access through my exposed WebGUI to my humble server? 


    This is worth replying to, and I noticed that nobody had yet. The concern isn't exactly that your password would be insecure against an attacker, but rather that the Unraid WebUI does not undergo regular penetration testing and security auditing, and as such should not be considered hardened against other attacks. Those attacks could bypass the need for a password entirely, which is a much bigger concern. 2FA, when implemented correctly, protects the login itself, but it still would not make it safe to expose the WebUI directly to the internet.

    Since it isn't audited and hardened, and has endpoints that directly interact with the OS, it's likely that an attacker could easily find a surface that allows them read/write access to the filesystem as the root user, and the ability to remotely execute arbitrary code, including opening a reverse SSH tunnel to their local machine giving them full terminal access to your server without ever having to know a username or password.

    As far as the effort required: it varies greatly, but for many of these types of vulnerabilities, attackers have written automated toolkits that scan for and exploit them with no interaction required on their part.

    TL;DR:

    Don't expose your WebUI to the internet. This has been stressed heavily by both Limetech and knowledgeable members of the community for a reason. Extend this further to NEVER expose a system with ONLY an administrator or system level account to the internet.


    P.S. If I am wrong about the regular security auditing, please do let me know and I will remove that claim from this post. But as far as I am aware, and as far as Limetech has made public, there is no such testing done, which is fine for a system that does not get exposed to the internet.

  13. On 10/5/2022 at 10:06 PM, FixYouDeveloper said:


    While I agree with this standpoint, the Wireguard ports on the server are exposed? It would be prudent to recheck if we need to bump priority of 2FA / 1FA later down the line.

    Apple Passkeys are really cool - I would love this system of auth.

    Just how exposed will unraid servers be in the future? Perhaps we would be looking at defending from compromised internal gateways such as Wireguard, IoT devices, etc.

    Only if you expose those ports to the internet are they exposed, and depending on how you configure things, connecting to WireGuard doesn't have to expose the WebUI. Beyond that, WireGuard is deliberately silent: there's no listening service that will reply to an unauthenticated request. That's partly security via obscurity, but it also means most attackers aren't going to be privy to your use of WireGuard just through traditional port scanning, and even if they were, they'd have to have the correct key pair to authenticate. Barring a pretty egregious error on WireGuard's part security-wise, it'd be an incredibly poor attack vector even for a skilled attacker.
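
    To illustrate the port-scanning point, a UDP scan of WireGuard's default port (51820 here; substitute your own host and port) can't distinguish a silently-dropping firewall from a listening WireGuard instance - both come back as open|filtered:

    nmap -sU -p 51820 your.server.example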

  14. There's so much misinformation/disinformation in your post that it's hard to pick a place to start, and I'd almost think it was satire.
    For example, chmod, chroot, and b2sum are all part of the stock unraid release. Nothing additional was downloaded for those. Tmux was the first and only tool you listed that isn't part of stock unraid, and it could easily have been downloaded while following tutorials that used it. Additionally, hugepages are simply either supported by the kernel or not; they don't really get "enabled or disabled" per se, but they can be adjusted. A Hugepagesize of 2048kB is the default, and the current release of unraid uses a kernel that supports hugepages. "/" is the root directory. "/root" is the root user's home directory. People mix this up all the time because we use the term root and the username root a lot. To keep them separate: root is the superuser ("su"), and the root directory is the base directory of the filesystem ("/").
    /bin and /sbin are distinct directories; unless one was linked on top of the other, there probably wasn't much cause for concern. Certain objects in /sbin will just be symlinks to /bin/executablename, and this is by design. Without detailed information on the symptoms, the changes that were made, and log output, it's difficult to say whether you actually had a compromised system or just had something configured incorrectly.

  15. I've been toying around with the steam-headless docker and the nvidia driver package for a bit. Without nvidia-persistenced started, as soon as `steam` runs within the docker the power state is P0 and never leaves it. Even after closing the docker down, the power state remains at P0. Killing all handles with `fuser -kv /dev/dri/card0` also kills the unraid X server and leaves nothing to manage the power state, so it stays at P0, at least until I `startx`, which grabs the card again and shifts it back to P8. With the daemon running, steam doesn't lock it to P0; instead the frequency scales dynamically as expected, initially going to P5 pulling 10W (instead of the 20W at P0 idle) and then eventually settling back down to P8 and pulling 5W or less.

    Before starting nvidia-persistenced: [screenshot]

    After starting nvidia-persistenced: [screenshot]

    The effect is immediate as can be seen above. Killing the persistence daemon results in the power state being locked at wherever it was left until the next application requests a higher power state.
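
    If anyone wants to reproduce this, the power state and draw can be watched from the host with nvidia-smi while starting and stopping the container (double-check the query fields against `nvidia-smi --help-query-gpu` on your driver build):

    nvidia-smi --query-gpu=pstate,power.draw --format=csv -l 2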


    EDIT:

    Turns out there is one more caveat to this, and maybe an undesired effect: nvidia-persistenced keeps the driver loaded even when zero processes are using the GPU, which does keep power consumption higher than if no driver were loaded at all. I don't know why it also allows the power state to drop for steam; that's just what happens in practice. I'm sure others can test and either validate or invalidate my findings.

  16. 3 hours ago, mgutt said:

    I found the reason. A linux client needs a "zeroconf" setup to be able to resolve .local hostnames. This is done by the avahi-daemon which includes mDNS.

     

    PROBLEM: This daemon is missing in most of the docker containers including NPM and that is the reason why they can not resolve .local hostnames.

     

    More information:

    https://wiki.debian.org/Avahi


     

    So these .local hostnames are NOT part of your DNS server. Instead, a client which supports mDNS automatically sends a multicast message to 224.0.0.251 on UDP port 5353 to resolve a .local hostname. And as all mDNS clients are listening to this traffic, the target machine answers with its IP (which is then stored in a local mDNS cache):

    https://en.wikipedia.org/wiki/Multicast_DNS#Protocol_overview

     

    You could add those .local hostnames manually to your DNS server, but by doing that you "break" the zeroconf concept, because IP changes are not updated automatically; instead you need to update the DNS entries on your own.

     

    I think we both learned something new 😋



    You would be correct: we both learned something new. I never would have guessed that the ".local" suffix specifically gets different handling via zeroconf/avahi/multicast.

    Neat!

    Yeah, I don't really feel I should break zeroconf by manually adding those entries to DNS. What I've got going for now works. Next I've got to tackle taking my ISP's gateway out of the picture, because it explicitly does not support certain things (changing the DNS and NAT loopback being the important ones for me).
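
    For anyone else following along, a quick way to confirm .local resolution from a Linux client (assuming the avahi-utils package is installed) is something like:

    avahi-resolve -4 -n octopi.local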

  17. 4 minutes ago, mgutt said:

    Ah ok.

     

    That's my favorite setup as it works flawlessly with IPv6, too.

     

    Only out of interest: Did you even try running NPM in br0? I think this should work, too, as it should be able to reach your local DNS information as it is then part of your local network. In bridge mode NPM runs isolated from your network, so yes, in this network mode it shouldn't be able to resolve any domains from your local network.

    That's how it came set up out of the box, and it was unable to resolve hostnames. Even specifying a DNS server with the --dns flag in docker, it would not resolve hostnames. I believe there is something missing in the br0 config for this to work beyond just DNS, because `curl <DNS IP>` (an address on my local subnet) results in "no route to host". That's odd, since the br0 network and my local subnet are the same scope (192.168.1.x). Either way, host networking works and all my services are reachable how I want them to be again.

  18. 12 hours ago, mgutt said:

    This sounds like you want to reach the container by its name? So you created a custom network? Is NPM part of this custom network and the target container as well? As far as I know the br0 network does not contain a DNS server. Only user defined networks have this feature:

    https://stackoverflow.com/a/35691865/318765

     

    I think this is correct.

    The OctoPi instance is running on a Raspberry Pi and is reachable on my network as "octopi.local". I'm wanting to allow remote access to it through Nginx Proxy Manager, as well as to the other services on my network that aren't all on my server. I ended up switching to HOST networking, moving the unraid WebUI to ports 5000/5001, and then pointing my router at my unraid server's hostname. After switching to host network mode, hostname resolution works properly, and I have now set it up so that I can access all of my services. I also added an entry for my local unraid hostname and IP address to redirect to the unraid WebUI, so I can use those like I always have, transparently.

  19. Trying to switch to this docker from the unofficial one after many moons. I'm not changing any of the default options, so it is running on the custom (br0) network and has an IP on my local subnet. I am able to access the WebUI and set up a couple of test domains. Both of these error with 502.

    Looking at the access log:
    2022/08/10 19:41:24 [error] 810#810: *104 octopi.local could not be resolved (3: Host not found),

    I can't resolve any address on my local network; trying to curl octopi.local results in "could not resolve host: octopi.local". Using curl on the IP works fine. Additionally, I receive "no route to host" when trying to reach the unraid server itself, so I can't see/forward to any dockers hosted on that server. Not sure how best to approach this.

    (For example, I have a service running at 192.168.1.72:8443, and I get "no route to host" for 192.168.1.72, which is my unraid box.)

  20. So I've been having an issue since installing a new SAS HBA (9305-24i in IT mode): one of the ports is flaky, and the two disks connected to that port (the 1st and 4th disks on the breakout) keep ticking up UDMA CRC errors. I confirmed it wasn't the cable by replacing it with a new one. I confirmed it's not the disks by swapping that entire branch with another. The issue follows the port on the HBA. This has left me in a "precarious" situation where one of my parity disks is disabled and one data disk is disabled. The issue initially seemed resolved after swapping ports on the HBA, so I chalked it up to "maybe it was just a loose seat", and now I'm nowhere near the server to do any maintenance, and there's nobody else who can do it for me.


    I plan on powering off the server until a new HBA arrives to replace this one, but until I am able to migrate some services around I can't really do that. Since I have dual parity, the disabled disks are still emulated, but if I lose one more disk I'm kind of SOL on recovery, correct? So far this is the only hardware issue indicated.

  21. Anybody able to get the GPU to idle properly with this docker?
    As long as Steam is running, the lowest power state it will enter is P0 instead of dropping to P8 (2D clocks). This is a pretty substantial power draw difference when the docker isn't actually doing anything: 20W in P0 vs 7W in P8 on my system. I had the Debian Steam nvidia docker before and ran it without this issue. For now I'm just closing Steam within the docker and then manually opening it again when I want to use it, but that's kind of a PITA and requires constant manual intervention.