jameson_uk


Posts posted by jameson_uk

  1. I am currently running an old Gen 7 HP Microserver that has 4 x 2 TB drives and a 500GB SSD cache drive.  This is now starting to show its age and I was just wondering about upgrade options.   One main consideration is the aging GT710 GPU I have in there for Plex decoding.   I would love to be able to shift to an iGPU (I am not interested in 4K and the like; it is mainly for H.264 1080p content).

     

    As with others, I guess, the ubiquity of streaming services has meant my use as a true NAS device has somewhat diminished, and the main use of my server now is running docker containers (Home Assistant, Pihole, Zigbee2MQTT....)   If I didn't still need the NAS at all I would just replace everything with an Alder Lake N100 mini PC; a TDP of 6W should save me some money on running costs and give me a big boost in performance.

     

    All the low power mini PCs seem to come with at most two SATA connectors, which makes sense given the small power supply, but rules them out as a direct replacement.

     

    The HP Microserver averages about 50W (so about 1.2 kWh per day), so I am wondering whether there is any modern upgrade that is going to beat this in terms of power consumption whilst being quiet (ideally fanless), give me a performance boost and give me better graphics?

  2. I have my network divided up into a few VLANs, primarily trusted stuff vs IOT devices etc.   This all works great: IOT stuff is nicely segregated on VLAN tagged traffic and trusted stuff is untagged.

     

    On the Unraid side I have enabled VLANs in network settings and configured a new address on the IOT VLAN (and excluded this from the management interfaces).

    I have some containers I want on the trusted side and this all works well via a bridge network and port mappings of <trusted IP>:<port> so they are only accessible on the trusted network.

     

    I then have a bunch of containers I want on the untrusted side.   I have created a docker bridge network and have all these containers running in there.   Those I want to be accessible have port mappings of <untrusted IP>:<port> and the rest have no mappings at all (these are only accessible to other containers in this docker network).

     

    All good so far, in that some of the containers are not accessible externally (eg. the MQTT server, which only sits between Zigbee2MQTT and Home Assistant so has no need to be visible elsewhere) and the containers I do want to be accessible appear on the untrusted VLAN.

     

    The only issue is that outbound traffic from containers in this bridge network is able to reach the trusted LAN.   So whilst inbound traffic is tied to the IOT VLAN, outbound is not.

    I guess this makes sense as the bridge just bridges the docker network to the host (Unraid), and that has a route to the trusted LAN so it can get through.   Is there any way to force outbound traffic back out through the VLAN tagged (virtual) interface to prevent this?
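    One option (just a sketch, with placeholder subnet/gateway values) would be to sidestep the bridge entirely and use a macvlan network parented on the VLAN sub-interface, so container traffic enters and leaves tagged on eth0.10 rather than routing via the host:

```shell
# Placeholder addresses throughout: a macvlan network parented on the
# VLAN 10 sub-interface gives each container its own MAC/IP on that VLAN,
# so outbound traffic also leaves tagged instead of routing via the host.
docker network create -d macvlan \
  --subnet 192.168.10.0/24 --gateway 192.168.10.1 \
  -o parent=eth0.10 iot-macvlan
```

    One caveat of macvlan is that, by default, containers on it cannot reach the host's own IP directly, which matters if any of them need to talk to services on Unraid itself.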

  3. 1 hour ago, ich777 said:

    May I ask for what do you use the card?

     

    Why do you need the new driver version?

    I use the card for H.264 decoding in Plex.  (It is just about the latest passive card I could find.)

     

    I don't need the latest driver; I am just wondering whether, if I did need an updated driver, it would appear in the plugin or not.

  4. How do the available versions get populated?
    I am running an old GT-710 so I am on 470.182.03, which shows up under available versions, but I see that NVidia released 470.199.02 a few days ago.

    Not in any particular hurry to upgrade but just wondering whether updates will show up in the plugin or, because it is the old legacy driver, whether I need to update it manually.

  5. My array has 4x2TB drives running reiserfs.    As this is no longer supported I have been looking at switching to XFS, and am just looking at the best way to achieve this.

     

    AIUI the only viable way to do this is to copy the drive contents to another drive, format the original and copy it back.
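    For the copy step, a minimal rsync sketch (the paths here are placeholders for your own mount points) would be:

```shell
# Placeholder paths: disk1 is the reiserfs drive being emptied,
# the target is wherever the temporary drive is mounted.
# -a preserves permissions/ownership/times, -v -h give readable output.
rsync -avh --progress /mnt/disk1/ /mnt/disks/elements/
```

    The trailing slash on the source copies the contents of disk1 rather than the directory itself, and re-running the same command after an interruption only transfers what is missing.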

     

    Just looking at the fastest way to achieve this without spending  a huge amount.

    I have a pretty old HP Microserver which is fairly limited in resources (Array + cache take up all the SATA slots and USB is USB 2.0 only).   There is however an e-sata port.

     

    I have an old USB WD Elements drive which I have hooked up and am just running rsync on now; it has taken 2 hours to copy 182GB, so I am looking at 20 something hours to copy the 1.4 TB that is used on the drive, then 20+ hours to copy it all back, and then repeating for the other two data drives.    It isn't the end of the world but I am just wondering if there is a better way to do this?

     

    I did look at connecting up a drive via e-sata but it seems there are limited options and most of the enclosures you can get are £100+, and I don't want to be spending anywhere near that amount....

     

    Any thoughts on any cheap ways to speed this up?

    I am thinking I could crack open the WD external HDD, take the array offline, replace one of the disks with the WD one and then hopefully get it done a bit quicker over SATA (it is only SATA 2.0), or perhaps use a USB 3 PCIe card to connect the WD Elements drive?   Any other ideas?

  6. The upgrade mostly went well; the ability to limit admin access by interface seems better and I have been able to remove some of my hacks :)

     

    Docker however seems to have an issue filling up the tmpfs on /run

     


    I have been running the same containers for a couple of years without issue, but since moving to 6.12 I have had issues with containers crashing and refusing to start.    Adding --no-healthcheck has helped but this is a workaround rather than a solution.

     

    Has something changed around docker using /run in 6.12?
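    For anyone else hitting this, a quick way to keep an eye on the tmpfs usage is:

```shell
# Show size, used and available space for the tmpfs mounted at /run
df -h /run
```

    Watching that figure while containers start and run health checks should show whether /run is genuinely what is filling up.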

  7. 44 minutes ago, S1nglebarrel said:

    Just chiming in that I to have this issue. It's weird though that /run is filling up now when I have been running the same amount of docker containers for over year.

    Indeed.    I actually reduced the number of containers I am running when I upgraded to 6.12.    I am assuming your issues also started since upgrading to 6.12?

  8. So coming back to this now (and it appears this might be cleaner in 6.12).

    The first part is adding Unraid onto the VLAN and preventing access to the admin stuff (SSH, web GUI etc.).   This was a bit of a pain previously and involved some hacks, but it seems you can do this relatively easily in 6.12.

     

    Adding Unraid onto the VLAN was just a case of defining the VLAN in the network settings and assigning an IP.

    Now there is an Interface Extra section in the config where I set the listening interface to br0 and excluded br0.10

    This means I can only access admin services on the main (br0) IP (and annoyances like the ftp server refusing to stay stopped, SAMBA needing hacks to config files etc. all appear to be in the past).

     

    I then created my own docker network

    docker network create --subnet x.x.x.x/24 iot-bridge

    then I went through each of the containers I wanted to link and set their network to iot-bridge.

    On most I removed the exposed ports so they are only accessible by other containers running on the same network.

     

    The final step was to set the exposed ports on the containers I did want accessible to include the IP.    One thing I have done is put at least one service behind an Nginx reverse proxy (so I could enable TLS), so on the nginx container I changed the port from 443 to x.x.x.x:443 (x.x.x.x being the IP of the Unraid box on eth0.10).

     

    Now the nginx proxy is only accessible on VLAN 10 and the container behind it is not directly accessible at all.   I have a few other things like an MQTT server that are purely accessible on the docker network, which makes me a lot happier than where I was previously.
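    Roughly, the container side of the steps above can be sketched as follows (subnet, IP and image names are just example placeholders):

```shell
# Placeholder values throughout. Create the isolated bridge network:
docker network create --subnet 172.30.0.0/24 iot-bridge

# A container reachable only by other containers on iot-bridge
# (no -p flag, so no host port is published at all):
docker run -d --name mqtt --network iot-bridge eclipse-mosquitto

# A container published only on the host's VLAN 10 address,
# so it is invisible from the trusted LAN:
docker run -d --name proxy --network iot-bridge \
  -p 192.168.10.5:443:443 nginx
```

    The key part is prefixing the published port with a specific host IP; without the prefix, docker binds to 0.0.0.0 and the service appears on every interface.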

  9. 1 hour ago, Squid said:

    There's an issue if you update from within Apps that it causes orphan images to appear.  Harmless since the orphans don't actually take up any space and on a todo list to fix

    It is volumes rather than images.   I only update my containers through the unraid UI (that I can think of anyway).

    Looking again now, I am not sure whether this is something odd in Portainer.   It was showing lots of unused volumes so I deleted them.   It is now showing two volumes but I have many more than that (Portainer itself shows volumes against containers correctly, just not in the volume list 😕)
    Will see if I can establish when they appear and what they relate to

  10. Not sure if this is just a docker feature or whether it is down to how unraid updates containers, but I regularly end up going in and deleting about 10 unused volumes.

     

    I am guessing it must be something like: when a container is updated the old volume gets disconnected and a new one created, and the old one then sits there in limbo?

     

    Is there any way to stop this build up of volumes happening?
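    Not a fix for the root cause, but the build-up can at least be cleared in one go from a terminal rather than deleting volumes one by one:

```shell
# Remove all volumes not currently referenced by any container.
# Prompts for confirmation first; add -f to skip the prompt.
docker volume prune
```

    Worth double-checking the list it prints before confirming, since any volume not attached to a container at that moment will be removed.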

  11. I have one particular container which seems to have lost its ability to show the update status.

    It is a standard container pulled from Docker Hub but it has suddenly started showing up as "Not Available".   All the other containers are fine, and if I do a force update the status goes back to "Up to date", but it then just reverts to "Not Available" (I haven't checked when, but I assume it is when the container restarts).

     

    I have deleted the image and recreated it but the issue is still there.

     

    Any ideas why this one container is behaving like this?

  12. On 12/30/2022 at 2:37 AM, Greyberry said:

    Is there a way to make a port of a server-socket of a docker container only available for another docker container?

    I have an application behind my reverse proxy, and do not want it to be exposed to my LAN, but only to the reverse proxy, which is also running on the same unraid-machine.

     

    I guess it involves a separate docker-network and fixed ips within it for the docker containers. Can we do this on unraid? Or is there an even easyer solution I do not think of?

    Yes, the easiest way to achieve this is to use a separate docker network, and it isn't too complicated.   I run Home Assistant, MQTT and Zigbee2MQTT all in the same network but only HA is accessible from the LAN.
     

    I am going off memory now but roughly you just need to open up a terminal window for your server and run

    docker network create <some_name>

    If you want you can also assign a subnet using --subnet=x.x.x.x/y

    After creating you should see the network appear in the network type drop down for the container in Unraid.

     

    Just put whichever containers you want onto that network and they will all be able to talk to each other and you can then control which are accessible via LAN.

    You don't need fixed IPs if using a custom bridge network like this; you can just reference the other containers by name (you can also use the advanced options to add a --hostname parameter).

     

    The only thing to be wary of is that I think you need to go into the docker settings in the UI and set the option "Preserve user defined networks" (under advanced settings) to yes, else your network will get deleted when you restart.
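    As a concrete sketch (the network and container names here are just examples), the name resolution works like this:

```shell
# Example names throughout. Containers on the same user-defined bridge
# network resolve each other by container name via Docker's embedded DNS:
docker network create --subnet 172.31.0.0/24 app-net
docker run -d --name mosquitto --network app-net eclipse-mosquitto
docker run -d --name z2m --network app-net koenkk/zigbee2mqtt
# From inside the z2m container, the broker is reachable as
# mqtt://mosquitto:1883 — no fixed IPs and no published ports are
# needed for container-to-container traffic.
```

    Only the container you do want on the LAN (the reverse proxy in your case) then gets a -p port mapping.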

  13. On 11/4/2021 at 3:59 AM, Steace said:

    I'd like to know if it's more secure to have websites/containers using Unraid Docker

    OR

    Use a VM and setup everything on a Linux Distro with everything set as my Docker containers, witch is a couples of containers proxied though swag, I'm also using Nginx on the Swag container to host some websites.

     

    1. I don't plan of using docker inside the VM, just setup everything manually <-- I love that even if it's way more longer/complicated 🥳
    2. The Distro on the VM will be secured as much as it's possible. 🚀
    3. Everything pass through Cloudflare for the extra layer of protection ☁️

    Are you asking whether it is more secure to expose your docker containers to the internet or use a reverse proxy in a VM?

     

    There isn't really a simple answer as it comes down to how secure your containers are and how things are configured.

     

    With a lot of containers not being configured for TLS, opening unnecessary ports and potentially running older web servers with unpatched vulnerabilities, I have my services set up behind a proxy server (which is actually running as a docker container itself rather than a VM).   The flip side is that there is a single point of access, so if a vulnerability was found and exploited the hackers would most likely have access to whatever is behind the proxy; that said, it is probably more likely that the proxy will get patched regularly.

     

    I have my proxy server and the containers it sits in front of inside their own docker network.  This in theory means that any exploit would be limited to only those services rather than exposing the whole of my network and other sensitive data / devices.

     

    It is all about risk.   The safest / most secure way of doing things is not to connect anything to the internet, but that isn't exactly helpful.   I would always assume being hacked is a possibility (even though you should follow best practice and do things like patch regularly, not reuse passwords, turn off things that aren't used....) and then consider what happens if you did get hacked.

     

     

  14. 18 minutes ago, sjaak said:

    i cant connect with ssh to unRAID on the second NIC, bind_mgt works fine for me...

    also nginx is only listening on eth0. i don't have ssl enabled so no listening on port 443

    SMB/NFS is not affected with this setting, its not a management thing...

    So when you run 

    netstat -nplt

    what does it say for port 22?

    Before I added ListenAddress x.x.x.x to /boot/config/ssh/sshd_config there was no listen address specified, so it was listening on 0.0.0.0:22 (ie. all interfaces).

     

    As for nginx, the same applies; I see

    tcp        0      0 x.x.x.x:80       0.0.0.0:*               LISTEN      9182/nginx: master
    tcp6       0      0 :::80                   :::*                    LISTEN      9182/nginx: master

    So whilst it is bound correctly for ipv4 it is still bound to all interfaces for ipv6.

     

    I must admit there is one subtlety I have overlooked in my config, in that my second NIC is actually a virtual interface (eth0.10), but regardless of that netstat was clearly showing sshd listening on 0.0.0.0:22 so I don't see why this wouldn't have worked from a second NIC.

  15. OK I have started playing around with this and 

    BIND_MGT="yes"

    Seems to do the trick for the web interface.

     

    For Samba I added the following lines to [global] section in the SMB config via the UI

      bind interfaces only = yes
      interfaces = lo eth0

    and that seems to work.

     

    SSH however is still listening on my second IP.    I have added a VLAN interface in the network settings and assigned a static IP there.

     

    Now the only things I can see running on this VLAN IP are SSH and rpcbind (which is a separate question...) and the Docker mappings I have set up.

     

    Is my expectation above incorrect, in that BIND_MGT does not limit SSH, or is this a bug?

  16. I have in the past had OpenVPN set up to access my LAN remotely and that worked OK, but I have been looking at using WireGuard and I can't quite figure out the best way to set it up.

     

    My network is set up across three VLANs, with some of the docker containers running on Unraid assigned macvlan addresses on the different VLANs.  I want to have some fine-grained control over what can be accessed over the VPN but I am not sure where the routing takes place in this setup.

    [attached network diagram]

     

    I have tried various settings and I am able to access the Unraid server frontend, but I can't seem to figure out how to access other things and lock this down to specific IPs / ports.   In reality I mainly want to give access to some servers on VLAN 2, but the Unraid box doesn't actually have an address on this VLAN (the docker containers are running as macvlan as I only have one NIC); it would also be nice to be able to access some boxes on VLAN 1.

     

    Currently this is set up as "Remote Access to LAN" and I have set up a static route for the VPN network to the Unraid server IP; this gives me access to everything on one VLAN but I can't seem to get anything else to work.
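    One lever worth looking at (a sketch with placeholder subnets and hostname; the exact values depend on your own config) is the AllowedIPs setting in the client's peer section, which controls which destination ranges the client routes into the tunnel:

```ini
# Client-side peer section, placeholder values throughout.
# AllowedIPs limits what the client sends over the VPN — here only
# the VLAN 2 subnet plus a single host on VLAN 1.
[Peer]
PublicKey = <server public key>
Endpoint = your.dyndns.host:51820
AllowedIPs = 192.168.2.0/24, 192.168.1.10/32
```

    Note that AllowedIPs is routing, not a firewall; restricting access down to specific ports would still need firewall rules on the server side.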

     

    Anyone got anything similar working?