jameson_uk

Everything posted by jameson_uk

  1. I am currently running an old Gen 7 HP Microserver with 4 x 2 TB drives and a 500GB SSD cache drive. It is starting to show its age and I was wondering about upgrade options. One main consideration is the aging GT710 GPU I have in there for Plex decoding; I would love to be able to shift to an iGPU (I am not interested in 4K and the like, it is mainly for H.264 1080p content). As with others, I guess, the ubiquity of streaming services means my use of it as a true NAS has somewhat diminished, and the main use of the server now is running docker containers (Home Assistant, Pihole, Zigbee2MQTT....). If I didn't still need the NAS at all I would just replace everything with an Alder Lake N100 mini PC; a TDP of 6W should save me some money on running costs and give me a big boost in performance. However, the low power mini PCs seem to come with at most two SATA connectors, which makes sense given the small power supply but rules them out as a direct replacement. The HP Microserver averages about 50W (so about 1.2 kWh per day), so I am wondering whether there is any modern upgrade that will beat this on power consumption whilst being quiet (ideally fanless), give me a performance boost and give me better graphics?
  2. I have my network divided up into a few VLANs, primarily trusted stuff vs IoT devices. This all works great: IoT stuff is nicely segregated on VLAN tagged traffic and trusted stuff is untagged. On the Unraid side I have enabled VLANs in network settings and configured a new address on the IoT VLAN (and excluded it from the management interfaces). I have some containers I want on the trusted side and this all works well via a bridge network and port mappings of <trusted IP>:<port>, so they are only accessible on the trusted network. I then have a bunch of containers I want on the untrusted side. I have created a docker bridge network and have all these containers running in there. Those I want to be accessible have port mappings of <untrusted IP>:<port> and the rest have no mappings at all (these are only accessible to other containers in this docker network). All good so far: some of the containers are not accessible externally (e.g. the MQTT server, which only sits between Zigbee2MQTT and Home Assistant so has no need to be visible elsewhere) and the containers I do want to be accessible appear on the untrusted VLAN. The only issue is that outbound traffic from containers in this bridge network is able to reach the trusted LAN. So whilst inbound traffic is tied to the IoT VLAN, outbound is not. I guess this makes sense as the bridge just bridges the docker network to the host (Unraid), which has a route to the trusted LAN so it can get through. Is there any way to force outbound traffic back out through the VLAN tagged (virtual) interface to prevent this?
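     To illustrate what I am trying to stop: assuming the untrusted docker bridge is 172.30.0.0/24 and the trusted LAN is 192.168.1.0/24 (both made-up values), I guess a rule in docker's DOCKER-USER chain along these lines would drop that traffic, though I don't know whether it is the right approach on Unraid and it wouldn't survive a reboot on its own:
        # reject new connections from the untrusted docker bridge to the trusted LAN;
        # replies to connections the trusted side opened are still allowed
        iptables -I DOCKER-USER -s 172.30.0.0/24 -d 192.168.1.0/24 -m conntrack --ctstate NEW -j REJECT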
  3. I am hoping it fixed /run running out of space?
  4. I use the card for H.264 decoding in Plex (it is just about the latest passive card I could find). I don't need the latest driver; I am just wondering whether, if I did need an updated driver, it would appear in the plugin or not.
  5. How do the available versions get populated? I am running an old GT-710 so I am on 470.182.03, which shows up under available versions, but I see that Nvidia released 470.199.02 a few days ago. I am not in any particular hurry to upgrade, just wondering whether updates will show up in the plugin or, because it is the old legacy driver, whether I need to update it manually.
  6. My array has 4 x 2TB drives running ReiserFS. As this is no longer supported I have been looking at switching to XFS, but I am trying to work out the best way to achieve it. AIUI the only viable way is to copy the drive contents to another drive, format the original and copy it all back, so I am just looking at the fastest way to do that without spending a huge amount. I have a pretty old HP Microserver which is fairly limited in resources (the array plus cache take up all the SATA slots and USB is USB 2.0 only). There is however an eSATA port. I have an old USB WD Elements drive which I have hooked up and am running rsync against now; it has taken 2 hours to copy 182GB, so I am looking at 20-something hours to copy the 1.4 TB that is used on the drive, then 20+ hours to copy it all back, and then the same again for the other two data drives. It isn't the end of the world, but is there a better way to do this? I did look at connecting a drive via eSATA, but there seem to be limited options and most of the enclosures you can get are £100+, and I don't want to be spending anywhere near that amount.... Any thoughts on any cheap ways to speed this up? I am thinking I could crack open the WD external HDD, take the array offline, replace one of the disks with the WD one and hopefully get it done a bit quicker over SATA (it is only SATA 2.0), or perhaps use a USB 3 PCIe card to connect the WD Elements drive? Any other ideas?
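     For reference, the sort of rsync invocation I am running (a sketch only; /mnt/disk1 and /mnt/disks/elements are example paths for the array disk and the external drive):
        # archive mode keeps permissions and timestamps; trailing slashes copy the contents
        # rather than the directory itself; -h and --progress just make the output readable
        rsync -avh --progress /mnt/disk1/ /mnt/disks/elements/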
  7. The upgrade mostly went well; the ability to limit admin access by interface seems better and I have been able to remove some of my hacks. Docker however seems to have an issue filling up the tmpfs on /run. I have been running the same containers for a couple of years without issue, but since moving to 6.12 I have had issues with containers crashing and refusing to start. Adding --no-healthcheck has helped, but this is a workaround rather than a solution. Has something changed around docker using /run in 6.12?
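     For anyone wondering what the workaround looks like: I add --no-healthcheck to the container's Extra Parameters field in the Unraid template (advanced view), which gets passed through to docker run and disables the image's built-in HEALTHCHECK. As a plain docker command it is roughly this (some-container and some/image are placeholders):
        docker run -d --no-healthcheck --name some-container some/image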
  8. Indeed. I actually reduced the number of containers I am running when I upgraded to 6.12. I am assuming the issues are since upgrading to 6.12?
  9. So, coming back to this now (it appears this might be cleaner in 6.12). The first part is adding Unraid onto the VLAN and preventing access to the admin stuff (SSH, web GUI etc.). This was a bit of a pain previously and involved some hacks, but it seems you can do it relatively easily in 6.12. Adding Unraid onto the VLAN was just a case of defining the VLAN in the network settings and assigning an IP. There is now an Interface Extra section in the config where I set the listening interface to br0 and excluded br0.10. This means I can only access admin services on the main (br0) IP (and annoyances like the FTP server refusing to stay stopped, Samba needing hacks to config files etc. appear to all be in the past). I then created my own docker network with docker network create --subnet x.x.x.x/24 iot-bridge, then went through each of the containers I wanted to link and set their network to iot-bridge. On most I removed the exposed ports so they are only accessible by other containers running on the same network. The final step was to set the exposed ports on the containers I did want accessible to include the IP. One thing I have done is put at least one service behind an Nginx reverse proxy (so I could enable TLS), so on the nginx container I changed the port from 443 to x.x.x.x:443 (x.x.x.x being the IP of the Unraid box on eth0.10). Now the nginx proxy is only accessible on VLAN 10 and the container behind it is not directly accessible at all. I have a few other things like an MQTT server that are purely accessible on the docker network, which makes me a lot happier than where I was previously.
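     Putting the docker side of that together as plain commands (a sketch with made-up values: 172.31.10.0/24 for the subnet, 192.168.10.5 for the Unraid address on eth0.10 and nginx-proxy as a placeholder name; in Unraid this is all set through the container templates rather than docker run):
        # the user-defined bridge the IoT containers share
        docker network create --subnet 172.31.10.0/24 iot-bridge
        # containers that only need to talk to each other publish no ports at all;
        # anything that should be reachable on the IoT VLAN publishes against that IP only
        docker run -d --name nginx-proxy --network iot-bridge -p 192.168.10.5:443:443 nginx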
  10. The solution lies in turning off healthchecks for the containers: not ideal, but it stops /run filling up.
  11. Having the same issue here. Looks like the fundamental root cause is that /run has run out of space.
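     A quick way to confirm from the Unraid terminal (standard commands, nothing Unraid-specific):
        # how full the tmpfs is
        df -h /run
        # what is actually taking the space, largest entries last
        du -sh /run/* 2>/dev/null | sort -h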
  12. It is volumes rather than images. I only update my containers through the Unraid UI (as far as I can think of, anyway). Looking again now, I am not sure whether this is something odd in Portainer. It was showing lots of unused volumes so I deleted them. It is now showing two volumes but I have many more than that (Portainer itself shows volumes against containers correctly, just not in the volume list 😕). I will see if I can establish when they appear and what they relate to.
  13. Not sure if this is just a docker feature or whether it is down to how Unraid updates containers, but I regularly have to go in and seem to end up deleting about 10 unused volumes. I am guessing it must be something like: when a container is updated the old volume gets disconnected and a new one created, and the old one then sits there in limbo? Is there any way to stop this build-up of volumes happening?
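     For anyone tidying these up by hand, the standard docker CLI does it (worth eyeballing the list before pruning):
        # volumes not referenced by any container
        docker volume ls -f dangling=true
        # remove them (depending on the docker version, unused named volumes may also need --all)
        docker volume prune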
  14. OK, I am not sure what format it is in (it has worked fine for ages): https://hub.docker.com/r/koenkk/zigbee2mqtt/ I did however see https://github.com/Koenkk/zigbee2mqtt/pull/16297 which was merged yesterday and seems to link back to https://github.com/docker/buildx/issues/1509 so it looks like that is probably the issue. I will try again later when I get a chance.
  15. I have one particular container which seems to have lost its ability to show the update status. It is a standard container pulled from Docker Hub but it has suddenly started showing up as "Not Available". All the other containers are fine, and if I do a force update the status goes back to "Up to date" but then it just returns to "Not Available" (I haven't checked when, but I assume it is when the container restarts). I have deleted the image and recreated it but the issue is still there. Any ideas why this one container is behaving like this?
  16. Yes, the easiest way to achieve this is to use a separate docker network, and it isn't too complicated. I run Home Assistant, MQTT and Zigbee2MQTT all in the same network but only HA is accessible from the LAN. I am going off memory now, but roughly you just need to open up a terminal window for your server and run docker network create <some_name> (if you want you can also assign a subnet using --subnet=x.x.x.x/y). After creating it you should see the network appear in the network type drop-down for the container in Unraid. Just put whichever containers you want onto that network and they will all be able to talk to each other, and you can then control which are accessible from the LAN. You don't need fixed IPs when using a custom bridge network like this; you can just reference the other containers by name (you can also use the advanced options to add a --hostname parameter). The only thing to be wary of is that I think you need to go into the docker settings in the UI and set the option "Preserve user defined networks" (under advanced settings) to yes, else your network will get deleted when you restart. A rough example follows below.
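     As plain docker commands the example looks roughly like this (home-assistant and mosquitto are placeholder names and it is simplified; in Unraid you would set the network and ports in each container's template instead):
        # one-off: create the shared network (the subnet is optional)
        docker network create --subnet=172.28.0.0/24 smarthome
        # only Home Assistant publishes a port to the LAN
        docker run -d --name home-assistant --network smarthome -p 8123:8123 ghcr.io/home-assistant/home-assistant:stable
        # the broker publishes nothing and is reached by name from the other containers,
        # e.g. as mqtt://mosquitto:1883
        docker run -d --name mosquitto --network smarthome eclipse-mosquitto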
  17. Bit of a killer for me as I need to run some containers on ports 80/443 on a secondary NIC, which was working fine for me with BIND_MGT="yes", but since upgrading to 6.10.1 the Unraid UI binds to ports 80 & 443 on all interfaces. SSH is only running on eth0, but I can't remember whether this was affected by BIND_MGT or not.
  18. Are you asking whether it is more secure to expose your docker containers to the internet or to use a reverse proxy in a VM? There isn't really a simple answer as it comes down to how secure your containers are and how things are configured. With a lot of containers not being configured for TLS, opening unnecessary ports and potentially running older web servers with unpatched vulnerabilities, I have my services set up behind a proxy server (which is actually running as a docker container itself rather than a VM). The flip side of this is that there is a single point of access, so if a vulnerability were found and exploited the attackers would most likely have access to whatever is behind the proxy; that said, it is probably more likely that the proxy will get patched regularly. I have my proxy server and the containers it sits in front of inside their own docker network. This in theory means that any exploit would be limited to only those services rather than exposing the whole of my network and other sensitive data / devices. It is all about risk. The safest / most secure way of doing things is not to connect anything to the internet, but that isn't exactly helpful. I would always assume being hacked is a possibility (even though you should follow best practice and do things like patching regularly, not reusing passwords, turning off things that aren't used....) and then consider what happens if you did get hacked.
  19. So when you run netstat -nplt, what does it say for port 22? Before I added ListenAddress x.x.x.x to /boot/config/ssh/sshd_config there was no listen address specified, so it was listening on 0.0.0.0:22 (i.e. all interfaces). The same goes for nginx; I see
        tcp   0  0 x.x.x.x:80   0.0.0.0:*   LISTEN   9182/nginx: master
        tcp6  0  0 :::80        :::*        LISTEN   9182/nginx: master
     so whilst it is bound correctly for IPv4 it is still bound to all interfaces for IPv6. I must admit there is one subtlety I have overlooked in my config in that my second NIC is actually a virtual interface (eth0.10), but regardless of that netstat was clearly showing sshd listening on 0.0.0.0:22, so I don't see why this wouldn't have worked from a second NIC.
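     For context, the change I made for sshd (x.x.x.x standing in for the eth0 address, as above) and the check afterwards:
        # /boot/config/ssh/sshd_config - only accept SSH on the main interface's address
        ListenAddress x.x.x.x
        # then confirm what sshd is actually bound to
        netstat -nplt | grep ':22 '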
  20. In the vain hope that someone might actually look into these issues.... I did some digging and noticed that BIND_MGT does work for IPv4, but I can see that nginx is listening on ports 80/443 on all IPv6 interfaces. So the setting only appears to be working for IPv4.
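     The check is easy to reproduce (net-tools netstat, which Unraid ships; run as root so the process names show):
        # the IPv4 sockets are bound to a single address, the IPv6 ones (tcp6 / :::80, :::443) are not
        netstat -nplt | grep nginx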
  21. Bumping this again: I also noticed that a port was open for rpc.statd. My understanding is that both rpcbind and rpc.statd are only needed if you are using NFS, so I would have thought it would be better to disable both when NFS is disabled.
  22. It seems there are a few changes around access defaults in 6.10, so just bumping this. Particularly with the new cloud functions, I would certainly only want to allow SSH on my local network.
  23. OK, I have started playing around with this and BIND_MGT="yes" seems to do the trick for the web interface. For Samba I added the following lines to the [global] section in the SMB config via the UI
        bind interfaces only = yes
        interfaces = lo eth0
     and that seems to work. SSH however is still listening on my second IP. I have added a VLAN interface in the network settings and assigned a static IP there. Now the only things I can see running on this VLAN IP are SSH, rpcbind (which is a separate question...) and the Docker mappings I have set up. Is it expected that BIND_MGT does not limit SSH, or is this a bug?
  24. I have in the past had OpenVPN set up to access my LAN remotely and that worked OK, but I have been looking at using WireGuard and I can't quite figure out the best way to set it up. My network is set up across three VLANs with some of the docker containers running on Unraid assigned macvlan addresses on the different VLANs. I want some fine-grained control over what can be accessed over the VPN, but I am not sure where the routing takes place in this setup. I have tried various settings and I am able to access the Unraid server frontend, but I can't seem to figure out how to access things and lock this down to specific IPs / ports. In reality I mainly want to give access to some servers on VLAN 2, but the Unraid box doesn't actually have an address on this VLAN (the docker containers are running as macvlan as I only have one NIC), and it would be nice if I was also able to access some boxes on VLAN 1. Currently this is set up as "Remote Access to LAN" and I have set up a static route for the VPN network to the Unraid server IP, which gives me access to everything on one VLAN, but I can't seem to get anything else to work. Anyone got anything similar working?
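     To illustrate the level of control I am after, this is roughly what the peer side looks like in raw WireGuard terms (all values made up: 10.253.0.0/24 as the tunnel subnet, 192.168.2.0/24 as VLAN 2 and my.unraid.example as the endpoint; on Unraid these fields are filled in on the WireGuard settings page, and port-level filtering would still need firewall rules on top):
        [Interface]
        PrivateKey = <client private key>
        Address = 10.253.0.2/32

        [Peer]
        PublicKey = <server public key>
        Endpoint = my.unraid.example:51820
        # only route the networks I actually want reachable over the VPN
        AllowedIPs = 10.253.0.0/24, 192.168.2.0/24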