sphbecker

Members · 23 posts

  1. Just had this issue happen again on 6.12.4; this time I couldn't do anything to stop the loop but reboot the system.
  2. I also use rsync and had an issue with my GUI crashing, but I never made the connection. For me, the issue seemed to stop when I upgraded to 6.12.3, but that could also be a coincidence. I don't put a ton of data on my server, so I don't sync very often.
  3. Windows had the same limitation until Windows 10. Even now, enabling long path support requires a registry edit, which implies it isn't a fully supported feature.
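
     For reference, the registry change in question (on Windows 10 version 1607 or later) is a single DWORD; something like this, run from an elevated prompt, should enable it:

     reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f

     Even then, individual applications still have to opt in via their manifest, which is part of why it doesn't feel like a fully supported feature.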
  4. This is what I did. Run a command like this to give Unraid a second IP address on the default br0 interface:

     ip addr add 192.168.1.22/24 dev br0

     That will only last until your next reboot, so add the command to the /boot/config/go script to rerun it on each boot; it should go above the command that starts emhttp.

     At that point you have two choices. You can edit your docker container and, instead of providing only a port number like 443, provide the IP and port in the format 192.168.1.22:443. That container will then use the new IP address instead of Unraid's IP address. This is probably the easiest way to do it.

     Alternatively, if you have a lot of dockers working together for a specific purpose, you might want to create a custom docker network that binds to the second IP address:

     docker network create -o "com.docker.network.bridge.host_binding_ipv4"="192.168.1.22" my-docker-network

     Note that this command is permanent, so there is no need to add it to the go script. Any docker container you assign to my-docker-network will use the new IP with whatever ports you specify.

     Important note: there is a bug/limitation in Unraid's docker GUI. The port mapping information shown on the status screen will incorrectly show the ports mapped to the server's IP address; the mappings actually work on the custom IP address. This is only a display bug.
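
     For reference, a minimal sketch of what /boot/config/go could look like with the extra line in place (the IP and interface are the examples from above; the emhttp line is the stock one):

     #!/bin/bash
     # add the second IP to the default bridge before the web UI starts
     ip addr add 192.168.1.22/24 dev br0
     # Start the Management Utility
     /usr/local/sbin/emhttp &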
  5. When publishing a docker container's ports from a bridge network, the syntax 192.168.2.10:8443 can be used instead of 8443 if you want the port published only on a specific IP address on the server. This works as expected; however, the Docker GUI will incorrectly list Unraid's IP address in the port mappings list, regardless of which IP address is actually used by the container. That is only a minor GUI reporting issue; everything works as expected despite the incorrect mapping information displayed.

     However, when trying to change Unraid's HTTPS WebUI port in Settings > Management Access, the UI prevents the use of a port that is also used by a Docker container, even if that container is bound to a different IP address and there would be no conflict. The command netstat -tulpn also shows the port in question is not in use on the main IP address. This issue only exists for HTTPS; the HTTP configuration allows it.

     My example: Unraid's IP is 172.16.0.16 and the secondary IP is 172.16.0.23. I have an Nginx container in a bridge network publishing port 443 on 172.16.0.23. I would like to set Unraid's WebUI to 443, but it claims the port is already in use, even though my use of a different IP address means no conflict exists.

     As a workaround, I can do it in the other order: set Unraid to use port 443 first, then configure Nginx to use 172.16.0.23:443. Done that way, it works as expected, and each IP listens on port 443 with the expected service. unraid-diagnostics-20230818-1018.zip
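
     A quick way to see that there is no real conflict (using the IPs from my example) is to check which addresses actually hold the port:

     netstat -tulpn | grep ":443"

     With the workaround applied, this lists the container's binding on 172.16.0.23:443 separately from the web UI's binding, so the two clearly do not overlap.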
  6. Does anyone know exactly when /boot/config/go runs? I would like to use it to add a secondary IP address to the br0 interface, but I'd like to understand exactly where in the boot sequence it runs so I can anticipate any dependency issues. Specifically, I would like to create a custom docker bridge network and bind it to this secondary IP address. I am wondering whether the docker service will already have started before the go script runs. If so, it will have ignored the com.docker.network.bridge.host_binding_ipv4 setting because the IP was unknown at that time.
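
     In case it helps, a simple way to check the ordering empirically (a sketch; pgrep and logger are standard on Unraid's Slackware base) is to add a line like this near the top of /boot/config/go and read the syslog after a reboot:

     pgrep dockerd >/dev/null && logger "go: docker already running" || logger "go: docker not started yet"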
  7. Your best bet for a GUI configuration is to use the Bridge network, which creates an internal subnet for your dockers and allows port mapping from the UNRAID server's IP address to specific dockers. It works great; the only downside is that sharing the server's native IP can lead to port number conflicts. You can work around that by using non-standard port numbers, such as 8443 instead of 443, but that can get annoying. If you don't mind dipping into the command line, you can create your own custom docker network using the bridge driver, which works the same as above but lets you bind the docker network to a different local IP address, meaning your dockers can use a different LAN IP address than your server. That sounds like what you are looking for; see the example command below.
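
     For example (the IP address and network name are placeholders, and this assumes the secondary IP has already been added to a host interface):

     docker network create -o "com.docker.network.bridge.host_binding_ipv4"="192.168.1.22" my-docker-network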
  8. I upgraded to 6.12.3 as recommended and haven't had the issue again. Thank you for the suggestion, I should have thought to try that first.
  9. It will work for adding capacity, but performance will be inconsistent: files that happen to land on the SSD will perform differently from files that land on the other disks. Still, there's no reason it wouldn't work.
  10. It sounds like your plan is to use all new drives in the new system. You could always set up a second USB device with an unRaid trial. Keep in mind that unRaid uses file-level spanning, not block-level striping, meaning each drive in your system has a readable, standard Linux filesystem; each holds an incomplete collection of files, but combined they represent all of your files. That means you can easily plug the old drives into the new system; don't add them to the new array, just leave them as stand-alone drives and copy the files to their new destinations. Ignore the parity drive, it isn't needed for this. You could also use an Ubuntu live USB to boot the older server and copy the files over the network, but unless you have a 10Gb network, that will take far longer.
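
      A hedged sketch of the copy step, assuming the old disk is mounted with the Unassigned Devices plugin at /mnt/disks/olddisk1 and the destination share is named media (both names are placeholders):

      rsync -avP /mnt/disks/olddisk1/ /mnt/user/media/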
  11. Just had the exact same problem on 6.12.3. Manually updated one container, then realized I had several more and just pressed Update All. Now stuck in an update loop. EDIT: I pressed Stop All, which didn't end the update loop, but then pressed Check for Updates, which, after the current cycle completed, did seem to stop the loop. I was then able to start the dockers normally. Not a great solution, but it at least prevented a full server reboot.
  12. Two months ago I upgraded to 6.12.1 from the last stable 6.11 version (I never installed any of the 6.12 betas). Ever since then, the unRAID web admin UI repeatedly goes offline. I don't use the UI daily, so I can't say exactly how long it takes, but the symptom is that the port opens but does not respond to GET requests. In the unresponsive state, all services, dockers, and VMs work fine and SSH access still works; the UI seems to be the only thing affected. My solution so far has been to reboot the server via SSH. The web UI then works for a while, but if I come back a few days later it will be unresponsive again and require another reboot. The server has 32 GB of RAM and all volumes have tons of available space. I only run 2 small VMs and a few small dockers; the server typically uses only about a third of its RAM. Nothing about my configuration has changed in about a year, other than regularly installing updates. Primary question: how should I troubleshoot and solve this issue? Secondary question: is there an SSH command to restart the web UI service without a full reboot? (I have searched for this and surprisingly have not found an answer.)
  13. I have read all the 6.12 release notes but haven't tried it out yet because I prefer to wait for the general release. From what I read, it looks like we will have the ability to create a ZFS zpool while the traditional unRaid array remains as it is. I'd like to switch to ZFS, but I have a small system and don't have enough drives to use both. My question: is it reasonable to use a ZFS pool instead of the unRaid array? Or do enough things in unRaid expect the array to be running that doing so would be a major hassle?
  14. I had the same question and don't fully understand your reply. I too am using a custom docker network and setting a static IP, but that static IP is within the Docker network's subnet, so it is not directly reachable from my LAN. If I want to make it reachable, I need to add port mappings to the docker config, which map it back to the unRAID host IP address, so I am still limited to only one system listening on any given port number. Did I misunderstand something about your reply?
  15. Actually, you don't want to use a virtual interface at all for pfSense or OPNsense on unRAID. The native KVM VirtIO NIC driver does not work on FreeBSD guests, so you can only use the Intel or VMware emulated drivers, and those carry a massive performance penalty. That is not theoretical; I tried it and was only able to get about 300 Mbps with VMware and 200 Mbps with Intel.

      I had an old Intel 4-port gigabit server card from another project, so I put that in and used SR-IOV to pass two of its ports to the VM as PCIe devices so that OPNsense can use them natively. One port is assigned as WAN and plugs into the modem; the other is LAN and plugs into my switch. Another port on that same card is assigned to the unRAID host and plugs into the switch. It might seem silly to have two connections from the same physical system plugged into the same switch, but any other configuration would result in a big performance drop. (Also, I have gigabit internet and only a gigabit switch, so this configuration lets one user max out the internet connection while another maxes out data transfer to unRAID, without the two sharing the same gigabit link.)

      With proper integration between the hypervisor and the virtual firewall, using a virtual interface is fine; I did this exact setup on Hyper-V and got full bandwidth on virtual NICs. I am sure a Linux-based firewall on unRAID would also work, just not a FreeBSD-based one.
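
      In case it helps anyone attempting the same setup, a minimal sketch of creating the virtual functions for SR-IOV (the interface name is an example, and the NIC, driver, and motherboard must all support SR-IOV):

      echo 2 > /sys/class/net/eth1/device/sriov_numvfs

      The resulting VFs appear as separate PCIe devices that can then be passed to the VM like any other PCIe device.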