sphbecker

Everything posted by sphbecker

  1. Just had this issue happen again on 6.12.4; this time I couldn't do anything to stop the loop but reboot the system.
  2. I also use rsync and had an issue with my GUI crashing, but I never made the connection. For me, the issue seemed to stop when I upgraded to 6.12.3, but that could also be a coincidence. I don't put enough data on my server to need frequent syncs.
  3. Windows had the same limitation until Windows 10. Even now, enabling long path support requires a registry edit, which implies it isn't a fully supported feature.
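     For reference (assuming current Windows 10/11 behavior), the setting is the LongPathsEnabled value, which can be flipped from an elevated prompt:
     reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
     Even then, applications have to opt in via their manifest, which is part of why it doesn't feel like a fully supported feature.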
  4. This is what I did. Run a command like this to give Unraid a second IP address on the default br0 interface:
     ip addr add 192.168.1.22/24 dev br0
     That will only last until your next reboot, so add the command to the /boot/config/go script to rerun it on each boot; it should go above the command that starts emhttp.
     At that point you have two choices. You can edit your docker container and, instead of providing only a port number like 443, provide IP and port in the format 192.168.1.22:443. That container will then use the new IP address instead of Unraid's IP address. That is probably the easiest way to do it.
     Alternatively, if you have a lot of dockers working together for a specific purpose, you might want to create a custom docker network that binds to the second IP address. That can be done with this command (note that it is permanent, so there is no need to add it to the go script):
     docker network create -o "com.docker.network.bridge.host_binding_ipv4"="192.168.1.22" my-docker-network
     Any docker container you assign to my-docker-network will then use the IP provided with whatever ports you specify.
     Important note: there is a bug/limitation in Unraid's docker GUI. The port mapping information shown on the status screen will incorrectly show the ports mapped to the server's IP address; however, they will work correctly on the custom IP address. This is only a display bug.
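     To illustrate the go script placement, here is a minimal sketch (the stock file may differ slightly by version, but the emhttp line is what ships by default):
     #!/bin/bash
     # give Unraid a second IP on br0 before the web UI starts
     ip addr add 192.168.1.22/24 dev br0
     # start the Management Utility (stock line)
     /usr/local/sbin/emhttp &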
  5. When publishing a docker container's ports from a bridge network, the syntax 192.168.2.10:8443 can be used instead of 8443 if you want the port published only on a specific IP address on the server. This works as expected. However, the Docker GUI will incorrectly list Unraid's IP address in the port mappings list, regardless of what IP address is actually used by the container. That is only a minor GUI reporting issue; everything works as expected despite the incorrect mapping information displayed.
     However, when trying to change Unraid's HTTPS WebUI port in Settings > Management Access, the UI prevents using a port that is also used by a Docker container, even if that container is bound to a different IP address and there would be no conflict. The command netstat -tulpn also shows the port in question is not in use on the main IP address. This issue only exists for HTTPS; the HTTP setting allows it.
     My example: Unraid's IP is 172.16.0.16, secondary IP is 172.16.0.23. I have an Nginx container in a bridge network publishing port 443 to 172.16.0.23. I would like to set Unraid's WebUI to 443, but it claims the port is already in use, even though my use of a different IP address means no conflict exists. As a workaround, I can do it in the other order: set Unraid to use port 443 first, then configure Nginx to use 172.16.0.23:443. Doing it that way works as expected; each IP listens on port 443 with the expected service. unraid-diagnostics-20230818-1018.zip
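     For anyone reproducing this, a minimal per-IP publish looks like the following (the image and container name are just examples):
     docker run -d --name nginx-test -p 172.16.0.23:443:443 nginx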
  6. Does anyone know the exact time /boot/config/go runs? I would like to use it to add a secondary IP address to the br0 interface, but would like to understand exactly when in the boot sequence it runs so I can understand what dependency issues I may face. Specifically, I would like to create a custom docker bridge network and bind it to this secondary IP address. I am wondering if the docker service would have already started before the go script runs. If so, it will have ignored the com.docker.network.bridge.host_binding_ipv4 setting due to the IP being unknown at that time.
  7. Your best bet for a GUI configuration is to use the Bridge network, which creates an internal subnet for your dockers and allows port mapping from the unRAID server's IP address to specific dockers. It works great; the only downside is that sharing the server's native IP might lead to port number conflicts. You can work around that by using non-standard port numbers, such as 8443 instead of 443, but that can get annoying. If you don't mind dipping into the command line, you can create your own custom docker network using the bridge driver, which works the same as above but allows you to bind the docker network to a different local IP address; meaning your dockers could use a different LAN IP address than your server, which sounds like what you are looking for.
  8. I upgraded to 6.12.3 as recommended and haven't had the issue again. Thank you for the suggestion, I should have thought to try that first.
  9. It will work for adding capacity, but performance will be inconsistent: files that happen to land on the SSD will perform differently from files that land on other disks. Still, no reason it wouldn't work.
  10. It sounds like your plan is to use all new drives in the new system. You could always set up a 2nd USB device with an unRaid trial. Keep in mind that unRaid uses file-level spanning, not block-level striping, meaning that each drive in your system has a readable standard Linux filesystem, each with an incomplete collection of files, but combined they represent all of your files. That means you can easily plug the old drives into the new system; don't add them to the new array, just leave them as stand-alone drives and copy the files to their new destinations. Ignore the parity drive, it isn't needed for this. You could also use an Ubuntu live USB to boot the old server and copy the files over the network, but unless you have a 10Gb network, that will take far longer.
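     If you mount the old drives with the Unassigned Devices plugin, the copy itself can be a single rsync per disk; a sketch, with paths that are only examples (adjust to your actual mount points and share names):
     rsync -avh --progress /mnt/disks/old_disk1/ /mnt/user/media/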
  11. Just had the same exact problem on 6.12.3. Manually updated 1 container, then realized I had several more and just pressed Update All. Now stuck in an update loop. EDIT: I pressed Stop All, which didn't end the update loop, but then pressed Check for Updates, which, after the current cycle completed, did seem to stop the loop. I was then able to start the dockers normally. Not a great solution, but it at least prevented a full server reboot.
  12. 2 months ago I upgraded to 6.12.1 from whatever the last 6.11 stable version was (I never installed any of the 6.12 betas). Ever since then, the unRAID web admin UI keeps going offline. I don't use the UI daily, so I can't say exactly how long it takes, but the symptom is that the port opens but does not respond to GET requests. In the unresponsive state, all services, dockers and VMs work fine and SSH access still works; the UI seems to be the only thing affected. My solution so far has been to reboot the server via SSH. The web UI then works for a while, but if I come back a few days later it will be unresponsive again and require another reboot. The server has 32 GB of RAM and all volumes have tons of available space. I only run 2 small VMs and a few small dockers; the server typically uses only about 1/3 of its RAM. Nothing about my configuration has changed in about a year, other than regularly installing updates. Primary question: how should I attempt to troubleshoot and solve this issue? Secondary question: is there an SSH command to restart the web UI service without a full reboot? (I have searched for this and surprisingly have not found an answer.)
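     One candidate for the secondary question, assuming the web UI is fronted by nginx the way other Slackware-based systems handle services (untested on my side):
     /etc/rc.d/rc.nginx restart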
  13. I have read all of the 6.12 release notes but haven't tried it out yet because I prefer to wait for general release. From what I read, it looks like we will have the ability to create a ZFS zpool, but the traditional unRaid array remains as it is. I'd like to switch to ZFS, but I have a small system and don't have enough drives to use both. My question: is it reasonable to use a ZFS pool instead of the unRaid array? Or are there enough things in unRaid that expect the array to be running that doing so would be a major hassle?
  14. I had the same question and don't fully understand your reply. I too am using a custom docker network and setting a static IP, but that static IP is within the docker network's subnet, so it is not directly reachable from my network. If I want to make it reachable, I need to add port mappings to the docker config, which maps them back to the unRAID host IP address, so I am still limited to only one system listening on any given port number. Did I misunderstand something about your reply?
  15. Actually, you don't really want to use a virtual interface at all for pfSense or OPNsense on unRAID. The native KVM VirtIO NIC driver does not work on FreeBSD guests, so you can only use the Intel or VMware emulated drivers, and those have a massive performance penalty. That is not theoretical; I tried it and was only able to get about 300 Mbps with VMware and 200 Mbps with Intel. I had an old Intel 4x gigabit server card from another project, so I put that in and used SR-IOV to pass 2 of its ports to the VM as PCIe devices so that OPNsense can use them natively. One port is assigned as WAN and plugs into the modem; the other is LAN and plugs into my switch. Another port on that same card is assigned to the unRAID host and plugs into the switch. It might seem silly to have two connections from the same physical system plugged into the same switch, but any other configuration would result in a big performance drop (also, I have gigabit internet and only a gigabit switch, so this config allows one user to potentially max out the internet connection while another maxes out data transfer to unRAID, without sharing the same gigabit link). With proper integration between your hypervisor and virtual firewall, using a virtual interface is fine. I did this exact same setup on HyperV and got full bandwidth on virtual NICs. I am sure a Linux-based firewall on unRAID would also work, but not a FreeBSD-based firewall.
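     For reference, creating the virtual functions is a one-liner, assuming your NIC and driver support SR-IOV (the interface name is an example, and the setting does not persist across reboots, so it belongs in the go script):
     echo 2 > /sys/class/net/eth1/device/sriov_numvfs
     lspci | grep -i "virtual function"   # the new VFs appear as PCIe devices you can pass through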
  16. This is more of a docker question than an unRaid question because I assume the answer will involve running unsupported commands, but I am fine with that; I figure this is still a good place to ask. Is it possible to use an IP address other than the unRaid host address for ports? I have a custom docker network with several dockers. I am starting to run into port conflicts, and while I can just change port numbers, I prefer sticking with the standard port numbers for the service in question. Is there a way (either at the docker level or docker network level) to specify a different IP address to publish ports on? In other words, assuming my unRaid is 192.168.1.10, is there a way to do this (where .15 is a virtual IP)? 192.168.1.15:443 -> 172.18.0.3:443
  17. All valid points. RAID offers redundancy for better uptime, while backups are for disaster recovery. The main value of RAID is that systems remain running after a failure, so you don't have to rush to restore from a backup (even if that process is fast and easy). This is actually what I do for a living, so I understand the best practices; I am just trying to figure out the best way to meet them with unRAID. You are correct that OPNsense and UniFi both have super simple backup/restore operations. Home Assistant does have a simple backup/restore; however, with the way its plugins work there is some manual setup needed. Basically, you log into the fresh vanilla install, add any plugins you were using (which may be more complicated than it sounds if using unsupported "HACS" plugins), and then you can restore your backup. I prefer a solution that can remain running without me having to do anything, but I guess I am already accepting the unRAID server as a single point of failure, so adding the SSD to the list isn't the end of the world. I don't even have a redundant power supply, so I really can't get on my high horse about fault tolerance 🙂 Another project I am looking at is finding an old/cheap embedded x86 device with at least 3 NICs to use as an HA failover for OPNsense. It doesn't need to be powerful enough to keep up with full gigabit traffic, but it would provide automatic failover and keep the network running if the OPNsense VM went down for any reason, even a total unRAID failure. Super overkill for a home network, but it would be a fun project. The only thing I am unsure of is exactly how CARP's virtual address works: whether it maintains a consistent virtual MAC address across both devices, or whether only the layer-3 IP address stays the same and the layer-2 MAC address changes. I am not sure my ISP would play nice with the WAN MAC address changing; that typically requires a modem reboot.
  18. I was talking about VM boot times, not unRaid. Fair, I guess I meant "most powerful configuration I can have with a reasonable amount of effort and money." I don't want to pay for 2 SSDs for this purpose right now. I will keep the disk spin-down comment in mind; that might be reason enough to pay for SSDs.
  19. Thank you both for your feedback. Yes, I understand unRaid isn't Enterprise class, but being a nerd I enjoy going for the most powerful configuration I can...even if I don't need it :-). My VMs and Dockers (so far) include an OPNsense Firewall, Home Assistant and UniFi controller. None of those use a lot of I/O, so other than slightly slower boot times, I am really not worried about array performance. I'd love to see unRaid move to ZFS...I feel like that would be a huge win for them, allowing them to get a lot of these more advanced features for basically free.
  20. Yes, all those links indirectly confirm my theory about how unRaid uses the cache. Kind of a shame; that is about the least useful way to design a cache. I had hoped it worked more like PrimoCache for Windows, or how a real storage array uses smart tiering. I understand the limitation: XFS doesn't have native support for an SSD cache, so unRaid would have to design its own driver-level software, which would be a huge undertaking. Probably better for them to put that effort into switching to ZFS, which does have native support for cache drives. I didn't plan to install 2 SSDs, and keeping VMs on non-redundant storage doesn't work for me, but the good news is that my VMs don't need fast I/O, so keeping them on a slower HDD-based array is fine. Glad I asked before wasting $100 on an SSD, thanks!
  21. I am setting up my first unRaid server and planned to buy a decent-spec SSD as a cache. However, after seeing the setting to choose a time of day to write the cache to primary storage, I am left assuming this is a simple file-level cache, which would be far more limited. A block- or byte-level cache would typically write back as soon as possible (while also keeping a copy in cache). Is the cache nothing but a separate filesystem that holds new/updated files until they are written back? If so, then I assume it offers no benefit to VM disk images and doesn't do much to help read performance (unless the file you are reading happens to be in cache from a write that same day). I had hoped the SSD cache would function as storage auto-tiering (more frequently used blocks, not files, kept on the SSD for rapid access). If it doesn't work that way, and it is basically just a faster staging area for write operations, then I will probably just drop an old 256 GB SSD in and keep my expectations low, instead of buying a high-speed 1 TB SSD.
  22. Awesome post!! I haven't yet upgraded my network to 10G, so I plan to use an older Intel 4 port gigabit NIC (I don't recall the chipset off the top of my head, but I know it is a PCIe 2.0 version). I am planning to run pfSense on a VM, so I want good network throughput. Assuming that NIC even works with SR-IOV, would you suggest I spend the time setting it up, or is there really only a benefit once you move to 10 gig?
  23. So yeah, this will be somewhat of a rant, but I mean it in a constructive way, hoping for improvements and hoping others may know some under-the-covers ways to work around these limitations. I have worked with professional hypervisors for over a decade, so I understand virtualization very well. Over the years I have run ESXi, HyperV and KVM on my home server, never had a big issue with any of them, I just like to try out new things. This time around I decided to give unRaid a try, figuring it would have most of the perks of KVM without all of the tinkering. At first glance, it's pretty promising. If you just need to run a VM or two, it's great. However, I have run into a few limitations in unRaid's VM implementation that seem odd to me and could probably be easily addressed. Here are my complaints so far.
      Required core pinning. To me this is the worst limitation. Core pinning is for advanced tuning and shouldn't be needed in most use cases; requiring it is bad. Just let us set the number of vCPUs and have the hypervisor allocate as needed. If you only run a few VMs this isn't a huge issue, but it vastly limits scalability and creates the potential for bottlenecks on individual cores even when the system has capacity on others.
      Host network settings are locked unless you stop all VMs and Dockers. Not sure what the thinking was here, but that is an insane limitation for any modern system.
      All guest VM settings are locked while running. Sure, some settings can't be changed while running, but many can on other platforms. The ability to add a disk or network card, or change a network connection, without shutting down would be nice. Nearly every other platform supports that, even something as simple as VirtualBox.
      There seems to be only one virtual network bridge. Probably fine for most use cases, but if you wanted to set up a network lab to test multiple virtual routers, you don't have the ability to create a virtual network segment for each interconnect (see the sketch at the end of this post for a possible workaround).
      A few good things I can say about it: the setup process is surprisingly easy and covers the needs of nearly all home server operators, and it is hands down the easiest system I have seen for directly assigning hardware to a VM.
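      On the single-bridge complaint, a possible workaround I have not fully tested: unRaid's VM manager sits on top of libvirt, so extra isolated networks can be defined from the command line (all names here are made up). Create /tmp/lab0.xml containing:
      <network>
        <name>lab0</name>
        <bridge name='virbr-lab0'/>
      </network>
      Then run:
      virsh net-define /tmp/lab0.xml
      virsh net-start lab0
      virsh net-autostart lab0
      VMs attached to lab0 can then talk to each other on an isolated segment (no <forward> element means no connection to the outside), though whether the unRaid GUI preserves or displays such a network is an open question.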