BurntOC

Everything posted by BurntOC

  1. Last night I took the painful step of just blowing up the networks altogether. I deleted the network.cfg and the local-kv.db for Docker, then set up eth0, updated to 6.12.10, added the other interfaces and Docker networks back, and then set up my dozens of containers all over again. Three hours later I'm back up in a working state on 6.12.10 - still running macvlan, as is my preference. I just hope I get more stability on 6.12 this time than I ever did when I could update through 6.12.5. I'll mark this as the solution, but note that others have had other successful workarounds.
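     For anyone who finds this later, the destructive part of that reset boils down to two file deletions. This is only a sketch of what I did, assuming the stock Unraid paths - stop Docker and back up your flash drive before deleting anything:

```shell
#!/bin/sh
# Sketch of the "blow it all up" network reset described above.
# Assumption: stock Unraid file locations - verify on your own box first.

reset_docker_networks() {
    netcfg="$1"   # interface config, normally /boot/config/network.cfg
    kvdb="$2"     # Docker network store, normally /var/lib/docker/network/files/local-kv.db
    rm -f "$netcfg" "$kvdb"
    echo "deleted: $netcfg $kvdb"
}

# On the live server you would stop Docker first, then run:
#   /etc/rc.d/rc.docker stop
#   reset_docker_networks /boot/config/network.cfg /var/lib/docker/network/files/local-kv.db
# then reboot, set up eth0 in Network Settings, and re-add the other
# interfaces, Docker networks, and containers.
```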
  2. @bonienl I'm really sorry to hear about the other challenges - understood. @Adam-M I attempted both, and neither worked. In the first scenario, I could see that the appropriate networks were still displayed for each container on the Docker screen, but none were started, and attempting to start one gave an "execution error". I attempted to edit a container to select the network again, but the only one shown besides the built-in options (host, IIRC) was bond0. I forgot to take diagnostics. I downgraded, then tried the second approach. Same thing re: the containers not starting and the execution error. I forgot to edit one to see what it showed, but I'd expect it was similar. I did take a diagnostic, which I've attached here. I've had to downgrade back again because of the importance of these containers to a lot of our daily work. unraid-outer-diagnostics-20240416-1758.zip
  3. Will do. It might be tomorrow before I can test, but I'll try to not let it slide beyond that.
  4. @bonienl With another several weeks passing and 6.12.10 released, I thought I'd circle back again. Any ideas on how to address this issue?
  5. Just doing another weekly check-in, @bonienl. I reached out via support and they said you've got this. I take it at this point you guys have verified there's an error in the upgrade process (docker network downgrade, I guess, I dunno anymore) and are trying to figure out a way to address it in 6.12.9 or 6.13?
  6. Good to hear. Can you recreate @bonienl? It would be nice to confirm before I have to do another production upgrade test and possible rollback. I'm ready to get off 6.11.5 but stopped dead on this server.
  7. Yep, you're right - thanks!
  8. Ah, I attempted to manually execute the install script you referenced and it returns: "Version is for qemu.d 6.12.8" - no other output given.
  9. This server, and the other one I referenced, are on 6.12.8. I checked the usb.ini file and I think it looks okay (pasted code block below). I'm not sure I follow on the USB Manager code block. Are you suggesting I copy the code from one or both of the replies above into a manually created path and file, then assign the executable permission? If I uninstalled the plugin altogether and reinstalled, would I be able to get it to automatically create the startup stuff and restore my current assignments?

     [001/002]
     connected = ""
     bus = 001
     dev = 002
     ID_VENDOR_FROM_DATABASE = "Ports=4 Power=0mA "
     ID_VENDOR_ID = "0bda"
     ID_MODEL = ""
     ID_MODEL_ID = 5411
     USBPort = "1-1"
     class = "hub"
     parents = "usb1,0000:00:15.0,pci0000:00"
     ID_SERIAL = "Generic_4-Port_USB_2.0_Hub"
     isSerial = ""
     isSerialPath = ""

     [001/004]
     connected = 1
     bus = 001
     dev = 004
     ID_VENDOR_FROM_DATABASE = "Silicon Labs"
     ID_VENDOR_ID = "10c4"
     ID_MODEL = "CP2102N_USB_to_UART_Bridge_Controller"
     ID_MODEL_ID = "ea60"
     USBPort = "1-1.1"
     class = "interface"
     parents = "1-1,usb1,0000:00:15.0,pci0000:00"
     ID_SERIAL = "Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_5e2708b05fbcea11944693e368aed703"
     isSerial = 1
     isSerialPath = "usb-Silicon_Labs_CP2102N_USB_to_UART_Bridge_Controller_5e2708b05fbcea11944693e368aed703-if00-port0"
     virsherror = ""
     VM = "Home Assistant"
     virsh = "Device attached successfully "
     connectmethod = "Manual"
     connectmap = "Device"

     [001/006]
     connected = 1
     bus = 001
     dev = 006
     ID_VENDOR_FROM_DATABASE = "Silicon Labs"
     ID_VENDOR_ID = "10c4"
     ID_MODEL = "SkyConnect_v1.0"
     ID_MODEL_ID = "ea60"
     USBPort = "1-1.2"
     class = "interface"
     parents = "1-1,usb1,0000:00:15.0,pci0000:00"
     ID_SERIAL = "Nabu_Casa_SkyConnect_v1.0_9c33dedc57e2ed11b233ec5162c613ac"
     isSerial = 1
     isSerialPath = "usb-Nabu_Casa_SkyConnect_v1.0_9c33dedc57e2ed11b233ec5162c613ac-if00-port0"
     virsherror = ""
     VM = "Home Assistant"
     virsh = "Device attached successfully "
     connectmethod = "Manual"
     connectmap = "Device"

     [002/001]
     connected = ""
     bus = 002
     dev = 001
     ID_VENDOR_FROM_DATABASE = "Ports=7 Power=0mA "
     ID_VENDOR_ID = "1d6b"
     ID_MODEL = ""
     ID_MODEL_ID = 0003
     USBPort = "2-0"
     class = "roothub"
     parents = "0000:00:15.0,pci0000:00"
     ID_SERIAL = "Linux_6.1.74-Unraid_xhci-hcd_xHCI_Host_Controller_0000:00:15.0"
     isSerial = ""
     isSerialPath = ""

     [002/002]
     connected = ""
     bus = 002
     dev = 002
     ID_VENDOR_FROM_DATABASE = "Ports=4 Power=0mA "
     ID_VENDOR_ID = "0bda"
     ID_MODEL = ""
     ID_MODEL_ID = 0411
     USBPort = "2-1"
     class = "hub"
     parents = "usb2,0000:00:15.0,pci0000:00"
     ID_SERIAL = "Generic_4-Port_USB_3.0_Hub"
     isSerial = ""
     isSerialPath = ""

     [002/003]
     connected = ""
     bus = 002
     dev = 003
     ID_VENDOR_FROM_DATABASE = "Ports=2 Power=0mA "
     ID_VENDOR_ID = "0bda"
     ID_MODEL = ""
     ID_MODEL_ID = 0415
     USBPort = "2-6"
     class = "hub"
     parents = "usb2,0000:00:15.0,pci0000:00"
     ID_SERIAL = "Generic_2-Port_USB_3.0_Hub"
     isSerial = ""
     isSerialPath = ""

     [004/001]
     ishub = 1
     connected = ""
     parents = "vhci_hcd.0,platform"
     bus = 004
     dev = 001
     ID_VENDOR_FROM_DATABASE = "Linux Foundation"
     ID_VENDOR_ID = "1d6b"
     ID_MODEL = "USB_IP_Virtual_Host_Controller"
     ID_MODEL_ID = 0003
     USBPort = "usb4"
     class = ""
     ID_SERIAL = "Linux_6.1.74-Unraid_vhci_hcd_USB_IP_Virtual_Host_Controller_vhci_hcd.0"
     isSerial = ""
     isSerialPath = ""
     bNumInterfaces = ""
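     If "manually created path and file" means what I think it does, it would look something like the below. The hooks path is what libvirt's qemu.d convention uses, but the script filename is my assumption from the thread, not verified:

```shell
#!/bin/sh
# Sketch of manually creating the libvirt hook directory and an executable
# script in it. The filename "usb_manager" is hypothetical - the actual
# script body would be the code pasted in the replies above.

install_hook() {
    hooks_dir="$1"                    # e.g. /etc/libvirt/hooks/qemu.d
    script="$hooks_dir/usb_manager"   # hypothetical filename
    mkdir -p "$hooks_dir"
    : > "$script"                     # paste the hook code here
    chmod +x "$script"                # libvirt only runs executable hooks
    echo "$script"
}

# On the server: install_hook /etc/libvirt/hooks/qemu.d
```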
  10. @SimonF Thanks for the quick response. Under hooks I don't have any subfolders - qemu.d or otherwise - so that cat command is giving me an error. Only thing there atm is a qemu file. I checked one of my other servers and it actually doesn't even have the hooks folder under libvirt (but it's not running USB Manager either, just an FYI).
  11. Anyone having a problem with devices auto-attaching on server or VM start? I've had it configured that way since Day 1 and it worked great for almost a year, but the last couple of months it seems to ignore that setting and I still have to go in to click "VM ATTACH" to get my Zwave and Zigbee sticks to show up in my HA VM. Maybe something got corrupted when I was tweaking to switch my Zigbee stick out for a newer one with a better chipset. Open to any fixes, even blowing it away and starting over but IIRC I tried that from the GUI about 6 weeks ago and it didn't resolve this for me.
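     In the meantime, the manual workaround amounts to what I understand the "VM ATTACH" button to do via virsh behind the scenes. A sketch, using the Silicon Labs 10c4:ea60 IDs from my usb.ini and my "Home Assistant" VM name as assumptions - substitute your own (lsusb shows the IDs):

```shell
#!/bin/sh
# Sketch of manually attaching a USB stick to a VM with virsh, roughly what
# USB Manager does on attach. Vendor/product IDs and the VM name are
# assumptions from my setup - replace with your own values.
VENDOR="0x10c4"   # Silicon Labs
PRODUCT="0xea60"  # CP210x / SkyConnect UART bridge
VM="Home Assistant"

# virsh attach-device takes a hostdev XML fragment describing the device.
XML="$(mktemp)"
cat > "$XML" <<EOF
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='$VENDOR'/>
    <product id='$PRODUCT'/>
  </source>
</hostdev>
EOF

# Printed rather than executed so it can be reviewed first:
echo "virsh attach-device \"$VM\" $XML --live"
```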
  12. Thanks for looking at it. I do believe there is some real "bug" here - it's always worked/upgraded fine before, but as you said, I know there were some changes. Odd that my other similar one works. I'd stumbled on some discussion of deleting local-kv.db to reset the Docker network stuff, but I didn't want to go near doing stuff like that if I can help it - unless y'all advise me to. We all use stuff running off this server most of the day, so it's impactful if I screw it up.
  13. @bonienl I just upgraded again and the problem remains. I downgraded to get my containers back up and forgot to run a "docker network ls", but I did do a screencap of the Docker settings (I think it looks the same except 6.12.8 up top), a screencap of me trying to edit the network on an existing container (confirmed that I only get those options on any new containers as well), and I did run a diagnostics. Hope this helps ID the cause. unraid-outer-diagnostics-20240223-0947.zip
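     For anyone else reproducing this: here's the capture I wish I'd run before downgrading, so the network state gets saved next to the diagnostics. The output path and the custom network names are from my setup (assumptions for anyone else):

```shell
#!/bin/sh
# Snapshot Docker network state before an upgrade/downgrade, so there's a
# record even if you forget to run "docker network ls" in the moment.
# The output location and network names are assumptions from my setup.
OUT="${OUT:-./docker-net-$(date +%Y%m%d-%H%M%S).txt}"
{
  echo "== docker network ls =="
  docker network ls 2>&1
  echo "== docker network inspect (custom networks) =="
  docker network inspect bond2.60 eth4.70 2>&1
} > "$OUT"
echo "saved to $OUT"
```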
  14. Will do. It will probably be this evening, so I can keep a few of my services up this afternoon.
  15. Okay, I was away on business for several days and just used it to step away from this for a bit. I'm back and I've really got to ask - seriously? I have 3 paid Unraid server licenses, I attached diagnostics, and marked this as an Urgent issue, and in 5 days I can't get an acknowledgement and some help? Love Unraid and @limetech, but really guys, that's crap. I have a server stuck on 6.11.5, a version you've deprecated CA and functionality for to presumably encourage upgrades, and no one can address an issue where the upgrade blows away multiple networks?
  16. Never seen this before. I have 3 servers, most running Unraid since 6.9 or so. All have multiple interfaces, with some interfaces having VLANs. Each runs macvlan, with bridging off on all interfaces. I assign the appropriate subinterface and a static IP for each container. Two have stayed pretty current on 6.12.x and went from 6.12.6 to 6.12.8 just fine. One 11th gen Intel has stayed on 6.11.5 because 6.12 was causing instability. I decided to bite the bullet and try 6.12.8, and for the first time ever the upgrade has removed my two custom networks (bond2.60 and eth4.70) from the containers. Actually, the ones on eth4.70 still show that on the Docker screen, and the others reverted to None. Regardless, when I click into them to select the Network type, the only options listed are the standard Bridge, Host, and None options. Docker settings still shows the custom networks just as they should and always have. Diagnostics attached. Reverting to 6.11.5 left the jacked-up "None" items, but at least I can select the custom networks as I should. As it stands, I have no path to upgrade beyond 6.11.5 for this server. FWIW, I tried ipvlan and that didn't help. Until I'm clearer on the security and visibility implications of macvlan and ipvlan, I'd like to stay on the former unless the instability returns. UPDATE - removing diagnostics now for some security hygiene and the fact that it's been some time, so I finally recreated all the server and Docker network stuff from scratch.
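     For context on what those custom networks look like under the hood: Unraid builds them from the Docker settings page, but the manual equivalent is roughly the below. The subnet, gateway, and container IP are hypothetical placeholders for my VLAN 70 - the commands are printed rather than executed so they can be reviewed first:

```shell
#!/bin/sh
# Rough manual equivalent of the custom macvlan networks Unraid creates
# from Docker settings. Subnet/gateway/IP values are hypothetical - match
# them to your actual VLAN before running anything for real.
IFACE="eth4.70"            # VLAN subinterface from Network Settings
SUBNET="192.168.70.0/24"   # assumption: VLAN 70 subnet
GATEWAY="192.168.70.1"     # assumption: VLAN 70 gateway

CREATE_CMD="docker network create -d macvlan --subnet=$SUBNET --gateway=$GATEWAY -o parent=$IFACE $IFACE"
RUN_CMD="docker run -d --network=$IFACE --ip=192.168.70.50 --name=my-container my-image"

# Printed for review rather than executed:
echo "$CREATE_CMD"
echo "$RUN_CMD"
```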
  17. @Vynlovanth Did you ever figure this out? Mine go missing as well during updates from 6.11.5 and it's driving me mad.
  18. UPDATE - Nope, switching to macvlan to do the update still results in Unraid acting like that eth4.40 interface isn't there. I'm sure if I turned bridging on the interface it would show up, but I didn't have to do that before so....
  19. Okay, so happy to report that after a hard power cycle I was able to get it to boot. I was surprised to see that despite it saying it was going to take me down only to 6.12.4, it did in fact restore me to 6.11.5. The bad news is that since it updated my CA plugin, it now won't load because I'm not on 6.12.x, but I was able to paste in the URL for the 9.30.2023 version to get it back. Anyone got ideas on how to work my way out of whatever the heck happened to my networks and Docker configs during the upgrade? I just tried going to 6.12.6 first and it has the same problem. I'm guessing, as I've attempted this before, that if I leave it macvlan and do the upgrade it would work, then maybe I can try to switch to ipvlan, but the whole thing smacks of some sort of issue with the migration script.