johnny2678 · Posted September 5, 2023

Background: since I started running Unraid 3 years ago, I RDP (Microsoft RDP) into a VM and access my Unraid Docker services from that VM.

Summary:
- With the VM set to vhost0, the VM can't reach Docker containers, but it can reach services running on other VMs.
- With the VM set to virbr0, the VM can't be reached via RDP, but it can reach the containers.
- Overnight, the CPU load average climbs to 1000, forcing a hard shutdown.

I don't like to change my settings once I have things working, but I tried upgrading to 6.12.3, bumped into the macvlan issues, and rolled back to 6.11.5. Then I decided to give 6.12.4 a shot, since it was supposed to fix the macvlan issue. Made the changes outlined here:
- Bonding = yes
- Bridging = no
- Host access to custom networks = yes

On the first boot into 6.12.4 everything ran fine for a couple of days, and I thought the macvlan problem was solved. Then I woke up this morning to an unresponsive server (see top screenshot). Had to hard reset. 💩

When it came back up, I ran into the conditions in the summary above, where I couldn't access Docker containers from a VM like I have been doing for years. I really don't want to roll back to 6.11.5 again, but I might have to. Hoping someone here can see something that I missed... TIA.

Questions:
1. What setting do I need on 6.12.4 to access Docker containers from a VM?
2. What is causing my server to go unresponsive, forcing a hard reset?

15620-beast-diagnostics-20230905-0809.zip
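A quick way to confirm whether this is the known macvlan crash is to grep the syslog for the call traces that typically precede the lockup. A rough sketch, assuming the stock Unraid log path (enable the syslog mirror if you need logs that survive the hard reset):

```bash
# Look for the macvlan call traces that usually show up before the hang.
grep -iE 'macvlan|call trace' /var/log/syslog | tail -n 20
```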
johnny2678 · Posted September 5, 2023

Just realized this is a Sev1 issue in my house, because I have WeeWX running in an Unraid VM sending all the house temperature data to MQTT running in an Unraid Docker container. It's been working this way for years, but now on 6.12.4 the VM can't see the container. My AC trips on/off based on that temperature data reaching the MQTT container, and I live in FL. I will have to revert to 6.11.5.
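To test the VM-to-container path independently of WeeWX, you can poke the MQTT broker directly from inside the VM. A minimal sketch with placeholder values: 192.168.1.10 stands in for the container/host IP, 1883 is the default MQTT port, and weewx/# is a hypothetical topic filter — substitute your own:

```bash
# Can the VM reach the broker's TCP port at all?
nc -zv 192.168.1.10 1883

# If so, subscribe briefly (needs mosquitto-clients installed in the VM);
# -C 5 exits after five messages, -v prints topic names with payloads.
mosquitto_sub -h 192.168.1.10 -t 'weewx/#' -C 5 -v
```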
Solution
johnny2678 · Posted September 5, 2023

I shut down the Docker/VM services and went through the settings again. Turned off bonding and restarted. I had one NIC unplugged because I need to recable; I thought the bond would just use the other line, but maybe not.

Edit: just to clarify, I had been running a bond with one cable since 6.8.x with no issues until 6.12.4.

Either way, VMs can see container services now that I'm running on eth0 alone. Crisis averted... for now.

Wondering if this also explains the CPU pegged at 1000?
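For anyone debugging a similar setup, the kernel exposes bond state under /proc, so you can check whether the bond actually failed over to the live cable before deciding to drop bonding. A minimal sketch, assuming Unraid's default bond name of bond0:

```bash
# Show the bond mode and each slave's link state; a slave stuck at
# "MII Status: down" reveals which port lost its cable.
cat /proc/net/bonding/bond0

# Confirm which interface carries the host IP after the change.
ip -brief addr show
```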
johnny2678 · Posted September 7, 2023

On 9/5/2023 at 10:43 AM, johnny2678 said: "Wondering if this also explains the CPU pegged at 1000?"

Nope - CPU pegged again last night and I had to hard-shutdown. Back to 6.11.5.