chaosclarity

Members · Posts: 27

chaosclarity's Achievements: Newbie (1/14) · 5 Reputation

  1. Preface: Server 1 had a PSU die, which took out the CPU. I replaced the PSU and CPU (switched from a Ryzen 3700X to a 5600G), it boots up now, and Unraid was back in business. Server 2 was working perfectly fine during this whole time, no issues.

     Now, tonight I notice that both servers' Active Directory join status is "Not Joined". I have 2 domain controllers, 1 on each Unraid server. OK, simple enough: I type in the admin password and click "Join"... they both sit there waiting, waiting, waiting for several minutes, and then I notice neither Unraid server is responding to the web GUI any more. I press the power button on both to gracefully shut them down and avoid a parity check. They both shut down, I bring them back up, and now neither one will let me into the VM tab without freezing the web GUI. The Dashboard won't show any Docker or VM info, although the Docker tab works fine.

     Server 1 will eventually show VM and Docker info on the Dashboard, and some VMs are indeed running (from autostart), but when I click on one of them to start, it takes FOREVER... eventually it did start. On Server 2, I haven't gotten any VM or Docker info to show on the Dashboard at all. I attached my diagnostic logs for both servers if that helps. jupiter-diagnostics-20221015-2152.zip tower-diagnostics-20221015-2151.zip
  2. I have one of my DNS servers down at the moment, which just so happens to be the first DNS server configured in Unraid's Network Settings. I have the 2nd and 3rd entries populated with working DNS servers, but I've noticed some Docker containers taking forever to resolve names and/or failing, then working later on. Is this normal behavior? Should I just stop the Unraid server and swap out the first entry (or just remove it) so Unraid doesn't attempt to use it? I also fail to see the point of having multiple entries if it can't quickly fail over to the 2nd or 3rd.
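
     For what it's worth, this matches how the stock glibc resolver behaves: it tries nameservers in the listed order and only falls back to the next one after a timeout (5 seconds by default, retried a couple of times), so a dead first entry adds seconds to every lookup. A minimal sketch of the kind of tuning I mean, assuming you can set resolver options on your setup (the addresses are placeholders):

       # /etc/resolv.conf - placeholder addresses; options per resolv.conf(5)
       options timeout:1 attempts:2 rotate   # give up on a dead server after 1s, spread queries
       nameserver 192.168.1.53               # the dead entry: every lookup waits this one out first
       nameserver 192.168.1.54
       nameserver 192.168.1.55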
  3. I was able to get it working again. I was originally using a Windows 11 VM, but now tried with Windows 10. I honestly don't think the Windows version (10 vs. 11) mattered, because what I noticed is that my XML configuration was reverting whenever I tried adding passthrough devices (a USB controller), which broke the GPU passthrough configuration. If you have a blinking cursor, that is what you want on the console output screen: it should "freeze", i.e. stop scrolling output, right at the PCI VGA device and then show a blinking cursor. Once you start the VM, it takes over the screen output, but you will never see the BIOS/boot of Windows; the Windows login screen just suddenly appears.
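
     For anyone hitting the same thing: the GUI regenerates the VM's XML when you save from the form view, which is how my hand-added passthrough entries kept getting wiped. The kind of block that kept disappearing looked roughly like this (a sketch; the PCI address is a placeholder, substitute your own USB controller's bus/slot/function from lspci):

       <hostdev mode='subsystem' type='pci' managed='yes'>
         <source>
           <!-- placeholder address: use your controller's values from lspci -->
           <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </source>
       </hostdev>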
  4. Reviving this from the dead. When you guys say it's "freezing" the output: in my case I still get a blinking cursor, so it doesn't appear to be frozen. I'm trying to pass through an iGPU and I got it working briefly, but once I rebooted the Unraid box with HDMI plugged in, I can no longer get it working for some odd reason. All I get is this console output, and when I start the VM it still shows this console output from Unraid.
  5. Not sure what's going on with it. I can't even get the Unraid host to release the iGPU; all I get is the console output. I've added video=efifb:off to my syslinux.cfg, but it doesn't seem to release it. I get the boot console and a blinking cursor when it's done. When I start my VM, it does start, but it never "takes over" the HDMI output and I still see the Unraid console.
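
     In case it helps anyone searching later, the combination I've been experimenting with is the kernel parameters in syslinux.cfg plus unbinding the virtual consoles so the host actually lets go of the framebuffer. A sketch, assuming the stock Unraid boot entry (the vtcon numbering varies per system, so check /sys/class/vtconsole/*/name first):

       label Unraid OS
         kernel /bzimage
         append video=efifb:off video=vesafb:off initrd=/bzroot

       # on the Unraid host, before starting the VM:
       echo 0 > /sys/class/vtconsole/vtcon0/bind
       echo 0 > /sys/class/vtconsole/vtcon1/bind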
  6. OK, another issue. I restarted my Unraid server with the HDMI plugged in, and naturally I see the Unraid console while it boots up. When I try to start the VM that has passthrough enabled, the VM says it's Started but never boots, and I still see the Unraid console on the screen plugged in via HDMI.
  7. Mine worked up to the point of installing the AMD drivers and radeonresetbugfix. I was able to install the AMD driver just fine; then the AMD driver started a 2-minute countdown to restart my VM, so I used Task Manager to kill the installer and prevent that. Then I installed the radeonresetbugfix service and waited until it went into the Started state. Then I rebooted the VM, and now it won't come back up with a display any more; it just shows a green garbled mess. Not sure if it's related to the radeonresetbugfix service or what. Edit: I logged back in via RDP, the AMD driver finished its install, and it all started working as expected.
  8. D'oh. I powered off the server completely and unplugged it. Upon checking to add the dropped drive back to the cache pool, it is now gone for good.
  9. Well, yesterday it dropped again; diagnostics attached. I'm starting to think the drive is faulty. tower-diagnostics-20220712-0727.zip
  10. I have a few VMs that I moved from a previous VMware ESXi server over to Unraid, keeping the vmdk disk format. I noticed today that when a VM is powered on, I can see the disk size clearly in the web GUI (40GB, for example). When I power the VM off to edit the disk size in the web GUI, the value turns into 7,104,838T, and while I can edit the field, the change just won't apply (I tried 60GB, for example). Is this a limitation of the vmdk disk format, and do I need to convert them? Thanks
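
      If converting turns out to be the answer, something along these lines should do it with the VM powered off (a sketch; the paths are placeholders, and the VM's config has to be pointed at the new image afterwards):

        # -p shows progress; -f/-O are the source and destination formats
        qemu-img convert -p -f vmdk -O raw /mnt/user/domains/MyVM/disk.vmdk /mnt/user/domains/MyVM/vdisk1.img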
  11. I've been searching for whether this feature exists. I'm a VMware guy at heart, and from that experience it's possible to just click-click-click and boom, the CD/DVD drive's ISO is "ejected" from a running VM. But in Unraid, all of these settings are grayed out while the VM is running. Am I missing something, or is this seemingly simple feature really not possible in Unraid? Thanks
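
      The closest workaround I've found is the command line, since libvirt itself can swap media on a live VM. Something like this from the Unraid terminal (a sketch; "Windows 10" and hdb are placeholders, check the CD-ROM's actual target device first):

        virsh domblklist "Windows 10"                      # find the CD-ROM target (e.g. hdb, sda)
        virsh change-media "Windows 10" hdb --eject --live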
  12. I finally got around to power cycling the server. The M.2 came back this time around and was added to the cache pool again. Hopefully no more issues...
  13. I got this working - I just had to type some text into the Telegram chat and it started working.
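
      For context: a Telegram bot can't message a private chat until that chat has messaged it first, which is presumably why typing something kicked it into life. You can confirm the bot sees the chat (and grab the chat id) with something like this (a sketch; the token is a placeholder):

        # lists the bot's recent updates, including the chat id of your test message
        curl -s "https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates"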
  14. I think the ISO was getting corrupted somehow. I was using the Dynamix File Manager plugin to upload it from my machine to the Unraid box. This time I used qbittorrent to download the ISO instead, and now it boots fine.
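
      If anyone else hits this, comparing checksums is a quick way to confirm an upload mangled the file (a sketch; the path is a placeholder, and the reference hash comes from the ISO's download page):

        sha256sum /mnt/user/isos/your-image.iso   # compare against the published checksum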