TechGeek01

Members
  • Content Count
    15
  • Joined
  • Last visited

Community Reputation

3 Neutral

About TechGeek01

  • Rank
    Member

  1. Okay, so gracefully stopping the VM did not work. The only thing that would actually kill it was a force stop. Usually a clean stop seems to send an ACPI shutdown command and will force kill if it's still running after a minute or so. Anyway, force stopping this time also led to the same stuck frame on the monitor, even after the VM was powered off (instead of the expected "no signal"). What I did notice is that unplugging the VGA cable from the card and plugging it back in does not reset this, so monitor detection makes no difference here, and it seems to be the GPU that's frozen (a reset sketch follows after this list). This i
  2. Yeah, I know it's a hard power off to the VM. But displays require an active signal, correct? That is, the monitor won't continue to display when there's no signal, and has to be constantly told what to output. So how on earth, after killing and even nuking the VM, is the card still telling the monitor to display the same last frame of video the VM was working with? Actually, I wasn't able to get USB passthrough of keyboard and mouse working. I force stopped the VM once I realized I needed keyboard and mouse, and then edited the VM, and it wasn't responding to input. I'm not sure if
  3. I've never done GPU passthrough before, so I don't know if this happens all the time, or if it's an issue with my setup, or if it's an rc2 bug (leaning towards the latter, but I have no idea). I have an old GeForce 210 I threw in to pass through to a VM, and I noticed that when I stopped the VM, I still had video. To my knowledge, monitors only display what they're actively told to, so if there's nothing utilizing the GPU, there should be no signal to the monitor, as it shouldn't be telling the monitor to do anything. Anyway, after stopping, and even completely nuking the VM and di
  4. Yeah, at least the ability to say "don't assign an IP" would be nice. Because if you set the untagged interface to no static IP, your only option is "automatic", which then assigns a 169.254 link-local address, and it seems that even though I have a static IP assigned to a VLAN, and a default gateway set on said VLAN, it tries to use the 169 untagged interface and "can't reach the internet" (see the route-check sketch after this list).
  5. IPMI/system log shows nothing unusual, unfortunately. Memtest completely freaked on the bit fade test. Like, millions of errors in the first several hundred MB, so I'm currently in the process of finding what I hope is a bad stick, and not a slot or something.
  6. In the last couple of months, I moved Unraid over from a Dell R510 to a Supermicro build, and since then I see occasional warnings about machine check errors (see the log-tally sketch after this list):
     Dec 19 16:49:16 helium kernel: mce: [Hardware Error]: Machine check events logged
     Dec 19 16:49:16 helium kernel: EDAC sbridge MC0: HANDLING MCE MEMORY ERROR
     Dec 19 16:49:16 helium kernel: EDAC sbridge MC0: CPU 6: Machine Check Event: 0 Bank 10: 8c000046000800c1
     Dec 19 16:49:16 helium kernel: EDAC sbridge MC0: TSC 51ce458bc87a8
     Dec 19 16:49:16 helium kernel: EDAC sbridge MC0: ADDR c5c6ea000
     Dec 19 16:49:16 helium kernel: EDAC sbridge MC
  7. This is an issue on RC1. I have not tested on RC2, since my networking setup has since changed, but if you didn't know about this, it probably applies to RC2 as well. First, if we enable VLANs and put an IP on a VLAN, but don't want an IP on the untagged interface, there should probably be an option for "no IP" rather than just letting "automatic" hand it a 169.254 address. It's a minor annoyance that leads to the 169 address showing in the top right corner instead of the real, static IP on the VLAN interface. The problem this creates is that in this scenario
  8. I know this has been an issue for a while. I'm not sure if it's a hardware thing, or an Unraid thing, or a bit of both, but it won't boot UEFI. I made a test USB with the beta 35 trial and booted an R510. Booting in BIOS works fine, but despite having made the USB with the "allow UEFI boot" option checked, when it gets to the splash screen to select GUI mode or CLI mode at boot, whichever boot option is selected, I get a "bad file number" error, and it keeps trying and failing every couple of seconds. UEFI was enabled on the R510, and it should in theory be s
  9. Totally missed that statement above. So with array autostart disabled, the intended behavior is that VMs don't autostart, correct? Now, Docker containers also have an autostart option, and when I manually start the array after boot (because array autostart is disabled), the Docker containers start themselves once the array is running. Surely the expected and proper behavior is that VMs should follow the same pattern? The array has to be started to even see a list of VMs or Docker containers, so there's no way to even manually start the
  10. I just dumped a diagnostic ZIP. Sounds like a weird edge case or something. helium-diagnostics-20201118-1647.zip
  11. I haven't yet spun up a second server/instance to test the beta, but I wanted to ask: has the VM autostart issue from 6.8.3 been fixed? I have no idea whether this hasn't been addressed or was fixed in this or a previous beta, but on 6.8.3, "autostart" VMs don't actually autostart on boot; I have to start them manually (see the autostart check after this list).
  12. I don't use GUI mode often, but I usually set it to boot there by default, so that on the off chance I can't get to the web GUI (if it locks up, or if there's a network problem), I can still reconfigure it locally. I actually had to downgrade the BMC to 3.77 because 3.80 and up didn't work in GUI mode. When I moved to this new Supermicro server from the Dell R510, the network changed, so I wouldn't have been able to get to it from another computer. It actually looks like it boots normally, and then as soon as the scrolling text goes away and you're supposed to be dumped at the login screen
  13. Thanks for the addition of the Aspeed driver! On that note, I have a Supermicro X10-DRi running Unraid, and with the latest firmware for the BMC, I can't boot into GUI mode. I just get a black screen after all the scrolling text instead of the login screen that should show up. I actually had to revert to BMC firmware 3.77, as 3.80 and up do not work with GUI mode. This was failing for me on Unraid 6.8.3 after migrating the USB from my old server. Are you guys by chance able to confirm that the latest Aspeed driver you've included works in GUI mode on this latest firmwar
  14. Let me start out by saying that I realize this doesn't apply as much in enterprise situations. However, given that Unraid has the ability to add arbitrary drives to the array, it's much more common in the homelab community than something like FreeNAS, which pretty much requires that you already have all the drives you're going to use. I believe the homelab crowd, while not all of the Unraid community, is a large part of it and would benefit here. A lot of us in the homelab community need, or want, a cheap way to back up data offsite, and probably have other hard drives l
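
For the stuck-frame GPU passthrough issue in items 1-3, here is a minimal sketch, assuming a placeholder PCI address for the GeForce 210, of asking the kernel to reset the card through the stock sysfs reset attribute after the VM has been force-stopped. It needs root, and whether it actually clears the frozen scan-out on this particular card is untested.

#!/usr/bin/env python3
"""Sketch: request a kernel-level PCI reset of a passed-through GPU."""

import os
import sys

# Placeholder bus:device.function for the passed-through card -- find yours with lspci.
GPU_BDF = "0000:03:00.0"

def reset_gpu(bdf: str) -> None:
    """Ask the kernel to reset the PCI device at the given address."""
    reset_node = f"/sys/bus/pci/devices/{bdf}/reset"
    if not os.path.exists(reset_node):
        sys.exit(f"{reset_node} not found: device absent or no reset method exposed")
    # Writing "1" triggers whatever reset the kernel supports for the device
    # (FLR, secondary bus reset, ...). Requires root.
    with open(reset_node, "w") as f:
        f.write("1")
    print(f"Reset requested for {bdf}")

if __name__ == "__main__":
    reset_gpu(GPU_BDF)

If the reset attribute isn't there, the kernel found no usable reset method for the device, and a host reboot may be the only way to clear it.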
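
For the untagged-interface 169 address problem in items 4 and 7, a small diagnostic sketch using plain iproute2 commands rather than Unraid's own network settings: it lists any 169.254 link-local addresses and asks the kernel which interface it would actually use for outbound traffic. The 1.1.1.1 target is just an arbitrary public address.

#!/usr/bin/env python3
"""Sketch: spot link-local addresses and check the outbound route."""

import subprocess

def run(cmd):
    # Return stdout+stderr so a missing default route still prints something useful.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.stdout + proc.stderr

# Any 169.254.x.x address means "automatic" fell back to link-local (APIPA).
for line in run(["ip", "-o", "-4", "addr", "show"]).splitlines():
    fields = line.split()
    if len(fields) > 3 and fields[3].startswith("169.254."):
        print(f"Link-local address on {fields[1]}: {fields[3]}")

# Which interface and source address would be used to reach the outside world?
print(run(["ip", "route", "get", "1.1.1.1"]).strip())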
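
For the machine check errors in item 6, a rough sketch that tallies the EDAC/MCE lines from syslog so a memory controller or address that keeps reappearing stands out while hunting for the bad stick. The log path and the line format are assumptions taken from the excerpt quoted in the post.

#!/usr/bin/env python3
"""Sketch: count EDAC memory error events per controller and per address."""

import re
from collections import Counter

LOG_PATH = "/var/log/syslog"   # adjust for your system

mc_counter = Counter()         # events per memory controller (MC0, MC1, ...)
addr_counter = Counter()       # events per reported physical address

mc_re = re.compile(r"EDAC \S+ (MC\d+): HANDLING MCE MEMORY ERROR")
addr_re = re.compile(r"EDAC \S+ MC\d+: ADDR ([0-9a-f]+)")

with open(LOG_PATH, errors="replace") as log:
    for line in log:
        m = mc_re.search(line)
        if m:
            mc_counter[m.group(1)] += 1
        a = addr_re.search(line)
        if a:
            addr_counter[a.group(1)] += 1

print("Errors per memory controller:", dict(mc_counter))
print("Most-hit addresses:", addr_counter.most_common(5))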
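
For the VM autostart question in items 9 and 11, a quick sketch that lists the defined libvirt domains and whether each carries the libvirt-level autostart flag, to separate "the flag was never set" from "the flag is set but not honoured when the array starts". It assumes virsh is on the PATH and that Unraid's autostart toggle maps onto the libvirt flag, which may not be the whole story.

#!/usr/bin/env python3
"""Sketch: print each defined libvirt domain and its autostart flag."""

import subprocess

def virsh(*args):
    return subprocess.run(["virsh", *args], capture_output=True, text=True).stdout

# One domain name per line, including stopped domains.
names = [n for n in virsh("list", "--all", "--name").splitlines() if n.strip()]

for name in names:
    info = virsh("dominfo", name)
    # dominfo prints a line such as "Autostart:      enable"
    flag = next((l.split(":", 1)[1].strip()
                 for l in info.splitlines() if l.startswith("Autostart")), "unknown")
    print(f"{name:30s} autostart={flag}")

If every domain already shows enable, the problem probably sits in how startup is sequenced against the array rather than in the flags themselves.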