Comments posted by TechGeek01

  1. 12 minutes ago, ich777 said:

    But you can also connect it to the Nvidia card, since it can output the GUI and you can also use it for transcoding and whatever.

    I actually do not have adapters for mini DisplayPort, so that's not happening currently 😛

     

    13 minutes ago, ich777 said:

    Or have you already tried to reboot without the Nvidia Driver plugin installed? If not, try that, and I will think again about implementing an option to disable creating an xorg.conf file.

    Yeah, tried that already. It boots to the GUI fine without the plugin installed, so something changed either in 6.9.1 or in the plugin.

     

    Not a huge deal to have it be headless for a bit, since I can get to the web interface just fine, but it would be nice to have that fixed 😛.

     

    Keep up the great work, man!
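 

    In case it helps anyone else landing here with the same symptom: below is a sketch of what I'd try from SSH to point Xorg at the integrated ASPEED card instead of the Nvidia one. This is a guess, not a confirmed fix; the BusID is hypothetical and has to come from your own lspci output, and since Unraid's root filesystem lives in RAM, a hand-written xorg.conf won't survive a reboot.

        # Find the PCI address of the ASPEED BMC graphics
        lspci | grep -i aspeed

        # Write a minimal xorg.conf pinning X to that card.
        # The BusID below is made up - use your own, written as
        # PCI:bus:device:function in decimal.
        printf '%s\n' \
          'Section "Device"' \
          '    Identifier "ASPEED"' \
          '    Driver     "ast"' \
          '    BusID      "PCI:9:0:0"' \
          'EndSection' > /etc/X11/xorg.conf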

  2. 10 minutes ago, ich777 said:

    First things first: if you quote someone, you have to actually click on the name; otherwise the person gets no notification.

    Good to know!

     

    10 minutes ago, ich777 said:

    May I ask which card the monitor is connected to?

    Connected to integrated graphics. The Quadro is solely for Plex transcoding.

     

    12 minutes ago, ich777 said:

    But from what I see in your diagnostics, you have already done that.

    Yup, needed to in order to get onboard graphics working in 6.9 with the new IPMI firmware that uses the newer ASPEED drivers.

     

    For the record, I was not getting a black screen before; this was the first time I rebooted since installing on 6.9.1, so it was the first time I noticed the issue. (I was having issues with the plugin not seeing the card after the upgrade; uninstalling and reinstalling fixed that, but I never rebooted after reinstalling, since that's not required.) Any ideas why I'm getting the black screen now, even after uninstalling and reinstalling the Nvidia plugin?

  3. @ich777 Just wanted to chime in. I'm on 2021.03.12 of the Nvidia plugin, and even with the Intel GPU Top plugin installed, no bueno. Still get a blank screen with a non-blinking white cursor.

     

    I've tried removing and reinstalling the Nvidia plugin to no avail, since I thought that might be part of the issue. The motherboard is a Supermicro X10DRi, whose onboard graphics are ASPEED, I think AST2400-based.

     

    For the record, this is the same problem I had with 6.8.3 on the latest IPMI firmware. As of 3.80, they updated the graphics drivers on the board, so I had to downgrade to the 3.77 firmware to get GUI mode to work in 6.8.3. With 6.9's newer drivers, I've been able to run on the 3.88 IPMI firmware just fine. Only after this upgrade to 6.9.1, on the first reboot after installing the Nvidia plugin, do I seem to have this problem again.

     

    Given that Unraid updated its ASPEED drivers, this shouldn't be a case of the drivers not actually being there, right? Is it just some sort of precedence thing? Any ideas on a fix here?

    helium-diagnostics-20210315-1709.zip
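 

    For anyone who wants to check the same thing on their own box, these are the standard Linux commands I'd run over SSH to confirm the ast driver is actually present and bound; nothing here is Unraid-specific:

        # Is the ASPEED kernel module loaded?
        lsmod | grep ast

        # Did the kernel bind a display driver to the BMC at boot?
        dmesg | grep -iE 'ast|drm'

        # Which framebuffer device ended up active?
        cat /proc/fb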

  4. 46 minutes ago, cholzer said:

    increased CPU load every 10 seconds!?

    Intel Xeon E5-2620 v3

    My Unraid is bored 90% of the day.
    There are no Dockers, no VMs, and no one on the LAN accesses the shares.
     

    With 6.8.3, prior to upgrading to RC2, the CPU load was mostly at 0-1%; sometimes there was a spike to 5% on a core.
    Now with RC2 I see multiple CPU cores spike to 15% every 10 seconds... (all array disks spun down, no one using it for anything).

    nas-diagnostics-20210122-0730.zip

    I'm not sure I'm seeing the same level of increased activity, but I do see increased CPU activity every 10-15 seconds or so, on dual E5-2620 v3 chips. I've seen this happen on 6.8.3 as well, in the form of one core spiking to 100% usage for a second and then dropping back down; the end result is that overall usage might go from 10% to 15% for a second.

     

    I've never seen this system not do this, so presumably it's normal behavior, but I'm posting diagnostics anyway in case it's not and they help someone find something. If someone could clarify whether or not this is normal behavior, that would be awesome.

    helium-diagnostics-20210122-0114.zip
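 

    If anyone wants to try to catch what's actually spiking, a short capture like this over SSH should name the process. pidstat assumes the sysstat tools are present, so the batch-mode top line is the fallback; either way, scan the log for whatever jumps to the top during a spike:

        # Per-process CPU, sampled once a second for 30 seconds (needs sysstat)
        pidstat 1 30 > /tmp/pidstat.log

        # Fallback without sysstat: batch-mode top, 30 one-second snapshots
        top -b -d 1 -n 30 > /tmp/top.log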

  5. 13 hours ago, TechGeek01 said:

    I'll give that a new test in the morning to rule that out.

    Okay, so gracefully stopping the VM did not work; the only thing that would actually kill it was a force stop. Usually a clean stop seems to send an ACPI shutdown command, and will force kill if it's still running after a minute or so. Anyway, force stopping this time also led to the same stuck frame on the monitor, even after the VM was powered off (instead of the expected "no signal"). What I did notice is that unplugging the VGA cable from the card and plugging it back in does not reset this, so monitor detection makes no difference; it's the GPU that's holding the frame. Restarting the VM fixes it, so the card isn't stuck; it's just being told to keep displaying an image after the VM is stopped, which is probably not the desired effect.

     

    Also, I've noticed USB passthrough isn't working properly on my system. I have a Dell R510 with a Logitech K120 and an M325 plugged in. Both work in Unraid itself, but they control Unraid and aren't passed through to the VM. This was with them on the front USB ports of the server, so I don't know if it's an Unraid thing, or just an IOMMU grouping thing that can't be passed through. FWIW, the Unraid USB is on the same controller board as those front ports (the internal USB on the R510 is on that front control board, so presumably the same USB controller). The rear USB ports directly on the motherboard I/O had the same effect: they control the host and aren't passed into the VM.

     

    Additionally, editing a VM to change the USB device leads to an "unable to find device" error on power on. The ID it gives is for the new device, so it's letting go of the old device config properly, but it seems unable to find and bind to the new USB device, despite it being in the USB passthrough list and checked. Deleting and recreating the VM fixes this; it works fine when creating, just not when editing.
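 

    For what it's worth, the same stop paths can be driven from the CLI, which is how I'd separate a web UI bug from a libvirt one next time. The VM name below is a placeholder; use whatever virsh list shows:

        virsh list --all               # confirm the VM's name and state
        virsh shutdown "Windows 10"    # clean stop: sends the ACPI power button
        virsh destroy "Windows 10"     # force stop: hard kill, like pulling power
        # Inspect which USB/GPU host devices the VM's config points at
        virsh dumpxml "Windows 10" | grep -A4 hostdev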

  6. 8 hours ago, trurl said:

    Haven't tried this myself, but it doesn't seem surprising. Force stop is sort of a hard power-off as far as the VM is concerned, but not as far as the hardware is concerned.

    Yeah, I know it's a hard power off to the VM. But displays require an active signal, correct? That is, the monitor won't continue to display when there's no signal; it has to be constantly told what to output. So how on earth, after killing and even nuking the VM, is the card still telling it to display the last frame of video the VM was working with?

     

    Actually, I wasn't able to get USB passthrough of keyboard and mouse working. I force stopped the VM once I realized I needed keyboard and mouse, and then edited the VM, and it wasn't responding to input. I'm not sure if that's a USB passthrough thing (either a bug, or something to do with IOMMU grouping, I have no idea), or if that was just the GPU still stuck on the last frame of video even after restarting it. I'll give that a new test in the morning to rule that out.

     

    8 hours ago, trurl said:

      Does it behave any differently with another release?

    My main Unraid server on 6.8.3 uses low-profile cards, and I unfortunately don't have anything that'll fit in there to test. I was *planning* on getting a low-profile Quadro, but I haven't bought one yet since I wanna make sure this'll work. Side note: would I be able to use said Quadro for HW acceleration in a Plex Docker?
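 

    Partially answering my own side question from reading around: the usual recipe seems to be handing the container the card via the Nvidia runtime. This is only a sketch I haven't tested; the image name, paths, and GPU UUID are placeholders:

        # Get the GPU UUID the runtime expects
        nvidia-smi -L

        # Run Plex with the card exposed for hardware transcoding
        docker run -d --name plex \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          --net=host \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/media:/media \
          linuxserver/plex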

  7. I've never done GPU passthrough before, so I don't know if this happens all the time, if it's an issue with my setup, or if it's an RC2 bug (I'm leaning towards the latter, but I have no idea). I have an old GeForce 210 I threw in to pass through to a VM, and I noticed that when I stopped the VM, I still had video.

     

    To my knowledge, monitors only display what they're actively told to, so if nothing is utilizing the GPU, there should be no signal to the monitor, since the card shouldn't be telling the monitor to do anything. Anyway, after stopping, and even completely nuking, the VM and disks, it still persisted. I was using the VGA output on the card, and the monitor kept displaying the last thing that was on screen when I shut it down (the setup screen of a Windows ISO), as if it were still actively being told to display it.

     

    I did not check whether stopping the VM properly avoids this; this was using the force stop option.

  8. 22 hours ago, ken-ji said:

    I suppose a missing feature of Unraid is the ability for us to define/tweak the desired table, including overrides for DHCP-enabled interfaces and other such configurations.

    Yeah, at least the ability to say "don't assign an IP" would be nice. Because if you set the untagged interface to no static IP, your only option is "automatic", which then assigns a 169.254.x.x address. And then it seems that even though I have a static IP assigned to a VLAN, and a default gateway set on said VLAN, it tries to use the 169 untagged interface and "can't reach the internet".

  9. This is an issue on RC1. I have not tested on RC2, as my networking setup has since changed, but if you didn't know about this, it probably applies to RC2 as well.

     

    Firstly, if we're enabling VLANs and putting an IP on a VLAN, but don't want an IP on the untagged interface as a whole, there should probably be an option for "no IP", rather than just letting "automatic" give it a 169.254.x.x address. It's a minor annoyance that leads to the 169 address showing in the top right corner, as opposed to the real, configured IP on the VLAN interface.

     

    The problem this creates is that in this scenario, where there's an automatic 169 address on the untagged interface and the "real" accessible web UI IP on a tagged VLAN on the same interface, installing things like plugins fails: the 169 address has no internet connection, so Unraid can't connect to the internet.

     

    My guess is the best way to handle that would be to try the next IP in line on a VLAN should the main interface IP fail to reach the internet, until it hits one that works, instead of just giving up after the first failure. And/or let us "disable" IP assignment on an interface or subinterface, so that instead of trying the untagged interface on a port, it knows to skip it and use the tagged VLAN IP instead.
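 

    To make the failure mode concrete, here's roughly what I'd expect to see over SSH, plus a manual workaround. The interface names and addresses are examples, not from my box:

        # Symptom: the untagged bridge only carries a link-local address
        ip -4 addr show br0        # shows something like 169.254.x.x/16

        # The VLAN subinterface has the real address and gateway
        ip -4 addr show br0.10

        # Workaround sketch: force the default route out the VLAN interface
        ip route replace default via 192.168.10.1 dev br0.10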

  10. I know this has been an issue for a while. I'm not sure if it's a hardware thing, an Unraid thing, or a bit of both, but it won't boot UEFI. I made a test USB with the trial of beta 35 and booted an R510. Booting in BIOS mode works fine, but despite having made the USB with the "allow UEFI boot" option checked, when it gets to the splash screen to select GUI mode or CLI mode at boot, whichever boot option is selected, I get a "bad file number" error, and it keeps trying and failing every couple of seconds.

     

    UEFI was enabled on the R510, and it should in theory be supported, given that the USB was made with that in mind, but every option at the Unraid boot screen still does this. It would be awesome if it could be fixed, but what exactly is the underlying cause?
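 

    For context, as far as I understand it, the "allow UEFI boot" checkbox in the USB creator just controls the name of the EFI folder on the flash drive, so the setting can be checked and toggled by hand from the Unraid console (the flash mounts at /boot):

        # A trailing dash means UEFI boot is disabled
        ls -d /boot/EFI*

        # Enable UEFI boot by renaming the folder
        mv /boot/EFI- /boot/EFI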

  11. 30 minutes ago, JorgeB said:

    Like mentioned above, VMs will only auto-start if array auto-start is enabled, and the diags confirm it isn't.

    Totally missed that statement above. So then, with autostart disabled on my array, the programmatically intended behavior is that VMs don't autostart, correct?

     

    Now, Docker containers also have an autostart option, and when I manually start the array on boot (because array auto-start is disabled), the Docker containers autostart themselves once the array is running. Surely the expected and proper behavior is that VMs should follow the same pattern?

     

    The array has to be started to even see a list of VMs or Docker containers, so there's no way to manually start them before the array is already running, meaning that whether the array is started manually or automatically should be entirely irrelevant to both autostarts. Can a change be made so that even when the array is started manually, both Docker and VMs respect their chosen autostart options?
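 

    In the meantime, a workaround sketch: a small script run at array start (for example via the User Scripts plugin's array-start trigger) that starts the VMs by hand. The VM names are placeholders; use whatever virsh list --all shows:

        #!/bin/bash
        # Start selected VMs once the array (and therefore libvirt) is up.
        for vm in "Windows 10" "Ubuntu"; do
          if [ "$(virsh domstate "$vm" 2>/dev/null)" != "running" ]; then
            virsh start "$vm"
          fi
        done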

  12. I haven't yet spun up a second server/instance to test the beta, but I wanted to ask: has the VM autostart issue from 6.8.3 been fixed?

     

    I have no idea whether this hasn't been addressed, or whether it was fixed in this or a previous beta, but on 6.8.3, "autostart" VMs don't actually autostart on boot; I have to start them manually.

  13. 10 hours ago, civic95man said:

    That's good to know, and good to point out. I don't boot into GUI mode, but it is a nice option to have if required, so I'd like to keep it. I run an X10SRA-F and remember seeing a note that a new VGA driver was required when updating the BMC to 3.80 or later (I'm on 3.88 now).

    I don't use GUI mode often, but I usually set it to boot there by default, so that on the off chance I can't get to the web GUI (if it locked up, or if there's a network problem), I can still reconfigure it and such.

     

    I actually had to downgrade the BMC to 3.77 because 3.80 and up didn't work in GUI mode, and when I moved to this new Supermicro server from the Dell R510, the network changed, so I wouldn't have been able to get to it from another computer. It actually looks like it boots normally, and then, as soon as the scrolling text goes away and you're supposed to be dropped at the login screen, it's just black.

     

    So yeah, if that latest driver could be included with the rest of the ASPEED stuff (if it's not already), and verified to work on the updated 3.88 firmware on an X10 board, that would be awesome!

  14. Thanks for the addition of the ASPEED driver!

     

    On that note, I have a Supermicro X10DRi running Unraid, and with the latest BMC firmware I can't boot into GUI mode; I just get a black screen after all the scrolling text, instead of the login screen that should show up. I actually had to revert to BMC firmware 3.77, as 3.80 and up do not work with GUI mode. This was failing for me on Unraid 6.8.3 after migrating the USB from my old server.

     

    Are you guys by chance able to confirm that the latest ASPEED driver you've included works in GUI mode on the latest firmware for this board?

     

    For reference, the X10 generation is based on the ASPEED AST2400 controller. It seems this same latest driver is required for almost all X10 motherboards, so if you could confirm that a Supermicro X10 board with IPMI firmware 3.80 or later can successfully boot into GUI mode, that would be amazing.
