Posts posted by Hankanman

  1. Okay, I have RMA'd the motherboard (the only new component). I also realised I wanted more capability, and I have previously run Unraid on an X570 board without issue; the replacement will arrive tomorrow or the day after, and when it arrives I will run in Safe Mode overnight. Something else to mention: before the move, it was running on Intel hardware. While I wait for the new Ryzen board I have switched back, with no issues since. I have seen that moving between hardware upgrades is "plug and play", which it essentially was, but I have not seen anyone switch between Intel and AMD hardware.

  2. Update

     

    I have tried suggestions found on the forums here:

    https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-819173

     

    Updated Global C-State control to disabled in UEFI
    Set Power Supply Idle Control to Typical Current Idle in UEFI
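To verify from within Linux that the C-state change actually took effect, the kernel's cpuidle sysfs entries can be listed. A hedged sketch (the `list_idle_states` helper name is my own, and the paths assume the standard Linux cpuidle layout, nothing Unraid-specific):

```shell
# List the idle states the kernel still exposes after disabling
# Global C-State Control; deeper package states (e.g. C6) should be
# absent if the UEFI setting took effect.
list_idle_states() {
  # Print the name of each idle state under the given cpuidle directory
  local root="${1:-/sys/devices/system/cpu/cpu0/cpuidle}"
  local s
  for s in "$root"/state*/name; do
    [ -r "$s" ] && cat "$s"
  done
  return 0
}

list_idle_states
```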

     

    New motherboard with known-good processor and RAM; current specs:

    4x WD RED 4TB (3 in array, 1 parity)
    1x WD Black 1TB NVMe (cache)
    ASRock B550 Steel Legend
    Ryzen 3800x
    2x 8GB Corsair Vengeance LPX in dual channel

     

    All other UEFI settings stock aside from boot device and CSM disabled

    The lock-up occurs as soon as 10 minutes after boot; the longest it has stayed up is ~12 hours.

     

    I've attached photos of the last two lock-ups and the diagnostics dump

    IMG_20210304_145222.jpg

    IMG_20210302_122223.jpg

    endeavor-diagnostics-20210304-0931.zip

  3. Hi all, I just moved my Unraid install and disks over to some new hardware. The first issue was getting it to boot, which was resolved using this post: https://forums.unraid.net/topic/74419-tried-to-upgrade-from-653-to-66-and-wont-boot-up-after-reboot/?tab=comments#comment-710968

     

    Then I needed to remove a disk from the array (yes, I probably should have done that before moving it).

    I followed these instructions to shrink the array: https://wiki.unraid.net/Shrink_array#For_unRAID_v6.0_and_6.1_and_possibly_5.0 (I tried the newer method, but the script didn't want to run, despite my being sure I had done everything right)

     

    I am running 6.9.0-rc2

    I created the new array and put my drives back in, started it up, and it began parity syncing. All good. I left it overnight to finish and came back to the web UI showing 90.1% done, and Unraid had crashed with the following on screen:

     

    IMG_20210302_081858.jpg

     

    I reset the machine and it has started the parity sync again. The data is all there by the looks of it, but I need to make sure this doesn't happen again...

  4. Hi, I scratched my head on this for a while, but there is a solution. Essentially, Unraid doesn't release the primary GPU for use by a VM:

     

    First, test with the following commands via SSH, with your VM off:

     

    # release the virtual consoles from the framebuffer
    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    # detach the EFI framebuffer so the GPU can be claimed for passthrough
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

     

    Then try to boot your VM with the GPU passed through. If it boots successfully, you had the same issue as me. Then install User Scripts from Community Applications and set this to run as a little script when the array starts. It even works with auto-starting VMs.
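The three commands above can be wrapped into a single User Scripts entry set to run at array start. This is a sketch, not a tested Unraid script: the `release_gpu` name and the `SYSROOT` parameter are my own additions so the logic can be exercised safely off-box.

```shell
#!/bin/bash
# Free the primary GPU before any VM starts. The sysfs paths are the
# standard ones from the commands above; SYSROOT defaults to empty (the
# real root) and exists only for safe testing.
SYSROOT="${SYSROOT:-}"

release_gpu() {
  local vt
  # Release every virtual console bound to the framebuffer
  for vt in "$SYSROOT"/sys/class/vtconsole/vtcon*/bind; do
    [ -w "$vt" ] && echo 0 > "$vt"
  done
  # Detach the EFI framebuffer so the passthrough driver can claim the card
  local fb="$SYSROOT/sys/bus/platform/drivers/efi-framebuffer/unbind"
  [ -w "$fb" ] && echo efi-framebuffer.0 > "$fb"
  return 0
}
```

The User Script body would then just call `release_gpu`; it writes only to paths that exist and are writable, so it is a no-op on systems without an EFI framebuffer.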

  5. Has anyone got this working with an RTX card? I have:

    MSI MEG X570 ACE

    Ryzen 3800X

    MSI Gaming X Trio RTX 2080 SUPER

     

    Looking to pass the RTX 2080 Super through to a VM as the single GPU. I have tried it all: modded vBIOS, Q35 chipset, stubbing the USB controllers and passing them through.

     

    If it's in the second PCIe slot there's no problem, with a GTX 970 in the primary, but I want to run the Super at full x16 speed.

     

    With the attached config and the steps taken, I get a screen output at 800x600 and am unable to install the drivers in Windows 10. (I also booted outside of Unraid and pre-installed the drivers, but they take no effect in the VM.)

     

    I am getting the following in the VM log:

    2019-07-31T09:48:52.886459Z qemu-system-x86_64: vfio_region_write(0000:2d:00.0:region3+0x14290, 0x67ab0e0d,8) failed: Device or resource busy

    It seems that Unraid isn't fully releasing the card, as far as I can tell. My only thought would be running Unraid truly headless, with no graphics output at all, so it has no ability to interfere with the card, but I don't know if that's possible, and of course there would be no way to diagnose anything if the network went down. Also, FYI, I am booting the VM directly from the NVMe drive, with no vdisks.
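Before starting the VM, it can help to confirm which kernel driver actually owns the card at that moment. A minimal sketch: the PCI address 0000:2d:00.0 is taken from the QEMU error above, and the `gpu_driver` helper name is my own.

```shell
# Print the kernel driver currently bound to a PCI device (by reading
# its sysfs "driver" symlink), or "none" if nothing is bound.
gpu_driver() {
  local dev="$1"
  if [ -L "$dev/driver" ]; then
    basename "$(readlink "$dev/driver")"
  else
    echo none
  fi
}

# Expect vfio-pci here before starting the VM, not a host GPU driver.
gpu_driver /sys/bus/pci/devices/0000:2d:00.0
```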

     

    I have attached my VM XML to save the length of my post :P

    Win10.xml
