Warrentheo

Members

About Warrentheo

  • Rank
    Advanced Member
  • Birthday October 5
  • Gender
    Male
  • Location
    Earth
  • Personal Text
    Currently running:

    unRaid 6.4.1 Pro License since 02/02/2018

    Asus IX Hero with i7-7700K and 64GB RAM

    Windows 10 gaming VM with GTX 1070 passthrough

    5 HDDs with M.2 RAID-0 cache

  1. I just got notified that the plugin is not known to the Community Apps plugin or the Fix Common Problems plugin, and when I search for it now, it doesn't show up in the list of plugins that can be downloaded... When I search for "NVIDIA" in the Community Apps plugin, it no longer shows up... Rebooting the server and uninstalling the GPU Statistics plugin changed nothing... Did this plugin just go unsupported, or is there a glitch in the Community Apps plugin? If it is now unsupported, I will be very sad to see it go... 😢 Still, thank you to the author no matter which is the case...
  2. It is a 2TB pool, 80% full, and was originally in raid0; the only issue it has had is being started once with one drive missing... I am trying to see if it is possible to start it again in raid0, or is the array already corrupted?
  3. That is correct; the question is whether it is possible to skip the conversion to the normal raid1 mode and leave it in raid0 mode...
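For reference, converting the data profile back is normally done with a btrfs balance. This is only a hedged sketch, not a tested procedure: it assumes the pool mounts at Unraid's usual /mnt/cache, and a balance rewrites every block, so have a backup first.

```shell
# Sketch: convert a btrfs pool's data profile back to raid0 after Unraid has
# rebuilt it as raid1. /mnt/cache is Unraid's usual cache mount (an assumption).
# The command is built and printed here so it can be reviewed before being
# run as root on the actual server.
POOL=/mnt/cache
CONVERT_CMD="btrfs balance start -dconvert=raid0 -mconvert=raid1 $POOL"
echo "$CONVERT_CMD"
# afterwards, 'btrfs filesystem df /mnt/cache' should report Data as RAID0
```

Keeping metadata at raid1 while data goes to raid0 is a common compromise for a two-device pool, since losing metadata redundancy makes any single-drive failure unrecoverable.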
  4. Hello, I recently moved locations, and during my move, one of the cache drives that I have in a raid0 btrfs pool came disconnected... The array got started with only one of the 2 drives... After troubleshooting, I discovered the issue and stopped the array; on the main screen the drive had become "unassigned"... When I re-add the drive, UnRaid now warns me that the pool will be formatted when the array is started (I think it is trying to boot back in normal mirror raid mode, and would then make me switch it back to a raid0 array). Is there a way to just re-add the drive to the original raid0 pool, or am I forced to format the drives? qw-diagnostics-20200310-0910.zip
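Before re-adding the drive, it can help to check what btrfs itself thinks of the pool from the command line. A hedged sketch of the read-only inspection commands follows; the mount point is Unraid's usual /mnt/cache (an assumption), and the commands are printed rather than executed so nothing is touched.

```shell
# Sketch: read-only checks on a possibly degraded btrfs pool before
# re-adding a device. Printed for review; run them as root on the server.
POOL=/mnt/cache                              # Unraid's usual cache mount
SHOW_CMD="btrfs filesystem show $POOL"       # lists member devices and any "missing"
USAGE_CMD="btrfs device usage $POOL"         # shows per-device allocation/profiles
echo "$SHOW_CMD"
echo "$USAGE_CMD"
```

If the pool will not mount normally with both devices present, btrfs also supports a `degraded` mount option for recovery, but at raid0 data that is mostly useful for confirming how much is still readable.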
  5. I don't think Windows even thinks in terms of IOMMU groups; most likely you will need to temporarily boot off a Linux live USB/DVD image... The commands are:

```shell
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}
    n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done
```

You have the setup I am looking into getting as well, so that info would be useful 😄
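The parameter expansions in that loop can be checked without any IOMMU hardware; this just demonstrates how the group number and device address are pulled out of a sample sysfs path:

```shell
# Demonstrate the extraction logic from the loop above on a sample path
# (no sysfs tree needed). 13 and 0000:01:00.0 are example values.
d="/sys/kernel/iommu_groups/13/devices/0000:01:00.0"
n=${d#*/iommu_groups/*}   # strip everything through "iommu_groups/"
n=${n%%/*}                # keep only the group number
echo "IOMMU Group $n, device ${d##*/}"
# prints: IOMMU Group 13, device 0000:01:00.0
```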
  6. I also have this bug, rc5 to rc6: VMs show fine until one gets launched (either from the GUI or the command line), then it goes blank... I only have one VM, so I am willing to rebuild the libvirt partition for testing if needed... qw-diagnostics-20191118-1759.zip
  7. Yah, I also have been looking into going this direction, was looking at a Crosshair VIII board, just need to know the IOMMU groups...
  8. Install the Community Apps plugin, then install the version of Virt Manager maintained by djaydev (the other version is no longer maintained)... This will give you all the things you think are missing...
  9. Back up and running, so far no issues; I appreciate your help... Now we just need to get UnRaid updated to show a share's cache status on the main share page 😛
  10. Thank you for the reply, I will update with the results when it is completed 🙂
  11. I am about to swap out my M.2 raid0 cache and upgrade the drives in preparation for moving to X570/PCIe 4.0... To that end, I need to completely remove all use of the cache from all shares and VMs and migrate everything to the main spinning array, so I can wipe the current cache pool and sell it, then set up the new pool like a fresh setup... I want to make the move without any data mishaps, and was hoping to get some feedback on the steps needed to make sure the move goes smoothly... Any tips would be appreciated... I am sorry if this is repeated somewhere else; I have been unable to find anything else related to this out there...
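The usual flow (an assumption based on common practice, so verify against the current Unraid docs first) is: set each share's "Use cache" to Yes so the mover pushes cache files to the array, stop Docker and VM services so nothing holds files open on the cache, run the mover, then confirm the cache is empty. A dry-run sketch of the final verification copy, with hypothetical paths, might look like:

```shell
# Dry-run sketch: sweep any leftover cache data into the array before wiping
# the pool. Paths are Unraid's usual mounts; /mnt/disk1/cache_backup is a
# hypothetical destination. Printed for review rather than executed.
SRC=/mnt/cache/
DST=/mnt/disk1/cache_backup/
SWEEP_CMD="rsync -avh $SRC $DST"
echo "$SWEEP_CMD"
echo "du -sh /mnt/cache   # should be near zero when the migration is done"
```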
  12. I have an EVGA GTX1070 and have 6.7.3-rc2 installed... It has not given me an issue, and I also would not expect the slightly updated Linux kernel that came in rc1 to cause that sort of issue... Not much has changed in rc1 & rc2...
  13. An MSI GT240 should not need the ROM file added for it; only the newer GTX 10 series and RTX cards should need that... Older cards should boot with no special consideration... You do still need to pass the entire IOMMU group, which means you need to make sure you also pass through:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GT215 [GeForce GT 240] [10de:0ca3] (rev a2)
        Subsystem: Micro-Star International Co., Ltd. [MSI] GT215 [GeForce GT 240] [1462:1995]
01:00.1 Audio device [0403]: NVIDIA Corporation High Definition Audio Controller [10de:0be4] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] High Definition Audio Controller [1462:1995]

Don't know if you did or not... I would recommend moving this to the troubleshooting thread though, since this should not be an issue with 6.7.3 (mine is working fine, and nothing changed in 6.7.3 should affect this (updated kernel and Docker revert))... If you do need to repost this, I would recommend that you include your VM XML file with the diagnostics as well...
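One common way to reserve both functions of that group for passthrough is to bind them to vfio-pci at boot using the vendor:device IDs from the lspci output above. This is a hedged sketch of the extra parameter on the `append` line in the flash drive's syslinux.cfg (older Unraid versions; newer ones offer a GUI/config-file method), not a drop-in config:

```
append vfio-pci.ids=10de:0ca3,10de:0be4 initrd=/bzroot
```

Both IDs must be listed; leaving the audio function bound to the host driver is a classic cause of failed group passthrough.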
  14.

```xml
<hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/user/domains/msi1030.dump'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
```

Try removing xvga='yes' from the XML line for your card... From what I have read, this setting is no longer supported, so this is my best guess... Next best guess is to ask where you got the video BIOS file for the card; try re-dumping it directly from the UnRaid command line for this specific card if it was not gotten from there in the first place... Third best option, try using the Nvidia plugin for UnRaid (read the documentation for it before you do)... And last option is to "create" a new VM with the same options as the current one and just point it to the same drives and image files as the current one; this resets all the settings and sometimes fixes some of these types of issues... This is what the line for my EVGA GTX1070 passthrough reads like:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/cache/system/EVGA_GTX1070_FTW_DT_DUMPFROMUNRAIDCOMMANDLINE.rom'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x01' function='0x0'/>
</hostdev>
```

Edit: This is minor, but renaming your VBIOS file from ".dump" to ".rom" lets it show up in the VM creation WebUI for UnRaid...
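Re-dumping the VBIOS from the command line is usually done through the card's sysfs `rom` node. A hedged sketch follows: the PCI address 0000:01:00.0 and the output path are examples, the card must have no driver bound while dumping, and only the path-building helper actually runs here.

```shell
# Sketch of the sysfs method for dumping a GPU VBIOS from the Unraid command
# line. 0000:01:00.0 is an example address; check 'lspci' for the real one.
rom_path() { printf '/sys/bus/pci/devices/%s/rom' "$1"; }
echo "ROM node: $(rom_path 0000:01:00.0)"
# As root, with no driver bound to the card, the dump itself would be:
#   echo 1 > "$(rom_path 0000:01:00.0)"     # enable reads of the ROM node
#   cat "$(rom_path 0000:01:00.0)" > /mnt/user/domains/gpu.rom
#   echo 0 > "$(rom_path 0000:01:00.0)"     # disable reads again
```

A ROM dumped this way from the exact card being passed through avoids the header mismatches that downloaded VBIOS files sometimes have.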
  15. Wow, that is quite the changelog! Awesome! Will begin testing immediately...