TheTechnoPilot

Members

  • Content Count: 30
  • Joined
  • Last visited

Community Reputation

  1 Neutral

About TheTechnoPilot

  • Rank: Newbie



  1. So sadly, neither of those, nor directly passing the hardware IDs to the VFIO driver at boot (see the vfio-pci binding sketch after this list), has had any effect on the boot error... @SpaceInvaderOne, is there any chance you might be able to chime in with your thoughts? The VM boot log:

     -chardev socket,id=charmonitor,fd=24,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x1.0x1 \
     -device pcie-root-port,port=0xa,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x2 \
     -
  2. So, interestingly, adding that line after my ACS overrides didn't disable video during loading on reboot. Am I using it right? (See the append-line sketch after this list.) When I then tried to start the VM again, it did yank the card from Unraid's command line, but Win10 didn't seem to grab it, and the boot stalled with the same error and a single pinned logical core stuck out of the 10 pairs assigned.
  3. I didn't really think it would fix this issue; my thinking was that it's a worthwhile upgrade to make while I'm going through all this anyway, to improve overall compatibility for my setup.
  4. You are 100% correct: I have no need or desire to run Unraid in GUI mode; it was just to confirm the new card was functioning without issue. Normally I run without it, and the only thing I feel I lose is seeing the IP address on bootup when working on a new network (I use the server for work on the sets of feature films). I'll give this a try. Would you say, though, that this behaviour of grabbing control is graphics-card dependent? It wasn't an issue previously when using the GTX970.
  5. I've not yet, as I have general VM issues to sort through first before diving deeper into getting it working in macOS. While hunting for solutions to this weird issue, I stumbled upon the reset-bug fix, and since your work also seems to deal with on-board audio passthrough and Ryzen sensors, and I haven't bothered upgrading from 6.7.2 yet, this seemed like a good thing to look at and a reason to upgrade. Right now my macOS VM is still running High Sierra, and ideally I want to get that going on the new card first, but I need to fix this basic pass
  6. Hey there bud, a bit of a noob here when it comes to this level. I'm running a 3900X on an ROG B450-I board with a Strix Vega 64, still back on 6.7.2. It looks like upgrading to your build is my best bet now, having just moved to the Vega 64 and discovered it is not as smooth sailing as I hoped (macOS VMs, so I needed to stay in the Radeon camp). Sorry for asking such a basic question, but perhaps it could also help others: what is my best approach to upgrading to your specific build with these fixes, as I've never done such an upgrade before?
  7. Hi there @johngc, looking at your XML, it seems the place where you are hanging is:

     <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
     <nvram>/etc/libvirt/qemu/nvram/988cac7a-49bb-4a9e-844a-f791ce1ffb0d_VARS-pure-efi.fd</nvram>

     which, based on what seems to be your file structure (a quick path check is sketched after this list), should actually read:

     <loader readonly='yes' type='pflash'>/mnt/disks/VMs/MacinaboxCatalina/ovmf/OVMF_CODE.fd</loader>
     <nvram>/mnt/disks/VMs/MacinaboxCatalina/ovmf/OVMF_VARS.fd</nvram>

     This should hopefully ge
  8. Hey all, I just picked up an ASUS ROG STRIX Radeon Vega 64 to replace my older Gigabyte GTX970 Mini in my Unraid 6.7.2 build. I specifically went this route because I use macOS extensively and wanted to move to a natively supported card for that VM, so I wouldn't have to deal with the stupidities of getting an NVIDIA card supported in macOS (after a few hardware changes that remapped the PCI Express ports, I was unable to get the GTX970 recognized in macOS, though it came up fully functional in my Windows 10 VM with the same XML settings for
  9. Thank you so much @johnnie.black and @trurl for all your help! I am back up and running with all my data fully parity-synced, and now I'm just working on backing it all up to the cloud through a VM! While initially this all made my array feel fragile, I now know it is even more robust than I realized, and I have learned so much going forward! Thanks again for stepping in and making that the case!
  10. By that, you just mean using New Configuration but not flagging parity as valid, correct?
  11. OMG, @trurl & @johnnie.black, I figured out what an idiot I am! I somehow managed to put the wrong 8TB Barracuda drive into my case, and therefore told the system to add what was actually my old Disk2 to the array as Disk4! Okay, so that I don't bork everything, please confirm my understanding: at this point I should be able to go back through and use the New Configuration (parity valid) option to correct this, putting the real Disk4 back into the array, and then, once that is done, do a full parity check (with write corrections to disk) to get myself back up and running
  12. Honestly, not at that folder size: my server should be at just over 30TB (77%) space utilization, and it was at shutdown, but now it is only showing about 28.5TB (74%); the arithmetic is worked out after this list. So while I wish that were the explanation, I don't suspect it is, unfortunately... Also, even if somehow someone had deleted the files on the share (though no one besides myself is authorized on the current network), I am using Recycle Bin with manual emptying only, so the files should still be taking up space on the array.
  13. That's what I'm thinking. It seems very odd for the data to be missing, and while admittedly I only looked in the share for it (so perhaps it's not being recognized as part of that share), the total used space on the disk doesn't support Unraid thinking it's there in the native disk filesystem either... I'm wondering if the filesystem on that drive got damaged and essentially lost the pointers to that folder, so it no longer counts it as used space. That's the only explanation I can imagine for losing essentially one whole folder, from what I can tell. I
  14. No; however, considering the free-space listing of the disk outside the array (before I added it back), it now seems wrong in hindsight: it showed 2.9TB free when it really should have had only about 1.3TB, and it's the same when mounted in the array. Honestly, with all that I have done in the last couple of days without yet running a full parity check, I worry the existing parity is probably already wrong at this point. This is also why I preferred to bet on the original drive being intact.
  15. During those 8 hrs I slept, no; I followed the New Configuration / trust parity instructions, since the only change between having a functional full array and a non-functional one last night was a reboot and an accidental array start while seemingly missing a disk (I wasn't using ControlR to start the array again, since I didn't pick up on the issue in its interface).
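
For post 1 above: on Unraid 6.7.x, pre-binding a GPU to the vfio-pci driver at boot is usually done on the append line of /boot/syslinux/syslinux.cfg. A minimal sketch, with placeholder vendor:device IDs (1002:687f / 1002:aaf8 are typical Vega 64 video/audio IDs, but confirm your own with lspci -nn):

    label Unraid OS
      menu default
      kernel /bzimage
      # vfio-pci.ids pre-binds the listed devices to vfio-pci before the host
      # drivers can claim them; list BOTH the GPU's video and audio functions.
      append vfio-pci.ids=1002:687f,1002:aaf8 initrd=/bzroot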
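
For post 2: the line being discussed is presumably the one that releases the host console's hold on the EFI framebuffer; a common combination (an assumption here, not something stated in the thread) is the ACS override followed by video=efifb:off on the same append line:

    # Sketch: ACS override plus framebuffer release on one append line.
    # pcie_acs_override needs the ACS-override patch (Unraid's kernel ships with it);
    # video=efifb:off stops the host console from sitting on the passed-through GPU.
    append pcie_acs_override=downstream,multifunction video=efifb:off initrd=/bzroot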
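
For post 7: before editing the XML, it is worth confirming the OVMF files actually exist at those paths (paths taken from the post itself); from the Unraid shell:

    # Both files must exist and be readable, or the VM will hang at the loader.
    ls -l /mnt/disks/VMs/MacinaboxCatalina/ovmf/OVMF_CODE.fd \
          /mnt/disks/VMs/MacinaboxCatalina/ovmf/OVMF_VARS.fd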
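
For post 12: the two utilization figures imply the same total array size, which is what makes the missing space look like real data loss rather than a rounding artifact. A back-of-the-envelope check using only the numbers from the post:

    \frac{30\ \mathrm{TB}}{0.77} \approx 39.0\ \mathrm{TB},
    \qquad
    \frac{28.5\ \mathrm{TB}}{0.74} \approx 38.5\ \mathrm{TB},
    \qquad
    30 - 28.5 = 1.5\ \mathrm{TB\ unaccounted\ for}

Both readings point at a roughly 38.5-39TB array, so about 1.5TB of data genuinely stopped being counted, consistent with a lost folder rather than a display glitch.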