ne10g

Everything posted by ne10g

  1. It was, yes. Managed to sort it: no matter what settings I changed, nothing showed. I re-flashed the BIOS as a last-gasp attempt (same version) and now it shows. Very strange (same BIOS version as I had with older versions of UNRAID). Gremlins in the BIOS somewhere, I guess?
  2. Anyone else having issues with NVMe drives since the 6.9.x releases?
     ASUS X299 TUF MK2 motherboard
     ASUS Hyper M.2 X16 card (2x NVMe drives inside this: 1x Seagate FireCuda 510 and 1x Sabrent Rocket Q - BOTH detected OK)
     1x ADATA 2TB XPG Spectrix S40G (connected direct to the motherboard and NOT shown in UNRAID since the 6.9.x releases....)
     Anyone know what could cause this? Works fine in Windows, Ubuntu 20.x etc...
  3. EDIT: I was wrong above. To get it back I need to POWER OFF the UNRAID server, then power back on. Simply restarting does not even give me access to the NVMe with Windows on it. So if it crashes (it just did right now), I need to power the server OFF, then back ON, and only then can I access the NVMe drive again.
  4. Hi all! Strange issue occurring here. Looking for advice on what info would help diagnose it, if you don't mind?
     System:
     ASUS X299 TUF MK2 motherboard
     128GB DDR4 RAM
     1 x 2TB NVMe (connected direct to the motherboard)
     2 x 2TB NVMe drives connected to an ASUS Hyper M.2 X16 PCIe card (2 ports used, 2 free; card in x16 port 2 and set to DATA in the BIOS)
     1 x NVIDIA 3090 (connected to x16 port 1)
     1 x PCIe 10Gbps Intel LAN card
     1 x PCIe WiFi/Bluetooth card
     Symptoms: Windows 10 Pro is installed bare metal on the 2TB NVMe that is connected direct to the motherboard. All drivers installed, everything works perfectly when booted direct. In UNRAID I stub the 3090 and that 2TB NVMe and pass them both through to the Windows 10 VM. I amend the XML to add multifunction to the GPU device and change the bus to be the same for audio and video, and I pass through the BIOS which I dumped with GPU-Z and then stripped of its header (rough shape of the edit sketched below). All works perfectly, most of the time. However, the odd time (when writing to the disk/downloading at 7Gbps+ speeds... speed may be unrelated) Windows will freeze and give me a bog-standard BSOD. If I restart the VM, it gets to the BIOS screen and is super slow... eventually it lets me go into the settings where I would usually select a boot device... but it is not there! As soon as it crashes, without fail, the NVMe can no longer be seen by UNRAID. If I check the hardware profile which would usually show it as stubbed... not there. If I reboot UNRAID and boot straight back into UNRAID... the device is not there, as if it does not exist. BUT... if I reboot, select a boot device from POST, and pick the NVMe drive with Windows installed... it will boot into Windows. THEN... if I reboot and boot into UNRAID, it is back! Reproducible every single time. Anyone ever seen anything like this? Any ideas what to check? Nothing obvious in either the VM log or the UNRAID log.
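     For reference, the multifunction edit ends up looking roughly like the sketch below: two hostdev entries (GPU video and audio) sharing the same guest bus/slot, the audio on function 0x1, and the stripped vBIOS attached via a rom file. The PCI addresses and the rom path here are placeholders rather than my exact values, so treat it as the shape to aim for, not something to copy in verbatim.

     <!-- GPU video function: host/guest addresses and rom path are placeholders -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x65' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/3090-stripped.rom'/>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <!-- GPU audio function: same guest bus/slot as above, function bumped to 0x1 -->
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x65' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
     </hostdev>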
  5. The way I did this was to create a new VM with this utility, edit the config, remove the recovery and install disks (left only the OpenCore image for the boot loader), then passed through the whole disk that I already had Big Sur installed on. Worked perfectly... kind of! Kind of, as I still had issues with the GPU. Powering down the VM would randomly kill the entire UNRAID server, and even letting it sleep would do the same at times. Bare metal was perfect.
  6. I've never been able to get this to work when passing through a GPU. In the end I went bare metal, installed it, tweaked it and then used this as a template to create a VM and just passed through the bare metal NVMe. I then had issues with the reset bug (RX 580) so gave up. Might try again with the old GT710 and see if that works better. Might be worth trying the same? (Install bare metal and then pass through the whole drive after you get it working.)
  7. You’re passing in the BIOS for your GPU, yeah?
  8. When you say fresh install, what method did you use? If I create the base with Macinabox and then keep only the OpenCore img file to boot, delete the other disks and then pass through the NVMe, it will boot and work perfectly first time with VNC. Then I can change over and pass through the 580XT - works fine first time, perfect, very close to bare metal performance. Then as soon as I either restart or shut down, it's game over. Can't get back into macOS with the GPU, or even by reverting back to VNC! Reproducible each time, every time. Nothing in the logs that looks odd, really strange!
  9. Broke again! Pass through the GPU: works. Shut down (or restart) the VM: can never get the display back from the GPU. Remove the GPU (editing the VM template) and switch back to VNC: "Guest has not initialised the display" error. Nothing obvious in the logs.
  10. The good thing is, it is quick to get back to "square one":
      1. Delete the VM
      2. Restart Macinabox
      3. Run the scripts to generate the VM
      4. Edit the VM template
      5. Change RAM and CPU, pass through the NVMe and other PCI devices, delete all disks except OpenCore (use this to boot - roughly what's left is sketched below)
      6. Save changes and run the script to fix the XML
      7. Start the VM and it is all good
      I will pass through the GPU again and see how it behaves.
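      After step 5, the only <disk> entry left in the template is the OpenCore image, something along these lines. The file path is only an example of where the Macinabox image might live, and the sata bus / boot order shown are just typical values from a generated template, so adjust both to match your own setup:

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <!-- example path only: point this at wherever Macinabox put your OpenCore image -->
        <source file='/mnt/user/domains/Macinabox BigSur/opencore.img'/>
        <target dev='hdc' bus='sata'/>
        <boot order='1'/>
      </disk>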
  11. I returned the 5700XT, now using an ASUS AMD DUAL-RX580-O8G Radeon RX 580 OC Edition 8GB GDDR5, with the OpenCore bootloader (from Macinabox), and when running bare metal I'm also using OpenCore. Not a custom UNRAID, just the latest beta. I spoke too soon as well: after the AMD reset bug hit, I just cannot get the machine working anymore! This was what kept putting me off using UNRAID as my main daily driver - too many times things have just broken with no real reason as to why. I now cannot get any display when passing through the GPU, switched to VNC and it now doesn't work with that either!!
  12. All working! GPU passed through and all nice and smooth. One issue, probably related to the reset bug for AMD GPUs: I just tried to restart macOS, as you would, and it hung UNRAID and maxed out the allocated CPUs. I seem to remember someone had a way to resolve this - does anyone recall?
  13. Thanks, I'm a touch confused still! I guess I need to remove the stub for my NVMe disk, as it doesn't show in the disk-by-id folder, probably because UNRAID cannot see it due to me stubbing it. I did this for Windows installs previously, so UNRAID did not see the NVMe. Do I need to do the opposite for macOS? To confirm, if possible, I would like to pass through the whole NVMe, including the EFI partition (is this even possible?), as that EFI partition has all of my customisations to make macOS Big Sur work on my particular hardware. Or, as I am going to be virtualising it, do I forget this EFI and have to start from scratch? **edit** Had some coffee and now my brain works. Simple really! Just used Macinabox, created a dummy Big Sur VM and used only the OpenCore image to boot from, deleted the other disks as I don't need to install. Then just passed through the NVMe (sketched below) and all is good. With VNC anyway... next step, see if the GPU will pass through!
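      Since the drive is stubbed, the way to take the whole disk (EFI partition included) is to hand the NVMe controller itself to the VM as a PCI device rather than mapping a vdisk or a single partition. In the VM XML that is one extra hostdev entry like the sketch below - the source address here is made up, so check Tools > System Devices in UNRAID for your controller's real address:

      <!-- placeholder address: use the NVMe controller's address from System Devices -->
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </source>
      </hostdev>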
  14. Question guys, probably a silly one but... bear with me!! I have a fully working Hackintosh, an OpenCore bare metal install. I would still like to use UNRAID though... I would assume I should just be able to pass through the NVMe disk as I would with a Windows install, yes? I tried using Macinabox, latest version, installed a working Big Sur, tested it, all fine. Until I tried to pass through either an RX 580 or a GT710 - black screen as soon as I tried. It is something funky with my motherboard (ASUS X299 TUF MK II) I am sure, as I ended up having to downgrade the BIOS to a much earlier version before I could get a bare metal Hackintosh going. I have all of the required kexts etc., all working perfectly... on bare metal. I then tried to edit the new Big Sur VM, removed the disks and passed through the NVMe which has the bare metal Big Sur install, assuming it would just boot as it did previously when I did this for Windows. Wrong, of course haha! What would I need to do?
      Hardware:
      ASUS X299 TUF MK II motherboard
      Intel i9-10940X CPU
      128GB RAM
      1 x ASUS AMD DUAL-RX580-O8G Radeon RX 580 OC Edition 8GB GDDR5
      1 x MSI 1GB GT710
      Fenvi FV-HB1200 BCM4360 BT4.0
      1 x 2TB NVMe (macOS Big Sur bare metal install)
      1 x 2TB NVMe (spare for now)
      1 x 480GB SSD (single UNRAID disk for now)
  15. I've given up waiting for now, so just went with a bare metal install - runs like a dream, finally! To use bare metal with Windows 10, I just use the VM template, assign my devices, then don't create a vdisk but select the drive by name to pass through, along with anything else (roughly what that ends up as in the XML is sketched below). Same for macOS?
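      For the Windows setup, the "no vdisk, select the drive by name" part just ends up as a block disk pointing at the raw device in the XML, roughly like this. The by-id path below is a made-up example - list /dev/disk/by-id on your server and use your own drive's entry - and the sata bus is just one common choice for a bare-metal Windows disk:

      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <!-- hypothetical id: replace with your drive's /dev/disk/by-id entry -->
        <source dev='/dev/disk/by-id/nvme-ExampleVendor_2TB_SERIAL1234'/>
        <target dev='hdc' bus='sata'/>
        <boot order='1'/>
      </disk>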
  16. Oh well, again, failed! This card has to be the issue. It’s rubbish! Ordered an old Kepler card for now; I'll stick that in and use that. Don’t need any heavy GPU power for macOS so it’ll do!
  17. That’s what I said 😛 Just a recent shift from the GM to RC naming - not sure what’s caused them to do this, it’s always been GM, which in turn becomes final unless any last-minute show-stopper bugs are found. The 11.0.1 beta was just a new branch to hide some further info on the silicon Macs, I bet, as they’ve deviated a bit from the normal procedure here. I’m having one final try with this 5700XT: going to try bare metal with OpenCore proper and not go the KVM route, just out of interest to see if it works any better...
  18. I wish... haha. Tried again yesterday, absolutely no way this GPU will pass through. Everything is perfect (with Clover or OpenCore) until the GPU is passed through, then I'm stuck. Big Sur is Release Candidate now, ahead of Friday's event I assume! So we should get a new version of Macinabox then - I think that was what @SpaceInvaderOne confirmed to someone earlier in the thread!
  19. I believe the drivers are already in the Big Sur betas - or at least references to the cards.
  20. They look AWESOME!!! I will try to get this 5700 working, but I guess I should return it actually, as if I am gonna stick with a Radeon card I may as well get the new Big Navi?!
  21. I was sure I did, I looked SO many times... I had looked too long I guess... I should have just run a diff rather than trying to do it line by line manually by just looking.
  22. Hmm, interesting post! That is the ONLY thing that is different in my XML, which just would not work when trying to pass through a 5700XT. You have me thinking now, so I am gonna swap out the 2080ti (again!) and put the 5700XT in and give it another go!! I'll use OC; I've still got my EFI folders which I was messing about with. I wonder......
  23. Gave up with this for now. Each time I tried to boot with the 5700XT passed through, it would not only freeze the VM but also take UNRAID down with it! The 5700XT is going back; I'll stick with my 2080ti and other OSes I guess.
  24. Does anyone have a similar setup to this and would be willing to post their EFI folder for me to try before I pull what's left of my hair out?!
      CPU: Intel i9-10940X
      Motherboard: ASUS X299 TUF MK2
      GPU: MSI 5700XT MECH OC
      Assuming nothing else really matters for now with regards to the build, as it can be tweaked later. I can install Catalina perfectly whilst still using VNC graphics, works fine. Can also swap between Clover (default from install) and OpenCore with no issues, BUT... only when I remain with VNC graphics. As soon as I swap to passing through the GPU, that is when the issues start - it simply does not boot. I have tried:
      * Passing through my own BIOS file
      * Passing through other BIOS files from techpowerup (including a different brand just in case that helped...)
      * Not passing through any BIOS
      * Passing through with/without the sound part of the GPU
      The 5700XT is removed from UNRAID by stubbing it. I also have an NVIDIA 1650 in the system which UNRAID can use for itself once I've made the 5700XT unavailable.