Everything posted by SanderScamper

  1. +1 I like administering Unraid from a VM that runs on the same machine. The primary GPU is passed through, so if I stop the VM I lose GUI access unless I boot into a different Unraid boot option. If I pass through a boot device, I'd like Unraid to be able to start that VM with the array stopped.
  2. +1 => in priority order: 1) API for Unraid + all plugins, 2) API for VMs, 3) API for Dockers.
  3. @atoaster Thank you for the tip about removing https://; I didn't realize it had been prepended, so I had a leading https://https://. That's not to say it's working now; I'm still getting a 503 error in the log. *edit: scratch that, the 503 was an auth failure, and once I fixed that error, I got the children error again.
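For anyone who hits the same doubled scheme, a minimal shell sketch of the fix (the hostname below is a made-up example, not the real value): the field should hold the bare host, so strip any leading scheme(s) before saving it.

```shell
# A hostname field that accidentally got the scheme prepended twice
# (made-up example value).
host="https://https://tartarus.example.com"

# Strip everything up to and including the last "://", leaving the bare host.
clean="${host##*://}"
echo "$clean"
```

`${host##*://}` removes the longest prefix ending in `://`, so it cleans up one or several stacked schemes in one go.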
  4. I did not read the bug categorization. This bug is not urgent for me, as I have resolved it, but it is a showstopper for anyone who encounters it while trying to pass through a GPU.
  5. Hey sorry my thing didn't help, but I'm glad you've solved your issue! Thanks for communicating it back, it's super interesting.
  6. Same issue. I also have no luck getting discovery to work with HA but I'm not sure if that's my fault.
  7. Hey, take a look at the bug report I filed. Try recreating the VM and pointing it to the original vdisk.
  8. tldr: a reproducible "Guest has not initialised the display (yet)" bug that somehow corrupts VNC in VMs also breaks passthrough, and it is an absolute nightmare when troubleshooting VM passthrough.

So I've spent the last 2 days trying to get reliable Windows 10 and 11 GPU passthrough, with incredibly frustrating results. After painstakingly following all the information online (thanks spaceinvaderone!) I reached my wits' end with an attempt at Windows 11, cut my losses, and set up a Windows 10 VM, which worked perfectly: GPU passed through with no issue. Today I proved it all still worked, upgraded to a 5950X on the strength of that test, and redid my BIOS settings, and after hours and hours had no luck getting passthrough to work.

In frustration, I re-enabled VNC to see if I could prove the VM still worked, and it failed, throwing a "Guest has not initialised the display (yet)" error in VNC, apparently a common bug when re-enabling VNC after attempting GPU passthrough. To fix that problem, I recreated the VM using the original vdisk (an idea I got from the linked post), which worked. I then passed through the GPU again, and it immediately worked. To prove the point, I restored the BIOS to defaults, changed the bare minimum to enable VFIO, and the Windows 10 VM still worked with passthrough. Then I recreated the Windows 11 VM (note: I had only gotten it to boot once after a full day of troubleshooting), pointing it at the original Win11 vdisk, and that also immediately worked.

This means I wasted a minimum of 10 hours across 2 days of troubleshooting because I didn't understand that switching between VNC and GPU passthrough can sometimes break the VM and require recreating it via this method. I see a lot of people reporting frustration getting GPU passthrough to work, and this may be a significant contributor. *edit: attached a text comparison of the XML for someone who is better at interrogating them.
tartarus-diagnostics-20221018-1543.zip Gmail - Your comparison from Text Compare!.html
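For anyone else recreating a VM this way: back up the broken VM's definition first with `virsh dumpxml "<name>" > backup.xml` so you can copy the vdisk path (and MAC address, if you want to keep it) into the new VM. A small sketch of pulling the path back out of the dump; the XML line and path below are placeholders in libvirt's disk-element format, not from my system:

```shell
# Minimal excerpt in libvirt's <disk><source file='...'/> format; the path
# shown is a typical Unraid default, used here as a placeholder.
xml="<source file='/mnt/user/domains/Windows 10/vdisk1.img'/>"

# Extract the vdisk path so it can be re-entered in the new VM's template.
path=$(echo "$xml" | sed "s/.*file='\([^']*\)'.*/\1/")
echo "$path"
```

Point the recreated VM's primary disk at that same vdisk; deleting only the VM definition (not the disks) leaves the image untouched.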
  9. What do you recommend I do, trurl? I have a reverse proxy set up, but it's only routing to sabnzbd, sonarr, etc.; I don't have the webGUI remotely accessible. Are these showing up because I don't have the reverse proxy set as bridge?
  10. OK, system rebuilt. I've borrowed (from an excellent person and power user) an LSI 9201-8i and a 650 W PSU. All new power connections (and a higher-wattage PSU), and all new SATA connections to the new HBA. In addition, I've bought a 10TB Red and set it as parity to try to recover the data from the previously identified failing drives. Here are the new diagnostics; I see that there are drive read errors. My understanding is that parity will capture what data it can from those drives, and then if I replace them, that data gets rewritten to the replacements? I understand there will be some data loss due to the existing read errors. tartarus-diagnostics-20220217-1611.zip
  11. OK, I'll disable the FTP server. JorgeB: how can I diagnose whether this is disk failure as opposed to some sort of controller/SATA issue?
  12. I only enable the webGUI reverse proxy manually when I want to access the webGUI from a desktop computer I can't install WireGuard on, usually just for remote access like today, for maybe 30 minutes at a time. The rest of the time the Unraid webGUI isn't reverse proxied and can only be accessed through WireGuard. Docker containers like sabnzbd are reverse proxied through Nginx Proxy Manager.
  13. I logged in this morning to find half of the shares missing. I grabbed the diagnostics in case they're helpful. Notably, appdata was missing, and that share is cache-only. My cache drive is a very new 1TB NVMe M.2. tartarus-diagnostics-20220211-0757.zip
  14. Hi JorgeB, could failing disks be the reason Unraid has been behaving the way it has? I guess I would have expected Unraid to handle disk failure more gracefully. I've also had issues running extended SMART tests; they seem to stop at 10%. Is there something I'm missing? I'll disable spin-down and look into xfs_repair on disk1.
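On extended tests stopping at 10%: the drive's self-test log (at the bottom of `smartctl -a` output) usually says why. A test the drive abandoned shows as "Interrupted" or "Aborted" with roughly 90% remaining, which is what a spin-down or reset mid-test looks like. A sketch of the check, using a sample line in smartctl's self-test-log format rather than output from this system:

```shell
# Sample line in the format of smartctl's self-test log (not a real run):
# description, status, percent REMAINING, power-on hours, first error LBA.
log="# 1  Extended offline    Interrupted (host reset)      90%      8120    -"

# "Interrupted"/"Aborted" at ~90% remaining means the test died ~10% in.
if echo "$log" | grep -Eq "Interrupted|Aborted"; then
  echo "self-test died early - disable spin-down for the drive and rerun it"
fi
```

Keeping the drive from spinning down for the test's duration, then restarting with `smartctl -t long /dev/sdX`, should let it finish. For disk1's filesystem, an `xfs_repair -n` dry run against its md device from Maintenance mode reports problems without changing anything.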
  15. I set it up a few days ago; sorry, I should have included it. tartarus-diagnostics-20220103-1852.zip tartarus-diagnostics-20220203-0948.zip syslog
  16. I've been having a very difficult time with Unraid. I migrated from Windows a few months ago, and the most recent issue I'm trying to solve is that after a random period I get errors in various Docker containers (like sabnzbd failing to create directories), and when I check the User shares, it says there aren't any. A reboot fixes the issue. I've run SMART tests and the drives report fine. My current hypothesis is that when mover is invoked, the controller/SATA interface is crashing and taking part of Unraid down with it. I haven't tested this yet, but I was hoping someone could help with the diagnostics, because I'm stuck. I can also provide system logs if the diagnostics are insufficient. tartarus-diagnostics-20220210-1816.zip
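One quick check the next time the shares vanish, before rebooting: see whether the user-share filesystem is still mounted. On Unraid, /mnt/user is a FUSE (shfs) mount that presents all user shares, so if that one mount drops, every share disappears at once while the individual disks stay mounted. A sketch:

```shell
# /mnt/user is the shfs mount behind all user shares on Unraid; check
# whether it is still present in the kernel's mount table.
if grep -q " /mnt/user " /proc/mounts; then
  echo "shfs is mounted - shares should be visible"
else
  echo "shfs is gone - check the syslog around the time mover started"
fi
```

If shfs is gone while the disks are fine, that points at the shares layer (or something that crashed it, like the controller errors during mover) rather than the drives themselves.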