chaosclarity's Achievements
  1. @JorgeB Should I be regularly scrubbing the cache drive pool? I just wonder how I got into this situation. But it is a good exercise in knowing how to restore and keeping good backups.
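For reference, a scrub on a btrfs cache pool can be started from the Unraid terminal. A minimal sketch, assuming the pool is mounted at /mnt/cache (adjust the path if your pool has a different name):

```shell
# Start a scrub: reads all data and metadata, verifies checksums,
# and in a raid1 pool repairs bad blocks from the other mirror
btrfs scrub start /mnt/cache

# Check progress and any checksum errors found so far
btrfs scrub status /mnt/cache
```

Scrubbing is read-mostly and safe to run on a live pool; many people schedule it monthly via a cron-style plugin.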
  2. Ok - I've re-added the original M.2 drive to the pool and formatted the drives, and everything mounts now. I've gone ahead and started restoring a backup of all appdata content from back in June. From there, do I just need to re-add the Docker apps from the Previous Apps section?
  3. Hi Jorge, Thanks for your help - I've uploaded the latest diagnostics
  4. Well, new issue... I replaced the drive with a Samsung 980 PRO and had intermittent issues getting it to show up in Unraid. Initially it did, and I added it to the cache pool and started the array, but again I received these BTRFS errors and the cache pool went into a read-only state. I can't really tell if it's a hardware issue or a software issue in Unraid that's causing this. My real concern is that if I restore, the issue will come right back. I swapped back to the original M.2 drive, but if I start the array, my Dockers/VMs are all gone. Is there some way to recover what was on there? Sadly, my Veeam backup license expired 30 days ago, so all backups are about 30 days old. Better than nothing, but am I really SOL here? I was backing up the appdata folder - if I want to restore from my backup, do I just dump all that into the appdata folder? And what's the best way to go about restoring the Dockers?
  5. Hi, Over the past couple of days I discovered an issue that left my cache drive pool (2x 1TB NVMe SSDs) in a read-only state. After rebooting, it said there was no file system found. I was able to remedy this and get it going by running this command: btrfs rescue zero-log /dev/nvme1n1p1 After running this command and rebooting, it works for maybe 10 minutes and then goes back into a faulty state. I then removed the drive nvme1n1, thinking it was faulty - ran the rescue command above, rebooted, and all was well - albeit with only 1 cache drive left in my raid1 pool, so I wanted to replace it with a new drive. I inserted a brand new Samsung 980 PRO into that slot, booted up, added the Samsung drive to the cache pool, and now I'm back to having a faulty cache pool again. So... I guess the drive was never bad? But I'm not sure what the issue is here. I've attached diagnostic logs to the post as well. Appreciate any help and input! Thanks
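The recovery sequence described above, sketched as shell commands. The device name is the one from this post (it may differ on another system), and note that zero-log discards the btrfs journal, so it's a last resort to try after copying off anything still readable:

```shell
# Unmountable btrfs pool: first inspect without modifying anything
btrfs check --readonly /dev/nvme1n1p1

# If the log (journal) tree is what's corrupt, discard it.
# Destructive to the last few seconds of writes - last resort,
# as used in this post
btrfs rescue zero-log /dev/nvme1n1p1

# Then restart the array so Unraid remounts the pool
```

If the pool goes read-only again shortly after this, the corruption is usually being re-introduced (failing drive, cabling/slot, or unstable controller) rather than left over from the original fault.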
  6. Preface: Server 1 had a PSU die, which took out the CPU. I replaced the PSU and CPU (switched from a Ryzen 3700X -> 5600G)... ok, it boots up now and Unraid is back in business. Server 2 was working perfectly fine during this whole time, no issues. Now, tonight I notice that both servers' Active Directory Join Status is "Not Joined". I have 2 domain controllers, 1 on each Unraid server. Ok, simple enough, I type in the admin password and click "Join"... they both sit there waiting, waiting, waiting for several minutes... then I noticed both Unraid servers no longer responding to the web GUI. I pressed the power button on both to gracefully shut them down and avoid a parity check. They both shut down, I brought them back up, and now neither one will let me into the VM tab without freezing the web GUI. The Dashboard screen won't show any Docker or VM info, albeit the Docker tab works fine. I noticed Server 1 will eventually show VM and Docker info on the Dashboard, and some VMs are indeed running (from autostart), but when I clicked on one of them to start it, it took FOREVER... eventually it did start. On Server 2, I haven't gotten any VM or Docker info to show on the Dashboard. I've attached my diagnostic logs for both servers if that helps.
  7. I have one of my DNS servers down at the moment, which just so happens to be the 1st DNS server configured in Unraid's Network Settings. I have the 2nd and 3rd entries populated with working DNS servers; however, I have noticed some Docker containers taking forever to resolve things and/or failing, then working later on. Is this normal behavior? Should I just stop the Unraid server and swap out (or just remove) the 1st entry so Unraid doesn't attempt to use it? I also fail to see the point in having multiple entries if it cannot quickly fail over to the 2nd or 3rd.
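The delay described above matches how the stock glibc resolver behaves: it tries nameservers in the order they appear in /etc/resolv.conf and only falls back to the next one after a timeout (around 5 seconds per attempt by default), on every fresh lookup. A quick way to see this from the Unraid terminal, assuming a dead server is listed first:

```shell
# Show which nameservers the resolver was handed, in order -
# Unraid writes its Network Settings DNS entries here
cat /etc/resolv.conf

# Time a lookup; with a dead first nameserver each uncached query
# waits out the timeout before trying the 2nd entry
time nslookup example.com
```

So removing (or demoting) the dead entry is the practical fix; the fallback works, it just isn't fast.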
  8. I was able to get it working again. I was originally using a Windows 11 VM, but now tried with Windows 10. I honestly don't think it mattered which version of Windows (10 vs 11), because what I noticed is that my XML configuration was reverting when I would try adding passthrough devices (USB controller), thus breaking the GPU passthrough configuration. If you have a blinking cursor, this is what you want on the console output screen. It should "freeze" or stop scrolling output right at the PCI VGA device and then show a blinking cursor. Once you start the VM, it takes over the screen output, but you will never see the BIOS/boot of Windows; the Windows login screen will just suddenly appear.
  9. Reviving this from the dead. When you guys say it's "freezing" the output, in my case I still get a blinking cursor, so it doesn't appear to be frozen. I'm trying to pass through an iGPU and got it working briefly, but once I rebooted the Unraid box with HDMI plugged in, I can no longer get it working for some odd reason. All I get is this console output, and when I start the VM it still shows the console output from Unraid.
  10. Not sure what's going on with it. I can't even get the Unraid host to release the iGPU. All I get is the console output. I've added video=efifb:off to my syslinux config but it doesn't seem to release. I get the boot console and a blinking cursor when it's done. When I start my VM, it does start, but it never "takes over" the HDMI output and I still see the Unraid console.
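For anyone hitting the same wall: the video=efifb:off flag belongs on the append line of the default boot entry in /boot/syslinux/syslinux.cfg (editable via Main > Flash > Syslinux Configuration in the GUI). A sketch of what the stock entry looks like with the flag added - everything except the added flag is the standard Unraid default:

```
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
```

This only stops the kernel's EFI framebuffer from claiming the iGPU; the device must still be bound to vfio-pci (or stubbed) for the VM to take over the output, and a full power cycle is sometimes needed before the host lets go.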
  11. Ok, another issue. I restarted my Unraid server with the HDMI plugged in, and naturally I see the unraid console while booting up. When I try to start the VM which has the passthrough enabled, the VM says it's Started but never boots and I still see the Unraid console on the screen plugged in via HDMI.
  12. Mine worked up to the point of installing the AMD drivers and radeonresetbugfix. I was able to install the AMD driver just fine, then the driver had a 2-minute countdown to restart my VM, so I used Task Manager and killed the installer to prevent that. Then I installed the radeonresetbugfix service and waited until it went into the Started state. Then I rebooted the VM, and now it won't come back up with a display any more - it just shows a green garbled mess. Not sure if it's related to the radeonresetbugfix service or what. Edit: I logged back in via RDP, the AMD driver finished its install, and it all started working as expected.
  13. Doh, I powered off the server completely, unplugged. Upon checking to add the dropped drive back to the cache pool, it is now gone for good.
  14. Well, yesterday it dropped again. Attached diagnostics. But I'm almost thinking the drive is faulty.