charibdis

Members
  • Posts: 12
  • Joined
  • Last visited
  • Gender: Undisclosed

charibdis's Achievements

Noob (1/14)

Reputation: 1

  1. Thanks. Will be keeping an eye on it. I expect some of those errors are from the age of the drive as well. Its SMART power-on hours info shows: 12y, 3m, 26d, 15h.
  2. Thanks, I've completed the manual backup of the data, and I've removed the pool and am reformatting the drive. While I was doing this I did finally get a pop-up for CRC errors on the drive. I'll plan to replace it in the near future; need to look at options. For now I'll see if I can get basic services back up and running on it. There are a few things I may opt to temporarily shift to the Array if absolutely required, even though I know that's not optimal. Maybe I can finally get a faster cache drive... The power issue was isolated to that particular circuit; the power was cut and immediately put back on. At a 300W load the UPS runtime estimate is sitting at 7 minutes right now. Both Unraid boxes are set to shut down if switched to battery for more than 90 seconds (see the apcupsd timeout sketch after this list). The intent there is to ensure they both shut down cleanly with some reserve power left over.
  3. Thanks for the info. Running the command and starting the array appears to have allowed the disk to be mounted, and I'm able to read the files, though I've noted that it appears to be mounted as read-only. I'm manually backing up what I can so I can at least reference config files (see the read-only backup sketch after this list), as the automated backup I had set up isn't backing up anything related to dockers. I pulled a new Diagnostic now that it's mounted. scylla-unraid-diagnostics-20240116-0800.zip
  4. My cache drive appears unmountable. Noticed it after a transient drop to UPS power and back (no more than a few seconds). Noticed a bunch of docker containers were down, and decided to reboot, which may have aggravated the issue. I tried looking up similar cases, but in most of them it seems either disabling Docker, deleting and restarting fixes it, OR the drive is just completely missing. In my case the drive still shows in the Main list as active. It actually spins down since it isn't mountable. It doesn't display any SMART errors; raw UDMA CRC count of 123. I mean, it's a Green spinner with 12+ years of power-on time. If it's on its last legs I got my money's worth out of it. If only all drives were as reliable... And yes, I know a spinner as a cache drive, especially a green one, is a recipe for slower performance, but for what I use my array for it works out perfectly fine. I did find this: btrfs check shows errors: I suspect I could possibly try a btrfs repair (see the btrfs check sketch after this list), but I don't know enough about it to be sure, considering all the warnings that pop up about using it. My other alternative is likely just a reformat. If memory serves, I think I'll just lose whatever was on my cache in terms of docker configs, among a few other things. I do have a backup of my "Appdata" but for some reason it's over 6 months old. Recommendations on best course of action? scylla-unraid-diagnostics-20240115-1944.zip
  5. LOL, getting my money's worth out of that drive. It's lasted longer than many larger capacity drives I've had. I've deleted the docker image and set it to use the cache drive. I've installed 2 dockers and it's behaving much more as expected. I'll see about moving everything else off the drive in question and whether I can run an extended SMART test on it (see the SMART test sketch after this list). At the very least, if it does go, it won't take much with it. The other cache pool is still 2 HDDs but it's better than nothing for the time being. I do have physical room for a second NVMe in this machine. Should have upgraded my keys when they had the rebate a few months ago 'cause I'm pretty much maxed out on devices. Hindsight... I don't expect this one to be the issue, though; the other server, which I'd expect to have issues since it runs hot, is purring along... Thanks for your help.
  6. I'm using an HDD because it's what I have/had, and I'm moving the docker to a different cache pool to eliminate that disk. I just don't have SSDs in it (hoping to pick something decent up in the Boxing Day sales). There also seem to be timeouts reading the interface/CSS from the server based on the logs. Not sure why that is; likely just random and unrelated.
  7. I've been attempting to troubleshoot odd behavior related to Docker. I'm not discounting the possibility that the issue may lie elsewhere. I had a few orphaned images, and the drive on which the dockers were installed is getting up there in age, so I simply assumed it to be an issue with the drive. As I wasn't worried about losing any data for any of the dockers, I removed the Docker image and cleared all saved data so that I could have a clean start... However, I keep having issues with installing any docker. It will take 15+ minutes sitting on a blank browser window with no status/details. I seem to be able to open the web UI on a secondary PC during this operation and navigate, but the PC on which the install operation was triggered will not display any other UI page. When it finally finishes, I'll get a partially loaded UI page (the formatting/styles don't seem to get applied; see screenshot). If I click on the Done button it will reload the last page I was on and everything seems to work normally. Installs usually take a few minutes at most. I've reviewed my network performance to see if anything was routing incorrectly (this server runs 1 of 2 Pi-hole dockers, but neither of my 2 Unraid boxes is set to use them). I attempted going through the logs, but nothing jumped out at me. charybdys-unrai-diagnostics-20221219-1601.zip
  8. Manually downloaded the ZIP from the site. When attempting to manually copy it over to the USB flash drive, the files took a very long time to copy; a retry was required on bzfirmware and bzimage, however the copy appeared to have finished successfully (see the file checksum sketch after this list). Upon attempting to boot, it threw a kernel panic error. I attempted restoring my backup to the key; it had issues and took nearly 10 minutes to complete the last few percent. Attempting to boot off the USB now returns a "This is not a bootable disk"... I'm going to say my USB is very clearly having issues. I ran the same process on a new USB and it completed within moments, and it does boot without issue. So it looks like I'll be contacting support.
  9. I've attempted to upgrade to 6.11.0 from 6.10.3 and keep encountering a "bad sha256" error. Performing the upgrade check reveals no issues with the upgrade, which seems to leave the issue being with either the downloaded package or the USB flash drive itself (see the flash drive sanity-check sketch after this list). I haven't spotted anything telltale in the logs either. My only other issue is that I've already had to replace the USB flash drive on this server this year, and I'm aware I will need to contact support to replace the key. I just want to exhaust the potential alternative fixes before replacing the USB flash drive itself, keeping in mind that may be a foregone conclusion at this point. charybdys-unrai-diagnostics-20220929-1111.zip
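
The 90-second on-battery rule in item 2, as a sketch of the underlying apcupsd.conf directive (Unraid's UPS settings page sits on top of apcupsd; the file path and exact layout here are assumptions, not a dump of my config):

    # /etc/apcupsd/apcupsd.conf (excerpt) -- hypothetical, mirroring the 90-second rule above
    TIMEOUT 90    # initiate shutdown after 90 seconds on battery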
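
The manual, read-only rescue copy in item 3 could look roughly like this, assuming the pool mounts at /mnt/cache and the copy lands on an array share (both paths are assumptions, not what the thread prescribed):

    # Confirm the pool really is mounted read-only (look for "ro" in OPTIONS).
    findmnt /mnt/cache

    # Copy appdata off the read-only mount to a share on the array.
    rsync -avh --progress /mnt/cache/appdata/ /mnt/user/backup/appdata-rescue/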
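
For item 4, a safest-first ordering of the btrfs tools, with /dev/sdX1 standing in for the real cache partition (a sketch only, not the fix the thread settled on):

    # Read-only check: reports problems without writing anything to the device.
    btrfs check --readonly /dev/sdX1

    # --repair rewrites metadata and carries the warnings mentioned above;
    # generally only attempted once the data has been copied off.
    # btrfs check --repair /dev/sdX1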
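
The extended SMART test from item 5, sketched against a hypothetical device name (the drive has to stay spun up while the test runs):

    smartctl -a /dev/sdX | grep -i -e power_on_hours -e udma_crc   # current attribute values
    smartctl -t long /dev/sdX                                      # start the extended self-test
    smartctl -l selftest /dev/sdX                                  # review the result once it completes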
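
For the flaky copy in item 8, comparing checksums is one way to confirm the files on the flash drive match the extracted download; the two directories below are assumptions:

    # Hash the files from the extracted ZIP (add the other bz* files as needed).
    cd ~/unraid-zip && sha256sum bzimage bzfirmware > /tmp/expected.sha256

    # Re-check the same filenames on the flash drive; any mismatch means the copy is corrupt.
    cd /mnt/usb && sha256sum -c /tmp/expected.sha256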
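
And for the "bad sha256" in item 9, a rough way to separate "bad download" from "bad flash drive", assuming the release filename below (hypothetical) and Unraid's usual /boot mount for the flash:

    # Hash the downloaded package; re-download (or hash it on another machine) and compare
    # to rule out a corrupted download.
    sha256sum unRAIDServer-6.11.0-x86_64.zip

    # Write/read-back sanity check of the flash drive itself.
    dd if=/dev/urandom of=/boot/flashtest.bin bs=1M count=64
    sha256sum /boot/flashtest.bin
    sync && echo 3 > /proc/sys/vm/drop_caches   # force a re-read from the device
    sha256sum /boot/flashtest.bin               # hashes should match; a mismatch points at the drive
    rm /boot/flashtest.bin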