bithoarder

Members
  • Posts: 31
  • Joined
  • Last visited

bithoarder's Achievements

Noob (1/14)

Reputation: 1

Community Answers: 2

  1. It looks like everything is there... thanks again!
  2. Thank you again for your help... I'm not really sure where to go from here, and I want to tread lightly - especially since I don't fully understand what the repairs actually did. My guess is that I'm ready to try to bring Drive 4 back online? Another 14TB drive arrived today, but I'm thinking the first one may be okay, after the recabling(?)
  3. Started. Disk 4 is currently emulated - I really don't know at the moment if I'm missing any data. ood-diagnostics-20240826-1231.zip
  4. I do... I haven't checked them yet. I've been kind of afraid to know, and wanted to get everything else protected first. Thank you for mentioning it. I'm not really sure I've lost anything at the moment. At least some missing files reappeared after the restart.
  5. I'd like to be sure that I'm following this. The disk hasn't been reporting as "unmountable or unformatted" for a while. Disk 13 appears to be active in the array, and Disk 4 is emulated. The array is still in maintenance mode. What should I do next?
  6. Attached... Disk 13 looks pretty scary. Discs_4_13.txt
  7. It took me a little while to figure out how to do this... I almost did it from a prompt, but when I saw the UI element in the Device Settings, I understood. Both disks stop with the same error (a command sketch for the log-replay step follows this list):

     Disk 4:
     Phase 1 - find and verify superblock...
     writing modified primary superblock
     Phase 2 - using internal log
             - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to
     be replayed. Mount the filesystem to replay the log, and unmount it before
     re-running xfs_repair. If you are unable to mount the filesystem, then use
     the -L option to destroy the log and attempt a repair. Note that destroying
     the log may cause corruption -- please attempt a mount of the filesystem
     before doing this.

     Disk 13:
     Phase 1 - find and verify superblock...
     writing modified primary superblock
     Phase 2 - using internal log
             - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to
     be replayed. Mount the filesystem to replay the log, and unmount it before
     re-running xfs_repair. If you are unable to mount the filesystem, then use
     the -L option to destroy the log and attempt a repair. Note that destroying
     the log may cause corruption -- please attempt a mount of the filesystem
     before doing this.
  8. Thanks again! I see that it needs to be started in "maintenance mode" to do that. Do you have any idea how long I can expect that to take with a 14TB drive?
  9. The cable was delayed. New diagnostics are attached. I'm pretty sure I did this wrong and rebuilt the parity instead of the drives. Thanks for looking! EDIT: I might actually be okay, data-wise. It's showing that Drive 4 is emulated, but it's no longer showing the two drives as unformatted. Unfortunately, I need to head out for a couple of hours, but I'm more optimistic now. ood-diagnostics-20240825-1619.zip
  10. Will do. I opened the machine up and realized that I don't have a spare. Hopefully I'll have a replacement "SFF-8643 to (4) SATA" cable tomorrow. Thank you!
  11. Of course... thanks! I've got a really bad feeling that I did this the wrong way, but I swear the disk activity showed writing to the new drive(s). ood-diagnostics-20240822-1418.zip
  12. UnRAID removed two drives from my array because of performance problems. I replaced both with "refurbs". I restarted the array, and one of the drives dropped out again during the rebuild. I let the rebuild finish, expecting to restore one of the drives. Now I'm seeing both of those drives listed as "Unmountable", even though all but one of the drives appears to be back in operation. Did I just rebuild my parity and lose the data on my drives? One of the drives was dumping so many "power on reset" errors that my log doesn't go back to the start of the array rebuild.
  13. Thanks for the input... I didn't get a notification, so I'm just seeing it now. My idea here was to have a second DC as a "backup". One could run on a VM, and I'd turn on the second machine only when there was a problem. I'm still kicking ideas around... a low-power computer running Windows might be the way to go.
  14. Thanks... this is really what I was kind of expecting. I had hoped that maybe UnRAID had a credential-caching mechanism to deal with this. I think it's strange that there seems to be so little overlap between UnRAID and AD - it's incredibly useful in a multi-child, multi-computer family. It seems like there may only be two decent options: 1) Keep a domain controller offline, and boot it whenever I run into a problem. 2) Figure out the OpenLDAP thing. (The documentation I've found is really lacking, and it seems like I could run into the same sort of problem if Docker doesn't start.) If anyone has suggestions (or can point me to a good OpenLDAP resource), I'd be grateful.
  15. I'm on the verge of getting down to a single (UnRAID) server. All that's left to solve or replace is Active Directory. I know it's possible to do with OpenLDAP, but it seems like a rare and scarcely documented use case - I'm nervous about going that route. Instead, I'm considering just building a new domain on a Windows Server 2022 VM and retiring my current 2012 R2 PDC. (I understand the headaches of moving to a new domain, but I'm prepared.) Can I expect trouble if UnRAID is joined to a domain that's only available in a nested VM? I saw some really bad behavior when I had a network issue - I couldn't even log into the console - and I'm afraid I could end up in a similar situation by doing this. Thanks in advance for your ideas and experience! (A quick domain-reachability check sketch follows this list.)
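On the xfs_repair output in item 7: the error means the XFS journal still holds unreplayed metadata, so the usual sequence is to mount the filesystem once (which replays the log), unmount it, and run xfs_repair again, reserving -L for the case where the mount itself fails. A minimal command sketch, assuming Disk 4's parity-protected device is /dev/md4 (the device path and mount point are assumptions; newer Unraid releases expose it as /dev/md4p1) and the array is started in maintenance mode:

    mkdir -p /x/disk4                 # temporary mount point (hypothetical path)
    mount -t xfs /dev/md4 /x/disk4    # mounting replays the XFS log
    umount /x/disk4                   # unmount before repairing
    xfs_repair -v /dev/md4            # re-run the repair with the log replayed

    # Only if the mount itself fails:
    # xfs_repair -L /dev/md4          # zeroes the log; recent metadata changes may be lost

Working against the md device rather than the raw /dev/sdX keeps parity in step with the repair; the same steps would apply to Disk 13's device.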
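On the Active Directory questions in items 13-15: if UnRAID is joined to a domain whose only controller runs in a VM on the same box, domain lookups and logins will fail whenever that VM is down, so a quick reachability check helps separate "the DC is unreachable" from a share or permissions problem. A hedged sketch using standard Samba/winbind tools (assuming winbind is running on the AD-joined server; DOMAIN and someuser are placeholders):

    net ads testjoin                  # verifies the machine account join is still valid
    wbinfo -t                         # checks the domain trust secret against a reachable DC
    wbinfo -u | head                  # lists a few domain users if winbind can resolve them
    getent passwd 'DOMAIN\someuser'   # confirms NSS can resolve a domain account

If those checks fail only while the DC VM is down, that points to the nested-VM dependency rather than the UnRAID configuration itself.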