
Teekno
Members · Posts: 22 · Reputation: 1
Achievements: Noob (1/14)

  1. I have a new, larger parity drive that I want to upgrade to. I see there's a procedure for this here: https://docs.unraid.net/legacy/FAQ/parity-swap-procedure/ It would use the new drive as parity and then repurpose the old parity drive as a data drive. The document says that during the copy operation the array will not be available. But near the top it says: "This procedure is strictly for replacing data drives in an Unraid array. If all you want to do is replace your Parity drive with a larger one, then you don't need the Parity Swap procedure. Just stop the array, unassign the parity drive and then remove the old parity drive and add the new one, and start the array. The process of building parity will immediately begin." That doesn't sound like there's any extended period of unavailability. Am I reading this right? If I value array availability more than total completion time, would I be safe doing the simpler procedure to replace the parity drive, and then, once parity is rebuilt, reintroducing the former parity drive as a data drive? Or is there unavailability either way? I am running 6.12.9.
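If the simple replacement does keep the array online during the parity build (as the quoted passage implies), progress can also be watched from the console. A minimal sketch, assuming a stock Unraid box; mdcmd is Unraid's array control utility, but the mdResync* field names are an assumption and may vary between releases:

```shell
# Hypothetical sketch: check parity-sync progress from the Unraid console.
# The mdResync* fields (position, size, speed) are assumptions; inspect
# the full `mdcmd status` output on your release to confirm the names.
mdcmd status | grep -i 'resync'
```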
  2. It's all green now. Thanks for all the help!
  3. Currently estimating another ten hours or so on the rebuild. I'll chime in after that. Again, I appreciate all the advice and education!
  4. OK, yeah, I see what you're talking about now. Well, I can't find anything else writing to it, but I ran another diag run and the read operations on that disk now look consistent with the rest of the array. tower-diagnostics-20240131-1018.zip
  5. Odd. I will take a look. Can you tell me where in the logs you can see evidence of that? I looked and didn't see anything, though that's my own inexperience.
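As a side note on where one might look: a hedged sketch of grepping the live syslog for disk trouble on a stock Unraid box. The match patterns here are assumptions for illustration; real kernel error strings vary by controller and failure mode:

```shell
# Show the 20 most recent disk/controller-looking messages.
# /var/log/syslog is where Unraid keeps the live system log.
grep -iE 'ata[0-9]+|i/o error|medium error|reset' /var/log/syslog | tail -n 20
```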
  6. Sure. Here you go. I will note that I haven't stopped this incredibly slow rebuild. tower-diagnostics-20240130-1324.zip
  7. OK, one disk in particular is showing very high utilization on iowait, around 94%. I am thinking of stopping the rebuild, shutting down and maybe checking the cable, or replacing it? Does that sound like something that might work or is there another approach I should try?
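A per-disk utilization check like the one described can be sketched with iostat from the sysstat package (device names and thresholds will differ per system):

```shell
# Two extended-stat samples, 5 seconds apart; ignore the first report,
# which averages since boot. A drive pinned near 100 in %util while
# moving only a few MB/s points at a slow or failing disk, cable, or link.
iostat -x 5 2
```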
  8. Yeah, I thought that looked odd. It's running at about 28 MB/sec with periods down to 6 MB/sec.
  9. OK, thanks. Currently rebuild time is around five days but I'll see where it shakes out.
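As a sanity check, the five-day figure is roughly what the observed throughput predicts. A minimal back-of-the-envelope sketch, assuming a hypothetical 12 TB drive at the ~28 MB/s mentioned above:

```shell
# Rough rebuild-time estimate: assumed 12 TB drive at the observed 28 MB/s.
awk 'BEGIN {
  bytes = 12 * 10^12          # assumed drive size (hypothetical)
  rate  = 28 * 10^6           # observed rebuild speed, ~28 MB/s
  printf "%.1f days\n", bytes / rate / 86400
}'
# → 5.0 days
```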
  10. OK. And I really appreciate the help! tower-diagnostics-20240130-0826.zip
  11. Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      Log inconsistent (didn't find previous header)
      failed to find log head
      zero_log: cannot find log head/tail (xlog_find_tail=5)
              - scan filesystem freespace and inode maps...
      clearing needsrepair flag and regenerating metadata
      sb_fdblocks 220269811, counted 219118455
              - found root inode chunk
      Phase 3 - for each AG...
              - scan and clear agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 2
              - agno = 3
              - agno = 7
              - agno = 5
              - agno = 4
              - agno = 6
              - agno = 1
      Phase 5 - rebuild AG headers and trees...
              - reset superblock...
      Phase 6 - check inode connectivity...
              - resetting contents of realtime bitmap and summary inodes
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify and correct link counts...
      Maximum metadata LSN (24:107349) is ahead of log (1:2).
      Format log to cycle 27.
      done
  12. Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      Log inconsistent (didn't find previous header)
      failed to find log head
      zero_log: cannot find log head/tail (xlog_find_tail=5)
      ERROR: The log head and/or tail cannot be discovered. Attempt to mount the filesystem to replay the log or use the -L option to destroy the log and attempt a repair.
  13. Ok. I didn’t do anything stupid. Here’s the output of the file system check.
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      Log inconsistent (didn't find previous header)
      failed to find log head
      zero_log: cannot find log head/tail (xlog_find_tail=5)
              - scan filesystem freespace and inode maps...
      sb_fdblocks 220269811, counted 219118455
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 2
              - agno = 4
              - agno = 6
              - agno = 1
              - agno = 5
              - agno = 7
              - agno = 3
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify link counts...
      Maximum metadata LSN (24:107341) is ahead of log (0:0).
      Would format log to cycle 27.
      No modify flag set, skipping filesystem flush and exiting.
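The usual sequence when xfs_repair can't find the log head can be sketched as follows. The device path is an assumption for illustration (on Unraid, disk 5's array device would be something like /dev/md5, and repairs must be run with the array started in maintenance mode):

```shell
# 1. Dry run: report what would be fixed without changing anything.
xfs_repair -n /dev/md5

# 2. Preferred: mount the filesystem once so the journal replays,
#    then unmount and re-run the check.
# mount /dev/md5 /mnt/disk5 && umount /mnt/disk5

# 3. Last resort: -L zeroes the log. Changes still in the journal are
#    discarded, and orphaned files can end up in lost+found.
xfs_repair -L /dev/md5
```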
  14. OK. Replaced (and upgraded) my SATA card. At this point I see Disk 5 emulated, like I did before. Under array options I now also see the emulated disk listed as unmountable, with this option: "Format will create a file system in all Unmountable disks." If I choose this, will it reformat the drive and then rebuild it as part of the array? I'd appreciate any assistance. I've attached the latest logs in case there's something else I should be looking at. tower-diagnostics-20240129-1920.zip