Joseph

Members
  • Posts: 411
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

Joseph's Achievements

Enthusiast (6/14)

41 Reputation

  1. ok, I see what's going on now. The "contents emulated" message is only displayed if you hover the mouse over the red X. For some reason, I thought it would be a banner notification on the main page underneath the drive in question (like it was when I had a different issue that said "Structure needs cleaning"). It looks as if I can just replace the drive and have parity rebuild the contents. Single parity disk only... maybe I should shut it off until the replacement drive arrives? PS: marking as solved.
  2. Hi unRaiders, after receiving a few notifications about SMART errors, a disk in my backup array has now been marked disabled. (The irony is, I just ordered a replacement disk from Amazon.) Fortunately, the drive is not on my production box, but it would be nice if the data could be restored because of the huge amount of time it takes to complete a backup. Can anyone tell if the contents are being emulated, or is the data lost forever? (Please see attached diagnostics.) Also, can anyone point me in the right direction for restoring the data on this disk if that is still an option? Thank you!!
  3. Cool app!! FEATURE REQUESTS: (1) User ability to change the font color. (2) User ability to change the order of Disk Tray Layouts (when there's more than one, of course) on the configuration page. Thanks for your time and consideration.
  4. Update: the extended SMART scan ran overnight and no physical issues were found. Marking the issue solved. @trurl & @itimpi: thank you both again for helping me get through a minor crisis.
  5. Gotcha. I don’t see one, so I suppose that’s great news, no? Also, I’m running an extended SMART scan on the disk just in case there’s something more going on (see the smartctl sketch after this list).
  6. ok, I just tried the -L and the drive is back online... but I have no idea what data was lost (if any). In the output window of xfs_repair in unRAID it said "- moving disconnected inodes to lost+found ..." but I don't see that folder on that disk. Any idea where I might be able to find it? (See the lost+found sketch after this list.) I'd like to see if there's anything that will clue me in to what data might be lost. Thanks again for everything.
  7. Good morning Mate, thanks for the info. I'm only hesitant because the Red Hat warning contradicts the unRaid wiki link you provided... Nevertheless, I'll give it a shot in a bit and report back. Thanks!!
  8. [UPDATE] So I started the array normally and the disk in question returned to unmountable. Also, I read this on a Red Hat site and it doesn't bode well: "If the mount failed with the Structure needs cleaning error, the log is corrupted and cannot be replayed. Use the -L option (force log zeroing) to clear the log. This command causes all metadata updates in progress at the time of the crash to be lost, which might cause significant file system damage and data loss. This should be used only as a last resort if the log cannot be replayed." I went back into maintenance mode and the disk remained unmountable... debating next steps. (The -L command itself is sketched after this list.)
  9. I hit submit too hastily and forgot to include it in the original comment. I added it later, but I guess it didn't go through... here it is again so you can see. The screen capture is in maintenance mode, and that's where it remains for now.
  10. If you think it’s done with the XFS repair, then I will take it out of maintenance mode and start it normally to find out.
  11. So I've left xfs_repair untouched for about 6 hours while I was out. The command output box under Check Filesystem Status hasn't changed; the last detail still reads: "Phase 7 - verify link counts... No modify flag set, skipping filesystem flush and exiting." Also, there are still reads happening across all drives, with the disk in question 2777 reads ahead of most others in the array... AND I forgot to mention that when the array was started, the unmountable message wasn't there. (See attached.) Do you think it's ok to run it again with the -L option now, or should I wait for something else in the command output box?
  12. Whew! I have to step out but if all goes well, I’ll try your next steps when I get back later today. Cheers!!
  13. Understood, makes sense. Good morning, guys, thanks for all the feedback. I'm in uncharted waters here, so I appreciate your input thus far and I thought I'd give you an update before I proceed. I ran the "check" part of the test a few minutes ago (see the check-command sketch after this list) and it didn't take long to give me these results:

      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
      sb_fdblocks 200013438, counted 200994233
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 7
              - agno = 3
              - agno = 6
              - agno = 5
              - agno = 2
              - agno = 1
              - agno = 4
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify link counts...
      No modify flag set, skipping filesystem flush and exiting.

      Hitting refresh on the main page indicates there are still some operations going on, but I was curious if someone could kindly look at the initial findings and see if there's anything I should be concerned about thus far. Thanks again, guys, and the unRaid community!!
  14. It would be cool if unRaid actually kicked a drive offline should the file system hiccup (before the condition is written to parity) so parity could be used as a recovery option in that scenario.
  15. ok, I will do that as soon as I have ample time... it might not happen until Monday. Also, the link provided suggests leaving the disk in question available to the array and starting in maintenance mode to do the check/repair. Any idea why moving the disk out of the array to run the check/repair and then copying the recovered data back into the array isn't mentioned?
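
For reference (item 13 above), here is a minimal sketch of the read-only XFS check that produces output like the one quoted there, run from the console with the array started in maintenance mode. The device path /dev/md1 is an assumption (it would correspond to disk1 on the unRAID releases I'm familiar with); substitute the device for the affected disk.

    # Read-only check: -n reports problems but modifies nothing on the disk.
    # /dev/md1 is an assumed example for "disk1"; adjust to the disk in question.
    xfs_repair -n /dev/md1

As far as I know, the Check Filesystem Status box in the GUI runs the same tool with whatever options you enter, so the output should look the same either way.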
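
The -L repair discussed in items 8 and 11 is the destructive variant. A hedged sketch, again assuming /dev/md1 as the device and the array in maintenance mode:

    # Force log zeroing: discards any metadata updates still sitting in the log.
    # Per the Red Hat guidance quoted above, this is a last resort, to be run
    # only after the -n check and only if the log cannot be replayed.
    # /dev/md1 is an assumed example device.
    xfs_repair -L /dev/md1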
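
For the lost+found question in item 6, one way to look for the folder once the array is started normally. The /mnt/diskN paths are the standard per-disk mounts, and disk3 is a hypothetical example:

    # Check the root of the repaired disk for lost+found (disk3 is hypothetical).
    ls -la /mnt/disk3/lost+found
    # Or scan the top level of every array disk for it.
    find /mnt/disk*/ -maxdepth 1 -name 'lost+found'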
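
And for the extended SMART scan mentioned in items 4 and 5, a sketch using smartctl; /dev/sdX is a placeholder for the drive's actual device name:

    # Start an extended (long) SMART self-test; it runs in the drive's background.
    smartctl -t long /dev/sdX
    # Review the self-test log and SMART attributes once it has finished.
    smartctl -a /dev/sdX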