zcmack

Members (8 posts)

  1. I understand now that the format is a parity-protected operation, so the data really is gone. Thankfully it's all replaceable. For future reference, if xfs_repair is unable to repair the disk, what is the proper procedure? Remove the drive from the array, replace it with another, and let it rebuild?
  2. I ran it from the GUI. I ended up removing the drive, formatting it, and re-adding it to the array. I rebuilt parity and the drive is back online, though it seems I lost all the data on that device. I don't really understand why, as I thought Unraid could tolerate a single disk failure and that a rebuild from parity would restore the data that was on that drive.
  3. Hello. I had an unclean shutdown on Friday and took the opportunity to install a heatsink fan on my LSI card. After booting back up I am unable to mount disk 1, a 14TB drive I installed within the last few months; I would estimate it has 4TB used at most. It is one of the four drives in my ICY DOCK FatCage, each connected by a forward breakout cable to the aforementioned LSI card. I double-checked that all connections are tight, though I'm not ruling out a failed strand of the breakout cable. Diagnostics attached. I ran xfs_repair and got:

         Phase 1 - find and verify superblock...
         bad primary superblock - bad magic number !!!
         attempting to find secondary superblock...
         .......... [and so on]
         Exiting now.

     I stopped xfs_repair after about two hours when I realized I hadn't added the verbose flag, and re-ran it. I'm not sure whether the second invocation simply resumed the initial run without verbose enabled, but I expected to see more output. Any suggestions? (A re-run is sketched after this list.) As I implied above, the drive is relatively new, so I was wondering whether formatting it and letting it rebuild from parity would be the fastest way back to a working array; xfs_repair took quite some time to tell me very little, and never seemed to find a secondary superblock. Right now I'm sitting in maintenance mode with disk1 unmountable. moya-diagnostics-20221216-1919.zip
  4. Thank you so much for the quick reply, @ChatNoir! I appreciate it!
  5. Device is disabled, contents emulated. I found another post explaining how to rebuild the disk in place, but I wanted an expert opinion on the SMART report before doing so (a smartctl sketch follows this list). The disk went into an errored state; I rebooted and ran the SMART report. I have not unassigned it yet. Thanks in advance! WDC_WD20EARX-00PASB0_WD-WMAZA5573580-20221006-2157.txt
  6. Deleting the Docker image file worked for me in the above scenario. I upgraded from 6.0-ish to 6.3.5 and ran into this issue; after deleting the image file and re-adding my containers, everything is back up and running as usual (a sketch of the procedure follows this list).
  7. Sorry to resurrect this thread, but I've searched this forum many times over and can't find a solution to this same problem. I've tried running getfattr on all of my subdirectories and nothing shows netatalk attributes; however, if I run getfattr on /mnt/disk1 or /mnt/user0 I get the following:

         user.org.netatalk.supports-eas.4x1iF4
         user.org.netatalk.supports-eas.Iu2DcI
         user.org.netatalk.supports-eas.Z3dAwb

     I understand the fix involves copying files to a temp directory and then moving the temp directory back in place of the original directory. How would I perform this with disk1 or user0? (A setfattr alternative is sketched after this list.)
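
For the xfs_repair question in post 3: a minimal sketch of a clean re-run from the console, assuming disk1 maps to /dev/md1 (the device Unraid exposes for disk 1 in maintenance mode; newer releases may use /dev/md1p1, so verify the path first):

    # Dry run first: -n makes no changes, -v prints verbose detail
    xfs_repair -n -v /dev/md1

    # If the dry run looks sane, run the actual repair
    xfs_repair -v /dev/md1

    # Only if it refuses to run because of a dirty log: -L zeroes the log,
    # at the cost of any metadata changes that were in flight
    xfs_repair -L -v /dev/md1

Running against the /dev/mdX device rather than the raw /dev/sdX disk keeps parity in sync while the repair writes.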
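For the SMART question in post 5: a minimal sketch of pulling the report from the console with smartctl; /dev/sdX is a placeholder for the disk's actual device name.

    # Full report: identity, attributes, error log, and self-test history
    smartctl -a /dev/sdX

    # Optionally run an extended self-test, then check the result once it finishes
    smartctl -t long /dev/sdX
    smartctl -l selftest /dev/sdX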
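For the docker.img reset in post 6: a minimal sketch, assuming the default Unraid image location /mnt/user/system/docker/docker.img (check Settings > Docker for the path on your system):

    # Disable the Docker service first (Settings > Docker > Enable Docker: No), then:
    rm /mnt/user/system/docker/docker.img

    # Re-enable the service; a fresh image is created automatically.
    # Containers can then be re-added from their saved templates
    # (Apps > Previous Apps if Community Applications is installed).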
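For the netatalk attributes in post 7: since the stray attributes sit on the top-level mount points themselves, one alternative to the copy-and-move fix is removing the extended attributes in place with setfattr. A sketch using the attribute names quoted in the post; list them with getfattr first to confirm, and note that /mnt/user0 is a FUSE view of the array, so the attributes most likely live on the disk paths:

    # Dump every extended attribute on the directory
    getfattr -d -m - /mnt/disk1

    # Remove each netatalk attribute by name
    setfattr -x user.org.netatalk.supports-eas.4x1iF4 /mnt/disk1
    setfattr -x user.org.netatalk.supports-eas.Iu2DcI /mnt/disk1
    setfattr -x user.org.netatalk.supports-eas.Z3dAwb /mnt/disk1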