Unmountable: wrong or no file system


jagame


A few days ago I got a new drive for my UNRAID server. After installing the new drive and powering on, I got an error on my parity drive. After a couple of days rebuilding my parity on my new drive, I went ahead and formatted and added the other drive back to the array. All looked good and things appeared to be back in order last night. I wake up this morning to find that Disk2 is now reporting "Unmountable: wrong or no file system". And I'm missing my SATA SSD cache drive (I have two cache drives, one is NVME and the other is a SATA SSD giving me a total of 6 disks to fill my license). I do see the cache drive listed under Historical Devices. What's more odd is that my new parity drive is also listed under Historical Devices and also in my array as the Parity drive. I'm really not sure what's going on now. I've pulled diagnostics and attaching them. If someone has a little free time, I would greatly appreciate some assistance with this. I screwed up the last time I tried to troubleshoot this on my own and lost all my data. 

unraid-diagnostics-20221208-1426.zip


Here is the result of the test using the -nv options. I don't see where it states success or failure as the directions indicate:
 

Phase 1 - find and verify superblock...
        - block cache size set to 1460704 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 96845 tail block 96716
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used.  Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
block (0,38063048-38063048) multiply claimed by cnt space tree, state - 2
block (0,67188081-67188081) multiply claimed by cnt space tree, state - 2
agf_freeblks 1889169, counted 1889167 in ag 0
sb_fdblocks 474767990, counted 474767988
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
data fork in ino 57523006 claims free block 7190378
data fork in ino 304504010 claims free block 38063046
data fork in ino 537545451 claims free block 67188079
        - agno = 1
data fork in ino 2957327356 claims free block 369667381
data fork in ino 3185871218 claims free block 398209908
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
data fork in ino 17682402172 claims free block 2210304304
        - agno = 9
data fork in ino 19330604584 claims free block 2416325587
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
free space (0,1107568-1107570) only seen by one free space btree
free space (0,7190380-7190381) only seen by one free space btree
free space (0,38063051-38063054) only seen by one free space btree
free space (0,67188022-67188024) only seen by one free space btree
free space (1,101231928-101231930) only seen by one free space btree
free space (1,129769772-129769773) only seen by one free space btree
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 6
        - agno = 7
        - agno = 2
        - agno = 5
        - agno = 8
        - agno = 9
        - agno = 3
        - agno = 4
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Thu Dec  8 15:16:57 2022

Phase           Start           End             Duration
Phase 1:        12/08 15:15:54  12/08 15:15:55  1 second
Phase 2:        12/08 15:15:55  12/08 15:16:00  5 seconds
Phase 3:        12/08 15:16:00  12/08 15:16:41  41 seconds
Phase 4:        12/08 15:16:41  12/08 15:16:42  1 second
Phase 5:        Skipped
Phase 6:        12/08 15:16:42  12/08 15:16:57  15 seconds
Phase 7:        12/08 15:16:57  12/08 15:16:57

Total run time: 1 minute, 3 seconds
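For reference, a check run like the one above is typically invoked from the Unraid console with the array started in Maintenance mode. The device name below is an assumption: on Unraid, array Disk2 is usually exposed as /dev/md2, but verify against your own system before running anything.

```shell
# Assumed device for Disk2 on Unraid; confirm yours first.
# -n = no-modify (read-only check), -v = verbose output.
# Run with the array started in Maintenance mode.
xfs_repair -nv /dev/md2
```

Because -n prevents any writes, this pass only reports problems; a second run without -n is needed to actually repair them.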

 

That will probably be more readable in an image :(

 

unraid-a.PNG

unraid-b.PNG


Thank you, that took care of my array disk issue. I reseated the cables on the cache disk to no avail. I bought the cables a couple of years ago because I was having issues and I found what I thought were good cables and they seemed fine. But I guess not. Can anyone recommend good SATA cables with locking ends? 

 

Update - I forgot to mention that I did have to run it using -L. It errored, indicating there was log data, and wanted me to mount the drive to replay it, but the drive is unmountable, so -L seemed like my only option. I read a couple of other forum posts related to that as well.
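For anyone landing here with the same error: the -L run would look roughly like the sketch below. The device name /dev/md2 is an assumption for Disk2; substitute your own. Note that -L is a last resort, since it discards the metadata log and can lose the most recent metadata changes.

```shell
# Assumed device for Disk2; run from Maintenance mode.
# -L zeroes the dirty metadata log when the filesystem cannot be
# mounted to replay it. This can lose recent metadata changes.
xfs_repair -Lv /dev/md2
```

After the repair completes, stop Maintenance mode and start the array normally; any orphaned files end up in lost+found on that disk.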

Edited by jagame
