(SOLVED) "Unmountable: No file system" before and after replace a disk



unRaid Version: 6.7.2

 

 

Two days ago my unRaid told me "Unmountable: No file system" for disk3.
Because this disk was a really old 2TB one (8 years), I thought it was broken.
So I replaced it with a new 3TB disk, as described on this page: https://wiki.unraid.net/Replacing_a_Data_Drive.

 

After 12 hours of rebuilding the disk, I got the same error for the brand new disk3:    "Unmountable: No file system"

I tried a restart, but still the same error.

 

xfs_repair -n produced this output:

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 3
        - agno = 2
        - agno = 0
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
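
(If it helps: as far as I know, the equivalent console command, with the array started in maintenance mode and assuming disk3 maps to /dev/md3, would be:

xfs_repair -n /dev/md3

The -n flag is the "no modify" check, so it doesn't change anything on the disk.)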

 

 

Can somebody help me?

 

 

 

tower-syslog-20190710-1530.zip

tower-diagnostics-20190710-1546.zip

Edited by schwabelbauch

I tried xfs_repair via the web interface without any parameters and rebooted after it finished.
Still the same error.

 

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 2
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done
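
(Again for reference: assuming disk3 maps to /dev/md3, the console equivalent with the array in maintenance mode would be roughly:

xfs_repair /dev/md3

As far as I understand, running it against /dev/mdX instead of /dev/sdX1 is what keeps parity in sync.)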

 


There's a disk with a duplicate UUID:

Jul 10 17:29:56 Tower kernel: XFS (md3): Filesystem has duplicate UUID 83108b31-affc-4898-8734-4b45d1684720 - can't mount

This can't be a coincidence; it only happens if a clone of that disk is already mounted. If needed, though, you can change the UUID.


Okay... strange...

 

The old disk is lying here on my right side. It was the only 2TB disk inside my tower.
I removed that disk and put the new 3TB disk in the exact same place (same SATA port).

I executed the sudo blkid command and got the following result:

/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" TYPE="vfat"
/dev/sdb1: UUID="83108b31-affc-4898-8734-4b45d1684720" TYPE="xfs" PARTUUID="85bdf95a-9ac2-42b4-ae2c-cb3bc8940e91"
/dev/sdc1: UUID="f8a788eb-e4b8-4f8b-8a7d-5bd1da90c7db" TYPE="xfs" PARTUUID="0b3f26b0-6319-4ab5-8c5c-0df05922ded7"
/dev/sdd1: UUID="83108b31-affc-4898-8734-4b45d1684720" TYPE="xfs" PARTUUID="5d099f93-d623-4023-b765-08b6219520bb"
/dev/sde1: UUID="81e134b3-14cd-4b14-9aca-4b78d0192fef" UUID_SUB="7e186c91-33bc-4262-a982-701bd45f1d9c" TYPE="btrfs"
/dev/sdg1: UUID="f8a788eb-e4b8-4f8b-8a7d-5bd1da90c7db" TYPE="xfs" PARTUUID="1a106826-f4b9-4ba1-aeb9-fca623d197d6"
/dev/md1: UUID="f8a788eb-e4b8-4f8b-8a7d-5bd1da90c7db" TYPE="xfs"
/dev/md2: UUID="83108b31-affc-4898-8734-4b45d1684720" TYPE="xfs"
/dev/md3: UUID="83108b31-affc-4898-8734-4b45d1684720" TYPE="xfs"
/dev/sdf1: PARTUUID="41578220-ea80-4b75-935d-136577bd933d"

 

If I read the result correctly, my new disk has the exact same UUID as disk2. But disk2 was never faulty and runs without any errors.

Something is really strange.

 

Can I change the UUID of the new disk3?

Will that fix my problem? Or did unRaid restore the wrong disk's data to my new disk3?


Sorry, missed your reply, damn forum software 😠

 

On 7/10/2019 at 5:37 PM, schwabelbauch said:

Something is really strange.

Very strange indeed; I suspect something else is going on for this to happen. But you can change the UUID of either disk2 or disk3: start the array in maintenance mode and type:

 

xfs_admin -U generate /dev/mdX

 

Replace X with the disk number.
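
For example, for disk3 (assuming it maps to /dev/md3):

xfs_admin -U generate /dev/md3

You can check the new UUID afterwards with blkid before starting the array normally.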

Edited by johnnie.black
