
Unmountable: Unsupported or no file system


Reznap

Solved by JorgeB

Hi,

 

Recently filled up my 6TB pool and decided to get some new hard drives. Bought 4 new 18TB drives to replace my 3 3TB drives.

 

First thing I did was take out the current parity drive, replace it with a new 18TB drive, and run a parity sync. That finished.

 

Everything looked good. Shut down.

 

Then installed the remaining 3 18TB drives (unplugged the cache drive, as I only have 6 SATA ports).

 

Loaded up UNRAID and 'Disk-Cleared' the 3 new drives. That finished.

 

Everything looked good. Now I want to move the data from the two 3TB drives in the system to two of the new drives.

 

Went to Tools --> New Config, then on Main assigned the 3 18TB drives to slots 1, 2, 3, assigned the 2 3TB drives to slots 4, 5, and left parity unassigned.

 

Started the array --> both of my 3TB drives say 'Unmountable: Unsupported or no file system'.

 

Did some basic googling/searching and ran xfs_repair on them; it showed nothing.

 

Pulled 1 of the 18TB drives out and reinstalled the cache drive; the cache drive mounts fine. Both 3TB drives, with all my data, still say "Unmountable: Unsupported or no file system".

 

What can I do to recover this?

 

Thank you!

skynet-diagnostics-20230130-1639.zip


Nothing you did makes much sense. Why did you rebuild parity, then remove parity? I suppose you intended to add drives, copy data to them, then remove the source drives. Really wish you had asked before doing anything. The correct procedure would be to replace parity with a larger disk then let it rebuild. Then replace one of the data disks with a larger disk and let it rebuild, repeat as necessary.

 

With what you did instead, the new disks (3, 4?) would be unmountable until you format them, but the other data disks that already had data on them (1, 2?) should not be unmountable.

 

What filesystem was on disks 1, 2?

 

Are you sure you haven't left anything out in your description?

1 hour ago, trurl said:

What filesystem was on disks 1, 2?

XFS

1 hour ago, Reznap said:

Yeah, I got lazy and did not want to rebuild parity three times and thought I could use the 'Faster' method in the wiki. my bad....

I don't get the "lazy" part. Whatever you had in mind would have definitely been more trouble and prone to mistakes than the normal method of upsizing disks. And it probably wouldn't have been faster.

1 hour ago, trurl said:

Did you do this from the webUI or the command line? Easy to get the command wrong.

 

Command line, just did xfs_repair -n /dev/sde1 (and sdf1)

 

I think I also did xfs_repair /dev/sde1 (and sdf1)

 

and xfs_repair -l /dev/sde1 (and sdf1)

6 hours ago, JorgeB said:

Post the output of:

blkid

and

xfs_repair -v /dev/sde1

 

 

/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/loop0: TYPE="squashfs"
/dev/sdc1: UUID="db429a5d-4243-4626-854d-a4fd8515757d" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdg1: PARTUUID="fc394ee4-7101-4b4f-b841-76cf1b1861a2"

 

Phase 1 - find and verify superblock...
        - block cache size set to 741352 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1027262 tail block 1027262
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 3
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...

        XFS_REPAIR Summary    Tue Jan 31 08:17:04 2023

Phase           Start           End             Duration
Phase 1:        01/31 08:16:52  01/31 08:16:52
Phase 2:        01/31 08:16:52  01/31 08:16:53  1 second
Phase 3:        01/31 08:16:53  01/31 08:16:58  5 seconds
Phase 4:        01/31 08:16:58  01/31 08:16:58
Phase 5:        01/31 08:16:58  01/31 08:16:58
Phase 6:        01/31 08:16:58  01/31 08:17:03  5 seconds
Phase 7:        01/31 08:17:03  01/31 08:17:03

Total run time: 11 seconds
done
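One detail that stands out in that blkid output: /dev/sdg1 is reported with only a PARTUUID and no TYPE, meaning blkid still sees the partition entry but no longer finds a filesystem signature inside it. A quick illustration of that behavior on a scratch file (the file name is made up; assumes xfsprogs and util-linux are available):

```shell
# blkid only reports a TYPE when it finds a recognizable filesystem signature.
truncate -s 300M /tmp/sig-demo.img
mkfs.xfs -q /tmp/sig-demo.img
blkid /tmp/sig-demo.img                  # shows TYPE="xfs"
wipefs -a /tmp/sig-demo.img              # erase the signature (destructive)
blkid /tmp/sig-demo.img || echo "no filesystem signature found"
```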
