Darren Greenacre Posted June 26, 2023 (edited)

TL;DR: USB boot drive failed; after replacing it and starting the array, Disk 1 and Disk 2 are now "unmountable".

Hi, I'm new to home servers, so I'm hoping to find some helpful souls. My USB boot drive failed a number of months ago, and only now have I tried to replace it and get the server going again. After setting up the array and starting it, both Disk 1 and Disk 2 are registered as "unmountable". I have seen forum posts suggesting xfs_repair, but to be honest I can't recall which filesystem I used originally, or whether that would solve the problem.

I looked through the logs and saw this error for disks 1 and 2:

Jun 26 14:55:17 Tower root: mount: /mnt/disk1: wrong fs type, bad option, bad superblock on /dev/md1p1, missing codepage or helper program, or other error.
Jun 26 14:55:17 Tower root: dmesg(1) may have more information after failed mount system call.
Jun 26 14:55:17 Tower root: mount: /mnt/disk2: wrong fs type, bad option, bad superblock on /dev/md2p1, missing codepage or helper program, or other error.
Jun 26 14:55:17 Tower root: dmesg(1) may have more information after failed mount system call.

I spent quite a lot of time setting the server up and have some family pictures stored on there, so I would like to avoid starting from scratch if at all possible.

Thank you in advance.
DG.

Attachment: syslog.txt
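(Aside: as the error message itself notes, dmesg usually carries more detail on why a mount failed. A generic way to pull the relevant kernel messages right after the failure, not taken from this thread, would be:

dmesg | tail -n 30        # last kernel messages, including the failed mount attempt
dmesg | grep -i md1p1     # or filter for the device in question

In a case like this the kernel would typically report that no recognisable superblock was found on the device.)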
JorgeB Posted June 27, 2023

No filesystem is being detected on those disks. Please post the output of:

blkid
Darren Greenacre Posted June 27, 2023 (edited)

Hi, as requested:

/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/nvme0n1p1: UUID="f9779a33-d576-4a14-a4c9-de2b2b677399" UUID_SUB="2ea12362-24fb-4fd9-bd59-cd2a4e02cea7" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/loop0: TYPE="squashfs"
/dev/sdd1: PARTUUID="e39f54c4-733f-4163-84df-1030f55482a5"
/dev/sdb1: PARTUUID="de53661b-234f-49bf-9759-fbdf5665b747"
/dev/loop2: UUID="58a77730-681b-4c5e-ac83-af10618939a5" UUID_SUB="92ab0b4b-1065-47e6-ac55-f5da35f036dd" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sde1: PARTUUID="5fa14882-d5cc-4be3-8d64-173f18bc901e"
/dev/sdc1: PARTUUID="2df06cac-42eb-4715-a22a-b88cf0a3d790"
JorgeB Posted June 28, 2023

There's no filesystem reported. With the array started in maintenance mode, post the output of:

xfs_repair -vn /dev/md1p1
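(Aside, for readers unfamiliar with the flags: -v is verbose output and -n is no-modify mode, so this invocation only reports problems and writes nothing to the disk. A sketch of the usual sequence, standard xfs_repair usage with the device name from this thread:

xfs_repair -vn /dev/md1p1    # dry run: report problems only, change nothing
# an actual repair, run only once the filesystem is confirmed to be XFS, drops -n:
# xfs_repair -v /dev/md1p1

)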
Darren Greenacre Posted June 28, 2023 (edited)

When running the command in maintenance mode, this appears:

root@Tower:~# xfs_repair -vn /dev/md1p1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
..............................................................................

After leaving it running for an hour, the secondary superblock had not been located and it was still searching. I ran the command again, and it had the same result.
JorgeB Posted June 28, 2023

Looks like the disks were wiped. I assume you were using xfs?
Darren Greenacre Posted June 28, 2023

I am not sure whether I was using xfs or not; is there a way to find out?
Darren Greenacre Posted June 28, 2023 (edited)

After some reading here: https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/#btrfs-scrub I ran the suggested command, with the array no longer in maintenance mode, with the following result:

root@Tower:~# btrfs scrub start -rdB /dev/md1p1
ERROR: '/dev/md1p1' is not a mounted btrfs device
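(Aside: that error is expected here. btrfs scrub only operates on a mounted btrfs filesystem, and this disk will not mount at all. A scrub would normally be pointed at the mount point rather than the raw device; illustrative only, using the mount point from the original error:

btrfs scrub start -rdB /mnt/disk1    # -r read-only, -d per-device stats, -B stay in foreground

For a device that cannot be mounted, the offline tool is btrfs check, which is what the next reply suggests.)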
JorgeB Posted June 29, 2023

12 hours ago, Darren Greenacre said: "is there a way to find out?"

Usually blkid would show the filesystem in use, but I've seen a few cases where it didn't and a filesystem was still there. To see if a btrfs filesystem exists, you can run this in maintenance mode:

btrfs check /dev/md1p1
Darren Greenacre Posted July 3, 2023

Sorry, I had a long weekend away. Results:

root@Tower:~# btrfs check /dev/md1p1
Opening filesystem to check...
No valid Btrfs found on /dev/md1p1
ERROR: cannot open file system
root@Tower:~#
JorgeB Posted July 4, 2023 (marked as solution)

No valid xfs or btrfs filesystem was found; it looks like the disk was wiped.
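(Aside: as a final read-only sanity check, not something that was run in this thread, wipefs with no options only lists whatever filesystem or RAID signatures it can find on a device, and an empty table would be consistent with the conclusion that the on-disk superblocks are gone:

wipefs /dev/md1p1    # lists detected signatures; erases nothing unless -a/--all is given

)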
Darren Greenacre Posted July 6, 2023

OK, thank you for trying to help.