scuppasteve Posted July 28, 2022

Well, without a super long story: I overheated and fried my Unraid USB stick. I am in the process of putting it back together and I got "Unmountable: No File System" on drive 14. I have attached the diagnostics. I was recovering a number of servers and wasn't paying attention after the first boot; I mounted the array and it started to write parity, so both parity drives are not an option. Any ideas?

movieserver-diagnostics-20220727-1735.zip
trurl Posted July 28, 2022

Did you have a flash backup, or did you have to start over with assigning disks and everything?

If you did have to reassign disks, are you absolutely sure you didn't assign a data disk to a parity slot? Or assign a parity disk to a data slot? If the disk you assigned as disk14 was actually a parity drive, that would completely explain why it is unmountable, since parity has no filesystem to mount. It certainly would have allowed you to make that mistake, since disk14 is as large as either parity, and it had no way to know what your assignments were supposed to be if you started from scratch.

Can't really tell anything about whether any disks are mountable since you didn't start the array.
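For reference, a quick way to confirm from a console whether a given disk holds a filesystem or was a parity drive. This is a hedged sketch; sdX is a placeholder for the actual device letter shown on the Main page:

# a former parity partition has no filesystem signature, so blkid
# reports only a PARTUUID and no TYPE= field for it
blkid /dev/sdX1

# alternative check of the superblock area; prints the filesystem type
# for a data disk, or just "data" for parity
file -s /dev/sdX1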
scuppasteve Posted July 28, 2022

The drives are all physically labeled, and I am almost 100% positive. Is there nothing wrong in the logs?
trurl Posted July 28, 2022

Syslog in diagnostics is the current syslog, which only goes back to the last reboot. So there are only a few minutes of syslog in those diagnostics, and I can't tell anything about whether any disks mount since the array was never started.
trurl Posted July 28, 2022

Might be a good idea to unassign both parity drives before starting the array again. How long did you let the parity rebuild run?
scuppasteve Posted July 28, 2022

Honestly, maybe 10 minutes as far as the rebuild goes. So you need diagnostics from the drive failing to mount, if I am understanding correctly?
trurl Posted July 28, 2022

10 hours ago, scuppasteve said:
maybe 10 minutes

Probably too long to attempt a filesystem repair if you were indeed building parity onto a data disk. But let's see if there might have been a filesystem on disk14 that can be repaired. Unassign both parity drives so they don't take any more writes, then start the array and post new diagnostics.
scuppasteve Posted July 28, 2022

So I restored the USB config from the original; I forgot I had a backup made by CA Backup/Restore. I got everything configured like it was, and drive 14 was in the correct location. Attached are the diagnostics for mounting without parity.

movieraidserver-diagnostics-20220728-0648.zip
trurl Posted July 28, 2022

Unrelated: your appdata and system shares have files on the array. You want appdata, domains, and system all on the fast pool (cache) and configured to stay there, so docker/VM performance won't be impacted by the slower array, and so array disks can spin down since these files are always open.

2 hours ago, scuppasteve said:
drive 14 was in the correct location

Check filesystem.
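For anyone following along, the webUI check on an array disk boils down to something like the following. A sketch only; on this 2022-era release, array disks appear as /dev/mdX, and -n keeps the check read-only:

# read-only (no-modify) XFS check of disk14 through the md layer;
# -n = no modify, -v = verbose
xfs_repair -nv /dev/md14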
scuppasteve Posted July 29, 2022

This is the output of the drive check:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
..........found candidate secondary superblock...
unable to verify superblock, continuing...
unable to verify superblock, continuing...
Exiting now.
trurl Posted July 29, 2022

How exactly did you run this check? It's easy to get the command wrong if you try it from the command line. If you do it from the webUI it will use the correct command.
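The classic command-line mistake, for anyone searching later, is pointing xfs_repair at the wrong device node. A hedged sketch, with sdX standing in for disk14's device letter:

# Wrong: the raw device starts at the partition table, not the filesystem,
# so xfs_repair finds no superblock where it expects one
xfs_repair -n /dev/sdX

# Right for an array disk: use the md device (or at least the partition, sdX1)
xfs_repair -n /dev/md14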
scuppasteve Posted July 29, 2022

I used the webUI and added -nv to the options, just like the guide you linked said to. It took 15+ hours.
JorgeB Posted July 29, 2022

That suggests there's no valid filesystem on that disk. Are you sure it wasn't a parity drive?
scuppasteve Posted July 29, 2022

I mean, look, I am 100% positive: I found my saved drive-listing Excel spreadsheet and the USB backup, and loaded the config to verify it. But I am willing to concede at this point. This server is a 24-bay Supermicro; I have all the disks labeled 1-20, with labels on both parity drives, and I inserted them one at a time to assign them so I knew they were assigned properly. I am gathering that the drive's data is gone. Is it worth removing it from the array and just trying to mount it?
JorgeB Posted July 29, 2022

6 minutes ago, scuppasteve said:
I am gathering that the drive's data is gone.

The drive is fine, but apparently there's no xfs filesystem on it. Please post the output of:

blkid
scuppasteve Posted July 29, 2022

OK, well, clearly I must have screwed this up a while ago, because it seems like you are correct: drive 14 is somehow a parity drive, and one of the ones I had marked as parity, which is currently unmounted, shows an option to mount it. Here is the output:

/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/sdy1: UUID="50c1634f-f391-4d2a-aaf4-2251917d2c8a" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="c374f05d-5f9b-471b-b079-8b25ad4390a6"
/dev/sdf1: UUID="352a4961-6abf-4a5f-ba6d-8d1fb9710787" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="c9189aca-589c-4a42-843f-6366c2ebb95c"
/dev/sdo1: UUID="aae7c3a9-b5cc-4d46-94ff-57174de342b7" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="9f0f935c-c03e-49d9-9221-359f691c3f75"
/dev/sdw1: UUID="9ea41af2-0b58-462f-8e94-0422eaffa0ea" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="d8a5c310-6930-4416-8936-d16a1bbb05b1"
/dev/sdd1: UUID="e3a9bb28-700c-43c4-8143-dafee8525ce0" UUID_SUB="e6b516bd-e771-4c95-9563-61b7b65c19a3" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sdm1: UUID="b33c2e14-a1a6-4f76-b06e-3babdd097499" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1c6570b2-d859-4191-8500-e5f71732ff70"
/dev/sdu1: UUID="ed4879de-3eb2-4bca-9515-2c424689d83f" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="21b9b2fe-87be-436e-a6eb-0769e080c268"
/dev/sdb1: UUID="198bc950-c7dc-4441-a8aa-12e8904fa4b9" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="cecc2a0c-1919-4582-adb5-8e69c99412b7"
/dev/sdk1: UUID="6850714a-a4aa-4224-ac10-4cec3e553dda" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="15c76c4f-e10f-4252-ab3a-28057836f194"
/dev/sds1: UUID="26841d4a-efed-40fb-9775-63511f2626c9" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="c8148cb6-c620-4d2d-b48a-e4ee52590cfd"
/dev/sdi1: UUID="5f31630a-a529-4097-81f1-5e8bd5aded9f" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="8cdc3e8c-2d3c-4dfe-b64a-8405c8bb1619"
/dev/sdg1: UUID="18a86988-f069-4f8b-8987-cca92e488b25" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="9c8a62e9-101f-44e1-8fdc-3f94619ec896"
/dev/loop0: TYPE="squashfs"
/dev/sdx1: UUID="99b7f79f-651f-4fbc-a20f-b174d5b8b77b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="35e05b0d-1839-4f52-9f6c-e2cda1b03ea6"
/dev/sde1: UUID="1332ba1a-eff7-422f-8ab8-c00c828dded4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="613be9b7-de5b-48ff-b5bd-99a85831d927"
/dev/sdn1: UUID="434c009d-7428-4944-bb77-c6df1d7c76ee" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a8bed203-806e-40df-a69e-4838bf226a7c"
/dev/sdv1: UUID="abd23d62-f1ae-44a1-8cab-20b26321ffe5" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e65ec809-21df-4be3-a959-40c0075cbec3"
/dev/sdc1: UUID="e3a9bb28-700c-43c4-8143-dafee8525ce0" UUID_SUB="575229f4-d1ec-456c-9299-00e4f114f2ec" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/sdl1: UUID="3a6ff4a2-1d9c-4066-800c-a37e16531d38" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="8f2cfcbe-eaf7-4503-987e-3acf5b700c8f"
/dev/sdr1: UUID="b9f50b6d-3091-40b9-b6f0-1a5ab0ae17d2" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="605a058f-a619-4224-92bd-e13c0b0bf305"
/dev/sdh1: UUID="e8114966-14d3-4aa9-85c7-92bdccf3b222" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="2a5fe423-338b-4b38-8701-a1648b3c3a8e"
/dev/sdp1: UUID="ab05b7c3-499a-43ea-a881-0c5d3ebd4a4b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="2e1d73a2-151d-4bf9-9807-4718a38b9287"
/dev/md8: UUID="99b7f79f-651f-4fbc-a20f-b174d5b8b77b" BLOCK_SIZE="512" TYPE="xfs"
/dev/md6: UUID="3a6ff4a2-1d9c-4066-800c-a37e16531d38" BLOCK_SIZE="512" TYPE="xfs"
/dev/md18: UUID="5f31630a-a529-4097-81f1-5e8bd5aded9f" BLOCK_SIZE="512" TYPE="xfs"
/dev/md4: UUID="50c1634f-f391-4d2a-aaf4-2251917d2c8a" BLOCK_SIZE="512" TYPE="xfs"
/dev/md16: UUID="abd23d62-f1ae-44a1-8cab-20b26321ffe5" BLOCK_SIZE="512" TYPE="xfs"
/dev/md2: UUID="b33c2e14-a1a6-4f76-b06e-3babdd097499" BLOCK_SIZE="512" TYPE="xfs"
/dev/md12: UUID="9ea41af2-0b58-462f-8e94-0422eaffa0ea" BLOCK_SIZE="4096" TYPE="xfs"
/dev/md9: UUID="352a4961-6abf-4a5f-ba6d-8d1fb9710787" BLOCK_SIZE="512" TYPE="xfs"
/dev/md20: UUID="ed4879de-3eb2-4bca-9515-2c424689d83f" BLOCK_SIZE="512" TYPE="xfs"
/dev/md10: UUID="6850714a-a4aa-4224-ac10-4cec3e553dda" BLOCK_SIZE="512" TYPE="xfs"
/dev/md7: UUID="b9f50b6d-3091-40b9-b6f0-1a5ab0ae17d2" BLOCK_SIZE="512" TYPE="xfs"
/dev/md19: UUID="aae7c3a9-b5cc-4d46-94ff-57174de342b7" BLOCK_SIZE="512" TYPE="xfs"
/dev/md5: UUID="18a86988-f069-4f8b-8987-cca92e488b25" BLOCK_SIZE="512" TYPE="xfs"
/dev/md17: UUID="1332ba1a-eff7-422f-8ab8-c00c828dded4" BLOCK_SIZE="512" TYPE="xfs"
/dev/md3: UUID="26841d4a-efed-40fb-9775-63511f2626c9" BLOCK_SIZE="512" TYPE="xfs"
/dev/md15: UUID="ab05b7c3-499a-43ea-a881-0c5d3ebd4a4b" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1: UUID="e8114966-14d3-4aa9-85c7-92bdccf3b222" BLOCK_SIZE="512" TYPE="xfs"
/dev/md13: UUID="198bc950-c7dc-4441-a8aa-12e8904fa4b9" BLOCK_SIZE="4096" TYPE="xfs"
/dev/sdt1: PARTUUID="bbf2aad2-261a-4f64-893b-d1c9bab09b61"
/dev/sdj1: PARTUUID="2d137c9d-43af-4d4d-a3f1-e175356fbdb7"
JorgeB Posted July 29, 2022

I thought you had started syncing parity onto those drives, or did I misunderstand? In any case, see if sdn mounts.
scuppasteve Posted July 29, 2022

I did start syncing parity with them, and no, it won't mount through the UI.
scuppasteve Posted July 29, 2022

Is it possible to use sdj, which I have mounted with the array, as a parity drive and use it to rebuild the data?
JorgeB Posted July 29, 2022

11 minutes ago, scuppasteve said:
and no, it won't mount through the UI

Post new diags after the mount attempt.

9 minutes ago, scuppasteve said:
Is it possible to use sdj, which I have mounted with the array, as a parity drive and use it to rebuild the data?

Possibly, if it really was parity, and if you know whether it was parity or parity2.
scuppasteve Posted July 29, 2022

movieraidserver-diagnostics-20220729-0906.zip
JorgeB Posted July 29, 2022

Try checking the filesystem on sdn using UD; click the checkmark on the left side of the partition.
scuppasteve Posted July 29, 2022

FS: xfs
Executing file system check: /sbin/xfs_repair -n /dev/sdn1 2>&1
Phase 1 - find and verify superblock...
bad primary superblock - bad CRC in superblock !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified. Cannot proceed further in no_modify mode.
Exiting now.
File system corruption detected!
JorgeB Posted July 29, 2022

You need to run the fix option.
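For anyone who lands here later, the fix is the same check with -n removed, run from the UD dialog or from a console, since this disk sits outside the array. A minimal sketch:

# repair in place; sdn1 can be addressed directly because the disk
# is unassigned, so there is no parity to keep in sync
xfs_repair /dev/sdn1

# if it refuses to run because of a dirty log, -L zeroes the log first
# (at the cost of the most recent metadata updates)
xfs_repair -L /dev/sdn1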
scuppasteve Posted July 29, 2022

This fix is clearly going to take a while, and the browser window that is open keeps locking up. Is there any way to check the status of the repair from a terminal window at a later time?
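One hedged workaround for next time, assuming screen is available on your box (it is not part of stock Unraid, but can be installed via the NerdPack plugin or similar): run the repair in a detachable session so a locked-up browser doesn't matter:

screen -S xfsfix         # start a named session
xfs_repair /dev/sdn1     # run the repair inside it
# detach with Ctrl-A then D; reattach later with:
screen -r xfsfix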