strike Posted May 22

The first and biggest mistake I made when I started building my server many years ago was to buy a RAID card instead of an HBA. I guess I'm paying the price now. I encountered the same issue with the same card: https://forums.unraid.net/topic/127767-71605-unmountable-unsupported-partition-layout-of-drives-hooked-to-card-and-settings-change-to-hba/

I was doing a hardware upgrade, had nothing but trouble, and at one point I of course went into the settings and changed the card to HBA mode. Now all of my XFS drives on that controller show up as "Unmountable: Unsupported partition layout" or no file system.

I tried unassigning one drive and starting the array, but the emulated disk didn't mount. It said contents emulated but "Unmountable: Unsupported partition layout". Now I'm trying to check the file system and have started in maintenance mode with disk 1 unassigned. Do I need to assign it to be able to run xfs_repair? I thought I was supposed to check the filesystem on the emulated disk, but I get this when I run xfs_repair -nv /dev/md1 from the CLI:

/dev/md1: No such file or directory
fatal error -- couldn't initialize XFS library

If I stop the array and assign it, I get a warning about a data rebuild even with maintenance mode checked. I was afraid to do that, but logic says nothing would happen, since maintenance mode doesn't mount any disks and so no data rebuild would start. Any advice? I have a backup of all my important data, so it's just my shows and movies that would be lost if I can't recover, and I can get those back.

Edit: I see in my syslog now that I got a call trace, maybe that's the reason I couldn't run xfs_repair. Will reboot.

Edit: No, still the same error. Will have to continue this later today. Hopefully I'll get some help.

tower-diagnostics-20240522-0645.zip
JorgeB Posted May 22 (Solution)

2 hours ago, strike said:
/dev/md1: No such file or directory
fatal error -- couldn't initialize XFS library

The correct command is:

xfs_repair -v /dev/md1p1

or use the check filesystem option in the GUI.
strike Posted May 22

Thanks, will have to check it later. But do I assign the disk before going into maintenance mode?
JorgeB Posted May 22

You run that on the emulated disk1, without a disk assigned.
strike Posted May 23

Ok, so here is the log from xfs_repair on disk 1. Any advice where to go from here?

xfs_repair-disk1.txt
JorgeB Posted May 23

Start the array normally and the emulated disk1 should now mount. Check whether the contents look mostly OK or there are a lot of files in lost+found.
strike Posted May 23

The emulated disk mounted fine now, and it seems to have all the files. So I guess I just rebuild and do the same for the rest of the drives?

One more question: I ordered a proper HBA so I don't have this issue again. If/when I have everything back up and running, do you think I will have the same problem if I disconnect all the drives from the RAID card and move them to the LSI card?
JorgeB Posted May 23

16 minutes ago, strike said:
So I guess just rebuild and do the same for the rest of the drives?

If it looks good, yes.

16 minutes ago, strike said:
If/when I have everything back up and running, do you think I will have the same problem if I disconnect all the drives from the RAID card and move them to the LSI card?

Possibly.
strike Posted May 23

Just now, JorgeB said:
Possibly

Hmm, OK. I will need to check if more files need to be backed up before I do that, then. Maybe I'll just try one disk first on my other Unraid licence and see if I can mount it with Unassigned Devices.

I forgot about one thing: my btrfs cache pool is also unmountable, as it was on the same RAID card. Just run scrub on the pool? There aren't many files I really need there, maybe my appdata, but I have a backup from this weekend for that, so I guess I could just recreate the pool right away. I'll save that for last.

Thank you so much for all your help btw, much appreciated!
JorgeB Posted May 23

2 hours ago, strike said:
Just run scrub on the pool?

Scrub won't help if the pool doesn't mount. Post the output of:

btrfs fi show
strike Posted May 23

14 minutes ago, JorgeB said:
Scrub won't help if the pool doesn't mount, post the output of btrfs fi show

It just shows my other pool, which is not connected to the RAID card but to my MB:

root@Tower:~# btrfs fi show
Label: none  uuid: 958e914e-929c-4e2e-af21-a3b4af824726
        Total devices 2 FS bytes used 358.16GiB
        devid    1 size 931.51GiB used 521.01GiB path /dev/sde1
        devid    2 size 931.51GiB used 521.01GiB path /dev/sdq1
JorgeB Posted May 23

Post also the output of:

fdisk -l /dev/sdx

for the other pool devices, the ones that don't mount.
strike Posted May 23

1 minute ago, JorgeB said:
Post also the output of: fdisk -l /dev/sdx for the other pool devices, the ones that don't mount.

root@Tower:/mnt# fdisk -l /dev/sdx
Disk /dev/sdx: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

root@Tower:/mnt# fdisk -l /dev/sdu
Disk /dev/sdu: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
JorgeB Posted May 23

They are both missing the partition. Just to confirm: before, when it worked, were they using that controller or a different one?
strike Posted May 23

Yeah, they were both on the same controller as the rest of the drives that don't mount.
JorgeB Posted May 23

But it's not the same controller you have now, correct?
strike Posted May 23

Yes, it's still on the same controller. I haven't got the new LSI controller yet. But I can move them to the MB controller if they need to be on a different one.
strike Posted May 23

@JorgeB I was going to ask you if you have PayPal, then I saw your signature. Just sent you some beer money. You've been here all day helping out. You're a real hero, saving all of us Unraid users a lot of time, frustration, and maybe some tears.
JorgeB Posted May 23

Thanks for the beer money!

Regarding the pool devices: we can try to recover the partitions, but it will only work if the controller is presenting the same sectors and/or the superblock was not wiped, and I believe both can happen with those controllers when you change mode. It won't hurt to try, though. Type:

sfdisk /dev/sdx

then write 2048, hit Enter, and post the output of that.
strike Posted May 23

Do I answer yes or no here? Also, I realized I had stopped the array and ran the command with the array stopped. Do I need to start it first?

>>> 2048
Created a new DOS disklabel with disk identifier 0x0170dbb8.
Created a new partition 1 of type 'Linux' and of size 1.8 TiB.
Partition #1 contains a btrfs signature.
Do you want to remove the signature? [Y]es/[N]o:
JorgeB Posted May 23

Type N and Enter to keep the signature, then type write and Enter to write the new partition layout, then post again the output of btrfs fi show.
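The signature prompt appears because the btrfs superblock survived at its old offset inside the recreated partition, which is why answering N (keep) is essential here. As an aside (my illustration, not from the thread), the same signature probing can be seen with wipefs from util-linux on a scratch image; the filename and the choice of ext4 are arbitrary for the demo.

```shell
# Show how tools detect filesystem signatures at known superblock offsets.
truncate -s 64M demo-sig.img
mkfs.ext4 -F -q demo-sig.img   # -F: allow a regular file as the target
wipefs demo-sig.img            # lists detected signatures and their offsets (read-only without -a)
```

Without -a, wipefs only reports what it finds, so this is safe to run; it is the same probing that let sfdisk warn about the surviving btrfs signature.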
strike Posted May 23

root@Tower:~# btrfs fi show
warning, device 4 is missing
Label: none  uuid: 958e914e-929c-4e2e-af21-a3b4af824726
        Total devices 2 FS bytes used 358.16GiB
        devid    1 size 931.51GiB used 521.01GiB path /dev/sde1
        devid    2 size 931.51GiB used 521.01GiB path /dev/sdq1

Label: none  uuid: 3323890d-8be4-4998-8991-c7ce43f3aefd
        Total devices 2 FS bytes used 871.74GiB
        devid    3 size 1.82TiB used 1.20TiB path /dev/sdx1
        *** Some devices missing

Do I need to run the same command on the other drive in the pool?
JorgeB Posted May 23

2 minutes ago, strike said:
Do I need to run the same command on the other drive in the pool?

Yep, but only continue if the btrfs signature is also found.
strike Posted May 23

It was found, so now they both show up:

root@Tower:~# btrfs fi show
Label: none  uuid: 958e914e-929c-4e2e-af21-a3b4af824726
        Total devices 2 FS bytes used 358.16GiB
        devid    1 size 931.51GiB used 521.01GiB path /dev/sde1
        devid    2 size 931.51GiB used 521.01GiB path /dev/sdq1

Label: none  uuid: 3323890d-8be4-4998-8991-c7ce43f3aefd
        Total devices 2 FS bytes used 871.74GiB
        devid    3 size 1.82TiB used 1.20TiB path /dev/sdx1
        devid    4 size 1.82TiB used 1.20TiB path /dev/sdu1

Is the next step to start the array and see if they mount, or?