amura11 Posted March 24, 2022

I'm in the process of setting up an unRAID server and I'm having an issue with two of my drives. The drives are set up in two different pools, one device per pool, with the filesystem set to xfs. One is a small NVMe drive and the other is a SATA drive; both are M.2 form factor. These aren't new drives: they were previously in this server running Proxmox, and as far as I know they were working before I installed unRAID (it's possible I somehow killed both of them during the migration, but that seems pretty unlikely).

I've tried formatting both drives through the UI multiple times, and erasing them, and nothing seems to help. I also tried running xfs_repair on both, without success. The NVMe drive gave the following output:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
...Sorry, could not find valid secondary superblock
Exiting now.

The SATA drive gave this output (I cancelled the secondary-superblock search early rather than wait for it to finish):

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
....

I don't even know where to begin with this issue. I'm completely new to unRAID, but I've used Linux for a while, so I'm comfortable getting down and dirty with the shell if I need to. Any help getting this fixed would be really appreciated; I really want to get unRAID going on my server.
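For reference, a repair attempt like the one described above would presumably look something like the following; the device names are placeholders, and xfs_repair is pointed at the partition rather than the whole disk:

xfs_repair /dev/sdX1         # SATA M.2 pool device
xfs_repair /dev/nvmeXn1p1    # NVMe pool device
xfs_repair -n /dev/sdX1      # -n = inspect only, make no changes

The "bad magic number" error in phase 1 means xfs_repair found no XFS superblock at all on the partition, which usually points at a partitioning or formatting problem rather than filesystem corruption.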
trurl Posted March 24, 2022

Attach diagnostics to your NEXT post in this thread.
amura11 Posted March 24, 2022 (Author)

Sorry, should have read the readme first...

unRAID Version: 6.9.2 2021-04-07
Hardware:
Mobo: ASUSTeK COMPUTER INC. PRIME X470-PRO
CPU: AMD Ryzen 5 2600
RAM: 48 GB
HBA: LSI 9211-8i (flashed to IT mode)

A note on the diagnostics: one of the HDDs has a bunch of errors. I'm not using it in my array; I just haven't had a chance to figure out which physical drive it is so I can disconnect it.

clow-diagnostics-20220324-1102.zip
JorgeB Posted March 24, 2022

Try wiping the devices first with:

blkdiscard -f /dev/sdX
blkdiscard -f /dev/nvmeXn1

Not sure if -f is needed with v6.9; if it doesn't work, just remove it.
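A minimal sketch of that wipe sequence, assuming the array (and both pools) are stopped first and the real device names are substituted for sdX/nvmeXn1:

# Discard every block on the device, destroying any leftover partition
# table and filesystem signatures. -f forces the operation; older
# blkdiscard builds don't have -f, in which case drop the flag.
blkdiscard -f /dev/sdX
blkdiscard -f /dev/nvmeXn1
# Then start the array and let unRAID partition and format the pools.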
amura11 Posted March 24, 2022 (Author)

I tried running blkdiscard on both drives and it didn't seem to help, though I wasn't sure what order to do things in. I tried running the commands and then restarting the array. I also tried running the commands while the array was stopped and then starting it, and the only difference that made is that I now get:

Unmountable: Unsupported partition layout

I tried formatting after that, and that puts me back in my original problem state.
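When unRAID reports an unsupported partition layout, it can help to look at what is actually left on the disk before formatting again. A quick check, assuming /dev/sdX is one of the affected devices:

fdisk -l /dev/sdX    # list whatever partition table survived the discard
blkid /dev/sdX       # no output means no filesystem signature remains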
amura11 Posted March 25, 2022 (Author)

I managed to get both drives working by doing a preclear on both of them. Is a preclear always required?
JorgeB Posted March 25, 2022

6 hours ago, amura11 said:
    Is a preclear always required?

No. Some devices come with weird partition layouts, but blkdiscard should fix that the same way a preclear does, without actually adding a write cycle.
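To illustrate the difference being described: a preclear-style wipe zeroes the whole device with a full write pass, while blkdiscard simply tells the device to discard all of its blocks; both leave no partition table or signatures behind. A rough, destructive sketch with sdX as a placeholder (this is an illustration, not the actual preclear plugin, which also does verification reads):

dd if=/dev/zero of=/dev/sdX bs=1M status=progress   # full write pass, like a preclear zeroing step
blkdiscard /dev/sdX                                 # discard all blocks, no host writes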