victoryvj Posted March 4, 2022

Hello, I've built my first Unraid server, and I keep running into issues after parity checks and restarting the server.

System info:
Model: Custom
M/B: Gigabyte Technology Co., Ltd. X570 I AORUS PRO WIFI Version x.x - s/n: Default string
BIOS: American Megatrends Inc. Version F32. Dated: 01/18/2021
CPU: AMD Ryzen 7 5700G with Radeon Graphics @ 3800 MHz
HVM: Enabled
IOMMU: Enabled
Cache: 512 KiB, 4 MB, 16 MB
Memory: 32 GiB DDR4 (max. installable capacity 128 GiB)
Network: bond0: fault-tolerance (active-backup), mtu 1500; eth0: 1000 Mbps, full duplex, mtu 1500
Kernel: Linux 5.10.28-Unraid x86_64

I've reinstalled version 6.9.2 three times now. Each time I've set up my cache drives, my array, mapped remote SMB shares, etc. This last time I was able to download and restore from an Unraid.net backup. The issue I'm having now is that two of my WDC 4TB drives that were added to the array are not mounting, and Unraid is telling me to format the disks in the webUI, which I don't want to do because I've already lost hours of my life setting up Docker containers and plugins.

On the monitor connected to the server I see two separate errors:
1. XFS (md2): Corruption detected. Unmount and run xfs_repair
2. XFS (md3): Internal error !uuid_equal(&mp->m_sb (see screenshot)

How can I fix this issue without starting from scratch?
trurl Posted March 4, 2022

3 hours ago, victoryvj said:
> reinstalled version 6.9.2 three times

No reason to expect that to fix anything; you should have asked for help sooner. Attach diagnostics to your NEXT post in this thread.
victoryvj Posted March 4, 2022 (edited)

I removed the Mover Tuning plugin and disabled the C6 state through a terminal command (/usr/local/sbin/zenstates --c6-disable). The issue still persists. Attached are my diagnostics: dune-diagnostics-20220304-1103.zip

To clarify: I didn't reinstall version 6.9.2 three times to fix this issue. I meant that I've gone through setting up Unraid three different times; each time I get it going and then run into issues. I've only detailed this last one.

Edited March 4, 2022 by victoryvj for a complete response
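As an aside on the C6 step above: on Unraid, a zenstates call run from the terminal does not survive a reboot, so it is commonly persisted via the startup script. The sketch below is an assumption, not something from this thread; the /boot/config/go path is the standard Unraid location, and GO_FILE is overridable so the sketch can be tried against a scratch file.

```shell
# Hedged sketch (assumption, not the poster's setup): persist the C6
# workaround across reboots by appending it to Unraid's startup script.
# GO_FILE defaults to the standard Unraid location but can be overridden
# for safe testing.
GO_FILE="${GO_FILE:-/boot/config/go}"

add_c6_disable() {
    # Append the zenstates call only if it is not already present
    # (grep -qs: quiet, and don't error if the file doesn't exist yet).
    grep -qs 'zenstates --c6-disable' "$GO_FILE" || \
        echo '/usr/local/sbin/zenstates --c6-disable' >> "$GO_FILE"
}
```

Calling `add_c6_disable` is idempotent, so re-running it never adds a duplicate line.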
victoryvj Posted March 4, 2022

I ran the check on both drives and it referred me to run xfs_repair. I ran the terminal command and got the following results for both drives:

SB summary counter sanity check failed
Metadata corruption detected at 0x47518b, xfs_sb block 0x0/0x200
libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x200
xfs_repair: Releasing dirty buffer to free list!
xfs_repair: Refusing to write a corrupt buffer to the data device!
xfs_repair: Lost a write to the data device!
fatal error -- File system metadata writeout failed, err=117

I'm going to just format the drives and add them back to the array.
trurl Posted March 5, 2022

16 hours ago, victoryvj said:
> ran the terminal command

What terminal command exactly? It's better to run the check through the webUI so the command is run correctly.
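For readers following along: trurl's caution matters because on Unraid, repairs must target the /dev/mdX array devices (with the array started in Maintenance Mode), not the raw /dev/sdX partitions, or parity is invalidated. Below is a minimal hedged sketch of a manual invocation, assuming the affected disks are md2 and md3 as in the kernel errors from the first post; the DRY_RUN guard is an addition of this sketch, not part of xfs_repair.

```shell
# Hedged sketch: manual xfs_repair on Unraid array devices. The device
# names md2/md3 come from the kernel errors quoted earlier; the DRY_RUN
# wrapper is an assumption added here so the script only prints the
# commands unless DRY_RUN is unset.
DRY_RUN=1

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

for dev in /dev/md2 /dev/md3; do
    run xfs_repair -n "$dev"   # -n: read-only check, reports problems only
done

# If the read-only check finds damage, repair for real:
#   run xfs_repair /dev/md2
# If xfs_repair refuses because of a dirty log and the disk cannot be
# mounted to replay it, -L zeroes the log (last resort; recent metadata
# changes may be lost):
#   run xfs_repair -L /dev/md2
```

The webUI's "Check Filesystem Status" button drives the same tool with the correct device for you, which is why it is the safer route.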