yyc321 Posted January 20 (edited)

Ver. 6.12.6. The GUI was unresponsive, so I rebooted via the CLI; unfortunately I forgot to grab the logs before rebooting. Post-reboot logs attached.

I tried repairing the filesystem on disk 8 via the "Disk 8 Settings" page. Was this a bad move? Both drives show as healthy on the dashboard and the drive attributes page, and a short SMART test ran on both drives.

CLI: xfs_repair -nv /dev/sdn
shows: bad primary superblock - bad magic number !!!

I was tempted to just create a new configuration and see if the system could overwrite the data on disk 8, but with a parity drive disabled too, I'm worried that this might cause more problems than it fixes. Thanks in advance for any assistance.

r2-diagnostics-20240120-1635.zip

Edited January 21 by yyc321
mathomas3 Posted January 21

I wouldn't do anything ATM... you have 2 failed drives, and should anything go wrong you are going to lose data. I would suggest stopping dockers and limiting disk activity. Others will be better placed to advise you, but you can try to rebuild a drive that is still currently connected. Checking the SMART reports is needed; please provide those.
mathomas3 Posted January 21

EXTENDED SMART reports are needed... for the two drives.
JorgeB Posted January 21

Since the diags are from after rebooting we can't see what happened, but post new ones after array start.

11 hours ago, yyc321 said:
CLI: xfs_repair -nv /dev/sdn

This will never work; use the GUI instead, and run it without -n.
yyc321 Posted January 21 Author

r2-diagnostics-20240121-0953.zip

Thanks JorgeB. Extended SMART tests are at 70% and 90% respectively for parity 1 and array drive 8. I'll upload the results when they finish.
yyc321 Posted January 21 Author

This might help... I totally forgot that I have persistent logs turned on. The server started creating significantly more log entries on the 16th, so I only copied those entries to the txt file (75,000 entries in 5 days).

syslog.zip
trurl Posted January 21

Doesn't look like there was any need to check the filesystem on disk8. Was this an attempt to fix the disabled state? Wrong solution.
yyc321 Posted January 21 Author (edited)

Filesystem check has solved problems for me in the past, so I tried it. I realize this is definitely not the right approach to troubleshooting a server, but I was looking for a quick fix. I also saw some posts regarding the "bad primary superblock - bad magic number !!!" error suggesting a filesystem check as the solution. At the time, I didn't know that "xfs_repair -nv /dev/sdn" was the wrong command; I guess I mistakenly pulled it from the Unraid docs.

Edited January 21 by yyc321
trurl Posted January 21

You have to rebuild a disabled disk. Check filesystem is for unmountable disks. Disabled and unmountable are independent conditions:

- A disk can be enabled but unmountable. In this case you would check filesystem.
- A disk can be disabled but the emulated disk is mountable. In this case you would rebuild the disk.
- A disk can be disabled and the emulated disk unmountable. In this case you would check filesystem on the emulated disk and, after it is mountable, rebuild.
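The three cases above boil down to a small decision table. As an illustration only (the function and wording here are hypothetical, not part of Unraid or its API), the logic reads roughly as:

```python
# Illustrative sketch of the disabled-vs-unmountable decision logic.
# The function name and return strings are invented for this example.

def recommended_action(disabled: bool, mountable: bool) -> str:
    """Return the repair step for an array disk's state.

    For a disabled disk, 'mountable' refers to the *emulated* disk,
    since the physical disk is no longer being read directly.
    """
    if not disabled and not mountable:
        return "check filesystem on the disk"
    if disabled and mountable:
        return "rebuild the disk"
    if disabled and not mountable:
        return "check filesystem on the emulated disk, then rebuild"
    return "no action needed"

print(recommended_action(disabled=True, mountable=True))
# -> rebuild the disk
```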
trurl Posted January 21

Just now, trurl said:
A disk can be disabled but the emulated disk is mountable. In this case you would rebuild the disk.

This is your situation. These latest diagnostics show emulated disk8 has plenty of data, which you can see in the webUI at Main - Array Devices. An unmountable filesystem doesn't show anything for Size, Used, or Free; it says Unmountable instead.

Your screenshot, and the diagnostics posted with it in the first post, must have been taken with the array started in Maintenance mode. The screenshot doesn't show anything for Size, etc., and the diagnostics don't know anything about the filesystems on any disks, since Maintenance mode doesn't mount any.
yyc321 Posted January 21 Author (edited)

That's correct, I started the array in Maintenance mode for the image.

If I create a new config to start rebuilding the array disk, will this work considering one of the parity drives is also currently disabled? Does the system know to use parity 2 for the rebuild, or will I need to remove parity 1 until disk 8 has finished rebuilding? Isn't the server going to try to rebuild both the array drive and the parity?

Edited January 21 by yyc321
trurl Posted January 21

17 minutes ago, yyc321 said:
I also saw some posts regarding the "bad primary superblock - bad magic number !!!" suggesting the filesystem check as the solution. At the time, I didn't know that "xfs_repair -nv /dev/sdn" was the wrong command - I guess I mistakenly pulled it from the unraid docs.

You would never have seen that message unless you tried the filesystem check, and that result was because the command was wrong. The docs wouldn't have suggested that for any disk, because it doesn't include a partition number. Also, checking the sd device of an array disk will invalidate parity; you should check the md device instead. Best to just use the webUI, which knows the correct command.
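To make the device distinction concrete: repairs through the md device keep parity in sync, while writes to the raw sd device bypass the md layer and invalidate parity. A small sketch, assuming Unraid 6.12.x naming where array slot N is exposed as /dev/mdNp1 (older releases used /dev/mdN):

```python
def md_device(slot: int) -> str:
    """Parity-maintaining device node for array slot N (assumes 6.12.x naming).

    Repairs should target this node rather than the raw /dev/sdX disk,
    so that every write is reflected in the parity calculation.
    """
    return f"/dev/md{slot}p1"

# For disk 8, a correct check would target something like:
print(md_device(8))  # -> /dev/md8p1
```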
yyc321 Posted January 21 Author

I do have another Unraid server running. Would it be best to copy the emulated contents of drive 8 over to the other server and then start the rebuild on both the array disk and parity? If so, would the procedure be to create a new config, retaining all drive assignments including parity?
trurl Posted January 21

18 minutes ago, yyc321 said:
If I create a new config to start rebuilding the array disk, will this work considering one of the parity drives is also currently disabled? Does the system know to use parity 2 for the rebuild, or will I need to remove parity 1 until disk 8 is finished being rebuilt?

New Config is also the wrong solution. New Config cannot rebuild a data disk; it can only rebuild parity. New Config accepts all disks exactly as they are and (by default) builds parity on any disks assigned to parity slots. That would enable disk8, but it would mean any writes to emulated disk8 would be lost.

Unraid disables a disk when a write to it fails for any reason. Since it looks like both disabled disks are OK, this was probably a connection issue. When a disk becomes disabled, it isn't used again until rebuilt (or until New Config forces it to enabled). All subsequent reads of the disabled disk are instead emulated by reading all other disks and getting its data from the parity calculation. Any writes to the disabled disk, including the initial failed write, are emulated by updating parity.

So that initial failed write, and any subsequent writes to the disabled (emulated) disk, can be recovered by rebuilding. If you don't rebuild, all the emulated writes would be lost. It's possible some lost writes could even be filesystem metadata, which would make the filesystem corrupt and require a filesystem check to repair.

Since you have dual parity, you can rebuild disk8 and parity at the same time, but not with New Config.
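The emulation described above is ordinary parity math: the missing disk's contents are recomputed from parity plus all surviving disks, and an emulated write simply folds the change into parity. A toy sketch with single (XOR) parity, for illustration only — dual parity adds a second, different calculation, which is what makes rebuilding two disks at once possible:

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR across equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Toy array: three data disks plus single parity.
d1, d2, d3 = b"\x10\x20", b"\x0f\x0f", b"\xa0\x0b"
parity = xor_blocks([d1, d2, d3])

# Disk 2 fails: "emulate" it by XOR-ing parity with the surviving disks.
emulated_d2 = xor_blocks([parity, d1, d3])
assert emulated_d2 == d2

# An emulated write updates parity (remove the old contents, fold in the
# new), which is why the write survives until the disk is rebuilt.
new_d2 = b"\xff\x00"
parity = xor_blocks([parity, emulated_d2, new_d2])
assert xor_blocks([parity, d1, d3]) == new_d2
```

This is also why limiting activity on a degraded array matters: every emulated read or write depends on all the other disks being healthy and connected.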
trurl Posted January 21

3 minutes ago, yyc321 said:
Would it be best to copy

You must always have another copy of anything important and irreplaceable. Do you? Parity (even dual) is not a substitute for backups.
trurl Posted January 21

You can rebuild both disk8 and parity at the same time. Assuming no continuing problems such as bad connections, that should succeed. The safest approach would be to use a new disk for disk8 and keep the original disk8 in case of problems rebuilding. It doesn't matter if you rebuild parity onto the same parity disk, since parity contains none of your data.

Another approach would be to rebuild parity by itself and leave disk8 emulated. That would let you see that everything is working well before attempting to rebuild disk8.
yyc321 Posted January 21 Author

The data on that drive is desired but not critical; I don't have the space on my backup server to back up all 90+ TB. Thanks for your help. I think I'll try rebuilding parity first and then fix the issue with the array drive. In this case, would the procedure be to remove the parity 1 assignment and then reassign it?
trurl Posted January 21 (Solution)

11 minutes ago, yyc321 said:
The data on that drive is desired but not critical. I don't have the space on my backup server to backup all 90+TB.

You get to decide what is important and irreplaceable.

11 minutes ago, yyc321 said:
try rebuilding parity, and then fix the issue with the array drive.

The issue with disk8 is exactly the same as the issue with parity, requiring the same solution. Except disk8 contains data. The procedure:

1. Stop the array.
2. Unassign the disk to be rebuilt.
3. Start the array with nothing assigned to that slot. This makes it 'forget' the specific disk assigned to the slot, but it remembers there is supposed to be a disk there. Until it 'forgets', it won't know the disk has been 'replaced'.
4. Stop the array.
5. Reassign the disk to be rebuilt.
6. Start the array to begin the rebuild onto the 'newly' assigned disk.

You could unassign disk8 at the same time if you want, but don't reassign it if you don't want to rebuild it yet.
trurl Posted January 21

19 minutes ago, yyc321 said:
rebuilding parity, and then ... the array drive

Technically, parity is also an array drive.