Socrates

Everything posted by Socrates

  1. The issue has been resolved. Both parity drives are online now after removing, re-adding, and rebuilding (though unnecessary, as the mod mentioned).
  2. Will do. Right now the parity rebuild is at 40% and might complete in another 10-12 hrs. Before I reboot I will grab the diags (just in case).
  3. For now I have attempted the normal procedure to re-enable a drive, which is to start the array with it unassigned, stop the array, assign the drive back, and let it rebuild parity. It shows both parity drives are online and rebuilding. I will let y'all know if it fails and the parity goes back to disabled mode.
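      While it rebuilds I am also keeping an eye on progress from a terminal; a rough sketch of what I am running, assuming Unraid's md driver exposes its state in /proc/mdstat (read-only, nothing destructive):
        # refresh the md driver status once a minute; the Main page shows the same percentage
        watch -n 60 cat /proc/mdstat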
  4. I have two parity drives, and one of them shows that it's disabled. However, the disk is spinning and is warm when I pull it out of the hot-swappable bay. I have a Supermicro 24-bay SAS3-based chassis, so it's a SAS3 card with no cables attached to the drives. Attaching diagnostics and the screenshots. tower-diagnostics-20231018-2221.zip
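      If it helps, I can also pull SMART data for that drive from the console and post it; roughly something like this (sdX is a placeholder for whatever device letter the disabled parity drive actually has):
        smartctl -H /dev/sdX   # quick overall health verdict
        smartctl -a /dev/sdX   # full attributes and error log, e.g. reallocated or pending sectors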
  5. Well, thanks for looking into it. Any tips on disk cooling? I have a 4U Supermicro chassis with 24 bays; the chassis has Norco fans. Any suggestions? I would like to bring the heat down; some disks get hot, up to 62°C.
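      To keep an eye on things in the meantime, I figure a small loop over the drives should print the temperatures (the /dev/sd? glob is an assumption, and SAS drives report the value on a slightly different line):
        for d in /dev/sd?; do
          echo -n "$d: "
          smartctl -A "$d" | grep -i temperature
        done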
  6. A disk in my array shows a thumbs down with an error, but I ran the extended SMART test and at the end it shows no errors found. I am attaching the log with this thread; kindly let me know if I should change the disk before it crashes on me. HUH721008ALN600_7SHGAXPU-20230419-0120.txt
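      For anyone wanting to reproduce it, the console equivalent of the extended test is roughly the following (sdX is a placeholder for the actual device; I believe the self-test option on the disk's page in the GUI does the same thing):
        smartctl -t long /dev/sdX   # start the extended self-test; it runs in the background for hours
        smartctl -a /dev/sdX        # afterwards, the "SMART Self-test log" section shows the result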
  7. I just wanted to report back here. This issue has been resolved. I was able to add the unsupported disks back to the array and mount the disks without running any checks. The filesystem is back online, and no files on the drives are lost or corrupt.
  8. Can we try a repair? I have checked quite a lot of the files and movies on these disks and they all look good. What is the process to repair these two drives? I see some command-based options on the forum; how do I find the drive names as mentioned here (sdd1), or is there a GUI-based repair option? "Reboot, force an xfs_repair of the new drive (/dev/sdd) with: xfs_repair -L /dev/sdd1"
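      If the answer is the command-line route, is this roughly how I would find the right device name first? (sdX1 is a placeholder; I have not run any repair yet, and I understand -L zeroes the journal, so I will not touch that without confirmation):
        lsblk -o NAME,SIZE,MODEL,SERIAL   # match serial numbers to the slots shown on the Main page
        ls -l /dev/disk/by-id/            # same mapping, keyed by drive id
        xfs_repair -n /dev/sdX1           # read-only check first, no changes made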
  9. I think I got confused and missed a few steps and instructions. I have a lot of data on these drives, about 72 TB, hence I have been so confused and scared.
  10. Let me explain. Disk 13 was unassigned, but Disk 12 was not; you were asking me to run xfs_repair on Disk 12, hence I did not move it to Unassigned Devices. Now the array rebuild is complete. I have pulled another diag report; please see attached. tower-diagnostics-20221223-1346.zip
  11. Cool, thanks. I will post updates tomorrow after the parity build is complete. I much appreciate your help here.
  12. I had Disk 13 as unassigned, but Disk 12 was not. I just brought all the disks back to run the diagnostics again, and it's running parity now. What should my next steps be? Wait for the parity rebuild to complete, and then run the diag?
  13. I have started the array to pull the diag report. Parity is being synced now; it says a data rebuild is in progress. Diag logs attached below. tower-diagnostics-20221222-1027.zip
  14. How do I convert Disk 12 to an unassigned disk? All I can see is the "no device" option in the drop-down.
  15. Do you want me to go ahead with the fix? Could you please post the steps for fixing this? Or do you want me to first run the xfs check again after converting the disk to unassigned mode?
  16. Steps I undertook: 1. Stopped the array. 2. Started the array in Maintenance mode. 3. Clicked on Disk 12 and then ran the check. No, I did not unassign this disk. Do you want me to unassign it and then run the xfs check without the -n switch?
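      If the answer is yes, I am guessing the repair itself would look something like the lines below, run from Maintenance mode. I am assuming the target should be the md device for Disk 12 (e.g. /dev/md12) so parity stays in sync, but please correct me before I run anything:
        xfs_repair -nv /dev/md12   # read-only check, no changes
        xfs_repair -v /dev/md12    # actual repair, only once someone confirms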
  17. Filesystem corruption is not hardware corruption, right? Can't I format the drive with the XFS filesystem, so it becomes a new drive, then plug it in and recover the data using the 2 parity drives I have?
  18. So here is the final result.
      FS: xfs
      Executing file system check: /sbin/xfs_repair -n /dev/sdi1 2>&1
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
      sb_fdblocks 1657832310, counted 1659752643
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - agno = 4
              - agno = 5
              - agno = 6
              - agno = 7
      No modify flag set, skipping phase 5
      Phase 6 - check inode connectivity...
              - traversing filesystem ...
              - traversal finished ...
              - moving disconnected inodes to lost+found ...
      Phase 7 - verify link counts...
      No modify flag set, skipping filesystem flush and exiting.
      File system corruption detected! RUN WITH CORRECT FLAG DONE
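      Reading that ALERT, my understanding is that the next step would be roughly the following, but I will hold off until someone confirms (the mount point is a placeholder I would create first):
        mkdir -p /mnt/test
        mount /dev/sdi1 /mnt/test   # mounting replays the journal that the -n run was ignoring
        umount /mnt/test
        xfs_repair -n /dev/sdi1     # re-run the read-only check; the spurious inconsistencies should clear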
  19. Wait, I am sorry, I am failing to understand. You are saying I cannot treat these two drives as new drives and recover the data from parity? I do have two parity drives in a healthy state. Why can't I treat them as two failed drives and recover the data from parity?
  20. It's still running and nothing has changed. What does it mean? Do you think it's better for me to format these drives using XFS and then rebuild them with the 2 parity drives I have?
  21. I have initiated the check on Disk 12; it's been running for a few hours. I will report back tomorrow or when it completes.
  22. Thanks for looking into this. I've performed the operation you recommended on both disks, changed them from auto to xfs and started the array again, and it still shows the same results: "Unmountable: Wrong or no Filesystem" and "Unmountable: Unsupported or Wrong filesystem". New logs from diags attached. tower-diagnostics-20221219-1101.zip
  23. I have restarted my Unraid after moving states. During the move I had removed the drives from the server and put them, very well packed in bubble wrap, into a secure container. Now, after a few months, I have finally set up my server rack, and after installing the HDDs into my 24-bay Supermicro chassis, Unraid has come back online, but two of the drives show "Unmountable: Wrong or no Filesystem" and "Unmountable: Unsupported or Wrong filesystem". These drives were working before; they were part of the Unraid system, but now I get these errors. Diagnostics attached with this post. tower-diagnostics-20221218-2345.zip
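      If it would help, I can also run a couple of read-only checks on the two drives from the console and post the output, something like this (sdX is a placeholder for each of the two drives):
        fdisk -l /dev/sdX   # confirm the partition table survived the move
        blkid /dev/sdX1     # see whether an XFS signature is still visible on the partition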
  24. I have an array of about 15 disks, including 2 parity drives. The array is a combination of 12 SATA and 3 SAS drives. Unraid is virtualized on an ESXi host. Even when I am not watching a movie and no jobs are running, I see the disks spin up every few seconds, like every 30 seconds or so. I don't think I have any app/plugin that might cause this, but is there a way I can upload some logs so someone can help me understand what might be causing this? Please.
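      If it helps, I can also try to catch whatever is touching the array from the console, roughly like this (the paths are assumptions based on my share layout, and inotifywait only works if inotify-tools is installed):
        lsof /mnt/user /mnt/disk* 2>/dev/null   # list processes with files open on the array
        inotifywait -m -r /mnt/disk1            # watch one disk for a while to see what wakes it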