siege801

Members · 31 posts

  1. I'm marking this post as the Solution.
  2. Ok, great to know. Next step for me (likely outside the scope of this thread) is to identify the cause of the FixCommonProblems alert about /mnt. This had been spamming my unRAID emails and meant I missed the more critical errors pertaining to the disks/array. @trurl, I can't thank you enough. I've sent a small token via your donate link. It's minuscule in comparison, but hopefully it can in some way support you in continuing to help others. Thank you, thank you. I'll scan back through this thread and try to find which of your posts should be marked as the Solution. Any thoughts on which would be most appropriate?
  3. Thanks @trurl. Can you advise whether this needs to be addressed?
  4. Alright, that looks much better. I pulled out the existing PSU and upgraded to a unit with enough SATA connectors to not need splitters. It's also a more powerful PSU. So whether it was an under-power issue or a splitter issue, I may never know, but for now it looks good. Looking at the diagnostics attached, what other cleanup is required? lucindraid-diagnostics-20240317-1749.zip
  5. Correct. That I can't remember; I'll have to check. I have been wondering whether a larger PSU / a PSU with more built-in cables would be necessary. Until I can get the cabling checked, am I right in thinking I'm still running with at least one parity drive?
  6. Hi @trurl, Ok, I've formatted Disk 5 and it's back in the array. Then, I stopped the array to unassign Parity 2 and Disk 6. I then started the array and let the rebuild commence. However, it's just paused itself. Diagnostics attached. lucindraid-diagnostics-20240308-0759.zip
  7. I'll do it whichever way you think is safest and gets me back to having some parity protection. Breaking down and clarifying the steps:
     #1 - Format Disk 5. I understand this as being WFL5W6LK. I can see this disk listed under Array Operations as you suggested I would. I simply tick the box that says "Yes, I want to do this" and proceed with the format.
     Then complete step #5 from the "Rebuild to replacement" guide, namely "Assign the replacement disk(s) using the Unraid webGui." I note this guide states:
     I don't think any of my disks are emulated any more, but I know one certainly was for a time. Am I ok to proceed with the above steps at this point in time?
  8. When you can, could you confirm what my next steps need to be?
  9. Oh right, I misunderstood re-adding as "adding", but I see the distinction. The 2TB disk is a "new" disk, but it's replacing one that failed a while ago. The 12TB parity I believe is emulated and just needs to have parity rebuilt on it. And the 12TB that is currently mounted through UD is going back into the array as it was before.
  10. Thanks @trurl. From what I've read, I believe I can only do one operation at a time. From the documentation: NOTE: You cannot add a parity disk(s) and data disk(s) at the same time in a single operation. This needs to be split into two separate steps, one to add parity and the other to add additional data space. Just to clarify, the intended outcome is to have 2x 12TB parity and the rest as data. Thanks again! lucindraid-diagnostics-20240229-1716.zip
  11. Hi @trurl, Again, I want to repeat how thankful I am for your help. I fully intend on dropping a donation on your link. I've gone ahead and recovered what I need/want from the lost+found. Would you be able to give a little more guidance on how I now get the two disks back into the array?
  12. This is real progress! Thank you so much again for the help so far. I'm very comfortable on the command line. I've just been working through the output of:
      du -ahx --max-depth=1 /mnt/disk6/lost+found/ | sort -k1 -rh | less
      So far I've determined:
      - 3.3TB in both /mnt/user/lost+found and /mnt/disk6/lost+found - presumably the same data, but this is to be confirmed.
      - Approximately 9,800 subdirectories within /mnt/disk6/lost+found.
      - Approximately 7,500 of these have a directory size > 0.
      - 1.9TB are virtual machine images that I have backed up anyway.
      Notably, I have backups of the irreplaceable data, but there is further data that is not economically feasible to back up. With that said, the more of it I don't have to acquire through alternate means the better. I can spend the day working through the contents of the other sizeable subdirectories of lost+found and come back to you once I've retrieved what is feasibly useful (see the triage sketch after this list). Questions: Is it safe to leave the array running? I've stopped the Docker service. Also, is it safe to move content from /mnt/disk6/lost+found into the correct location under /mnt/user/? In case I haven't mentioned, my sincere gratitude for your guidance so far!
  13. Diagnostics attached. lucindraid-diagnostics-20240119-0142.zip
  14. And done. Output looks reasonably clean. unRAID file system check without -n second run - disk 6.txt
  15. Ok, I've done that. It looks like maybe it wants me to run that again? (See the xfs_repair sketch after this list.) End of output:
      Metadata corruption detected at 0x453030, xfs_bmbt block 0x10081ce0/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x10081ce0/0x8
      xfs_repair: Releasing dirty buffer to free list!
      cache_purge: shake on cache 0x50c6f0 left 5 nodes!?
      xfs_repair: Refusing to write a corrupt buffer to the data device!
      xfs_repair: Lost a write to the data device!
      fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair.
      Full output: unRAID file system check without -n - disk 6.txt
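
A minimal shell sketch of the lost+found triage and move-back described in item 12, assuming the filesystem check on disk 6 has finished. The share name "Media" and the numbered directory "12345" are hypothetical placeholders; keeping the move on the same disk (rather than mixing /mnt/disk6 and /mnt/user paths in one command) is general unRAID practice, not something prescribed in this thread.

# List the largest recovered directories on disk 6, biggest first.
du -ahx --max-depth=1 /mnt/disk6/lost+found/ | sort -rh | head -n 40

# Peek inside a candidate directory to work out what it originally was.
ls -lah /mnt/disk6/lost+found/12345/

# Move recovered content back into a share on the same disk, so the files stay
# on disk 6 and simply reappear under the matching /mnt/user/ share.
# ("Media" and "12345" are hypothetical -- substitute your own share and directory.)
mkdir -p /mnt/disk6/Media/Recovered
mv /mnt/disk6/lost+found/12345 /mnt/disk6/Media/Recovered/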
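
A minimal sketch of the xfs_repair re-run from items 14-15, run from the console with the array started in Maintenance mode. The device name /dev/md6 for disk 6 is an assumption (newer unRAID releases name it /dev/md6p1), and the same check can be run from the Disk 6 page in the webGui instead.

# Read-only pass: report problems without modifying the filesystem.
xfs_repair -n /dev/md6

# Repair pass; repeat it if the output again ends with "Re-run xfs_repair."
xfs_repair -v /dev/md6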