AgentXXL

Members
  • Content Count: 138
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About AgentXXL
  • Rank: Advanced Member

  1. For new drives I still prefer to do at least one pass (all 3 stages) of the preclear script/plugin. For drives that I've had in operation for a while, I'll typically only do a zero pass as long as the SMART report from the drive shows nothing concerning (there's a short smartctl sketch after this list). Of course, since unRAID will do the zeroing itself, it's not required to use the plugin; I just prefer being able to zero the drive before adding it to the array. It takes the same amount of time whether you do the zero pass with the plugin or by adding the drive to the array, but if by chance there's a failure during the zero, I'd rather know about it before adding the drive. One question about just adding a drive and letting unRAID do the zero: if there are errors or a failure of the drive while unRAID is zeroing it, does it still get added to the array? If so, that means obtaining a replacement drive right away. If you do the zero pass with the plugin, you have the option of delaying the addition of the drive until it's replaced/repaired. Most of us tend to buy drives only when they're on sale, so using the plugin seems safer to me, although I do have dual parity in use.
  2. I'm by no means an expert like some of the others helping you, but if the drives that you want to recover data from were all part of the unRAID parity-protected array, you could do as I mentioned a few posts back: get yourself some decent data recovery software that supports XFS-formatted drives. As mentioned above, I went with UFS Explorer Standard edition and it allowed me to recover data from a drive that was also unmountable but otherwise functioning, i.e. no electronic/mechanical issues, just lost partition and directory tables. As for removing drive 6, you could use the 'Shrink Array' procedure: https://wiki.unraid.net/Shrink_array
  3. Sorry, not wanting to hijack the thread, but is this still an issue with the new 3rd gen Ryzens? Or the 2nd gen Threadrippers? I'm saving up some cash to replace the older i7-6700K based system that unRAID runs on. I've definitely been considering a 3rd gen Ryzen setup, but might also go with a 2nd gen Threadripper setup.
  4. I just went through something similar - a failure of the unRAID array due to old/defective SATA backplanes on my Norco RPC-4220 storage enclosure. I ended up with one drive that was repaired by the Check Filesystems procedure, but the other didn't recover properly. I had made a disk image using dd/ddrescue before trying Check Filesystems in maintenance mode. Glad I did, as I was able to use data recovery software (UFS Explorer Standard) on the unmountable drive (image) and it let me recover everything I needed/wanted. I looked at assorted recovery platforms but chose UFS Explorer Standard as it runs natively on my Ubuntu main system. There are a few more details in this thread:
  5. So it appears all my issues have now been resolved. I was able to recover the data from the 4TB disk image I made with ddrescue. I went with UFS Explorer Standard for the recovery since the ddrescue image was unmountable. Both it and Raise Data Recovery are made by the same developer, SysDev. I chose UFS Explorer as it has a native Linux version, whereas the others are primarily Windows-based. Now the only thing left is to finish migrating the rest of my data from UD-attached and mounted drives into the protected array. 2 x 10TB and 1 x 4TB drives are being precleared, with 3 x 10TB worth of data left to import. In any case, thanks again to all those who helped me along the way!
  6. New question but related to this topic: I used xfs_repair on one of the 2 drives that were failing/failed (there's a rough sketch of the xfs_repair commands after this list). The other drive died completely and wasn't seen by any OS or even at the BIOS level. I made an image of the 4TB XFS partition using ddrescue before attempting to use xfs_repair. As reported here, that image isn't mountable. While the xfs_repair run did appear to recover the data from the drive into the 'lost+found' folder, there are still a number of files/folders missing. I've restored the image I made with ddrescue to a working 4TB drive that was sitting on the shelf, but the filesystem is still reported as unmountable. In the 'Check Disk Filesystems' wiki article, the section on 'Redoing a drive formatted with XFS' mentions that if xfs_repair fails to recover all files/folders, you can use Testdisk (free) or File Scavenger (paid) to try and recover missing and/or important files. I've got the restored drive attached to my Ubuntu system and have installed Testdisk using 'sudo apt install testdisk'. This added Testdisk 7.0 to my /usr/bin folder under Ubuntu and I'm currently running it. It looks like it's going to take about 12 hrs to scan the disk for partitions/files. In the meantime, I thought I'd ask here if others have used any of the paid tools that claim to be able to recover data from XFS formatted drives. Here are some of the ones I've found via DuckDuckGo searches:

  • File Scavenger - Windows based - $59 US
  • ReclaiMe - Windows based - $199.95 US
  • Raise Data Recovery - Windows based - General license €14.95 + XFS support module €12.95 (30 days of support)
  • UFS Explorer - Win/Mac/Linux based - Standard edition €49.95

Note that some of these can be discounted at ColorMango: https://www.colormango.com/utilities/data-recovery-file-repair/index.html?filter3=XFS-Recovery

I'll certainly let Testdisk run and see what it can find, but I'm curious if others have tried any of the above paid tools, or can recommend something else that has worked for them.
  7. I actually found the real cause of my issues to be the miniSAS backplanes (with 4 SATA connections each) in my Norco enclosure. I ended up ordering some miniSAS (host) to 4 SATA (target) breakout cables and removed the backplanes. This appears to have resolved my issues, as I haven't seen a UDMA CRC or other read/write error since I did the cabling last Sunday.

That said, I did break down and buy one of the genuine LSI 9201-16i adapters from the US eBay seller I linked above. This one came in a retail LSI box and looked to be factory sealed. I've sent the serial number on the new card to LSI/Broadcom for verification, but regardless, the original card appears to be working fine. I went back and forth with the Chinese eBay seller and they admitted it was an OEM knock-off for the Chinese market; they even offered to refund my money if I sent it back (at my cost for shipping, of course). But since replacing the backplanes with direct cabling, the original card now appears to be working fine. I've lost the hot-swap capability, but that's a minor issue as this is a home-based server that can tolerate downtime, unlike most businesses.

I'm still going to keep the genuine card, as I'll likely pick up a new storage enclosure at some point. The 9201-16i is limited to 6Gbps SAS/SATA but is fine for my unRAID needs. And it never hurts to have spare equipment on hand should the OEM Chinese unit ever fail or start causing issues. Glad to hear you got yours resolved as well with the new storage enclosure.
  8. The 'New Config' and shrink/removal of the two drives appears to have been successful, other than waiting for the parity rebuild. One thing I noticed is that the number of writes to each parity drive differs. So far it's never more than 5 writes difference, but sometimes Parity 1 has more writes, and other times Parity 2 has more. No errors are being reported, UDMA CRC or otherwise. Is this considered normal?
  9. Thanks @johnnie.black and @itimpi! I'll do the 'Shrink array' and leave out the failing disks, knowing that the rest of the drives won't be protected until my dual parity drives are rebuilt.
  10. One other clarification. The 'Shrink array' procedure says I need to set all shares and the Global share settings to include only the drives I wish to keep. One drive has failed completely and the other is definitely having issues, but I was able to use unBalance to move the data from both the emulated drive and the 2nd failing drive. Since both failing/failed drives are empty and won't be part of the new config/layout, do I need to set the includes for all shares?
  11. Sorry, that doesn't answer whether I can just use New Config with dual parity, but @itimpi's reply seems to indicate I can just do the new config and re-order my drives leaving both parity drives assigned. Yes, I was aware that New Config would require a full parity rebuild. Since I want to re-order and remove the drives that are failing, I assume I just leave 'Preserve current assignments' set to none and then proceed with adding the drives to the new config in the order I want. Thanks!
  12. Now that I've direct-cabled my unRAID setup and removed the failing backplanes in my Norco enclosure, I have two older drives that are indeed failing. I have used the unBalance plugin to move the data off these drives, so they're both empty. I'm now ready to shrink the array to remove those drives, as I have enough free space in the interim. My question is related to the 'Shrink array' procedure in the wiki. I will be using the 'Remove drives then rebuild parity' method as described here. However, I also want to re-organize the array while I do this to better group drives of similar capacity (it's an OCD thing). In the unRAID FAQ here in this forum, the section on re-ordering drives seems to indicate you can move drives from one slot to another without doing a 'New Config'. Do you have to let the array rebuild parity on the 1st parity drive, then stop the array, assign the 2nd parity drive and rebuild parity again? Or can I just do the 'Shrink array' procedure and re-organize my drives using 'New Config' with both parity drives still assigned? Can any of the experts clarify the best way for me to both re-org and remove my failing drives? Thanks!
  13. Latest update: the new miniSAS to 4 SATA cables arrived and I went ahead and removed the failing backplanes from my Norco enclosure. A few cables and numerous Molex to SATA power splitters later, my array is back up and running. With a few caveats...

1st, the repair run in unRAID's Maintenance mode was only partially successful in fixing the filesystem on the HGST 4TB drive. Alas, when I ran xfs_repair to implement the corrections, it didn't retain the original folder structure and everything got moved into numerically labelled folders inside a root folder called 'lost+found'.

2nd, when I re-added the 4TB back into its slot, unRAID considered it a replacement disk, so it's currently doing a rebuild from parity. Since I ran the repair on the array device name (/dev/md14, sketched after this list), the parity data was kept valid even though everything got moved into the 'lost+found' folder in the root of the drive. Once the rebuild from parity completes (in about 4 hrs), I'll then have to go into the 'lost+found' folder to rename and move items back to their original locations. Tedious, but at least it appears I didn't lose any data throughout this whole ordeal.

One other note: before I ran the filesystem check and subsequent repair, I did make a ddrescue image of the 4TB partition. This was before the repair, so it should have the original folder/file names, but unfortunately the image seems to be unmountable. I'll hold onto the image in case I need to try further recovery, but I suspect I'll be able to rename and move files and folders back to their originals once the parity rebuild is complete. Once that step is done, I'll go ahead and do a 'New Config' so I can re-order and group drives. This isn't strictly necessary, but my OCD tendency will only be sated once I do the new config.

The good news is that the drives that were throwing frequent UDMA CRC errors no longer appear to be having connection issues. No errors so far and I'm half-way through the parity rebuild. This would seem to indicate that all of my connection issues were related to the backplanes in the enclosure. Now that everything is direct cabled, all seems good! Thanks again to the users who've helped me through this.
  14. I just remembered that unRAID has the 'Maintenance' mode and a Check Filesystem procedure. I'm still going to image the 4TB using ddrescue, but then I'll try the maintenance mode and check it using unRAID.
  15. An update and more questions for anyone following this saga. The cables that will allow removal of the failing backplanes haven't arrived yet but are out for delivery. In the meantime, I went ahead and did the disk re-enable procedure for both the 'bad' 8TB and 4TB drives.

I was expecting this to require a data rebuild, but as I mentioned a few posts ago, when the 3 disks went missing after my reboot, the system was still configured to auto-start the array. Somehow it ran an extremely accelerated 'parity rebuild' of that 8TB drive - it took 22 mins compared to the normal 12 - 15 hrs. It didn't actually do anything, as no reads/writes were incrementing for any disks other than the parity drives, which only saw reads increasing. When it finished, the failed/missing Disk 8 went from a red X to a green dot but was still showing as unmountable. As mentioned a few posts back, the 8TB seemed fine when attached to my Ubuntu system. I was able to successfully clone it to another new 8TB drive. I then re-installed the original 8TB (with the matching serial number) back into the array, but in a bay that's attached to the motherboard SATA ports. Yet somehow, after re-installing the drive, it came back online on the next start of the array with no data rebuild. And it mounts fine with all files appearing to be intact. This is not a bad thing, but puzzling, as I still don't know how unRAID did this.

That left me with 2 disks that were unmountable and still needed a rebuild, and both show up under the 'Format' option below the array Start/Stop button on the Main tab. What's strange is that the 10TB disk has not been re-added to the array, as there's no sense rebuilding what was an empty drive. Stranger still, unRAID sees it as unmountable and needing a format even though it hasn't been re-added to the array - the slot for that disk shows a red X and a 'Not installed' message.

The 4TB drive was also attached to a bay with a motherboard-connected SATA port. After the re-enable procedure, it went ahead and did a data rebuild from parity. When finished, it still showed as unmountable and needing a format, just like the empty 10TB. I rebooted unRAID, but upon starting the array it still lists the 4TB as unmountable and needing a format. I did a power-down and pulled the drive to see if my Ubuntu system sees it as valid, which it doesn't - it sees the XFS partition, but when I try to mount it, Ubuntu reports it as unmountable: 'structure needs cleaning'. I assume this means an unclean filesystem from when the disk went missing.

I'm going to image the drive with ddrescue before trying any methods to clean the filesystem. I don't have any spare or new 4TB drives, so instead of using ddrescue for disk-to-disk cloning, I'll just do source disk to image file (roughly as sketched after this list). Are there any concerns with this? And are there any recommended procedures for cleaning the filesystem that are unRAID specific? I'm fairly certain that any filesystem repair will be successful, and once re-installed in the array, the drive should hopefully be seen and re-mounted properly.

When my cables arrive later today, I'll of course remove the backplanes and direct-wire the drives as suggested by @Benson. Once this is done and I power up, I'll confirm that all drives are 'green' and then proceed with the New Config procedure. This will of course require the dual parity drives to be rebuilt. Hopefully after all this, I'll finally have a more reliable system.

I'll still have the suspect LSI controller in use, so perhaps I should wait to do the new config until I have the genuine replacement installed? Any other concerns I should be aware of? Thanks again for all the help!
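
A few command sketches referenced in the posts above:

For the SMART check mentioned in post 1: a minimal sketch of checking a drive's health from a Linux shell before (and after) a preclear/zero pass. It assumes smartmontools is installed and that the drive shows up as /dev/sdX - substitute the actual device name.

    # /dev/sdX is a placeholder - check with 'lsblk' to see which device is the new drive
    sudo smartctl -H /dev/sdX           # overall health verdict (PASSED/FAILED)
    sudo smartctl -a /dev/sdX           # full attribute table (reallocated/pending sectors, UDMA CRC errors, etc.)
    sudo smartctl -t long /dev/sdX      # start an extended self-test (runs on the drive in the background)
    sudo smartctl -l selftest /dev/sdX  # check the self-test result once it's done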
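For the filesystem check/repair described in posts 6 and 13: a sketch of the general shape of the xfs_repair run, not a definitive procedure. It assumes the array has been started in Maintenance mode so the md device exists, and that the disk number matches your own (disk 14, i.e. /dev/md14, in my case); running against the md device rather than the raw disk device is what keeps parity in sync.

    # Dry run first: report what would be fixed without changing anything
    xfs_repair -n /dev/md14

    # Actual repair (array still in Maintenance mode)
    xfs_repair /dev/md14

    # If it refuses to run because of a dirty log, -L zeroes the log,
    # at the cost of possibly losing the most recent metadata updates
    xfs_repair -L /dev/md14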
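For the ddrescue imaging in posts 13 and 15: a rough sketch of the source-disk-to-image-file form, assuming the failing XFS partition is /dev/sdX1 and that /mnt/scratch has enough free space for a full-size image plus the map file (the device, paths and filenames here are just examples).

    sudo apt install gddrescue    # GNU ddrescue; the Ubuntu package is gddrescue, the binary is ddrescue

    # First pass: grab everything that reads cleanly, skipping bad areas
    sudo ddrescue -n /dev/sdX1 /mnt/scratch/disk4tb.img /mnt/scratch/disk4tb.map

    # Second pass: retry the bad areas a few times using the same map file
    sudo ddrescue -r3 /dev/sdX1 /mnt/scratch/disk4tb.img /mnt/scratch/disk4tb.map

    # The image can then be inspected read-only via a loop device
    # (norecovery lets XFS mount even with a dirty log)
    sudo mkdir -p /mnt/recovered
    sudo mount -o loop,ro,norecovery /mnt/scratch/disk4tb.img /mnt/recovered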