FryGuy

Members · 23 posts

  1. No I/O errors. Found that Disk 2 has one unrecoverable FS error. Just got my new drives; time for recovery.
  2. Disk 2 has one uncorrectable FS block. I'll hopefully have the new disks next Wednesday or Thursday to move the data over to XFS and get recovery underway.
  3. Disk 2 had one FS error block, running a scrub repair now. Disk 3 had no FS errors. No I/O errors were reported.
  4. Set the script to run hourly as described above, reset the error count, and am now running a scrub on the two BTRFS disks. I'll keep an eye out for errors, fingers crossed.
  5. Ran two cycles of the latest version of memtest, just to be sure. Nearly 20 hours of testing on the 32 GB of RAM returned no errors. Any other ideas? I hope you don't say SATA controller, it's built into the mobo. 😬
  6. No, it's not good. I missed that. Shutting down now and doing a memtest.
  7. The other disks are mounting, I just don't know why it's still crashing. I started it up 1 hour and 40 minutes ago to start the scrub, if that's fresh enough for the diags. Here it is. unraid-diagnostics-20200803-1449.zip
  8. It's another disk I have in there now. The corrupt drive is out in a USB dock for recovery. I have more disks arriving for future backups, which I'm going to use to move the array to XFS disks. The damaged-filesystem disk is only for disaster recovery now; worst case, I re-rip source material as I find out what has been lost. I may at least be able to use UFS Explorer to find out what has been lost, as it is listing all the file names and sizes. I'm hopeful that it will recover the files too, but it doesn't appear to preserve the directory structure, oh well. Critical stuff I have backed up on disks and in the cloud. I now think I have errors on my other disks causing the system to freeze during parity check. I ran a scrub last night to check the disks, forgot to turn off the scheduled weekly parity check, and that froze the system up. Now running the scrub again with the parity check schedule disabled; by morning I'll know if it worked.
  9. OK, that confirms my worst fears; it's hard to visually check and confirm every file/dir with such a large pool. That's exactly why I didn't want to start anything up and mess up my cache and database. Looks like it's going to be disaster recovery after all. I tested with UFS Explorer and it looks like it will recover everything. Do you know of any other free BTRFS recovery tools (or I'll just web search)? It doesn't preserve directory structure, but at least I'll get my files back from that corrupt backup. Thank you, terminology confusion and all.
  10. I don't know what else you would call it when the array is running with the parity "emulating" the missing disk as part of the array. That's the plain-English usage of the word to me.
  11. Other than the unnecessary step of clearing the disk (format first, then preclear on the next try), this is what I did. The array will start with the emulated disk and everything is green. I haven't dared to try VMs or Dockers to see if everything is there, to avoid risking the disk writes. Without running some things like Plex, I'm not sure if anything is missing. The disk isn't showing as disabled; it's marked as green. The original issue with the server that was causing the kernel panic problem was solved with updates. I don't think I have the logs of the issue anymore, as it was resolved. Ran scrub/repair on the disks, but by the time updates fixed the issue the damage was done. It should have been a trivial fix to just back up and rebuild a drive; instead it escalated into a situation. Now how do I figure out if my emulated disk is corrupt? I'll get two more disks (assuming I overwrite the corrupt backup I have of the emulated disk, would rather not, but $$$) to pull the whole array's data off; I should have a current backup set on the shelf anyway. This is the only way I see to proceed with recovery, if the emulated disk is good.
  12. If you read my post with my disk layout, you wouldn't be debating the semantics of what emulated means. I have one parity disk with three data disks, all 14 TB. Every other disk is fine; I just can't rebuild from the parity, but it can start the array and emulate the missing disk. I don't understand how or why this happened; that's why I'm here. Originally I was thinking that I did something horribly wrong in the procedure, but now I'm reassured that I didn't. The only thing I don't understand is how my new backup drive became corrupt with the cp -r command too. I'm sure I didn't get the drives mixed up, with one on a USB bus. I think this is the point of confusion: it was the original Disk 1 that had the corrupt BTRFS. It was still R/W with 5 unrecoverable errors and stayed in the same slot; the new disk was used to back up its data. I perhaps wrongly thought that was safer if a rebuild went wrong; at the time I didn't know how else to get the original disk to appear as a new disk to rebuild. If a corrupt FS can affect the parity and cause it to not rebuild, then I'm *expletive deleted* either way; nothing I could have done from the start would have solved the problem. I'm an experienced Linux user and seldom reach out for help; I usually find forums so condescending and toxic. This is a sh*t-hit-the-fan moment and I'm not sure what even happened myself.
  13. I really thought it was going to be a straightforward remove-from-array, reformat, re-add, start, and let-it-rebuild kind of deal. I backed up the data from the corrupt BTRFS Disk 1 in the server to the new disk, removed Disk 1 from the config, shut down, formatted Disk 1 in Unassigned Devices, added it to the array, and started the array. It didn't say it was rebuilding the disk, so I immediately shut the array down; maybe I panicked that something was wrong when nothing was. Then I ran a preclear on Disk 1, started the array, and it proceeded to rebuild the drive. The rebuild failed with "Unmountable: No file system", repeated twice with the same results. So there should be no way that what I did corrupted the parity; the parity was valid before, with no errors. I used the wording "added it back"; it was rebuilding the drive. Sorry for the vague description above, I was trying to keep it short, it already seemed so long-winded. No New Config, and I didn't use the format-the-drive checkbox either; I know that would definitely mess things up if I did, New Config especially, if you get the drive order wrong, if I'm not mistaken. It does show all the files, but it won't preserve any directory structure. It's my last resort if I must. I guess there is no way to back up just the data that is emulated? It doesn't show up as /mnt/disk1 since it's not a mounted disk; is there a location where the virtual disk mounts for the pool? If not, then do I need to do a full tar backup of the whole array to external drives to be sure I have everything on the array?
  14. I originally had Disk 1 with BTRFS uncorrectable errors from kernel panics and forced shutdowns. Found and fixed the offending software with updates, but the damage was done. (One parity, three data, 14 TB WD DC HC530s.) Got an extra drive to back the data up to first (foreshadowing here). Backed up the data from the command line; errors on some temp files and a couple of corrupt files of no value. Confirmed the files on the drive with an ls in a few main dirs, all good. Removed the drive from the array config, shut down, and now my mistake (I know, RTFM, I should have read it again): formatted the drive in Unassigned Devices, added it back to the array, and hit start. Had an oh-*expletive deleted* moment and shut it down. Precleared the drive, then added it again. In hindsight I think the damage was already done. That's why I made a backup, and now that drive won't mount! I'm not sure how, but the copy process (cp -r) to the brand-new drive also corrupted the BTRFS on that drive. Now the original source drive is gone and the backup is corrupt, nice! It's damaged beyond the BTRFS repair tools' capability, but recovery looks possible as a last resort with UFS Explorer, though it looks like it will lose all the directory structure. It looks to be still emulating that disk; is there any way I can pull just the data off that emulated disk? At this point it almost looks like I've got to bite the bullet and buy two 14 TB hard drives! I do have seven 4 TB hard drives with an older backup on them, but the new data set has outgrown the disks' size, and even then I don't feel comfortable overwriting it with everything else that's happened. I have archival and cloud backups of everything really critical. I guess my question is: what are my options moving forward for data recovery at this point? Thank you.
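The hourly error check mentioned in posts 3 and 4 can be sketched as a small shell helper. This is a minimal sketch, not the thread's actual script: the mount point /mnt/disk2 and the logger call in the usage comment are assumptions, and resetting the error count corresponds to `btrfs dev stats -z`, which zeroes the counters after reading them.

```shell
#!/bin/bash
# Minimal sketch of an hourly BTRFS error check (not the exact script
# from the thread). parse_dev_stats reads `btrfs dev stats` output on
# stdin — lines like "[/dev/sdb].corruption_errs 5" — and exits
# nonzero if any error counter is nonzero, printing the offenders.
parse_dev_stats() {
    awk '$2 != 0 { bad = 1; print } END { exit bad }'
}

# Assumed usage (mount point and log message are hypothetical; use
# `btrfs dev stats -z` instead to also reset the counters):
#   btrfs dev stats /mnt/disk2 | parse_dev_stats || logger "BTRFS errors on disk2"
```

Run from cron hourly, a nonzero exit makes any new errors since the last reset show up in the log rather than going unnoticed until the next scrub.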
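On the question in post 13 about backing up just the emulated disk: when an emulated disk's filesystem is mountable, Unraid presents it under /mnt/diskN like a physical disk, so a plain tar of that path captures its data (here it showed "Unmountable: No file system", which is why /mnt/disk1 was absent). A hedged sketch; the disk number and destination path are assumptions:

```shell
#!/bin/bash
# Sketch: archive one array disk (physical or emulated-but-mountable)
# to a tar file. -C changes into the source dir so paths in the
# archive are relative; -p preserves permissions.
backup_disk() {
    # $1 = source directory (e.g. /mnt/disk1), $2 = output tar file
    tar -C "$1" -cpf "$2" .
}

# Example usage (both paths are hypothetical):
#   backup_disk /mnt/disk1 /mnt/disks/usb_backup/disk1.tar
```

If the emulated filesystem won't mount at all, there is nothing at /mnt/diskN to archive, which matches the situation described in the post.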
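Post 14 describes a cp -r backup that itself turned out corrupt, discovered only after the original was wiped. One way to catch that earlier is to checksum-verify the copy against the source before touching the original. A minimal sketch, not part of the original thread; the example paths in the usage comment are hypothetical:

```shell
#!/bin/bash
# Sketch: verify a copied tree against its source with checksums, so
# silent corruption in the copy is caught before the original is
# discarded. Builds an md5 manifest of the source, then checks the
# copy against it.
verify_copy() {
    # $1 = source dir, $2 = copy dir; returns nonzero on any mismatch
    local manifest rc
    manifest=$(mktemp)
    (cd "$1" && find . -type f -print0 | xargs -0 -r md5sum) > "$manifest"
    (cd "$2" && md5sum -c --quiet "$manifest")
    rc=$?
    rm -f "$manifest"
    return $rc
}

# Example usage (paths are hypothetical):
#   verify_copy /mnt/disk1 /mnt/disks/new_drive || echo "copy differs from source!"
```

Note this only proves the copy matched the source at verification time; it cannot detect corruption that was already present in the source files themselves.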