Everything posted by apandey

  1. What is the best way to re-create the array with no data? My cache pool is still OK, so I want to keep that and only recreate the array.
  2. Some success, but pointless 😂 I ran the repair; it did a lot of things and then failed with a fatal error: "File system metadata writeout failed, err=117. Re-run xfs_repair." I re-ran it and it seemed to repair successfully on the 2nd go.
     Then I restarted the array, and the disk was fine and mounted, BUT it only came back with about 60GB of data. On the healthy array each of the disks was about 60% full (about 4.8TB each). Most of what remained was lost+found, and it doesn't account for any reasonable chunk of the data.
     At this point it might be easier for me to build the array from scratch and copy the data over again than to fight this battle. An LSI replacement is on the way. Lesson learned.
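     For anyone hitting the same thing, a rough sketch of the kind of xfs_repair sequence involved (not the exact commands I ran; the device name is an assumption, since Unraid array data disks typically show up as /dev/mdX with the array started in maintenance mode):

     xfs_repair -n /dev/md1     # check only, report problems without changing anything
     xfs_repair -v /dev/md1     # attempt the actual repair
     # if it refuses to run because of a dirty log, -L zeroes the log first (can lose recent metadata)
     xfs_repair -L -v /dev/md1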
  3. The first disk repair finished. It doesn't look good after that either; there seems to be no valid filesystem. Diagnostics attached: godaam-diagnostics-20221202-1139.zip
  4. Thanks for the pointers. I did not see anything like an emulated disk, but I think I might be doing what you suggested. I unassigned one of the 3 data drives and it started showing up under unassigned drives. I started the array. The drive in the unassigned section was not mountable and only had a format button, which I didn't touch. I then stopped the array, put the drive back into its array slot and started again. So a rebuild is now happening, with the other 4 drives being read and the one I moved around being written.
     I think I understand your point from the other thread you linked: partitions aren't protected by parity, so if the data is intact it will be recreated on rebuild, and while doing so Unraid will create the partitions fresh. So if the data was OK, this might work.
     So how do I know that it worked after the first disk? It will no longer say unmountable, and I will be able to see partial data in shares from whatever was on it? If it shows up empty or still unmountable, it didn't work, I guess.
     Curiously, while the rebuild is happening, the drive being rebuilt has gone from "Unmountable: unsupported partition layout" to "Unmountable: wrong or no filesystem". Is this expected?
     If it does work, since I have 2 parity drives, would I be able to rebuild the other 2 data drives in a single rebuild rather than one by one? 1 rebuild is about 12 hours for me.
  5. Good point about the firmware update; I can see that as an easy way to fall into a similar trap if settings are reset on update.
     I am not sure I am following what to do here, and I don't want to go in the wrong direction. I am convinced my parity is valid: those drives haven't been moved off the motherboard and haven't seen any writes except for anything related to my attempts at xfs_repair. I don't understand how Unraid can repair partitions from parity information alone when all my data drives are offline. I was expecting to lose at most 2 drives for that to work, but it seems I've lost 3. Any pointers?
  6. I am in the process of assembling an Unraid server to move over to (and expand beyond) a Synology 12-bay NAS.
     I initially connected 5x 8TB drives (2x parity, 3x data) as the main array and 2x 1TB SSDs as cache, all connected to my motherboard SATA ports. I had been running this happily for a few weeks.
     Today, I finally got an Adaptec ASR-71605 to connect the remaining 16 SATA bays to an HBA. I was told that the card was in HBA mode (and I know it's mode-switchable), and I was negligent enough not to check on that. I made the mistake of rearranging the drives to move the data drives to the HBA; I should have validated the card first. The card was in simple volume mode, and it seems it corrupted my drives on first boot. When I didn't see any drives at boot, I realized what had happened and switched the card to HBA mode in its BIOS.
     The drives were detected after that, but they are no longer mountable. I tried running xfs_repair in safe mode, and it replaced the primary superblock with the secondary one, did a few other things, and doesn't seem to complain now, but the drives are still unmountable: "Unsupported partition layout". Diagnostics attached. I am not sure if I can recover from this, but I still want to try.
     At this point I can rebuild everything, but I see a risk of the Adaptec card accidentally resetting itself into non-HBA mode some day, and I would not want to deal with this again. Any experiences on whether this is something people ever run into? Or are the cards usually stable enough not to reset back into some other mode?
     godaam-diagnostics-20221201-1949.zip
  7. I ran into the exact same issue. My solution was to use rsync to only itemize the changes, but then apply them myself in a script. While applying, I skip directory-creation items until I have a file to be written. I used the User Scripts plugin to manage the sync.
     Here is an explanation of rsync's itemized output: http://www.staroceans.org/e-book/understanding-the-output-of-rsync-itemize-changes.html
     My example script:

     # src and dst must point at the source and destination directories before this runs
     # run_type selects between "dry-run" (print commands only) and "real-run" (execute them)
     run_type="${run_type:-dry-run}"

     # log: minimal logging helper (assumed; any echo-style logger works)
     function log() { echo "$@"; }

     # texe: print the command on a dry run, execute it on a real run
     function texe() {
         if [ "${run_type}" == "dry-run" ]; then
             echo "dry-run: $@"
         elif [ "${run_type}" == "real-run" ]; then
             "$@"
         fi
     }

     # ask rsync for an itemized change list only (--dry-run) and apply the changes ourselves
     rsync --dry-run --recursive --itemize-changes --delete --delete-excluded --iconv=utf-8 \
         --exclude '@eaDir' --exclude 'Thumbs.db' "$src" "$dst" | while read -r line ; do
         echo "$line"
         read -r op file <<< "$line"
         if [ "x$op" == "x*deleting" ]; then
             # gone from the source: remove it from the destination
             log "removing $dst/$file"
             texe rm -rf "$dst/$file"
         else
             op1=$(echo "$op" | cut -b 1-2)          # change type, e.g. "cd" = create dir, ">f" = file transfer
             sizeTsState=$(echo "$op" | cut -b 4-5)  # size and timestamp flags of the itemize string
             if [ "x$op1" == "xcd" ]; then
                 # skip eager directory creation; directories get created once a file needs them
                 echo "not eagerly creating $dst/$file"
                 #mkdir -p "$dst/$file"
             elif [ "x$op1" == "x>f" ]; then
                 fpath="$dst/$file"
                 dpath=$(dirname "$fpath")
                 if [ "x$sizeTsState" == "x.T" ]; then
                     # only the timestamp needs updating: copy the mtime from the source file
                     log "update $fpath timestamp only"
                     texe touch -r "$src/$file" "$fpath"
                 elif [ "x$sizeTsState" != "x.." ]; then
                     # size or timestamp changed: create the parent directory if needed, then copy
                     if [ ! -d "$dpath" ]; then
                         texe sudo -u nobody mkdir -v -m 777 -p "$dpath"
                     fi
                     texe install -o nobody -g users -m 666 -p -D -v "$src/$file" "$fpath"
                 fi
             fi
         fi
     done
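     In case it helps anyone adapting this, a minimal sketch of how the script can be driven from a shell or the User Scripts plugin. The script name and paths are made up for illustration; src, dst and run_type are passed as environment variables, matching the defaults above:

     # preview: texe only prints what it would do
     src="/mnt/remote/photos/" dst="/mnt/user/photos" run_type="dry-run" bash sync_itemized.sh

     # apply the changes for real
     src="/mnt/remote/photos/" dst="/mnt/user/photos" run_type="real-run" bash sync_itemized.sh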