abynez

Members

  • Posts: 5
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

abynez's Achievements

Noob (1/14)

Reputation: 1

  1. Happy to hear that! My server has been in storage for close to a year and a half and I'm about to upgrade from 6.9.2 as well. Might need to open a beer first.
  2. Hellooooo! Hey ich777, did you ever get very far with the BF2 (Battlefield 2) server? The last info I could find was from 2019, when @johnodon first brought it up.
  3. Thank you! I had to wait for the disks to preclear before starting, but it worked great. New config allowed me to restart the server with disks 1-6 data intact and usable. After the parity-sync completed I tried to mount the disks in UD. Disk 7 mounted and I was able to recover all 1.28 TB of data off of it. Disk 8 wouldn't mount and Disk 9 is completely dead. I ended up giving up on the 169 GB of data from Disk 8 but it's no biggie. Nothing critical. Thank you again and Merry Christmas, Happy belated Hanukkah, Happy Kwanzaa, Happy New Year, Happy almost end to 2020, etc 😃
  4. Yeah, no SMART for disk 9; it looks completely dead. I figured it would be bad drives but thought you all could interpret the logs and reports better than I could. Thanks for taking a look. Sorry to be a bother, but could you give me more detailed instructions for the new config and parity-resync steps? That sounds like a scary move. Once that's done I can try UD to transfer data like you suggested (see the copy-and-verify sketch after this list).
  5. Running 6.8.3
     Supermicro X9DRH-7F motherboard
     LSI 9207-8i HBA

     Longtime user who never had any problems with SATA disks except a bad cable, so I got cocky. I fell to temptation, picked up some secondhand SAS drives for cheap, and threw them in the server. I precleared (3) disks with dual cycles before using them and they all passed.

     Last week I started getting the email warnings about read errors. Then I got a notification about a disabled disk. I backed up everything vital and started poking around. Now I have one disk missing (disk 9), one disk disabled (disk 8), and another disk that is green-ball but SMART shows a predictive failure (disk 7; see the smartctl health-check sketch after this list).

     It appears to me that I just got a bunch of dying HDDs, but they're all connected to a new LSI 9207-8i and a new breakout cable that I've never used before, so the cause of this is suspicious to me. Unfortunately I don't have another cable to test with right now, BUT I did buy (2) replacement SATA drives that I'm more than happy to replace the SAS drives with. Microcenter only had two left in stock ¯\_(ツ)_/¯ I figure if I can safely recover my server then I'll play with the SAS drives separately.

     Attached here are my diagnostics from 12/22, which is from before I rebooted. It was a few days after the initial problems, so I worry the logs may not capture the original causes. Also attached are diagnostics from 12/23, which is from after I rebooted.

     Disks 1-6 are original and not an issue.
     Disk 7 has 1.28 TB of data written, which I'm not worried about losing if necessary.
     Disk 8 has 169 GB of data written, which I'm not worried about losing if necessary.
     Disk 9 had nothing written to it except (I believe) a 21 GB docker img file.

     I've spent the past two hours reading similar posts in the forum and got some leads, but I worry about proceeding without checking with the more knowledgeable community. Help me, Obi-Wan Kenobis.

     viento-diagnostics-20201222-1555.zip
     viento-diagnostics-20201223-1057.zip
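
For anyone who wants to script the "pull data off a UD-mounted disk" step mentioned in posts 3 and 4, here is a minimal copy-and-verify sketch. Everything in it is an assumption rather than something taken from the posts or diagnostics above: the source mount point /mnt/disks/old_disk7, the destination /mnt/user/recovered, and hashing both copies are just one way to do it.

```python
#!/usr/bin/env python3
# Minimal sketch: copy files off a failing disk that has been mounted
# read-only (e.g. by Unassigned Devices) and verify each copy with SHA-256.
# SRC and DST are placeholder paths -- adjust to your own mounts.
import hashlib
import shutil
from pathlib import Path

SRC = Path("/mnt/disks/old_disk7")   # assumed UD mount point of the failing disk
DST = Path("/mnt/user/recovered")    # assumed destination share

def sha256(path: Path) -> str:
    """Stream the file through SHA-256 so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

failed = []
for src_file in SRC.rglob("*"):
    if not src_file.is_file():
        continue
    dst_file = DST / src_file.relative_to(SRC)
    dst_file.parent.mkdir(parents=True, exist_ok=True)
    try:
        shutil.copy2(src_file, dst_file)
        if sha256(src_file) != sha256(dst_file):
            failed.append(src_file)
            print(f"CHECKSUM MISMATCH {src_file}")
    except OSError as err:            # read errors on a dying drive land here
        failed.append(src_file)
        print(f"SKIPPED {src_file}: {err}")

print(f"Done. {len(failed)} file(s) could not be copied or verified.")
```

Something like `rsync -c` from the UD mount to an array share accomplishes much the same thing; the script just makes the list of unreadable files explicit.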
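
Along the same lines, post 5 mentions a SMART predictive failure on disk 7. A rough way to poll overall SMART health from the command line is to loop over the suspect devices with smartctl. The sketch below assumes smartmontools is installed, uses made-up device names, and checks two strings because ATA and SAS drives report health differently.

```python
#!/usr/bin/env python3
# Rough sketch: ask smartctl for overall health on a few devices.
# Requires smartmontools and root; the device names are examples only.
import subprocess

DEVICES = ["/dev/sdg", "/dev/sdh", "/dev/sdi"]  # hypothetical disks 7-9

for dev in DEVICES:
    result = subprocess.run(
        ["smartctl", "-H", dev],
        capture_output=True, text=True,
    )
    out = result.stdout
    # ATA drives print "self-assessment test result: PASSED",
    # SAS drives print "SMART Health Status: OK".
    healthy = "PASSED" in out or "SMART Health Status: OK" in out
    print(f"{dev}: {'looks OK' if healthy else 'check this one'}")
    if not healthy:
        print(out)
```

Note that an overall PASSED doesn't rule out pending or reallocated sectors; `smartctl -a` on the same device shows the full attribute list.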