About visionmaster

  1. I tried updating to 6.9.1 two weeks ago from 6.8.3, which has been stable for years. I had the Seagate drive issue: after the drives spin down and then spin back up, I start getting read errors on two models of my Seagate drives (ST8000VN004 and ST8000VN0022). Once I reverted, no issues and stable again. I just tried upgrading to 6.9.2, and immediately the same two drives showed spin-up read errors, so I went back to stable 6.8.3. Hopefully it's fixed in the next update. Those are the only Seagate models affected on my system; my 4TB Seagates work fine. My drive controllers are Supermicro (MV88
  2. I've tried to set this up on unRAID 6.5.0 in the scheduler without success. I'd like the parity check to run four times a year automatically, at midnight once every three months. Is this possible in the settings? Thanks! -Rich
  3. OK, thanks for the info! The parity upgrade completed and everything still reports 0 errors. So when I get home from work, I will upgrade and retire disk 17, which is having read errors.
  4. Hi, I've gotten great help in the past, so I always appreciate this forum. I can upload logs if needed, but this is the scenario; I just need to know how I should proceed. I am on unRAID 6.5.0. I am planning to upgrade my parity from 4TB to 8TB so I can upgrade a couple of other HDDs to 8TB afterwards. I bought five 8TB HGST NAS drives on Black Friday that I just got around to preclearing in my other rig. I was going to recycle the old 4TB drive to my Windows machine, because I needed it there, once the parity upgrade completed, and my goal was to add a second 8TB parity,
  5. Hi, sorry for the late reply. Just saw this. It's mostly for the slow parity check, but several months ago, I had an issue with drives dropping out during a monthly check. That was the only time and since then, no more problems, just slow. I guess I will play with the tunables and wait for now. Thanks!
  6. I haven't had issues with my server in years, but recently I've had a few problems, including slow parity checks, which may be partly due to the two SAS2LP cards I currently have. I'd like to swap them out for Dell H310 cards flashed to the LSI firmware in IT mode. I looked on eBay, but since flashing might be an issue for me (I have no hardware to flash with other than my server), I'd rather save myself some time and a headache. Anyone selling? I live in Florida.
  7. According to the GUI, the parity check ended with "error code: user abort" after finding 718 errors. But I didn't stop it; it stopped itself, 9.5 hours in. I guess this happened when the cable to disk 8 and several other cables got unseated? So even though it was doing a monthly parity check with "correct errors" set to yes, did incorrect parity get written? My question at this point is: is there a way to restart the array with disk 8 in a green state and recheck the parity? If it comes back all OK, then I'll chalk it up to loose cables; if there are errors this time, then assume disk 8 is bad and rebu
  8. According to the scheduler in the unRAID GUI, it was set to write corrections. So does that mean parity is incorrect, and rebuilding from parity will place errors on the rebuilt disk? My gut says the data disks are actually good (even disk 8, which is red-balled).
  9. It took a while to unmount everything, and a bunch of errors popped up on several other disks. I went to the command console, forced the unmount, and then did a powerdown. All cables seemed fine; I double-checked and replugged everything. I powered up and everything came online OK, with disk 8 in a red-ball state but everything else fine. I checked SMART on all the drives, and they all pass and seem good: 0 reallocated sectors and 0 pending sectors on all 23 drives. The parity check was going well for nine and a half hours before the disks errored. I'm wondering if a cable or connection came loose, which cau
  10. Silly question, but should I power down and try reseating all the cables? I just looked, and everything seems to be well connected. Will I be in danger of losing any data by powering down at this point?
  11. Hi guys, I need help on how best to proceed. My monthly parity check started at midnight as usual; my system was fully up to date and has been working well without any errors. I'm on 6.2.4, and it's been up for 51 days since the last reboot. Around 9:35am this morning, I got three email notifications saying I had problems: 1) Disk 8 in error state; 2) Disk 6 - 78 errors, Disk 9 - 78 errors, Disk 11 - 78 errors, Disk 8 - 690 errors; 3) Parity check ended with 718 errors. I am waiting to power down, reboot, disable disks, rebuild disks, etc. until I get your good advice. Let me know if I forgot any
  12. I was searching around, and this seems to do what I want. (The previous workaround was to telnet into the server and type lsof.) I'm a total noob with Linux.
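For reference, the quarterly schedule asked about in post 2 (midnight, once every three months) maps onto a standard cron expression; this is a sketch, assuming a scheduler or plugin that accepts cron syntax (the exact unRAID scheduler fields vary by version):

```cron
# crontab-style entry: minute hour day-of-month month day-of-week
# Fires at 00:00 on the 1st of January, April, July, and October
0 0 1 */3 *
```

The `*/3` month field means "every third month starting from January"; `1,4,7,10` in the month field is an equivalent, more explicit spelling.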
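The lsof workaround mentioned in post 12 can also be run directly from the unRAID console or over SSH. A minimal sketch, assuming the share lives under the usual /mnt/user path (adjust to your setup; the guard simply skips systems without lsof installed):

```shell
# Show which processes are holding files open under a mount point.
# Replace /mnt/user with the share or disk that refuses to unmount.
if command -v lsof >/dev/null 2>&1; then
  lsof +D /mnt/user 2>/dev/null || true
fi
```

The `+D` option tells lsof to search the directory tree recursively, which can be slow on large shares; pointing it at a specific subdirectory narrows the search.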