Michael_P

Everything posted by Michael_P

  1. FWIW: I haven't had any luck with those Toshiba drives in my Norco either - they drop like dominoes - and the WD Greens I would replace at around the same number of hours you have on 'em. I've also had one of those WD 8TB drives die a crib death on me, so it's not that unusual. Unlucky, sure. I've felt your pain, brother.
  2. Yes, I replaced all my Greens with the Toshiba N300s and X300s, and one random MD04ACA500 (which was the latest casualty). 12 in all, over a period of 3 years. Only 4 remain in service. The good thing, so far, is that they fail gracefully. I think I've only had 1 or 2 files unable to move from them due to errors. WD is usually good for me too, in that regard. But I've NEVER had a Seagate fail gracefully - it just stops working or is so unreadable that the files take too long to move off the drive, which was normally how I discovered a problem: files no longer accessible. And it only triggers SMART warnings when it happens, so no advance notice (I am partially to blame too, by not running enough scans proactively; see the SMART polling sketch after this list). The biggest issue with the Toshiba drives, besides the time wasted, is that their RMA process is too slow. I have to ship the drives back to them, at my expense, and then they issue a gift card in compensation two months later. So in the meantime, I have to buy another drive to replace it. 7 so far... Gets expensive.
  3. Yes, thanks - it completed the rebuild of the Seagate disk. I purchased yet another WD 8TB drive (I'm getting pretty adept at shucking these things) and began a rebuild of both suspect drives - the red-balled 5TB and the WD 8TB with the SMART error. Hopefully it will complete in the next few hours. I am beginning to suspect that these Toshiba drives may be more susceptible to vibration than the others; being in a storage shelf probably isn't within their capability. The failure pattern has been the same for them:
     - Initial parity build, and 1 drive was showing millions of read errors
     - Manually move data off and remove drive from array
     - Attempt to build parity again - another drive from the same serial # series (I bought 4 at once) showing millions of read errors
     - Purchased 2 more WD 8TB drives
     - Manually move data off and remove drive from the array
     - Run pre-clear on both drives that were showing millions of read errors - both passed, no SMART errors
     - Move 1 drive to my WHS VM, the other removed as a cold spare
     - Update firmware for the controllers, probably what was causing the read errors
     - Parity build completes, parity checks scheduled for the first of every month
     - 2 weeks later, scheduled parity check starts - parity drive shows read errors (1 month old Toshiba)
     - Purchase ANOTHER 8TB WD to take its place as parity
     - Preclear checks the 8TB Toshiba, passes
     - Parity rebuild begins, Toshiba drive assigned to VM starts to show pending sectors - first 8, then 16, then rapidly up to 72. Drive is removed, cold spare takes its place
     - RMA process started on this drive
     - Parity build begins again, second Toshiba drive begins to show pending sectors, 16 and then 32. Drive is removed and the 8TB Toshiba takes its place. Drive is added to RMA
     - Parity build completes
     - Days, maybe a week later, another Toshiba drive from a different batch shows pending sectors
     - Purchase another 8TB WD
     - Drive is removed and added to the other RMA - all 3 are returned
     - Rebuild of parity completes
     - Purchased another 8TB WD to add as second parity
     - 8TB Toshiba in VM begins to report pending sectors (just outside of return window, rats)
     - Begin RMA process
     - At some point, another Toshiba 5TB drive fell out of the array and was added to the WHS VM - I don't remember exactly when
     - Scheduled parity check begins; I just happen to be next to the server as it begins to run and hear an intermittent drive noise - if you've heard a Seagate begin to fail then you know what I mean, almost like a scratching record. So I KNOW that a drive is going to fail
     - Seagate drive falls out of the array
     - A short time later, the Toshiba drive attached to the VM begins reporting pending sectors
     - Purchase another 8TB WD, screw up adding it to the array, and wipe one attached to my VM for local machine backups. Add the correct one to the array, rebuild of the Seagate drive begins
     - At some point during the rebuild, another Toshiba drive falls out of the array - rebuild continues
     - During the rebuild, one of the 8TB WDs begins to show pending sectors, 16, but the count does not increase and the rebuild completes successfully (?)
     - Purchase ANOTHER 8TB WD, pull both the dropped Toshiba and the 8TB WD with the pending sectors, add in the 8TB WD I wiped by mistake plus the newly purchased drive, and begin rebuild
     Also, somewhere in the middle there I purchased 2 9207s to increase bandwidth. The Seagate was more or less expected - 3 of its brothers met the same fate, with data loss each time. That's a big part of my decision to move to unRAID; re-ripping movies is a PITA.
     All of my major data (pictures and documents and such) is backed up offsite, so this is more frustrating than anything. All of the Toshibas followed the same pattern: read errors, followed shortly after by SMART failures. They all experienced very light duty over their lifetime, mostly sitting idle - that is, until getting stressed by the parity checks. It's certainly possible that the parity checks revealed latent defects in marginal drives, but to have so many go down in a short period of time certainly raises my eyebrow. So, my theory is vibration, which, when all drives in close proximity are being accessed during a parity check, is simply too much for these Toshibas to bear. Other pertinent, or not, information: these were almost all their "High Reliability" NAS drives, with the exception of 2 desktop performance drives I bought by mistake. I do have one question: why in the world would they begin to show pending sectors while running a parity check if they were attached to the VM and not part of the pool? My WHS VM is very light duty, only daily PC backups for my Windows machines. Strange. Thoughts?
  4. Hi! The case and power supply are noted above. I started in a regular PC case with external enclosures, but moved to the Norco after having problems completing the initial parity build. All cabling now is mini-SAS, and I've replaced them all too. And the controllers. The SMART errors have all been pending or uncorrectable sectors. I have so many WDs because the Toshibas have all gone to sh@t.
  5. Nobody knows if it's OK to let it continue to rebuild? Bueller?
  6. And now disk 9 is showing 8 pending sectors.... WD Red less than a month old.
  7. Hello, I've been toying with unRAID since April as a means to reduce some of the anxiety that came with storing all my data on my old WHS setup (all independent drives) - every time a drive failed, it took FOREVER (exaggerating only slightly) to restore from backups. However, it seems I can't run a parity check without one of my drives dropping offline or one of my unassigned devices throwing SMART errors. It runs the check, and so far, every time there's another drive with read errors.
     Since April I've RMA'd 7 Toshiba drives - all in the 20k hour range, except 1 with less than 1000 - which is kind of a pain, since they issue a Visa "gift" card in exchange, for the full original value of the drive (which is nice), but it takes 2 months to receive. Each of them was part of the array, and during a parity check began to have read errors and was dropped out of the array. At that point, not showing any SMART errors, I just assigned them backup duty to my WHS VM, until the next parity check... Then during the parity check (which they are not a part of, as they're unassigned devices) they begin showing pending and reallocated sectors. So, I pop them out and RMA them. Every time I've run the parity check, this happens.
     This time, the parity check started as scheduled, and after a while one of my 4TB Seagates dropped out of the array with 770 read errors. It's a couple years old, and my other 4 Seagate drives of the same vintage are all dead too, so I didn't think it was too odd. So I popped over to Best Buy, bought an 8TB MyBook, shucked it (WD Red score!), and installed it in the Seagate's spot. In my haste, I did not wait for the drive to initialize and mistakenly assigned one of the unassigned 8TB drives to the array, blowing away a couple TBs of backups. Whoops. Annoying, but they're only local machine backups so no real loss. During the rebuild, another Toshiba drive dropped out of the array with 768 errors. This one is a few years old, but only around 10k hours (*edit - 25k hours, I was looking at the wrong drive).
     I have dual parity, and the rebuild is continuing - my question is should I let it continue? Will it fully rebuild the first drive, allowing me to then replace the second? Why is this such a pain - do I have unusually bad luck? Are Toshiba drives really THIS bad (unfortunately I still have 5 in the array)? I will say that the drives that have "failed" all allowed me to recover most all files from them, save for 1 here and there.
     Here's my setup:
     - Gigabyte Z370XP mobo
     - Intel i7-8700
     - G.Skill Ripjaws 32GB DDR4 (8GB*4)
     - Thermaltake Toughpower Grand RGB 850W 80+ Gold
     - 2x LSI 9207-8i (previously used 2x M1015 in IT mode, same problems)
     - 1x Intel RES2SV240 (dual linked to one 9207 above)
     - Norco 4224 (latest revision)
     I'll note that after each of these mishaps, I've checked and re-checked the cables, replaced them, and replaced the controllers. I don't think it's a backplane, as the drives causing issues have been in different slots on different backplanes, and it's a new slot each time. I've attached the diagnostics from the first parity fail and the second; I hope someone can help me shed some light on this. I'm spending more time watching parity rebuilds than watching the movies stored on it... Thanks!
     urserver-diagnostics-20180630-1934.zip
     urserver-diagnostics-20180701-0808.zip
  8. It is an odd issue, for sure. It's running fine now in the new folder, so thank you for your help!
  9. Using appdata does not work, but using my new "app" folder does. There must be a permissions problem with my appdata folder. I'm happy to let it run in the new folder since it's working there, but if you know which permissions should be assigned to appdata manually, I would appreciate the knowledge (see the permissions sketch after this list).
  10. As far as I know. One of my troubleshooting steps was to delete and re-create the appdata share
  11. OK, following your lead - I tried to re-create the folder in appdata a few times, without success - service wouldn't restart. SO, I created a new share to stuff the container into, and it works - so something is wrong with my appdata folder.
  12. OK, I've done that - added a new device, stopped and started the container, and it restarts properly
  13. No disk errors noted (just a random "IRQ 16: nobody cared" that I haven't gotten around to addressing) - I have Fix Common Problems installed, and it only lists the IRQ 16 problem
  14. Looks like you have found the solution! I removed the config directory, applied the setting which automatically restarted the container - I logged in, created a new device, set the destination and restarted the container - the service started normally. Hooray! I will now try to take over the old device or just re-upload the data and report back the results. Thank you very much for your assistance!
  15. I'm running version 6.5.1 and installing via the Community Applications plugin. There is no crashplan folder in /usr/local - only: bin/ emhttp/ etc/ lib/ lib64/ sbin/ src/
  16. This error keeps showing up in the logs, probably why it's failing (see the mount check sketched after this list):
      2018/05/23-12:31:05.285880 14e5665f8700 Level-0 table #5: 247 bytes IO error: /usr/local/crashplan/conf/udb/000005.ldb: No such device
      SEVERE: Service UniversalStorageService [FAILED] has failed in the STARTING state.
      java.io.IOException: IO error: /usr/local/crashplan/conf/udb/000005.ldb: No such device
  17. Yes, I set up the "new" device but did not add any files to the backup set - then I restart the container and the service fails to start until appdata is deleted again. Tried a new account, same issue
  18. Yes, tried that too - if I stop the container, it will not start the service again; I have to delete appdata to get it to run again. Similar to francrouge's problem, and my logs show the same error:
  19. Yes, I even tried creating it directly on the array instead of the cache drive
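
A note on the proactive scans mentioned in post 2: below is a minimal sketch of one way to poll the SMART counters that kept preceding these failures (pending and reallocated sectors) so a marginal drive surfaces before a parity check finds it. The drive list is a placeholder and the parsing assumes smartctl's usual attribute-table layout, so treat it as a starting point rather than a finished tool. Run from cron (or something like the User Scripts plugin) it would give the advance notice the stock SMART warnings were not providing.

import subprocess

# Placeholder drive list - substitute the devices actually in the server.
DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]

# The two attributes that preceded most of the failures described above.
WATCHED = ("Current_Pending_Sector", "Reallocated_Sector_Ct")

def raw_value(drive, attribute):
    """Return the raw value of a SMART attribute, or None if it isn't reported."""
    out = subprocess.run(["smartctl", "-A", drive],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if attribute in line:
            # RAW_VALUE is the tenth column of smartctl's attribute table.
            return int(line.split()[9])
    return None

for drive in DRIVES:
    for attr in WATCHED:
        value = raw_value(drive, attr)
        if value:  # any non-zero count is worth a closer look
            print(f"{drive}: {attr} = {value}")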
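
On the appdata permissions question in posts 9-11: a rough sketch of resetting ownership and modes on the suspect folder, using roughly what unRAID's New Permissions tool applies (nobody:users, i.e. uid 99 / gid 100, with 0777 directories and 0666 files). The path is an assumption - point it at the actual CrashPlan appdata folder - and it has to run as root.

import os

# Assumed path to the share in question - adjust to the real location.
APPDATA = "/mnt/user/appdata/crashplan"

# unRAID's 'nobody:users' is uid 99 / gid 100; 0777 dirs and 0666 files
# mirror what the built-in New Permissions tool sets.
UID, GID = 99, 100

for root, dirs, files in os.walk(APPDATA):
    for name in dirs:
        path = os.path.join(root, name)
        os.chown(path, UID, GID)
        os.chmod(path, 0o777)
    for name in files:
        path = os.path.join(root, name)
        os.chown(path, UID, GID)
        os.chmod(path, 0o666)

# Don't forget the top-level directory itself.
os.chown(APPDATA, UID, GID)
os.chmod(APPDATA, 0o777)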
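
And on the "No such device" error quoted in post 16: that path lives inside the container, so before blaming CrashPlan itself it is worth confirming the volume mapping behind it is present and writable. A small check along those lines - the path is taken from the log above, and it would be run inside the container (for example via docker exec):

import os
import tempfile

# Path from the error in post 16; inside the container it should be backed
# by the appdata share on the host via the container's volume mapping.
CONF_DIR = "/usr/local/crashplan/conf/udb"

if not os.path.isdir(CONF_DIR):
    print("directory is missing - the volume mapping is probably broken")
else:
    try:
        # Create and remove a scratch file to confirm the mount accepts writes.
        with tempfile.NamedTemporaryFile(dir=CONF_DIR):
            pass
        print("directory exists and is writable")
    except OSError as err:
        print(f"directory exists but writes fail: {err}")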