whitewlf

Everything posted by whitewlf

  1. I recently got a nearly new, corporate-surplus Supermicro server decked out with some lovely hardware, and I am moving my Unraid onto it. It came with an Adaptec ASR-81605Z with battery backup, feeding the chassis's 12 hot-swap SATA bays via SAS-to-SATA cables. Does anyone know if this card has any issues with Unraid, aside from needing to be switched to HBA/single-disk mode? E.g., does it need special firmware to expose the drives properly? The motherboard is a Supermicro X10SRH-CLN4F (https://www.supermicro.com/en/products/motherboard/X10SRH-CLN4F).
  2. I have a complicated question, but I hope there is a simple answer. I have an array of 8x8TB drives, dual parity (6+2), with an SSD cache. When I had 7 drives, one of them went out of sync (cable/power bump) and I was low on space, so I bought an 8th. I enabled encryption on the new drive; none of the others are encrypted. I added it to the array just fine and then used unBALANCE to move the data off the emulated drive (in case I messed up the rebuild/re-add). I tested (heavily) and precleared the out-of-sync drive to be safe, and was going to re-add it as a new, encrypted drive. (I had not gotten around to it and was unsure how best to do so; that procrastination likely saved me from more data loss in this case.)

     The problem came today when I had to shut down the array. Like an idjit, of course, I forgot the passphrase. I have methodically tried over 170 passphrases in variations I would/should have used. I will try some more tomorrow before calling it a loss, but I wanted to ask the best way to minimize the damage if it doesn't work. There is only about 3-4TB of data on the encrypted drive, and I think I can replace most of it easily.

     What is the best way to bring this back online, encryption off/lost, with the 4x8TB of existing, unencrypted data, then add the two "new" drives, then the dual parity? I am assuming a New Config: no parity drives, no encrypted drives, just the 4x8TB as a fresh array (with data); then add the two drives as encrypted, stop, add the 2 parity drives, and let it calculate parity all at once. (Preserve the cache and data slots, or go near-blank-slate new?) I am aware I will lose the data on the encrypted drive, and the emulated (empty) drive should lose nothing. I just do not want to lose the other 4x8TB (about 28TB of data). I am fairly sure this can be done.

     And yes, I will be using a keyfile with a backup after this (there's a keyfile sketch after this list); my memory is simply not sharp enough anymore. The purpose of the encryption is simply warranty-snooping protection, so I do not need to keep my tinfoil on that tight in this case. I will also assume that the parity drives are holding the encrypted raw bits/sums, so the parity would be of zero help at the moment.

     Note on the system: I was about to move the entire Unraid build into a new PC, as it has been in tinker-lab mode for a long time now. This is likely a good opportunity; I am just hesitant to make too many changes at once. It is currently (yes, you can laugh, but it works) a low-profile i5/8GB PC with a SATA SSD, 2x 4-port USB 3 PCIe cards, and 8x 8TB external USB 3 drives. They are a little noisy in dual parity and poor for heat, and the USB cables plus 8 power adapters are easy to glitch/unsync, but it is only 50-60 watts in use (all drives included). I was about to shuck them and put them internal to my old gaming rig, stripped for server use (hex-core/16GB, full tower, direct SATA, dual SSD cache). To make the parity operation smoother/faster, I think the PC swap would be a bonus before adding the parity, at least. The i5/USB setup has also been getting I/O-loaded recently, especially with large amounts of NZB files.

     After it is stable, I'll need to follow the remove-a-drive, replace-with-encrypted-drive, one-by-one dance. I'm fairly sure each drive I do that to will be yet another fun parity rebuild too. A dual parity check on the current system was about 26 hours (rough math in the first sketch after this list). Moral/lesson of the story: encrypt first, and don't lose your flipping passcode.
  3. Quick update: everything does seem OK, no errors thus far. The array rebuilt, the new drives cleared and were added in, and now it's syncing up the second parity, which will take quite a while. Since that needs to read 100% of all the drives, if there are any exposable problems it should trip them. 27.8% done, 40TB usable, 17.7TB free.
  4. Letting it add these two new drives in first; I will do more checks when it finishes and run a parity check. Is it best to add the second drive first? Otherwise it will likely be 3 days + 3 days to do it twice. I am still fairly wary of Disk 2. If it shows more than a small bit of error, it's likely best to warranty it out; it's only 9-10 months old. Thanks for all the help, I'll let you know how it ends up. Or blows up.
  5. Already did the new config with the "parity is valid" approach, then checked disks 2 & 3 (btrfs check --repair), ran short SMART tests on all disks, stopped the array, and added two more drives. It's clearing them overnight now. Will add the second parity when/if that finishes. I won't really know if any data is damaged until I run across it. No errors reported yet. selene-diagnostics-20180611-2344.zip
  6. As this array is simply a personal, cost-effective mass-storage solution, I chose the least expensive drives that are readily and easily available. My local Fry's has them for $159 or less on sale; they do not seem to stock the WD 8TB often. I am still hoping someone who knows more about force-enabling the out-of-sync, offlined drive can advise on the above.
  7. On second glance, the drives that were preclearing have spun down and stopped doing anything. They were writing at ~98-110 MB/s last night, and the counter is still stuck at 39% progress. I am thinking something hung up.
  8. I could not see a direct way to re-enable Disk 3. The two new devices are still clearing and taking an awfully long time; it has been about 6-7 hours and they are only 40% done. I found a mention of enabling an out-of-parity disk in Unraid 6, which said to drop the parity disk from the array, add it back, and click "Accept parity". Would this be the process to use to re-enable Disk 3?

     Also, I am not sure exactly what the preclear process does. Should I wait for it to complete, or just stop the preclear now, stop the array, drop the parity, re-add it, etc.? I am just trying not to make a misstep that makes this more difficult or loses more data.

     On that note, does anyone have an idea why Disk 2 would go from seemingly fine to spewing errors after nothing more than an unclean shutdown? My only guess is that the two-hour downtime cooled the drive and the thermal fluctuation exposed a flaw. It was the first time any of the drives had been shut down for longer than a couple of minutes since install, and even then only a couple of times.
  9. I have/had a 5-drive system (4x8TB with parity, 1x SSD cache). While it is on a UPS, it shares it with my PC, and the PC had the USB line to the UPS (swapped/fixed that now). This morning we had a two-hour power outage; I am unsure if there was any flicker. All drives and both the PC and the Unraid box are on the UPS. When the Unraid box spun back up, it had about 1000 read errors on Disk 2, and Disk 3 was disabled. I had just bought a new drive Friday to add space to the array but hadn't put it in yet. I ran out and bought 2 more drives today, and all three are in the unit now: 2 preclearing, 1 waiting to become a second parity once that finishes. (Can't do it all at once.)

     Disk 2 is now throwing tons of read errors and took SMB offline. It passed a btrfs check and a SMART test before I added the new drives and, until this morning's unclean shutdown, had not shown any errors at all. Disk 3 passed long and short SMART tests and a btrfs check. The data on it appears to be intact as well, but due to what is likely a tiny parity mismatch, it's not in the array. With Disk 2 acting this squirrely, yet not yet kicked from the array, I doubt I will get very far salvaging the data from it, and it will also leave Disk 3's emulated data broken. While I am not sure all the data on Disk 3 is 100%, I really don't care if a little is lost; I can likely replace it. But replacing 7TB + 7TB (they were both nearly full) is entirely another pain in the ass. These are media files, most are huge, and they can possibly survive a little damage.

     What is the best suggestion for how to proceed once the 2 new data drives enter the array? I'm holding off on the parity for now, and I am wondering whether I should use that drive as a direct copy target for Disk 3's data if things go any more pear-shaped. Is there any way to force Disk 3 back to active, accepting that a bit of the parity might be bogus? This would allow me to remove Disk 2 and rebuild onto the new drives, given that real read errors outweigh a likely touch of parity misalignment.

     Note: all the data drives are Seagate Expansion drives, identical models inside and out afaict, shingled magnetic recording (SMR) design. They are currently all individual USB 3 units on 2x 4-port low-profile PCIe USB 3 cards (until a new server is built and the drives shucked). They are all under a year old. I plan on a second cache drive as soon as this data is fixed as well (2 parity, 5-6 data, 2 cache SSDs, depending on Seagate's warranty for the erroring drive). I would have loved to do IronWolfs, but they are 2.5x the cost, and I don't have a tower server ready for that at the moment.

     This server is (hilariously, and impressively) running on a low-profile i5 ex-office PC. The USB connections are obviously less than ideal, but they have been running for nearly a year with only minor slowness (heavy use of SABnzbd, Plex, Samba, parity, etc. likely causes high I/O load, and large file deletes are slow, but it's been adequate for simple Plex/SAB/Sick/CP duty).

     On a side question: is there any way, in the future, to pre-empty an array data drive, i.e., have it spread its contents to the other devices before being removed/replaced for old age or if it starts tossing errors (versus the cold-turkey pull-it-and-rebuild method)? See the move sketch after this list. I'm assuming this isn't a common method, perhaps due to the similar amount of reads/writes involved compared to the eventual rebuild anyway, but I thought I should ask. selene-diagnostics-20180611-0009.zip
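
The capacity and parity-check-time figures quoted above (40TB usable in post 3, the ~26-hour dual parity check in post 2) are easy to sanity-check with rough arithmetic. A minimal sketch, assuming 8TB drives as in the posts and a sustained read speed of ~85 MB/s, which is an assumed average rather than a measured value:

    # Rough sanity checks for the usable-capacity and parity-check figures above.
    # The 8 TB drive size comes from the posts; ~85 MB/s sustained throughput is
    # an assumed average, not a measured one.

    def usable_capacity_tb(total_drives: int, parity_drives: int, drive_tb: float) -> float:
        """Unraid usable space is just (number of data drives) x (drive size)."""
        return (total_drives - parity_drives) * drive_tb

    def parity_check_hours(drive_tb: float, throughput_mb_s: float) -> float:
        """A parity check reads every drive end to end, so one full-surface read
        of the largest drive is the lower bound on its duration."""
        seconds = (drive_tb * 1e12) / (throughput_mb_s * 1e6)
        return seconds / 3600

    # 5 data + 2 parity drives of 8 TB -> 40 TB usable, matching post 3.
    print(usable_capacity_tb(total_drives=7, parity_drives=2, drive_tb=8.0))

    # One 8 TB surface at ~85 MB/s -> ~26 hours, matching the check time in post 2.
    print(round(parity_check_hours(drive_tb=8.0, throughput_mb_s=85.0), 1))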
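
Post 2's plan to switch to a keyfile is straightforward; the keyfile is just a small file of random bytes that gets backed up somewhere off the server. A minimal sketch with a hypothetical path and size (Unraid's own keyfile handling and the cryptsetup side are not shown):

    # Minimal sketch: generate a random keyfile for LUKS instead of relying on a
    # remembered passphrase. The path and size here are hypothetical examples.
    import os

    KEYFILE = "/root/keyfile"   # hypothetical location; copy it to offline backup media
    SIZE_BYTES = 4096           # a few KiB of random data is plenty

    with open(KEYFILE, "wb") as f:
        f.write(os.urandom(SIZE_BYTES))

    os.chmod(KEYFILE, 0o600)    # readable by root only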
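
On post 9's side question about "pre-emptying" a data drive: what unBALANCE (mentioned in post 2) automates is essentially moving files between /mnt/diskN mount points so one disk ends up empty before it is pulled. A bare-bones sketch of that idea, with hypothetical source and target disks; it ignores Unraid's share allocation/split-level rules and does not check free space on the target:

    # Bare-bones sketch of emptying one array disk onto another by moving files
    # between their /mnt/diskN mount points (the idea behind unBALANCE).
    # SRC and DST are hypothetical; no free-space or share-allocation checks,
    # and empty directories are left behind on SRC.
    import os
    import shutil

    SRC = "/mnt/disk3"   # disk being emptied (hypothetical)
    DST = "/mnt/disk5"   # disk receiving the data (hypothetical)

    for root, _dirs, files in os.walk(SRC):
        for name in files:
            src_path = os.path.join(root, name)
            rel_path = os.path.relpath(src_path, SRC)
            dst_path = os.path.join(DST, rel_path)
            os.makedirs(os.path.dirname(dst_path), exist_ok=True)
            shutil.move(src_path, dst_path)   # keeps the same share-relative path

In practice unBALANCE (rsync underneath) with a dry run first is the safer route, and parity is updated as the files move, so the total I/O ends up in the same ballpark as a rebuild, as the post suspects.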