
About extrobe


  1. I'd love to see partial-parity scheduling. e.g. do 25% each week, so a full scan completes every 4 weeks. Currently, a full scan takes nearly 25 hours, so I don't tend to run one unless I've had a dirty restart or something. Being able to phase it would allow users to keep parity in check.
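The arithmetic behind a phased check is simple. Here's a minimal sh sketch of the idea — note the `mdcmd check resume`/`pause` lines are only echoed, not executed, and their syntax is an assumption modelled on what the Parity Check Tuning plugin does; verify against your Unraid release before using anything like this.

```shell
# Sketch only: work out which quarter of the 4-week cycle we're in and
# how long this week's slice of a ~25h full scan should run.
phase_plan() {
  full_hours=25                                # observed full-scan time
  week=$(( ($(date +%s) / 604800) % 4 ))       # 0..3, rotates weekly
  phase_hours=$(( (full_hours + 3) / 4 ))      # ceil(25/4) = 7h per week
  echo "week $week of 4: run the check for ${phase_hours}h"
  echo "mdcmd check resume   # assumed syntax: continue from last offset"
  echo "sleep $(( phase_hours * 3600 )) && mdcmd check pause"
}
phase_plan
```

Scheduled weekly from cron, this would cover the whole array once per month in ~7-hour windows instead of one 25-hour hit.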
  2. Bummer - that does make it more challenging. Perhaps I'll add that to my rainy-day to-do list!
  3. It's taken quite some time, but I have finally finished applying disk encryption to all 20 drives using the 'shuffle' method. But I'm not sure what the best approach would be to encrypt the cache drive(s). I have a 4-disk cache configuration (4x 500GB) in btrfs. I know one of these disks needs replacing soon anyway, so will I be able to:
     - Remove one of the disks from the cache pool
     - Install the new disk
     - Format it with encryption
     - Add it to the cache pool
     - Repeat for each subsequent disk
     Or is there a better approach?
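At the CLI level, the disk-at-a-time rotation above looks roughly like the dry-run sketch below. The device paths and LUKS mapping name are placeholders, nothing is executed (the script only prints the intended btrfs/cryptsetup sequence), and on Unraid you'd normally drive each step from the GUI with the array stopped rather than running these by hand.

```shell
# Dry run: print the per-disk rotation steps; nothing is executed.
rotate_disk() {
  old=$1; new=$2; pool=$3
  name=cache_$(basename "$new")
  echo "btrfs device remove $old $pool     # migrate data off the old disk"
  echo "cryptsetup luksFormat $new         # encrypt the replacement"
  echo "cryptsetup open $new $name         # unlock -> /dev/mapper/$name"
  echo "btrfs device add /dev/mapper/$name $pool"
  echo "btrfs balance start $pool          # respread data across members"
}
rotate_disk /dev/sdx1 /dev/sdy1 /mnt/cache
```

One caveat on the approach itself: `btrfs device remove` has to migrate the departing disk's data onto the remaining members, so with 4x 500GB the pool needs enough free space on the other three disks before each removal.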
  4. Thanks BRiT - thought it was worth asking. I'm replacing both anyway, and I'll later stick one of the SMR drives back into the array - see what happens!
  5. Over the last few weeks, I've 'lost' one of my parity drives on 3 occasions (not always the same drive) - I get a 'Parity 1 Disabled' error. On each occasion I ran extended SMART reports and couldn't find any issues. I've swapped out power and data cables etc., but never found the cause, and each time I ended up re-formatting and re-adding the parity drive. The 2 parity drives are connected to different controller cards (I have 3x H200s) and different power outputs from the PSU. I do have 2 new WD Red drives on order to replace both parity drives, which are currently Seagate Archive drives (ST8000AS0002 - which I understand are SMR drives?). But it's bugging me why the parity drives keep getting kicked out. It then occurred to me that on all 3 occasions I had been running Unbalanced to shift data between disks as I go through applying XFS encryption to each of the 18 array disks. Could the combination of shifting data between disks and the less-than-ideal SMR drives being used for parity have something to do with this? Or is it simply that the law of probability means I'm more likely to encounter such an issue during this kind of activity?
  6. Thanks @jude, this did the trick for me in 6.7 stable as well
  7. Right, I think I know what I did now. Because I planned to eventually remove both disk 16 and disk 1, I removed disk 1 from the 'included' disks for each share, but also from the Global Share Settings - this seems to be what triggered the data on disk 1 to be treated differently. In theory, fixing it was just a matter of re-adding disk 1, but it was made trickier by half my dockers hanging because the data in the shares they referenced was not there, and I had to force a hard reset. Gah.
  8. I'm currently in the process of decommissioning one of my disks by following the Shrink Array process. Disk 16 is to be removed: all data shifted via Unbalanced, disk formatted, and the clear script is now running. All is good. But then I saw Radarr going nuts, telling me everything was missing. And indeed, when I go into my shares, I'm missing a heck of a lot of files - not looking good! I then realised that under Shares, disk1 is listed as a disk share - and when I go to the root folder of my server, there's a disk1 folder share sat alongside the other shares. Now, I did remove disk1 from the disks to be used for all shares, as I plan to decommission that one at some point too - this might have something to do with it. Any idea what's going on, and the cleanest way to sort it out? Can I just use Krusader or something to rehome the files directly to an appropriate disk?
  9. I asked pretty much the same question on Reddit the other day. One of the devs responded that they're working on a change to allow you to force a 'move' when you're mapping a remote path - for this specific reason.
  10. Yes, that's certainly a possibility. I think I'm going to order another anyway (they're not super expensive) and then do some more testing. Now, I did have all three of these cards in consecutive slots - and the one I'm having problems with is the one sandwiched in the middle. These things can get pretty hot, so that could be a contributing factor as well.
  11. I get the same error - 6.7 is the next major release (with a new dashboard etc) that's currently being tested.
  12. After applying the latest update and rebooting, Unraid reported 4 disks missing. All were attached to the same breakout cable on the same controller (3x Dell PERC H200s). Usual troubleshooting - shut down, check connections, restart. 8 disks now missing - arghh - but it's all disks on that same controller. By now I'm looking on eBay to see how quickly I can get a replacement, but I tried moving the controller to a different slot - and boom - it all worked fine. How likely is it that this was in fact a MoBo slot issue, and how likely is it that this is a sign I need to replace the RAID card?
  13. I also migrated to SickChill - very simple. I'd previously rolled back to v2018.10.06-1-01. Then backup --> create SickChill docker --> restore. Done. Thanks for the heads up on SR.
  14. Hi - having a bit of trouble with the extended test. It's been 'running' for the best part of 3 days now, but I don't think it's actually doing anything. On the plugin tab it says it's processing a specific share (Status: Processing /mnt/user/wAppDataBackup), but it's been like that for most of the time it has been running. There's no disk activity, so I don't think it's actually doing anything, but I'm unsure how to check, how to force-stop it, or whether this is actually normal.
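One cheap way to answer the "is it actually doing anything" question is to sample `/proc/diskstats` twice and see whether the sectors-read counter moves. A sketch (on Linux, field 6 of `/proc/diskstats` is sectors read; pass whatever name your disk shows up as, e.g. `sdb`):

```shell
# Print sectors read so far for a block device (field 6 of /proc/diskstats).
disk_reads() {
  awk -v d="$1" '$3 == d { print $6 }' /proc/diskstats
}

# Sample twice, 2s apart; if the counter hasn't moved, nothing is reading.
disk_active() {
  a=$(disk_reads "$1"); sleep 2; b=$(disk_reads "$1")
  [ "${b:-0}" -gt "${a:-0}" ] && echo active || echo idle
}
```

Usage: `disk_active sdb` prints `active` or `idle`. If every array disk reports idle while the plugin still claims to be processing, the test is almost certainly hung.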
  15. Yes - I have 20 bays in total, and I've had the disk in several of those bays, which covers different SATA connections to the PSU and different RAID controllers. I think it must have been some sort of corrupted filesystem, but that corruption appeared to be replicated whenever the disk was rebuilt - and about half way through actually shifting the data to different disks, it 'gave up'. I was eventually able to mount the drive in my Ubuntu VM and recover about 200GB - I reckon there was closer to 1TB of data, so unless I can figure out how to recover it from parity, it looks like I've lost it. The important stuff is backed up, so it will just be media. Just got to figure out how to get out of this 'loop' I seem to be in. Edit: There's a huge Lost & Found folder, so it looks like I'll be able to recover most of the data. So I just need to work out how to sort out the unmountable drive. Think I have a plan though - fresh drive, rebuild the drive, then remove it from the array and rebuild parity without it; re-format the disk, add it again, then copy files across from the repaired drive.
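For the final "copy files across" step of a plan like that, a no-clobber copy is a safe default: anything already salvaged at the destination is never overwritten by a second pass from the repaired drive. A sketch with placeholder paths (`cp -n` is the GNU/BSD no-clobber flag):

```shell
# Merge recovered files into their new home without overwriting anything
# that already exists at the destination (-a preserves attrs, -n no-clobber).
recover_copy() {
  src=$1; dst=$2
  mkdir -p "$dst"
  cp -an "$src"/. "$dst"/
}
```

So something like `recover_copy /mnt/disks/repaired/lost+found /mnt/user/Media` can be re-run after each recovery attempt without trashing earlier results.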