CiXel

Everything posted by CiXel

  1. Sometimes when running a scheduled parity check I don't get enough throughput from the system for other operations, and so I need to pause it. The problem is that sometimes I forget to unpause the check to allow it to continue. It'd be great if the pause function had a little drop-down next to it that allowed one to 'Pause for Y hours', with preset values of 2, 4, 8, 12, 24, or 48, or maybe an X option to set a custom value. (A rough scripted workaround is sketched after this list.)
  2. Ah, great. Thanks everyone for your insight.
  3. Good point, they were a group of metadata files that 'corrupted'. I suppose it's possible they were all updated from somewhere on the same drive between the time the checksums were made and the check was run. That would explain it.
  4. Understood. This was the first time I had seen this, though, so I wasn't sure if I had more to concern myself with. Thank you.
  5. I'm running Dynamix File Integrity and for the first time have some bitrot corruption on one of my drives. What's the protocol when this happens? Can I replace the drive with a larger one, format the removed drive to reset the sectors, and use it to upgrade another drive? Or is this a sign of drive failure that's just going to get worse, meaning I should put it out to pasture?
  6. Unfortunately it didn't work; jonnie was correct. I even tried to start in maintenance mode as a first step, thinking it would not mount any of the drives as a precaution. By the time I was able to stop it I had 7303 parity writes. I ended up popping in a replacement drive and starting from scratch with a 'new' array. I mounted the 'broken' drive outside the array, and once the initial parity is rebuilt (protecting the rest of the array) I'll copy the data back over from lost+found and run a CRC check on it against the backup data (see the checksum sketch after this list). Thanks for all your help. I really appreciate the support you provide this community when things go south. I'll open up a new thread asking Tom to make 'Trust My Parity' actually do so.
  7. Let me preface this with: Ughhhhhh. This seems to come down to an EEOC error. When reassigning drives, I accidentally swapped the parity drive with a data disk. This is why the one drive was unmountable and why no superblock could be found. Now, you may take a moment to slap me upside the head for being stupid. ...Go ahead. I'll wait... Now with that out of the way, here is my plan. Since my parity drive was not touched beyond the superblock scans, it should be OK. The rest of the array was not really written to during the shrink EXCEPT for the drive that was accidentally made the 'new parity' (disk 1), which means I just have to address that one disk. I was thinking of this:
     1) Re-initialize the array
     2) Set the old assignments
     3) Tell Unraid to 'Trust My Parity'
     4) Shut down and fail disk 1 by replacing the drive
     5) Let a rebuild occur on disk 1
     In theory this should get me back to square 1-ish, at which point I can run a CRC check on the data against the backups. Does this sound reasonable? Is there something I'm not thinking of?
  8. Booo. No dice on the secondary superblock after checking /dev/md#.
  9. Yup. I fell into the camp of converting ALL my disks over to XFS for Unraid 6. I have backups, but the hardest part is figuring out what was on this particular disk in relation to the others in order to restore it. Since my backups aren't actively online, it's harder to run a straight diff on the whole array with ViceVersa or the like (see the rsync dry-run sketch after this list). Appreciate the UPS suggestions. What I've had has been good enough for a while.
  10. I realize Reiser is antiquated, but is there any value in going back to straight ReiserFS for Unraid purposes, in your opinion?
  11. Yup. There's a UPS. I forced the bad shutdown. That said, my UPS is just a generic one. Is there one you recommend that can trigger a graceful shutdown? A UPS shutdown config sketch follows this list. (xfs_repair is still searching for a secondary superblock.)
  12. Thanks, itimpi. Perhaps that is actually my current issue with the lack of a superblock. Let me jump to maintenance mode and give it another whirl (see the maintenance-mode repair sketch after this list).
  13. Thanks, BRIT. Good to hear about BTRFS (a lot less work for me too). I'm running the commands on /dev/sd# with the array not started; since the drive is 'unmountable', it wouldn't be available on /dev/md#. Parity is already shot since I was trying to shrink the array at the time.
  14. Yes, I caused it. I'll need to replace it since I can't find a valid superblock for the drive. I figure I'll put another drive in the array, run this corrupted drive outside the array, and copy what I can off of it to the replacement drive.
  15. Good thought. I ran an xfs_repair -n to check, and that's when it replied 'Sorry, could not find valid secondary superblock' (which I found odd). I'll likely have to blow it out with an xfs_repair -L and hope I can at least easily recover the data.
  16. I had an XFS drive get corrupted on me due to power loss. Right now it 'can't read superblock on a xfs partition', so that's a fun problem. Since I'm likely going to have to replace the drive, I was wondering: is BTRFS less susceptible to disk corruption than XFS on power loss? Thoughts? (Longer version: I was trying to make the array smaller. I removed the old drives, init'ed the array, and was assigning back the drives. It was only once the array booted up and the new parity check was started that Unraid told me the drive was 'unmountable'. At that point the new parity check had wiped out the old version of the parity, and so I was left with this drive situation. Since I'll need to 'start over' per se, now would be the time to change filesystems if that's the way to go.)
  17. Well-timed post. My big problem of late has been that while upgrading the parity drive, one of the other drives fails. This means I now have a dual failure and data loss. It's happened to me twice recently. I'm pining for some form of dual parity or redundancy to protect the array while parity is being upgraded.
  18. Good call on the flash drive. I did a complete format instead of a quick format and reloaded everything. All seems to be working now. Thanks for the lead.
  19. Yeah, it's really odd. It is there, but it's like nothing is being read. There's another oddity: I've copied over my previous config and yet there's no indication of it being read. OK, whatever, I'll start from scratch. I load my Pro key; it still comes up as trial. OK, I must not have done it right, since it has to be in the config folder. So I load my Pro key from the web GUI URL, and it registers appropriately. Great. I reboot, and it comes up as trial again. I copy over my .key into flash > config and reboot. It's gone. Any file I put in there no longer exists (see the flash write-test sketch after this list). My bad. I thought this upgrade was going to be 'easy' =)
  20. Hmmm. If I run "emhttp" I can get it to show up, but it seems odd that it wouldn't be running from the start (the stock go file that launches it is sketched after this list). (Mods: I also just noticed I put this in the 5.x general support; feel free to move it.)
  21. Even on a 'fresh' new installation of 6b14 I have no web interface :\
  22. I'm stumped for the moment. I plugged the USB stick I upgraded from 5 to 6b14 into the server, and upon booting up I cannot access the web interface. SSH to the specified IP works fine. I'm going to start fresh and not copy over the config directory as a testing step, but it seems odd that the web server would not be accessible. Any ideas?
  23. Ah I thought that was a benefit of drive pools.
  24. I'm getting ready to start my upgrade to 6. I'd like to go with XFS for my file system as it seems more robust than BTRFS, but I'd also like to have dual parity drives. Is this best-of-both-worlds option possible? What's the general consensus?
  25. It looks like my current USB stick is on its last legs (my 2nd Lexar to do so). I'm ready to go from 4.7 to 5.0, but both my /flash directory as well as /boot directly on the server are showing up as blank at the moment. I don't want to take it out of the server to try it elsewhere in case it doesn't come back up, but it means I won't have my previous configs. I've got all my drive positions recorded and 5.0 on a new stick. What further considerations do I have to make before I pull the old stick and try with 5.0? Would I just boot up, make sure the drives are in the correct positions, and then start the array?
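
A rough scripted version of the 'Pause for Y hours' idea from post 1, pending a GUI feature. This is only a sketch: the exact mdcmd subcommands for pausing and resuming a running parity check vary between Unraid releases, so the two invocations below are assumptions to verify against your version before relying on them.

```bash
#!/bin/bash
# pause-check.sh -- pause a running parity check, auto-resume later.
# Usage: pause-check.sh [hours]   (defaults to 4)
# ASSUMPTION: 'mdcmd check pause' / 'mdcmd check resume' are the
# correct invocations on your Unraid version; confirm before use.

HOURS=${1:-4}

/usr/local/sbin/mdcmd check pause      # assumed pause invocation
echo "Parity check paused for ${HOURS}h"
sleep $(( HOURS * 3600 ))              # wait out the requested window
/usr/local/sbin/mdcmd check resume     # assumed resume invocation
echo "Parity check resumed"
```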
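
For the CRC check against backups mentioned in posts 6 and 7, a minimal sketch using stock tools. /mnt/backup and /mnt/disk1 are placeholder paths, and this assumes the backup is mounted locally while you verify.

```bash
#!/bin/bash
# Build a checksum manifest from the backup, then verify the restored
# disk against it. Placeholder paths -- adjust to your layout.

cd /mnt/backup || exit 1
find . -type f -exec md5sum {} + > /tmp/backup.md5

cd /mnt/disk1 || exit 1
# --quiet suppresses per-file OK lines; only mismatches and missing
# files are reported.
md5sum -c --quiet /tmp/backup.md5
```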
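
On the difficulty of diffing the whole array against offline backups (post 9): once the backup disks are temporarily mounted, rsync's dry-run checksum mode gives a "what differs" report without copying or changing anything. The mount points here are placeholders.

```bash
# -r recurse, -v list files, -n dry-run (no changes), -c compare by
# checksum rather than size/mtime. Trailing slashes compare contents.
rsync -rvnc /mnt/backup/ /mnt/user/media/
```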
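
Regarding the UPS question in post 11: Unraid 6 includes apcupsd-based UPS support, and most UPSes that report over USB (APC and many CyberPower units) can trigger a clean shutdown. A sketch of the relevant apcupsd.conf directives follows; the thresholds are examples rather than recommendations, and on Unraid these are normally set through the GUI UPS settings page rather than edited by hand.

```
# /etc/apcupsd/apcupsd.conf (excerpt) -- example thresholds only
UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 10   # shut down when charge falls to 10%
MINUTES 5         # ...or when estimated runtime drops to 5 minutes
TIMEOUT 0         # 0 = rely on the two thresholds above
```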
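
The maintenance-mode detail from posts 12 and 13 matters: running the repair against /dev/md# (array started in Maintenance Mode) keeps parity in sync, while running it against /dev/sd# bypasses parity entirely. A sketch of the usual sequence, assuming disk 1 is the affected slot:

```bash
# Start the array in Maintenance Mode from the webGui first.

# Read-only pass: report problems without touching the disk.
xfs_repair -n /dev/md1

# Actual repair, only after reviewing the -n output:
xfs_repair /dev/md1

# Last resort if the log is unrecoverable -- zeroes the log and can
# lose recently written data, which often ends up in lost+found:
xfs_repair -L /dev/md1
```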
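
For the vanishing-files symptom in post 19, a quick way to tell whether the flash device has gone read-only or is throwing I/O errors before replacing it. These are stock commands, with /boot as Unraid's flash mount point.

```bash
mount | grep -w /boot        # is the flash mounted rw or ro?
touch /boot/config/write-test && sync && ls -l /boot/config/write-test
dmesg | tail -n 30           # look for FAT or USB I/O errors
```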
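
On the web interface not starting (posts 20-22): emhttp is launched at boot by the go script on the flash drive, so a damaged or missing go file leaves you with SSH access but no webGui. The stock Unraid go file is essentially just this (any custom additions people make are omitted here):

```bash
#!/bin/bash
# /boot/config/go -- stock contents: start the Management Utility
/usr/local/sbin/emhttp &
```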