XanXic

Members · 8 posts

  1. Yeah, they were both corrective. I was going to post the diags, but the server had restarted since then; I have it set up to restart once a week in the dead of night. So I ran ANOTHER check lol, and this time it found none. So I'd have to guess the corrective check plus a restart fixed the issue.
  2. So I ran a corrective parity check: 58 million errors. Then I ran another check to see if it had solved them, and it's still 58 million errors. Parity is still reported as valid, but the corrective check didn't seem to fix the errors it's seeing.
  3. Alright, will do. I just wanted one other person to say it's okay before I went for it lol. I will know in... 21 hours... lol
  4. Yeah, that was the exact procedure I followed. Is there something I should do, or should I just expect the errors from now on?
  5. Sort of like the title says: after my server ran a scheduled parity check, it reported back over 61 million errors. This was a non-corrective check; I ran another one to be sure and got the same report, 61+ million. But the drives themselves are showing zero errors, and the parity is reported as valid.

     The thing is, these are the first parity checks after installing a new drive. I did the Unraid parity swap: I replaced a bad 8TB drive with a 12TB one (all drives in the array were 8TB at that point) and added a second 12TB drive, so my parity moved to the new higher-capacity drive and the other was added to the array. Parity rebuilt over almost two days, and I'm like 90% sure I ran a parity check after just to make sure it was all cool, which it was, and it has been fine since.

     So my suspicion, because 61 million is a lot, is that these errors are related to a 'missing' drive? I'm not sure, and I'm nervous to run a corrective check because idk what that'll do. Despite the errors it's still reporting the parity as valid. All drives have clean bills of health from SMART tests and, again, show zero errors on them. I tried searching around but couldn't find a similar issue, and I can't find a way to actually read the parity report (see the first sketch after this list).
  6. Is this still the case? I'm using PIA and having connection, IP-setting, and slow network issues, but only over the last three days or so, and only with my DelugeVPN and SABNZBVPN dockers. Running it as an actual VPN has no issues. I've been pulling my hair out trying to get it functional again, and I have the latest OpenVPN files and everything. Has it really just been a bad week for PIA?
  7. Alright, it was the 'destructive' label on the repair command that implied to me it would delete everything. Already got unencrypted backups of appdata going! Thanks, I'll probably do a reformat soon just to be safe.
  8. I'm not super proficient with Unraid/Linux, but I'm fairly computer literate. I tried doing too much at once, hard-locked Unraid, and had to do a hard shutdown. When it came back, the cache drive said unmounted, file not found or something. After googling a lot I came across the steps here and was able to mount it as readable, then copied my appdata folder off to another disk because I desperately didn't want to have to rebuild all my docker apps. I restarted just to be safe, got confused by the instructions, and ran "btrfs check --repair /dev/sdX1" (listed last in the instructions) on my cache drive. After it started running it seemed odd; I looked over the instructions again and apparently I wasn't supposed to run that? I let it finish because, why not?

     Afterwards I restarted the array to get to the disk with the appdata I'd copied, and the SSD cache remounted, all the files were there, and my Unraid was totally fine. My Plex database was corrupted, but I restored it from a DB backup on the cache; I lost two days of changes but overall it's fine. It's been perfectly fine for the last few hours. I've been working on getting appdata backups set up so that if this happens again I can just reformat, remount, and copy back onto the cache.

     But for now, I clearly didn't understand the --repair command or the instructions. My understanding was that it should've deleted all the files, yet my SSD seems totally fine. I've run a SMART test and Unraid says it's a-okay in the summary, but if I'm being honest I don't totally understand how to read the results. By everything I can think to check it all seems good... idk if I need to do anything else, or whether it can corrupt again since I just did a repair (some follow-up checks are sketched after this list). Since it's all working, it'd be easy to copy appdata off, reformat, and put it back, but idk if that's overkill? I honestly feel like I learned a lot through this process (like that encrypted cloud backups are useless when rclone is inaccessible), but I feel like I clumsily lucked through it.
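
On the parity-report question in post 5: here is a minimal sketch of where Unraid normally records parity-check results. It assumes a typical install; both log paths are assumptions to verify on the actual system.

    # History of past parity checks, one line per run (date, duration, speed, error count)
    cat /boot/config/parity-checks.log

    # What was logged during the most recent check (sync errors usually show up here)
    grep -iE "parity|sync|recovery" /var/log/syslog

Note that /var/log lives in RAM on Unraid, so the syslog half only covers the current boot unless mirroring to flash or a remote syslog server is enabled.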
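
For post 8, a sketch of some non-destructive sanity checks on a btrfs cache pool after a --repair, assuming the stock Unraid mount point /mnt/cache and reusing the /dev/sdX1 placeholder from the post (substitute the real device):

    # Cumulative per-device error counters (read/write/flush/corruption/generation)
    btrfs device stats /mnt/cache

    # Re-verify every data and metadata checksum on the mounted pool; -B waits and prints a summary
    btrfs scrub start -B /mnt/cache
    btrfs scrub status /mnt/cache    # progress/result if started without -B

    # Read-only filesystem check; run against the unmounted device (e.g. with the array stopped)
    btrfs check --readonly /dev/sdX1

If the scrub and the read-only check both come back clean, that is decent evidence the repair didn't leave anything broken behind; copying appdata off and reformatting remains the more conservative option, just not obviously required.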