Everything posted by wickedathletes

  1. Running it now. How do you know the one that ran on 1/13 did not have write corrections turned on? My scheduler is set to run on the first day of the month, and write corrections is on. So the 1/13 run must have come from a crash; do those not write corrections by default?
  2. I did pull my disk 4 some time around the 13th. Maybe it's seated badly? I plan to replace it because it's causing a bug in Unraid, so rather than wait for a fix I figured I would swap it with another 8TB, rebuild parity, then take that 8TB drive and clear it (hopefully fixing the bug) and use it to replace a 4TB drive. I didn't start this because I had some work done to my home that required power outages; now that that's over, of course I have parity showing issues, so I don't want to do it yet haha.
  3. Nothing... that is what is confusing me. Nothing in my logs that I can see either... I am running out of space; could a space issue cause something like this? I had 2 drives down to KBs of free space while something was trying to move files to them. I am trying to replace a drive to add some more TBs, but didn't want to do anything while my parity has errors in it.
  4. 1/2 was my last run with 0 errors. 1/21 was my manually run check.
  5. One was scheduled; the second I ran manually with "Write corrections to parity" checked. Both came back with the same number. All prior monthly checks have been 0.
  6. Write corrections to parity is checked. Do I need to check something else?
  7. My last 2 parity checks completed, but I noticed this: Finding 24047 errors. Duration: 1 day, 19 hours, 21 minutes, 52 seconds. Average speed: 89.7 MB/sec. Same error count on both checks. Any thoughts on what could be going on? hades-diagnostics-20210121-1803.zip
  8. @limetech, just wanted to confirm the issue still exists in RC2. Here are diags from beta 30: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/page/2/?tab=comments#comment-10968 And here are the results of `hexdump -C -n 512 /dev/sdh`: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/page/3/?tab=comments#comment-10972 limetech said: Thanks, yes that explains it. Probably, with -beta25 and even earlier releases, if you click on the device from Main and then look [...]
  9. Any chance this was resolved in rc2 @limetech? Here are diags from beta 30: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/page/2/?tab=comments#comment-10968 And here are the results of `hexdump -C -n 512 /dev/sdh`: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/page/3/?tab=comments#comment-10972 limetech said: Thanks, yes that explains it. Probably, with -beta25 and even earlier releases, if you click on the device from Main and then look at 'Partition for [...]
  10. @limetech, looks like this is still an issue for me in RC1 (beta 25 and back work fine; the bug was first discovered in beta 30). Any update on a potential fix timeline? One of my drives shows "Unmountable: Unsupported partition layout."
  11. @limetech, any chance this is fixed in beta35? I can't chance it, as I won't have any recourse back to the Nvidia drivers since the old plugins are dead in the water, so I don't want to risk bumping up and then having to go back without a way to get the Nvidia driver set. "Thanks, yes that explains it. Probably, with -beta25 and even earlier releases, if you click on the device from Main and then look at 'Partition format' it will say 'GPT: 4KiB-aligned (factory erased)' - that (factory erased) shouldn't be there. Will fix for next release."
  12. That is definitely possible, but asking my brain to think back 3 years is a big task hahaha. It would have been one of 2 things: a rebuild of a disabled disk, or a replacement of a 4TB disk, as 3 years ago my server only had room for 8 drives.
  13. This drive is functioning 100% fine in beta25 and earlier. On beta29/30 it won't mount. I honestly don't recall. I would say I have pre-cleared 90% of my drives, but there was a time when I didn't have the ability to pre-clear due to a drive failure and no extra bays to use (aka I was under the gun). It's possible this was the drive that wasn't, but I doubt it. The drive is roughly 3 years old. Is there a way out of this state, or am I stuck on beta25 with this drive for eternity haha?
  14. thank you and as always, you guys are the best!
  15. Output of `hexdump -C -n 512 /dev/sdh` (see the hexdump sketch after this list for how to read it):
      00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      *
      000001c0  02 00 00 ff ff ff 01 00  00 00 ff ff ff ff 00 00  |................|
      000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
      *
      000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
      00000200
  16. I assume in a "broken" state? Or as it is on beta25 right now?
  17. Attached. I also tested beta29; same issue. beta25 is good though. hades-diagnostics-20201008-1401.zip
  18. I am kind of scared to do it again for a 3rd time haha. Before I do: if the drive goes into that state, is that a state that parity can get it out of? I just don't want to lose 8TB of data because a drive is failing but not throwing a "failure" per se.
  19. Coming from beta25, beta30 keeps throwing one of my drives into "Unsupported partition layout." Restoring back to 25 fixes it. Drive details attached; not sure if you would like anything else. It happened on both attempts. I am using the linuxserver.io version of beta30.
  20. EDIT: I am a dolt; I was adding my own NVIDIA_VISIBLE_DEVICES field instead of using the one provided.... ugh. Sorry all. Any thoughts? (See the docker sketch after this list.) root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='unmanic' --net='bridge' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'NVIDIA_VISIBLE_DEVICES'='GPU-BLAHBLAH' -e 'PUID'='99' -e 'PGID'='100' -e 'NVIDIA_VISIBLE_DEVICES'='false' -p '8888:8888/tcp' -v '/mnt/cache/apps/Unmanic':'/config':'rw' -v '/mnt/user/Movies/'\''Twas the Night Before Christmas (1974)/':'/library/movies':'rw' -v '/mnt/user/TV Series/13 Reasons Why [...]
  21. Thanks. Google ended up fixing it for me: "New Docker Safe Permissions".
  22. I plopped 2 new drives into 2 new slots and formatted them to add to my pool. They are showing fine and added the correct amount of space, but when I try to copy data to them in Dolphin it fails with "could not write file. Disk full." What step did I miss? It's been about 5 years since I added a new drive to my Unraid server (I have just been upsizing them recently).
  23. Thank you! It's what I assumed, and I planned to pre-clear them anyway; I just figured I would ask before plugging them in, before a catastrophe haha... I've done the act-first-ask-later thing before and it crushed 16TB of data.
  24. Wasn't able to figure out what to search for properly, so figured I would ask and hopefully it's a quick answer. I spent the last year converting my 10 drives to 8TB (from 4TB). My case was maxed out at 10 bays. I have since got a new case and have just moved over to it. Prior to putting the 4TB drives back in, I just want to make sure Unraid won't recognize them anymore, since parity has been re-written with the replacement drives, correct? Meaning, if I plug them all back in, they won't screw with my system, since the drives were just replaced, not wiped clean.
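
A minimal sketch of the `hexdump -C -n 512 /dev/sdh` check referenced in items 8, 9, and 15 above. The device name is only the one those posts happened to use (substitute whatever letter your drive gets), and the offsets in the comments are standard MBR layout, not anything Unraid-specific.

```
# Dump the first 512 bytes (the MBR / protective-MBR sector) of a drive.
# Read-only; nothing is written to the disk.
hexdump -C -n 512 /dev/sdh

# Reading the output:
#   0x1be-0x1fd : the four 16-byte MBR partition-table entries
#   0x1fe-0x1ff : boot signature, which should read "55 aa"
# A sector that is all zeros (no 55 aa at the end) means the drive has no
# partition table at all.
```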
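On item 20: the failed Unmanic run passes NVIDIA_VISIBLE_DEVICES twice, once with the GPU UUID and once as 'false', and docker generally uses the last value when the same -e variable is repeated. Below is a trimmed-down sketch of the corrected shape of that command; the GPU UUID and image name are placeholders, not the actual template values.

```
# Hypothetical, shortened form of the docker run from item 20.
# GPU-xxxxxxxx and unmanic/unmanic are placeholders: use the UUID reported
# by `nvidia-smi -L` and the image your Unraid template actually points at.
docker run -d --name='unmanic' --net='bridge' \
  -e TZ="America/New_York" \
  -e PUID='99' -e PGID='100' \
  -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx' \
  -p '8888:8888/tcp' \
  -v '/mnt/cache/apps/Unmanic':'/config':'rw' \
  unmanic/unmanic
# Passing -e NVIDIA_VISIBLE_DEVICES a second time (e.g. ='false') would
# override the UUID above, which is what the duplicated field in the
# original command was doing.
```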