cardNull (Members, 13 posts)

Everything posted by cardNull

  1. This resolved my issues. Many thanks! If anyone sees this, your codecs live inside the Plex Media Server data directory, documented here: https://support.plex.tv/articles/202915258-where-is-the-plex-media-server-data-directory-located/ Delete the whole "Codecs" folder. (An example path is sketched after this list.)
  2. Thanks for your help with this. I have moved disks and parity is rebuilding now. Good to know on the CRC; I will monitor it for now. I have 10 drives of varying sizes. It would be ideal to pare them down so that I am not using SATA power splitters, which I am now, and which could be causing these issues. I am using a SAS HBA for the disks that are having issues now.
  3. The disk that was throwing "Raw read error rate" errors (raw value of 1) is now at zero and has "returned to normal". However, the Multi zone error rate is still 1. Now my parity disk has a Raw read error rate of 1 and a UDMA CRC error count of 53, so something is up, but I am not sure how to keep chasing this one. I do have one 8TB drive with less than 48 hours on it, so my plan was to put that in the parity slot and build parity. Once that was done I was going to replace all my 4TB disks with new 8TB disks. I guess my question now is: can I just shut down, remove the parity disk, add the new parity disk, boot up, and let it build? Looking at this (https://docs.unraid.net/legacy/FAQ/parity-swap-procedure/), it seems I am right. But maybe I am reading it wrong lol. Quoting the docs: "This procedure is strictly for replacing data drives in an Unraid array. If all you want to do is replace your Parity drive with a larger one, then you don't need the Parity Swap procedure. Just remove the old parity drive and add the new one, and start the array. The process of building parity will immediately begin. (If something goes wrong, you still have the old parity drive that you can put back!)"
  4. I added SMART attributes 1 and 200 to the monitored attributes for all my WD drives. Disk 1 popped alerts right away. Another disk has those same raw values for attributes 1 and 200, but it is not in error. (A console check for these attributes is sketched after this list.)
  5. Attached is the diagnostics export. nachoserver-diagnostics-20240105-0827.zip
  6. For the last 5 days I have been getting notified that the health check of my disks is failing. One disk in particular, "disk1", is having read errors. When I run a SMART test, both short and extended, it says PASSED. I will note that my extended test took about 8 hours to complete. Is there a next step for testing the drive or gathering additional info on the read errors? Could it be the filesystem that is causing them? I have attached the SMART logs. Thank you. SMART-REPORT.txt (A smartctl sketch for these self-tests follows this list.)
  7. After correcting all those settings (following my guide and the one @itimpi suggested) and allowing the parity check to complete, I rebooted and it was fine. Then I rebooted an hour ago and it was unclean again. It seems like progress, since I had not been able to get a clean reboot since spring.
  8. I adjusted it from the default value today. The array is doing a parity check now (from the unclean shutdown), so I will have to wait until after that to test. My array timeout is now 7 minutes (420 seconds); it was 1.5 minutes (90 seconds). (A way to check the stored value is sketched after this list.)
  9. I had not read that one, but I have now. The only info differing from the post I linked above was about docker container shutdown and SMB / NFS mounts. I set my docker container stop timeout to 30 seconds from the 10-second default. My remote server (Unassigned Devices remote shares) is not offline when my main server goes down; today it was definitely up. I tested the shares today and they all took about 2 seconds to unmount. My remote shares are for my backup server. Additionally, I set the timers the same as dlandon suggested in my link above.
  10. My system shut down normally until I did the 6.12 update earlier this year (pretty sure that is the version that started it; I think this all began in spring). I have read this thread (by dlandon) and made the changes suggested, but have not had the chance to test yet. I was hoping someone could help me figure out what the logs say is the reason for my unclean shutdowns. I have not had a clean shutdown since spring. If I stop the full array and then reboot / shutdown, everything is fine. But I shouldn't have to do that; I didn't used to have to. Thanks. nachoserver-diagnostics-20231015-1014.zip
  11. allthethings-diagnostics-20220502-2151.zip
  12. Here are the diagnostics. allthethings-diagnostics-20220502-2151.7z
  13. I have a drive in my array that reports errors. All 4 drives are the same age; they came from a retired WD Sentinel. The drive seems to be operating properly, and I have put 2TB of data on it. Are these errors of great concern? Do they "resolve" if I overwrite them? I read that can happen. Is there any way to repair the sectors? Can I do anything but replace the drive? My array is 4 x 4TB disks, WD Black model WDC WD4000F9MZ-76NVPL0. I have 2 cache drives (128GB and 250GB) with no errors; I just thought I would supply that in case it is relevant. Here is the downloaded report: https://pastebin.com/raw/a8b0r7SB (The sector counters to watch are sketched after this list.)
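
Regarding post 1: a minimal sketch of clearing the Codecs folder from the console, assuming a docker install with Plex appdata at /mnt/user/appdata/plex. The actual path varies by install; confirm it against the Plex article linked in the post before deleting anything.

    # Assumed appdata path; check your own Plex data directory first.
    cd "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server"
    rm -rf Codecs   # Plex re-downloads fresh codecs on the next playback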
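
Regarding post 4: on WD drives, attribute 1 is Raw_Read_Error_Rate and attribute 200 is Multi_Zone_Error_Rate. Both can be read with smartctl; /dev/sdb is a placeholder device name.

    # Print the full SMART attribute table, then filter to IDs 1 and 200.
    smartctl -A /dev/sdb
    smartctl -A /dev/sdb | awk '$1 == 1 || $1 == 200'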
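
Regarding post 6: the short and extended self-tests, and their results, can also be driven from the console with smartctl; /dev/sdb is again a placeholder.

    smartctl -t short /dev/sdb     # short self-test, roughly 2 minutes
    smartctl -t long /dev/sdb      # extended self-test, hours on a large drive
    smartctl -l selftest /dev/sdb  # self-test log with a result per run
    smartctl -H /dev/sdb           # overall health verdict (PASSED/FAILED)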
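
Regarding post 8: the array shutdown timeout is set in Settings > Disk Settings. As far as I know it is persisted on the flash drive in /boot/config/disk.cfg (the exact key name is an assumption), so the stored value can be sanity-checked from the console:

    # Key name assumed; look for a timeout entry in disk.cfg.
    grep -i timeout /boot/config/disk.cfg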
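
Regarding post 13: whether sector errors "resolve" on overwrite comes down to pending sectors being remapped on write. The counters to watch are attributes 5 (Reallocated_Sector_Ct), 196 (Reallocated_Event_Count), 197 (Current_Pending_Sector), and 198 (Offline_Uncorrectable); /dev/sdb is a placeholder.

    # Writing over a pending sector forces the drive to remap it,
    # moving the count from attribute 197 into attribute 5.
    smartctl -A /dev/sdb | awk '$1 == 5 || $1 == 196 || $1 == 197 || $1 == 198'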