BF90X

Members · 35 posts
  1. Long story short, the parity appears to have gone back to normal. I tried so many different things that I have no idea which one fixed it:
     - Updated the BIOS
     - Updated to Unraid 6.10.1
     - Tried one of the drives, which originally did not work (now that it's working, I did try the other drive instead, so that might be it?)
     - Tried to boot UEFI instead of legacy (literally hours of trying, but no luck)
     - At some point I wasn't even able to boot with legacy due to the AER issue
     I added the following to syslinux: nvme_core.default_ps_max_latency_us=5500 pci=nommconf. I already had pcie_aspm=off. I am still seeing the "Hardware error from APEI Generic Hardware Error" messages, but not as often. The only major issue I am seeing for now is that one of the 2.5 inch drives in one of the ZFS pools is failing; waiting for the replacement. prdnas002-diagnostics-20220524-1235.zip
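     For reference, those parameters go on the kernel append line in /boot/syslinux/syslinux.cfg on the flash drive. A minimal sketch of the relevant stanza, assuming a stock install (the label name and any existing options may differ on your system):

         label Unraid OS
           menu default
           kernel /bzimage
           append initrd=/bzroot nvme_core.default_ps_max_latency_us=5500 pci=nommconf pcie_aspm=off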
  2. Thanks for sharing that information, I will follow the steps in that link. Ohhh, I guess that's why core 6 is maxed out. Everything else seems to be working properly; I run Docker containers and a couple of VMs without many issues. One of the steps I tried to troubleshoot the parity rebuild issue was to do a new config, and I pre-cleared the drives as well. I also rebooted into safe mode and tried the rebuild, but got the same speeds. I did not look at the CPU usage when I did that, though.
  3. Thanks for the input @ChatNoir & @JorgeB. I updated the firmware of the HBA and added a temporary fan for additional cooling, and I ordered a few items to come up with a permanent solution to ensure the HBA has better cooling. After making these changes, I am still seeing the same issue. It might not be related, but at some point after upgrading to 6.10 I noticed I was getting a lot of hardware errors:
     May 23 05:45:24 PRDNAS002 kernel: {20}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 512
     May 23 05:45:24 PRDNAS002 kernel: {20}[Hardware Error]: It has been corrected by h/w and requires no further action
     I looked around online and found that someone had to run the following commands to eliminate them:
     setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w
     setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w=0x2936
     Attached is the new diagnostics. Appreciate all the help. prdnas002-diagnostics-20220523-1207.zip
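     For anyone finding this later, a commented sketch of what those two commands appear to do. The bus address 0000:40:01.2 and the value 0x2936 are specific to the system they were taken from, so read your own register value first rather than writing this one blindly:

         # Read the 16-bit Device Control register (offset 0x8 into the
         # PCI Express capability) of the device at 0000:40:01.2
         setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w
         # Write a value with the Correctable Error Reporting Enable bit
         # (bit 0) cleared, so corrected errors stop flooding the log;
         # 0x2936 was that poster's device-specific value, not a universal one
         setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w=0x2936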
  4. Hello, I was wondering if someone could help me identify why my parity rebuild is slow. Some of the last changes were:
     - Installed an LSI SAS3224 about 5 months or so ago
     - Replaced all my Seagate drives (data drives)
     I am getting roughly 10-20 MB/s on the rebuild as of right now. I also have three separate ZFS pools; I need to replace a drive in the HDD pool, while the other two (SSD and NVMe) are working properly. Attached is the diagnostics. prdnas002-diagnostics-20220523-0047.zip
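     As a quick sanity check when a rebuild is this slow, the raw sequential read speed of each disk can be measured individually, since one slow or failing disk drags the whole rebuild down to its speed. A minimal sketch (the device name /dev/sdX is a placeholder; best run while the array is otherwise idle):

         # Buffered sequential read test on one disk
         hdparm -t /dev/sdX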
  5. Hello, I am experiencing issues with buffering as well. Media doesn't appear to start and just buffers; if I stop it and try again, it sometimes works. Is there any way to revert to a previous release of the Docker container?
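     In case it helps anyone with the same question: Docker containers can generally be rolled back by pinning an older image tag instead of latest. A minimal sketch (the repository and tag here are placeholders; in Unraid the tag is appended to the "Repository" field when editing the container template):

         # Pull a specific older release instead of :latest
         docker pull some-registry/some-image:1.2.3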
  6. Hello, I swapped the motherboard, RAM, and CPU, and I haven't had the same issue occur again. Thanks for your help. I will troubleshoot the malfunctioning hardware to see if I can RMA any of those components.
  7. Hello, Thanks for sharing this information. I will be replacing/upgrading the hardware later this week and I will report back if I am still experiencing the same issue.
  8. I was asked to submit this bug report after speaking with Jonp in the support thread.
  9. Hello, I really need help after numerous issues with one of my Unraid servers. I have tried everything I can think of, but so far nothing has worked. Randomly, one or more of my drives do not mount properly after inputting the encryption key. This forces me to reboot the server, and then my parity drives generate errors, which means I have to rebuild the parity drive(s). Sorry for reaching out like this, but this is really frustrating. Thanks in advance. prdnas002-diagnostics-20210929-1030.zip
  10. Thanks for the feedback, I appreciate it. It seems that didn't work for me; it actually caused one of my parity drives to generate a significant number of errors due to the reboots I had to do to get the drives to mount properly.
  11. Having the same issue over and over again. Were you able to find a fix? It's starting to get really frustrating.
  12. Actually, it appears I found the culprit: the Nextcloud Docker container.
  13. Unraid keeps crashing and saying it is out of memory. These are some of the errors I am seeing in the logs in real time:
      Feb 12 21:25:56 PRDNAS002 kernel: out_of_memory+0x3dd/0x410
      Feb 12 21:25:56 PRDNAS002 kernel: mem_cgroup_out_of_memory+0x79/0xae
      Feb 12 21:25:56 PRDNAS002 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=74d31b45190470112aa2ffae084a6d3d7736aa205157cbed8fa557e9eef6f65a,mems_allowed=0,oom_memcg=/docker/74d31b45190470112aa2ffae084a6d3d7736aa205157cbed8fa557e9eef6f65a,task_memcg=/docker/74d31b45190470112aa2ffae084a6d3d7736aa205157cbed8fa557e9eef6f65a,task=php-fpm7,pid=87851,uid=99
      Feb 12 21:25:56 PRDNAS002 kernel: Memory cgroup out of memory: Killed process 87851 (php-fpm7) total-vm:1749760kB, anon-rss:993432kB, file-rss:0kB, shmem-rss:19184kB, UID:99 pgtables:2240kB oom_score_adj:0
      Attached are the logs. prdnas002-diagnostics-20210212-2128.zip
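      Worth noting from those lines: constraint=CONSTRAINT_MEMCG means the kill happened inside a Docker container's memory cgroup (the php-fpm7 process), not because the host itself ran out of RAM. If a container legitimately needs more memory, its limit can be raised; a minimal sketch (the container name and sizes are placeholders):

          # Raise the memory limit on a running container
          docker update --memory=4g --memory-swap=4g my-container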
  14. Thanks for the response, JorgeB. That's what I thought, but I just didn't know if it would be possible to add the drive back to the array and rebuild it from an earlier state.
  15. Hello, I recently encountered the following issue. Unraid crashed, and when I brought it back online, one of my data drives came up as an unmountable encrypted drive. I also noticed that one of my cache drives didn't mount properly; it was complaining about a wrong passphrase, even though all the other drives mounted properly. I rebooted the system and tried to bring the array back online; the cache drive that was previously complaining mounted properly, but the data drive still didn't want to mount and was still giving me an unmountable disk message. It gave me the option to format the drive, and I thought it would format the disk and rebuild from parity, but instead it actually formatted the drive and I lost all 14 TB of data. I know, I should have paid more attention and should have reached out beforehand, but is there any way to recover the data, or even to find out what data was on that drive? Attached is the last diagnostics, taken after I had already clicked format. The disk that failed to mount was Disk 17, ST14000NM0018-2H4101_ZHZ4JPR8 - 14 TB (sdd). prdnas002-diagnostics-20210211-1419.zip