BF90X

Members · 35 posts
Everything posted by BF90X

  1. Long story short, the parity appears to have gone back to normal. I tried so many different things that I have no idea what fixed it:
     - Updated the BIOS
     - Updated to Unraid 6.10.1
     - Tried one of the drives; it originally did not work. (Now that it's working, I did try the other drive instead, so that might be it?)
     - Tried to boot UEFI instead of legacy (literally hours of trying, but no luck). At some point I wasn't even able to boot with legacy due to the AER issue.
     - Added nvme_core.default_ps_max_latency_us=5500 and pci=nommconf to syslinux (I already had pcie_aspm=off); see the sketch below.
     I am still seeing the "Hardware error from APEI Generic Hardware Error" messages, but not as often. The only major issue I am seeing for now is that one of the 2.5-inch drives in one of the ZFS pools is failing; waiting for the replacement. prdnas002-diagnostics-20220524-1235.zip
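     For reference, a minimal sketch of how those parameters might look in the append line of /boot/syslinux/syslinux.cfg; the label name and the rest of the line are assumptions based on a default Unraid config, not a copy of my actual file:

       label Unraid OS
         menu default
         kernel /bzimage
         append pcie_aspm=off nvme_core.default_ps_max_latency_us=5500 pci=nommconf initrd=/bzroot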
  2. Thanks for sharing that information, I will follow the steps on that link. Ohhh, I guess that's why core 6 is maxed out. Everything else seems to be working properly; I run Docker containers and a couple of VMs without many issues. One of the steps I tried to troubleshoot the parity rebuild issue was to do a new config, and I pre-cleared the drives as well. I also rebooted into safe mode and tried the rebuild, but got the same speeds. I did not look at the CPU usage when I did that, though.
  3. Thanks for the input @ChatNoir & @JorgeB. I updated the firmware of the HBA, added a temporary fan for additional cooling, and ordered a few items to put together a permanent solution so the HBA has better cooling. After making these changes, I am still seeing the same issue. It might not be related, but at some point after upgrading to 6.10 I noticed I was getting a lot of hardware errors:
     May 23 05:45:24 PRDNAS002 kernel: {20}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 512
     May 23 05:45:24 PRDNAS002 kernel: {20}[Hardware Error]: It has been corrected by h/w and requires no further action
     I looked around online and found that someone had to run the following commands to eliminate them:
     setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w
     setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w=0x2936
     Attached is the new diagnostics. Appreciate all the help. prdnas002-diagnostics-20220523-1207.zip
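     For context, the first setpci call only reads the PCIe Device Control register (CAP_EXP+0x8) of device 0000:40:01.2; the second writes a new value to it. If that workaround turns out to help, one way to make it persist across reboots on Unraid would be to append the write to /boot/config/go; this is just a sketch, and the bus address and value are the ones from that post, not something I have verified for this board:

       # /boot/config/go - runs once at every boot
       setpci -v -s 0000:40:01.2 CAP_EXP+0x8.w=0x2936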
  4. Hello, I was wondering if someone could help me identify why my parity rebuild is slow. Some of the last changes were:
     - Installed an LSI SAS3224 about 5 months or so ago
     - Replaced all my Seagate data drives
     I am getting roughly 10-20 MB/s on the rebuild as of right now. I also have three separate ZFS pools and need to replace a drive in the HDD pool; the other two (SSD and NVMe) are working properly. Attached is the diagnostics. prdnas002-diagnostics-20220523-0047.zip
  5. Hello, I am experiencing issues with buffering as well. Media doesn't start and just keeps buffering; if I stop it and try again, it sometimes works. Is there any way to revert back to a previous release of the docker container?
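     From what I understand, reverting is usually done by editing the container on the Docker tab and appending a version tag to the Repository field instead of pulling latest; roughly like the line below, where both the image name and the tag are placeholders that depend on the image maintainer:

       Repository: <maintainer>/<image>:<older-version-tag>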
  6. Hello, I swapped the motherboard, RAM, and CPU, and I haven't had the same issue occur again. Thanks for your help. I will troubleshoot the malfunctioning hardware to see if I can RMA any of those components.
  7. Hello, Thanks for sharing this information. I will be replacing/upgrading the hardware later this week and I will report back if I am still experiencing the same issue.
  8. I was asked to submit this bug report after speaking with Jonp in the support thread.
  9. Hello, I really need help after numerous issues with one of my Unraid servers. I have tried everything I can think of, but so far nothing has worked. Randomly, one or more of my drives do not mount properly after I enter the encryption key. This forces me to reboot the server, and then my parity drives generate errors, which means I have to rebuild the parity drive(s). Sorry for reaching out, but this is really frustrating. Thanks in advance. prdnas002-diagnostics-20210929-1030.zip
  10. Thanks for the feedback, I appreciate it. It seems like that didn't work for me. It actually caused one of my parity drives to generate a significant number of errors due to the reboots I had to do to get the drives to mount properly.
  11. Having the same issue over and over again. Were you able to find a fix? It's starting to get really frustrating.
  12. Actually, it appears I found the culprit: the Nextcloud docker container.
  13. Unraid keeps crashing and says it is out of memory. These are some of the errors I am seeing from the logs in real time:
     Feb 12 21:25:56 PRDNAS002 kernel: out_of_memory+0x3dd/0x410
     Feb 12 21:25:56 PRDNAS002 kernel: mem_cgroup_out_of_memory+0x79/0xae
     Feb 12 21:25:56 PRDNAS002 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=74d31b45190470112aa2ffae084a6d3d7736aa205157cbed8fa557e9eef6f65a,mems_allowed=0,oom_memcg=/docker/74d31b45190470112aa2ffae084a6d3d7736aa205157cbed8fa557e9eef6f65a,task_memcg=/docker/74d31b45190470112aa2ffae084a6d3d7736aa205157cbed8fa557e9eef6f65a,task=php-fpm7,pid=87851,uid=99
     Feb 12 21:25:56 PRDNAS002 kernel: Memory cgroup out of memory: Killed process 87851 (php-fpm7) total-vm:1749760kB, anon-rss:993432kB, file-rss:0kB, shmem-rss:19184kB, UID:99 pgtables:2240kB oom_score_adj:0
     Attached are the logs. prdnas002-diagnostics-20210212-2128.zip
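     Reading the oom-kill line, the constraint is CONSTRAINT_MEMCG and the task_memcg path sits under /docker/..., so this is the cgroup OOM killer terminating php-fpm7 inside a Docker container rather than the host itself running dry. Since the culprit turned out to be Nextcloud (see the post above), one possible mitigation is to cap that container in its Unraid template via the Extra Parameters field; the figure below is purely illustrative, not a recommendation:

       --memory=4g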
  14. Thanks for the response, JorgeB. That's what I thought, but I just didn't know if it would be possible to add the drive back to the array and rebuild it from an earlier state.
  15. Hello, I recently encountered the following issue. Unraid crashed, and when I brought it back online, one of my data drives came up as an unmountable encrypted drive. I also noticed that one of my cache drives didn't mount properly because it was complaining about a wrong passphrase, even though all the other drives mounted properly. I rebooted the system and tried to bring the array back online; the cache drive that had been complaining then mounted properly. As for my data drive, it still didn't want to mount and was still giving me an unmountable disk message. It offered the option to format the drive, and I thought it would format the disk and rebuild from parity, but instead it actually formatted the drive and I lost all 14 TB of data. I know I should have paid more attention and reached out beforehand, but is there any way to recover the data, or even to find out what data was on that drive? Attached is the last diagnostics, taken after I had already clicked format. The disk that failed to mount was Disk 17, ST14000NM0018-2H4101_ZHZ4JPR8 - 14 TB (sdd). prdnas002-diagnostics-20210211-1419.zip
  16. Hello, I am experiencing the same issue. I even started from scratch, but now I have even more movies where I am able to see the directory but not the files within it.
  17. Quick question: has anyone else been getting this error message?
     2019-12-26 14:31:35,294 DEBG 'radarr' stdout output: [Warn] HttpClient: HTTP Error - Res: [GET] https://radarr.aeonlucid.com/v1/update/master?version=0.2.0.1450&os=linux&runtimeVer=5.20.1: 404.NotFound {"errorMessage":"Latest update not found."}
     2019-12-26 14:31:35,298 DEBG 'radarr' stdout output: [Error] TaskExtensions: Task Error
  18. My apologies, please disregard. It was my pfBlocker causing the issue. Rebooting after the update now.
  19. Having issues upgrading. Getting the following message:
     plugin: installing: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
     plugin: downloading https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg
     plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg ... failed (SSL verification failure)
     plugin: wget: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg download failure (SSL verification failure)
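     A quick way to see whether the failure is on the download path rather than in the updater itself is to fetch the .plg manually from the console and watch whether the TLS handshake succeeds (in my case the culprit turned out to be pfBlocker, per the follow-up above). A minimal check, assuming wget is available as it normally is on Unraid:

       wget -O /tmp/unRAIDServer.plg https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer.plg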
  20. Was there an update recently? It seems to be working properly again and I haven't changed anything.
  21. Thanks for your response. That is correct, my Unraid server is in the IP address range 192.168.1.0/24. My VPN provider is PIA, and I didn't experience issues before the ISP change. My router is also the same: I am using pfSense, which was getting the handoff from the modem before and is now getting the handoff from the FIOS circuit. I haven't experienced any issues with other docker containers.
  22. I followed the steps from the link. After reviewing it, it seems that the VPN connects, but the container page doesn't load while it is active. When I turn the VPN off, I am able to access the container page and it loads properly. Thanks for your help. supervisord.log
  23. Hello, I just recently changed my ISP from Xfinity to Verizon FIOS. After I cut over to the new ISP, I am unable to get my PIA VPN started. I tried changing the .ovpn file to use TCP and to use an IP address for the endpoint, but it didn't work. Once I start the container, it loops with the following message:
     2019-08-06 20:50:20,166 DEBG 'watchdog-script' stdout output: 172.217.10.100
     2019-08-06 20:50:50,305 DEBG 'watchdog-script' stdout output: 172.217.10.100
     2019-08-06 20:51:20,446 DEBG 'watchdog-script' stdout output: 172.217.10.100
     2019-08-06 20:51:50,584 DEBG 'watchdog-script' stdout output: 172.217.10.100
     2019-08-06 20:52:20,725 DEBG 'watchdog-script' stdout output: 172.217.10.100
     The same line keeps repeating every 30 seconds, from 20:50:20 through 21:27:30. Could someone please help me get the configuration right so I can get this working?
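     For reference, what I changed was the remote line of the PIA .ovpn file in the container's config directory, roughly as sketched below; the hostname is a placeholder and the ports are the ones I recall from PIA's older OpenVPN configs, so they may not match the current bundle:

       # inside the .ovpn used by the container
       remote <pia-endpoint-hostname> 1198 udp
       # or, to try TCP instead:
       # remote <pia-endpoint-hostname> 502 tcp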
  24. I will definitely be upgrading my 1950x to the 2nd gen Threadripper. Sent from my SM-G955U using Tapatalk