Bodine95

Members
  • Posts: 18
  • Joined
  • Last visited

  1. Great news: with the different hardware my parity check finished. Thank you to trurl and JorgeB for giving me great advice on what could be wrong.
  2. Changed the hardware of my server and started the parity check again. The parity check has progressed to 9.37 TB and is at the point where it failed the last two times. I am hopeful that it passes with the changed hardware. Here is what I am using for my server:
     Motherboard: ASUSTeK COMPUTER INC. Z97M-PLUS
     BIOS: American Megatrends Inc., Version 0330, dated Thu 29 May 2014
     CPU: Intel® Core™ i7-4790 @ 3.60GHz
     RAM: 32 GiB
     PSU: 1000W
     I have attached the latest diag file and syslog for everyone's review. 192.168.130.245-diagnostics-20230205-0705.zip syslog_10
  3. Update - it failed. I came to check on it 4 hours later and the server had rebooted several times and failed the parity check. From the syslog it looks like it rebooted three times in quick succession:
     Line 58133: Feb 2 17:05:04
     Line 61109: Feb 2 17:20:24
     Line 64085: Feb 2 18:17:12
     Any ideas would be greatly appreciated. syslog_8
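Boot-to-boot gaps like the three above can be pulled out of a syslog mirror automatically. A minimal sketch, under one assumption: that each boot's first logged line contains "kernel: Linux version" (the marker string is hypothetical — check your own log for the actual first line of a boot). The log lines below are sample data built from the timestamps in the post, not a real log.

```python
from datetime import datetime

# Assumed (hypothetical) marker for the first line of each boot.
BOOT_MARKER = "kernel: Linux version"

def boot_times(lines, year=2023):
    """Timestamps of lines that mark the start of a boot."""
    times = []
    for line in lines:
        if BOOT_MARKER in line:
            stamp = " ".join(line.split()[:3])  # e.g. "Feb 2 17:05:04" (syslog omits the year)
            times.append(datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S"))
    return times

def quick_reboots(times, max_gap_minutes=90):
    """Consecutive boots spaced closer together than max_gap_minutes."""
    return [(a, b) for a, b in zip(times, times[1:])
            if (b - a).total_seconds() < max_gap_minutes * 60]

# Sample data mirroring the three reboots reported in the post:
log = [
    "Feb  2 17:05:04 Tower kernel: Linux version 5.19.17-Unraid",
    "Feb  2 17:20:24 Tower kernel: Linux version 5.19.17-Unraid",
    "Feb  2 18:17:12 Tower kernel: Linux version 5.19.17-Unraid",
]
for a, b in quick_reboots(boot_times(log)):
    print(f"reboot {((b - a).total_seconds() / 60):.0f} min after previous boot at {a}")
```

With the post's timestamps, both gaps (about 15 and 57 minutes) fall under the 90-minute threshold and get flagged.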
  4. Here is an update on the parity status. The largest data drive is 8 TB; the check has finished reading all the other data drives and is now moving through the remainder of the parity drive.
     Total size: 14 TB
     Elapsed time: 20 hours, 32 minutes
     Current position: 8.05 TB (57.5 %)
     Estimated speed: 126.0 MB/sec
     Estimated finish: 13 hours, 7 minutes
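The reported figures above are internally consistent; a quick arithmetic check (assuming the decimal units Unraid displays, i.e. 1 TB = 10^12 bytes and 1 MB = 10^6 bytes):

```python
# Sanity-check the parity ETA from the status figures in the post above.
total_bytes = 14e12          # Total size: 14 TB
position_bytes = 8.05e12     # Current position: 8.05 TB
speed_bps = 126.0e6          # Estimated speed: 126.0 MB/sec

remaining = total_bytes - position_bytes        # 5.95 TB left to read
eta_seconds = remaining / speed_bps
hours = int(eta_seconds // 3600)
minutes = int(eta_seconds % 3600 // 60)
print(f"{position_bytes / total_bytes:.1%} done, ETA {hours}h {minutes}m")
# → 57.5% done, ETA 13h 7m
```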
  5. No power issues to the drives. I am trying an experiment: I removed the 14 TB drive that I used to replace the first 8 TB parity drive (that replacement completed successfully). I am now using the second 14 TB drive, and the parity check has run for 13.5 hrs and is at 5 TB. I will keep everyone updated on whether this completes successfully.
  6. If I don't start the parity check, the system runs without issues. No power issues that I can see: the unit is connected to a UPS, and it shows no issues with power in/out. The issue I have is that everything worked for a good year until I started to upgrade the parity disks from 8 TB to 14 TB. The first drive swap worked without any issue; my problems started when I replaced the second drive. Could the second drive be bad?
  7. The system crashed after a few hours of running the parity check. Here are the latest diag and syslog files. 192.168.130.245-diagnostics-20230201-1302.zip syslog_6
  8. I have done a lot of work on my server. I have removed all unsupported plugins, and I have removed the RAID controller along with the drives on it that were used in Unraid. I have started the parity check and will see if this solves my issues.
  9. Well, after 6 hours and 57 minutes, at 1.42 TB, my system crashed. Here is the syslog from the flash drive. syslog
  10. I have enabled the syslog mirror to the flash drive and started the parity check. I will upload the log once the system crashes and reboots. I have also replaced the diag file with the correct, current one. 192.168.130.245-diagnostics-20230129-1757.zip
  11. I have been running my Unraid system for a few years and am moving my two parity drives from 8 TB to 14 TB drives. Here is what happened: I received an error that one of my drives had failed. I installed the new 14 TB replacement, and the parity check started and completed nearly 3 days later. I then had one 8 TB and one 14 TB parity drive. I then removed the 8 TB drive and replaced it with the other 14 TB drive, and now, about 30 minutes in, the system reboots and comes back up with the array stopped. I have tried booting in safe mode with no plugins and starting the array and parity check; this still causes the system to reboot at about 30 minutes. I have tried moving the new parity drive to a different SATA port, and the reboot still happens. I removed my NVIDIA card and the NVIDIA driver, and the reboot still occurs. The system is stable if I start the array and cancel the parity check; it just shows that I have one working parity drive and one disabled parity drive. Thanks in advance for helping me solve this issue.
     I am running Unraid version 6.11.5. Here is my system hardware:
     HP DL580 G7, 4-socket Xeon CPU E7-8837 @ 2.67GHz (32 cores)
     64 GB DDR3 ECC memory
     BIOS P65
     Drives:
     [0:0:0:0] disk SanDisk Ultra Fit 1.00 /dev/sda 30.7GB
     [1:0:0:0] disk ATA ST8000VN0022-2EL SC61 /dev/sdj 8.00TB
     [1:0:1:0] disk ATA ST14000VN0008-2K SC61 /dev/sdk 14.0TB
     [1:0:2:0] disk ATA ST8000VN0022-2EL SC61 /dev/sdl 8.00TB
     [1:0:3:0] disk ATA ST8000VN0022-2EL SC61 /dev/sdm 8.00TB
     [1:0:4:0] disk ATA ST14000VN0008-2K SC61 /dev/sdn 14.0TB
     [1:0:5:0] disk ATA WDC WD40EFRX-68W 0A80 /dev/sdo 4.00TB
     [1:0:6:0] disk ATA Samsung SSD 850 2B6Q /dev/sdp 1.00TB
     [1:0:7:0] disk ATA WDC WD40EFRX-68W 0A80 /dev/sdq 4.00TB
     [1:0:8:0] disk ATA ST8000VN004-2M21 SC60 /dev/sdr 8.00TB
     [1:0:9:0] disk ATA ST4000DM000-1F21 CC54 /dev/sds 4.00TB
     [2:1:0:0] disk HP LOGICAL VOLUME 6.40 /dev/sdb 146GB
     [2:1:0:1] disk HP LOGICAL VOLUME 6.40 /dev/sdc 146GB
     [2:1:0:2] disk HP LOGICAL VOLUME 6.40 /dev/sdd 146GB
     [2:1:0:3] disk HP LOGICAL VOLUME 6.40 /dev/sde 500GB
     [2:1:0:4] disk HP LOGICAL VOLUME 6.40 /dev/sdf 500GB
     [2:1:0:5] disk HP LOGICAL VOLUME 6.40 /dev/sdg 500GB
     [2:1:0:6] disk HP LOGICAL VOLUME 6.40 /dev/sdh 500GB
     [2:1:0:7] disk HP LOGICAL VOLUME 6.40 /dev/sdi 500GB
     [3:0:0:0] cd/dvd hp DVD D DS8D3SH HHE7 /dev/sr0 1.07GB
     I have attached the diagnostics file of my system taken while the parity check is running. I have no error logs to share, as they are deleted when the system restarts. 192.168.130.245-diagnostics-20230129-1757.zip
  12. My docker (binhex-sonarr) will no longer start. I am getting the following error in the docker log:
     Created by binhex - https://hub.docker.com/u/binhex/
     2022-05-23 11:21:53.746037 [info] Host is running unRAID
     2022-05-23 11:21:53.783569 [info] System information Linux 8654a6ba22c4 5.15.40-Unraid #1 SMP Mon May 16 10:05:44 PDT 2022 x86_64 GNU/Linux
     2022-05-23 11:21:53.827855 [info] OS_ARCH defined as 'x86-64'
     2022-05-23 11:21:53.875126 [info] PUID defined as '99'
     2022-05-23 11:21:54.469487 [info] PGID defined as '100'
     2022-05-23 11:21:55.364307 [info] UMASK defined as '000'
     2022-05-23 11:21:55.388254 [info] Permissions already set for '/config'
     2022-05-23 11:21:55.484651 [info] Deleting files in /tmp (non recursive)...
     2022-05-23 11:21:55.545110 [info] Starting Supervisor...
     2022-05-23 11:21:59,135 INFO Included extra file "/etc/supervisor/conf.d/sonarr.conf" during parsing
     2022-05-23 11:21:59,136 INFO Set uid to user 0 succeeded
     2022-05-23 11:21:59,189 INFO supervisord started with pid 7
     2022-05-23 11:22:00,191 INFO spawned: 'shutdown-script' with pid 65
     2022-05-23 11:22:00,192 INFO spawned: 'sonarr' with pid 66
     2022-05-23 11:22:00,193 INFO reaped unknown pid 8 (exit status 0)
     2022-05-23 11:22:01,194 INFO success: shutdown-script entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2022-05-23 11:22:01,194 INFO success: sonarr entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
     2022-05-23 11:22:01,555 DEBG 'sonarr' stdout output: [Info] Bootstrap: Starting Sonarr - /usr/lib/sonarr/bin/Sonarr.exe - Version 3.0.8.1507
     2022-05-23 11:22:02,225 DEBG 'sonarr' stdout output: [Info] AppFolderInfo: Data directory is being overridden to [/config]
     2022-05-23 11:22:02,229 DEBG 'sonarr' stdout output: [Trace] DiskProviderBase: Directory '/config' isn't writable.
     Access to the path "/config/sonarr_write_test.txt" is denied.
     2022-05-23 11:22:02,230 DEBG 'sonarr' stdout output:
     2022-05-23 11:22:02,329 DEBG 'sonarr' stdout output: [Fatal] ConsoleApp: EPIC FAIL!
     My system info:
     Model: Custom
     M/B: ASUSTeK COMPUTER INC. Z97M-PLUS Version Rev X.0x - s/n: 140627176000110
     BIOS: American Megatrends Inc. Version 0330. Dated: 05/29/2014
     CPU: Intel® Core™ i7-4790 CPU @ 3.60GHz
     HVM: Not Available
     IOMMU: Not Available
     Cache: 256 KiB, 1 MB, 8 MB
     Memory: 32 GiB DDR3 (max. installable capacity 32 GiB)
     Network: eth0: 1000 Mbps, full duplex, mtu 1500
     Kernel: Linux 5.15.40-Unraid x86_64
     OpenSSL: 1.1.1o
     Uptime: 0 days, 01:14:58
     Thank you in advance for your assistance.
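The fatal error in that log comes down to the container user (PUID 99 / PGID 100, per the log) not being able to write to /config. A small diagnostic sketch, under two assumptions not stated in the log: that /config maps to an appdata folder on the host, and that the expected owner is UID 99 / GID 100:

```python
import os
import stat

def diagnose(path, want_uid=99, want_gid=100):
    """Explain why a Sonarr-style write test to `path` might be denied.

    want_uid/want_gid default to the PUID/PGID shown in the log above;
    adjust them if your container is configured differently."""
    st = os.stat(path)
    problems = []
    if (st.st_uid, st.st_gid) != (want_uid, want_gid):
        problems.append(f"owned by {st.st_uid}:{st.st_gid}, expected {want_uid}:{want_gid}")
    if not st.st_mode & stat.S_IWUSR:
        problems.append("owner write bit is not set")
    return problems or ["ownership and mode look fine for the expected user"]

if __name__ == "__main__":
    print(diagnose("/tmp"))  # e.g. complains unless /tmp happens to be owned 99:100
```

On Unraid, UID 99 / GID 100 correspond to nobody:users, so a fix along these lines is commonly restoring that ownership on the host path backing /config (for example with chown -R 99:100 on the appdata folder) — treat the exact path as an assumption and verify it against your container mappings first.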
  13. Thank you. I just received two LSI 9211 cards, as I wanted to rule out the motherboard as the possible cause. I will install the new HBAs and see if my issue gets resolved. It sounds like this should solve my problem. Thank you for your help.
  14. I am new to Unraid, but not completely new to computers. I have tried to troubleshoot this issue, but I cannot seem to find the root cause of my problem. I am running Unraid ver 6.8.3 on the following hardware:
     Motherboard - ASUSTek Z97M-Plus
     BIOS - May 29, 2014
     CPU - Intel Core i7-4790 @ 3.60 GHz
     Memory - 32 GB
     Drives:
     Parity - Seagate IronWolf Pro 8 TB NAS RAID Internal Hard Drive
     Parity 2 - Seagate IronWolf 8 TB NAS RAID Internal Hard Drive
     Disk 1 - Seagate IronWolf 8 TB NAS RAID Internal Hard Drive
     Disk 2 - Seagate IronWolf 8 TB NAS RAID Internal Hard Drive
     Disk 3 - Seagate IronWolf 8 TB NAS RAID Internal Hard Drive
     Disk 4 - Seagate IronWolf 8 TB NAS RAID Internal Hard Drive
     Disk 5 - Seagate IronWolf 4 TB NAS RAID Internal Hard Drive
     Disk 6 - Western Digital 4 TB Red Plus NAS
     Cache - Samsung SSD 850 EVO 1TB
     Here is what is happening to my server:
     I get email alerts that two drives have gone bad and are no longer available.
     I shut down the array and remove the two drives.
     I replace the drives with spares.
     I try to rebuild the array, and it freezes the system: no network or keyboard response.
     I reboot the server in safe mode with GUI and no plugins.
     I rebuild the array, and it finishes in 21 hrs.
     I reboot the system into normal GUI mode.
     The system runs for 2-4 weeks, and this process repeats.
     I have tried switching the HBAs, and the issue still happens to the same two disks, #4 and #5. I switched the SATA cables between drives, and still disks #4 and #5 fail. I swapped in drives of matching sizes, and still disks #4 and #5 fail. In my last attempt I disabled the CA Backup / Restore schedule, and my server ran for several months before just now crashing. I have attached the diag file for your review. I am hoping someone can point me to the root cause of my problem. Thank you in advance for your help! 192.168.130.245-diagnostics-20210201-2300.zip
  15. I had not noticed that my server was rebooting. Now I need to figure out why it is happening.