-C-

Members · 91 posts

Everything posted by -C-

  1. No. It's an older, smaller drive and I don't need it any more, so I'm planning to remove it using the command that @JorgeB recommends here.
  2. Haven't had this issue since my last report. I have a failing disk, so I'm using Unbalance to move files off it and onto another. All seemed to be going well for the first couple of hours, then this morning when I logged into the main GUI I saw this: Unbalance is still running with over 6 hours to go, so I'm going to leave this until that's finished and will then reboot. For me this issue has only happened when running rsync from the shell or using Unbalance (which uses rsync).
  3. I'm running Unraid 6.11.1. Can't see a version number, but it's dated 2023.03.01. Here are all the entries from parity-checks.log:
     2022 Nov 11 01:35:22|2|0|-4|0|recon P|17578328012
     2022 Nov 12 20:18:51|120861|148933137|0|0|recon P|17578328012
     2022 Nov 30 22:54:57|128424|140162336|0|0|check P|17578328012
     2022 Dec 7 19:33:32|3|0|-4|0|recon Q|19531792332
     2022 Dec 8 19:05:46|266984|74.9MB/s|0|0|recon Q|269251|2|AUTOMATIC Parity Sync/Data Rebuild
     2022 Dec 10 11:47:13|141961|140887676|0|0|check Q|19531792332
     2022 Dec 11 17:04:58|95747|208.9MB/s|0|0|clear|95747|1|AUTOMATIC Disk Clear
     2022 Dec 12 19:06:47|116|0|-4|0|recon P|19531792332
     2022 Dec 13 22:37:14|2|0|-4|0|recon P|19531792332
     2022 Dec 15 00:57:26|252391|79.2MB/s|0|0|recon P|252760|2|AUTOMATIC Parity Sync/Data Rebuild
     2022 Dec 21 18:45:10|171468|116.6MB/s|0|2|check P Q|171468|1|MANUAL Correcting Parity Check
     2022 Dec 25 11:04:39|328646|60.9MB/s|0|2|check P Q|338803|2|MANUAL Non-Correcting Parity Check
     2022 Dec 31 00:42:17|2786|0|-4|0|check P Q|19531825100
     2023 Jan 1 17:58:32|148553|134.6MB/s|0|2|check P Q|148553|1|MANUAL Correcting Parity Check
     2023 Jan 3 12:49:22|148056|135.1MB/s|0|2|check P Q|148056|1|MANUAL Correcting Parity Check
     2023 Jan 6 05:02:08|423315|47.2MB/s|0|2|check P Q|423648|2|MANUAL Non-Correcting Parity Check
     2023 Jan 7 00:25:47|19|0|-4|0|check P|19531825100
     2023 Jan 8 16:31:21|144317|138587893|0|2|check P|19531825100
     2023 Jan 10 13:02:42|142749|140.1MB/s|0|0|check P|142749|1|MANUAL Correcting Parity Check
     2023 Jan 10 13:21:39|130|0|-4|0|recon Q|19531825100
     2023 Jan 12 17:17:53|60312|331.6MB/s|0|0|recon Q|60312|1|AUTOMATIC Parity Sync/Data Rebuild
     2023 Jan 31 06:25:45|145405|137550902|0|2|check P Q|19531825100
     2023 Feb 2 19:41:30|153414|130.4MB/s|0|0|check P Q|153414|1|MANUAL Correcting Parity Check
     2023 Mar 8 12:13:04|24865|0|-4|0|check P Q|19531825100
     2023 Mar 10 18:09:37|148148|135.0 MB/s|0|2|check P Q|148148|1|Manual Correcting Parity-Check
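     For anyone else reading these entries: each line is pipe-delimited, and as far as I can tell (this is my reading of the file, not anything documented) field 5 is the sync-error count and field 6 the operation type. A quick awk pulls them out:

```shell
# Pull the error count (field 5) and operation (field 6) from one
# pipe-delimited parity-checks.log entry. Field meanings are my own
# interpretation of the file, not official documentation.
line='2023 Jan 10 13:02:42|142749|140.1MB/s|0|0|check P|142749|1|MANUAL Correcting Parity Check'
errors=$(printf '%s' "$line" | awk -F'|' '{print $5}')
op=$(printf '%s' "$line" | awk -F'|' '{print $6}')
echo "$op finished with $errors sync errors"
```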
  4. Here is the log from the shutdown signal to the last entry before it shut down:
     Mar 7 14:43:38 Tower shutdown[8597]: shutting down for system halt
     Mar 7 14:43:38 Tower init: Switching to runlevel: 0
     Mar 7 14:43:38 Tower flash_backup: stop watching for file changes
     Mar 7 14:43:38 Tower init: Trying to re-exec init
     Mar 7 14:43:59 Tower Parity Check Tuning: DEBUG: Array stopping
     Mar 7 14:43:59 Tower Parity Check Tuning: DEBUG: No array operation in progress so no restart information saved
     Mar 7 14:43:59 Tower kernel: mdcmd (36): nocheck cancel
     Mar 7 14:44:00 Tower emhttpd: Spinning up all drives...
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sdh
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sdg
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sdd
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sde
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sdf
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sdi
     Mar 7 14:44:00 Tower emhttpd: spinning up /dev/sda
     Mar 7 14:44:17 Tower kernel: ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Mar 7 14:44:17 Tower kernel: ata5.00: configured for UDMA/133
     Mar 7 14:44:17 Tower emhttpd: sdspin /dev/sdh up: 1
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdj
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdk
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdh
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdg
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdd
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sde
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdb
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdf
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/nvme0n1
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/nvme1n1
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sdi
     Mar 7 14:44:17 Tower emhttpd: read SMART /dev/sda
     Mar 7 14:44:17 Tower emhttpd: Stopping services...
     Mar 7 14:44:38 Tower emhttpd: shcmd (9923955): /etc/rc.d/rc.docker stop
     Mar 7 14:44:39 Tower kernel: docker0: port 9(vethb92db4c) entered disabled state
     Mar 7 14:44:39 Tower kernel: vetha796224: renamed from eth0
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Interface vethb92db4c.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface vethb92db4c.IPv6 with address fe80::84b5:c3ff:fe35:1c52.
     Mar 7 14:44:39 Tower kernel: docker0: port 9(vethb92db4c) entered disabled state
     Mar 7 14:44:39 Tower kernel: device vethb92db4c left promiscuous mode
     Mar 7 14:44:39 Tower kernel: docker0: port 9(vethb92db4c) entered disabled state
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Withdrawing address record for fe80::84b5:c3ff:fe35:1c52 on vethb92db4c.
     Mar 7 14:44:39 Tower kernel: veth520a485: renamed from eth0
     Mar 7 14:44:39 Tower kernel: docker0: port 6(vethc2c8bcf) entered disabled state
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Interface vethc2c8bcf.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface vethc2c8bcf.IPv6 with address fe80::f0eb:9cff:fe48:b5f0.
     Mar 7 14:44:39 Tower kernel: docker0: port 6(vethc2c8bcf) entered disabled state
     Mar 7 14:44:39 Tower kernel: device vethc2c8bcf left promiscuous mode
     Mar 7 14:44:39 Tower kernel: docker0: port 6(vethc2c8bcf) entered disabled state
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Withdrawing address record for fe80::f0eb:9cff:fe48:b5f0 on vethc2c8bcf.
     Mar 7 14:44:39 Tower kernel: veth359095c: renamed from eth0
     Mar 7 14:44:39 Tower kernel: docker0: port 1(veth11635d1) entered disabled state
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Interface veth11635d1.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth11635d1.IPv6 with address fe80::c8d0:34ff:fe40:b86c.
     Mar 7 14:44:39 Tower kernel: docker0: port 1(veth11635d1) entered disabled state
     Mar 7 14:44:39 Tower kernel: device veth11635d1 left promiscuous mode
     Mar 7 14:44:39 Tower kernel: docker0: port 1(veth11635d1) entered disabled state
     Mar 7 14:44:39 Tower avahi-daemon[10171]: Withdrawing address record for fe80::c8d0:34ff:fe40:b86c on veth11635d1.
     Mar 7 14:44:43 Tower kernel: docker0: port 8(vethba1f846) entered disabled state
     Mar 7 14:44:43 Tower kernel: veth39aff71: renamed from eth0
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface vethba1f846.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface vethba1f846.IPv6 with address fe80::d40f:c0ff:fe86:60e8.
     Mar 7 14:44:43 Tower kernel: docker0: port 8(vethba1f846) entered disabled state
     Mar 7 14:44:43 Tower kernel: device vethba1f846 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: docker0: port 8(vethba1f846) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::d40f:c0ff:fe86:60e8 on vethba1f846.
     Mar 7 14:44:43 Tower kernel: docker0: port 2(vethb13e418) entered disabled state
     Mar 7 14:44:43 Tower kernel: veth8acea87: renamed from eth0
     Mar 7 14:44:43 Tower kernel: veth82bed5c: renamed from eth0
     Mar 7 14:44:43 Tower kernel: docker0: port 5(veth59668a6) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface vethb13e418.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface vethb13e418.IPv6 with address fe80::88b4:78ff:fe8f:4348.
     Mar 7 14:44:43 Tower kernel: docker0: port 2(vethb13e418) entered disabled state
     Mar 7 14:44:43 Tower kernel: device vethb13e418 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: docker0: port 2(vethb13e418) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::88b4:78ff:fe8f:4348 on vethb13e418.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface veth59668a6.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth59668a6.IPv6 with address fe80::5c8f:c0ff:fe00:838.
     Mar 7 14:44:43 Tower kernel: docker0: port 5(veth59668a6) entered disabled state
     Mar 7 14:44:43 Tower kernel: device veth59668a6 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: docker0: port 5(veth59668a6) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::5c8f:c0ff:fe00:838 on veth59668a6.
     Mar 7 14:44:43 Tower kernel: docker0: port 7(veth3623bf7) entered disabled state
     Mar 7 14:44:43 Tower kernel: vethe14a813: renamed from eth0
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface veth3623bf7.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth3623bf7.IPv6 with address fe80::84f7:d3ff:fe68:350b.
     Mar 7 14:44:43 Tower kernel: docker0: port 7(veth3623bf7) entered disabled state
     Mar 7 14:44:43 Tower kernel: device veth3623bf7 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: docker0: port 7(veth3623bf7) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::84f7:d3ff:fe68:350b on veth3623bf7.
     Mar 7 14:44:43 Tower kernel: docker0: port 3(veth2739f34) entered disabled state
     Mar 7 14:44:43 Tower kernel: vethb683262: renamed from eth0
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface veth2739f34.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth2739f34.IPv6 with address fe80::4884:77ff:feb7:a969.
     Mar 7 14:44:43 Tower kernel: docker0: port 3(veth2739f34) entered disabled state
     Mar 7 14:44:43 Tower kernel: device veth2739f34 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: docker0: port 3(veth2739f34) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::4884:77ff:feb7:a969 on veth2739f34.
     Mar 7 14:44:43 Tower kernel: docker0: port 4(veth5bc1dc8) entered disabled state
     Mar 7 14:44:43 Tower kernel: vethea5fbb3: renamed from eth0
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface veth5bc1dc8.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth5bc1dc8.IPv6 with address fe80::8ae:8eff:fede:a0fe.
     Mar 7 14:44:43 Tower kernel: docker0: port 4(veth5bc1dc8) entered disabled state
     Mar 7 14:44:43 Tower kernel: device veth5bc1dc8 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: docker0: port 4(veth5bc1dc8) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::8ae:8eff:fede:a0fe on veth5bc1dc8.
     Mar 7 14:44:43 Tower kernel: br-8038ba180b14: port 1(veth7a733d2) entered disabled state
     Mar 7 14:44:43 Tower kernel: veth7f5366a: renamed from eth0
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Interface veth7a733d2.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth7a733d2.IPv6 with address fe80::a89e:7dff:fe9b:6b6.
     Mar 7 14:44:43 Tower kernel: br-8038ba180b14: port 1(veth7a733d2) entered disabled state
     Mar 7 14:44:43 Tower kernel: device veth7a733d2 left promiscuous mode
     Mar 7 14:44:43 Tower kernel: br-8038ba180b14: port 1(veth7a733d2) entered disabled state
     Mar 7 14:44:43 Tower avahi-daemon[10171]: Withdrawing address record for fe80::a89e:7dff:fe9b:6b6 on veth7a733d2.
     Mar 7 14:44:48 Tower kernel: docker0: port 10(veth82f62ce) entered disabled state
     Mar 7 14:44:48 Tower kernel: veth7af951d: renamed from eth0
     Mar 7 14:44:49 Tower avahi-daemon[10171]: Interface veth82f62ce.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:49 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface veth82f62ce.IPv6 with address fe80::c86c:beff:fefd:e3c4.
     Mar 7 14:44:49 Tower kernel: docker0: port 10(veth82f62ce) entered disabled state
     Mar 7 14:44:49 Tower kernel: device veth82f62ce left promiscuous mode
     Mar 7 14:44:49 Tower kernel: docker0: port 10(veth82f62ce) entered disabled state
     Mar 7 14:44:49 Tower avahi-daemon[10171]: Withdrawing address record for fe80::c86c:beff:fefd:e3c4 on veth82f62ce.
     Mar 7 14:44:49 Tower root: stopping dockerd ...
     Mar 7 14:44:50 Tower root: waiting for docker to die ...
     Mar 7 14:44:51 Tower avahi-daemon[10171]: Interface docker0.IPv6 no longer relevant for mDNS.
     Mar 7 14:44:51 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface docker0.IPv6 with address fe80::42:c2ff:fe45:3fc5.
     Mar 7 14:44:51 Tower avahi-daemon[10171]: Interface docker0.IPv4 no longer relevant for mDNS.
     Mar 7 14:44:51 Tower avahi-daemon[10171]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
     Mar 7 14:44:51 Tower avahi-daemon[10171]: Withdrawing address record for fe80::42:c2ff:fe45:3fc5 on docker0.
     Mar 7 14:44:51 Tower avahi-daemon[10171]: Withdrawing address record for 172.17.0.1 on docker0.
     Mar 7 14:44:51 Tower emhttpd: shcmd (9923956): umount /var/lib/docker
     Mar 7 14:44:52 Tower cache_dirs: Stopping cache_dirs process 4448
     Mar 7 14:44:53 Tower cache_dirs: cache_dirs service rc.cachedirs: Stopped
     Mar 7 14:45:04 Tower unassigned.devices: Unmounting All Devices...
     Mar 7 14:45:04 Tower unassigned.devices: Unmounting partition 'sda2' at mountpoint '/mnt/disks/WD_Green_4TB_714'...
     Mar 7 14:45:04 Tower unassigned.devices: Unmount cmd: /sbin/umount -fl '/dev/sda2' 2>&1
     Mar 7 14:45:04 Tower ntfs-3g[15177]: Unmounting /dev/sda2 (WD Green 4TB 714)
     Mar 7 14:45:04 Tower unassigned.devices: Successfully unmounted 'sda2'
     Mar 7 14:45:04 Tower sudo: pam_unix(sudo:session): session closed for user root
     Mar 7 14:45:05 Tower emhttpd: shcmd (9923957): /etc/rc.d/rc.samba stop
     Mar 7 14:45:05 Tower wsdd2[9999]: 'Terminated' signal received.
     Mar 7 14:45:05 Tower winbindd[10075]: [2023/03/07 14:45:05.569343, 0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
     Mar 7 14:45:05 Tower winbindd[10075]: Got sig[15] terminate (is_parent=1)
     Mar 7 14:45:05 Tower winbindd[10077]: [2023/03/07 14:45:05.569373, 0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
     Mar 7 14:45:05 Tower winbindd[10077]: Got sig[15] terminate (is_parent=0)
     Mar 7 14:45:05 Tower winbindd[11433]: [2023/03/07 14:45:05.569416, 0] ../../source3/winbindd/winbindd_dual.c:1957(winbindd_sig_term_handler)
     Mar 7 14:45:05 Tower winbindd[11433]: Got sig[15] terminate (is_parent=0)
     Mar 7 14:45:05 Tower wsdd2[9999]: terminating.
     Mar 7 14:45:05 Tower emhttpd: shcmd (9923958): rm -f /etc/avahi/services/smb.service
     Mar 7 14:45:05 Tower avahi-daemon[10171]: Files changed, reloading.
     Mar 7 14:45:05 Tower avahi-daemon[10171]: Service group file /services/smb.service vanished, removing services.
     Mar 7 14:45:05 Tower emhttpd: Stopping mover...
     Mar 7 14:45:05 Tower emhttpd: shcmd (9923960): /usr/local/sbin/mover stop
     Mar 7 14:45:05 Tower root: mover: not running
     Mar 7 14:45:05 Tower emhttpd: Sync filesystems...
     Mar 7 14:45:05 Tower emhttpd: shcmd (9923961): sync
     Mar 7 14:45:06 Tower ProFTPd: Running unmountscript.sh...
     I checked the log after startup, and can't see anything related to the array until this entry:
     Mar 7 20:12:03 Tower Parity Check Tuning: DEBUG: Automatic Correcting Parity-Check running
  5. Over the last couple of months I've shut the server down twice. Both times a parity check auto-started on startup with a message about an unclean shutdown. Each time I stopped the automatically started check (as I'd had trouble with that before) and started a correcting check again using Edge. The first time the check completed without error, but I've done the same thing again and I'm back to where I was when I started the check with Firefox:
     Mar 10 18:06:11 Tower Parity Check Tuning: DEBUG: Manual Correcting Parity-Check running
     Mar 10 18:09:37 Tower kernel: md: recovery thread: P corrected, sector=39063584664
     Mar 10 18:09:37 Tower kernel: md: recovery thread: P corrected, sector=39063584696
     Mar 10 18:09:37 Tower kernel: md: sync done. time=148148sec
     Mar 10 18:09:37 Tower kernel: md: recovery thread: exit status: 0
     These are the same errors on the same sectors as before, so it appears not to be related to the browser I'm using. I'm at a loss as to what to do now. Is there anything else I can try?
     A new issue has also appeared: when I click the History button under Array Operations I get a blank box overlaid, with no means of closing it; I have to go to another page and back to view the Main page again. This happens in both Firefox and Edge.
  6. If you compare your i9 with a couple of 13th gen CPUs, you'll see how far things have developed: https://www.cpubenchmark.net/compare/3334vs5008vs5156/Intel-i9-9900K-vs-Intel-i5-13600K-vs-Intel-i3-13100F The only 13th gen i3 on that site is the 13100F, which doesn't have an iGPU (you'll want an iGPU for Plex transcoding), but that'll give you an idea of the sort of performance you can expect.
     I recently upgraded from a ThinkServer with an old Xeon to a 13th gen i5 (13600K) and, although its max TDP is higher, I'm getting better energy efficiency with my new build in general usage, maybe due to the efficiency cores. I have yet to do much in the way of power optimisation, so I'm hoping I can get further reductions.
     I went for an ASRock Z790 RS Pro board with 32GB of DDR5, as I plan on holding on to this for a good few years and figured I'd get the latest stuff now so I have better upgrade options in the future, or it'll be suitable for a gaming PC build. I looked into getting a Z690 and DDR4, but there really wasn't much difference in cost, so I decided to go with the newest kit.
     I also have a pair of 1TB 770s, which I got for a good price during the recent Black Friday sales. I'm running them in RAID 1 and plan on using them as a dedicated pool for my Nextcloud storage. I haven't got Nextcloud working yet, so can't comment on their performance, but they'll likely be more than fast enough.
     Up to you, but if it were me I'd be looking to unload your old kit while it's still worth some money and invest in some more recent parts with better efficiency and upgradeability.
  7. If you read about my recent experience and my post explaining how Firefox gave me nearly a month of headaches, it might be safest to reconsider using Firefox with Unraid.
  8. The sync of parity 2 has now completed without error. I find it worrying that using Firefox with the Unraid GUI has been an issue since at least v6.11.1, and that no message is displayed to warn Firefox users of potential issues and that it's safest to avoid it. I've spent many hours on this, and have avoided using my server since building it nearly a month ago until this was resolved. I know Firefox is a marginal browser nowadays, but I'd imagine there's a higher percentage of Firefox users among the Unraid customer base than globally. There must be a fair few people like me experiencing weird issues, and the last thing you'd imagine in 2022/3 is that the browser you're using could cause technical issues with your server's functionality.
  9. After the check that finished on the 8th, while attempting to remove the 2nd parity drive, the array wouldn't start back up. The message in the GUI footer was "Array Stopped... stale configuration". From that I found this Reddit post, which pointed to the issue being due to using Firefox, which has been my daily driver since the Firebird days. I have seen a fair few "resend the last request" dialogues since I started using Unraid, always chosen the 'Cancel' option, and there seemed to be no consequences. In the Reddit post they reference v6.11.5, yet I've been on v6.11.1 since I first tried Unraid. They also mention that this issue is related to making changes to the array, yet I'm fairly certain I've seen it in a few different places throughout the GUI. Someone there even found that their /mnt/user folder was missing until they downgraded to v6.11.4.
     To be clear: I have not been getting the "resend the last request" dialogue while attempting to get this parity check to complete successfully. If I had, that would have rung alarm bells and I would have investigated why the dialogue was appearing.
     I rebooted the server and the array started back up. I keep a portable version of Chrome around for testing, so reran a correcting check (with only the parity 1 drive) using that, and had success. I've now reconnected the parity 2 disk to the array and a parity sync is running. Looks like I'm going to have to run 2 browsers until I hear that this Unraid incompatibility with Firefox has been rectified.
  10. The correcting check with parity 2 disconnected finished earlier. It says it corrected errors on the same 2 sectors as previously:
      Jan 8 16:18:01 Tower Parity Check Tuning: DEBUG: Automatic Correcting Parity Check running
      Jan 8 16:24:01 Tower Parity Check Tuning: DEBUG: Automatic Correcting Parity Check running
      Jan 8 16:30:01 Tower Parity Check Tuning: DEBUG: Automatic Correcting Parity Check running
      Jan 8 16:31:20 Tower kernel: md: recovery thread: P corrected, sector=39063584664
      Jan 8 16:31:20 Tower kernel: md: recovery thread: P corrected, sector=39063584696
      Jan 8 16:31:21 Tower kernel: md: sync done. time=144317sec
      Jan 8 16:31:21 Tower kernel: md: recovery thread: exit status: 0
      Yet the status and history both say there are still 2 errors:
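      To pull just these correction lines out of a saved syslog without scrolling, a grep like this works (the sample below is a here-doc copy of the entries above; on a live server you'd point it at /var/log/syslog or an exported syslog instead):

```shell
# Count md recovery-thread correction lines in a syslog copy.
# The sample file reuses entries quoted in the post above; swap
# /tmp/syslog.sample for /var/log/syslog on a real server.
cat > /tmp/syslog.sample <<'EOF'
Jan 8 16:30:01 Tower Parity Check Tuning: DEBUG: Automatic Correcting Parity Check running
Jan 8 16:31:20 Tower kernel: md: recovery thread: P corrected, sector=39063584664
Jan 8 16:31:20 Tower kernel: md: recovery thread: P corrected, sector=39063584696
Jan 8 16:31:21 Tower kernel: md: sync done. time=144317sec
EOF
hits=$(grep -c 'recovery thread: P corrected' /tmp/syslog.sample)
echo "corrected sectors: $hits"
```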
  11. That is somewhat comforting : ) OK- willing to try anything to get this back to working. Is the process to stop the array, remove the parity 2 disk, restart the array and run a correcting parity check? Edit: have now done as above, will report back
  12. I first set this system up on an older PC with the 2 parity drives connected via USB. With that setup everything worked OK; I didn't do a check, but the sync completed successfully after I added the 2nd parity. I then moved the parity drives from USB to SATA with the new system- I checked parity as Unraid was seeing them as different drives due to being connected directly rather than via their USB controllers. That check, and all that I've done since they've been connected directly, have come back with 2 errors, but only the check that finished on the 3rd was definitely run as a correcting check, confirmed by the syslog saying that the 2 problematic sectors (39063584664 & 39063584696) had been corrected. These are the same sectors listed with errors in today's result. Not sure if it's relevant, but while the check's in progress I see it run through the drives and finish with each one in order of size, as expected. My biggest array drive is 18TB, the 2 parities 20TB. It seems like these errors are being found after it's finished checking the 18TB, so the error is somewhere in that last 2TB.
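      That "last 2TB" hunch can be sanity-checked with a bit of arithmetic: md reports 512-byte sectors, so the reported sector number converts to a byte offset like this (a quick sketch, using the sector number from the syslog above):

```shell
# Convert the reported md sector number to a byte offset in TB.
# md sectors are 512 bytes; the largest data disk ends at 18 TB
# and the parity disks are 20 TB.
offset_tb=$(awk 'BEGIN { printf "%.1f", 39063584664 * 512 / 1e12 }')
echo "first bad sector sits about ${offset_tb} TB into the parity disks"
```

      That offset lands near the very end of the 20 TB parity, well past the 18 TB point where the largest data disk ends, which fits the observation.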
  13. Yeah, all drives (other than the NVMes) are now connected to the motherboard's SATA ports.
  14. Dang it- I'm caught in a loop!
      Jan 6 04:51:01 Tower Parity Check Tuning: DEBUG: Manual Non-Correcting Parity Check running
      Jan 6 05:00:01 Tower Parity Check Tuning: DEBUG: Manual Non-Correcting Parity Check running
      Jan 6 05:02:08 Tower kernel: md: recovery thread: PQ incorrect, sector=39063584664
      Jan 6 05:02:08 Tower kernel: md: recovery thread: PQ incorrect, sector=39063584696
      Jan 6 05:02:08 Tower kernel: md: sync done. time=25740sec
      Jan 6 05:02:08 Tower kernel: md: recovery thread: exit status: 0
      I'm not sure why this one took so much longer than previous checks to complete. At times it sounded like it was random seeking for hours, with very slow read speeds. I've been avoiding using the array during these checks as much as possible- barely any writes and only a few reads.
      tower-diagnostics-20230106-1229.zip
  15. Parity check's now complete, but I don't see any difference. The status: ...and the history: both look the same as the previous non-correcting checks, with no mention of the errors having been corrected, which is of concern considering what @itimpi said previously: However, in the syslog I see this:
      Jan 3 12:48:01 Tower Parity Check Tuning: DEBUG: Manual Correcting Parity Check running
      Jan 3 12:49:22 Tower kernel: md: recovery thread: PQ corrected, sector=39063584664
      Jan 3 12:49:22 Tower kernel: md: recovery thread: PQ corrected, sector=39063584696
      Jan 3 12:49:22 Tower kernel: md: sync done. time=148056sec
      Jan 3 12:49:22 Tower kernel: md: recovery thread: exit status: 0
      Can I now consider the parity valid? Should I now run a non-correcting check as @trurl recommended previously?
  16. Hello all- I've followed the guide here: https://unraid.net/de/blog/wireguard-on-unraid to set up WireGuard on Unraid, with clients on both my Windows laptop (via the generated .zip config) and Android phone (via the generated QR config). I've opened a UDP port in my Mikrotik router's firewall, and I have a static IP so don't need to worry about dynamic DNS setup. I've double-checked all settings on Unraid's WireGuard page- it all looks good to me. However, when I click the Inactive switch to toggle it to Active, it switches for a split second, then goes back to Inactive. The same happened the first time I tried it, so I deleted the tunnel and input everything from scratch in case I'd messed something up. When re-entering everything, before clicking the Add Peer button, I tried clicking the Inactive toggle and it stayed activated. It was only once I'd added the peer that it would not stay active. Any ideas?
      Edit: have found that this is a known error (possibly only with 6.11.1) and in case anyone else is looking for the fix, it's here:
  17. I do indeed- how could I not be making use of such a fine piece of software? ; P Thanks so much, that's confirmed:
      root@Tower:~# parity.check status
      DEBUG: Manual Correcting Parity Check running
      Status: Manual Manual Correcting Parity Check (9.6% completed)
      P.S. It'd be super great if this info could be displayed on the dashboard within the Parity block.
  18. Thanks. I've removed it, rebooted and started the parity check again. It looks like this: Is there any way to confirm that it's running as a correcting check without having to wait for it to complete?
  19. Happy New Year, all. It's just finished- found 2 errors again as expected, but listed the same in history as before, as a check without correction. On my system (6.11.1) the box is checked by default. I saw @trurl's last post after starting the sync, so stopped it and refreshed the Main page to double-check, and this is what mine looks like by default: So I unchecked it and re-checked it, just to be sure, and started it again. But that hasn't made any difference- it has run a check again and not corrected the errors. This confirms what I was fairly sure of previously- that I'd run with this box checked on the check that finished on the 21st.
      I'm also getting a lot of these in my syslog while the server's sitting idle:
      Jan 1 18:05:39 Tower rc.diskinfo[6358]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:05:47 Tower rc.diskinfo[6742]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:05:54 Tower rc.diskinfo[7030]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:02 Tower rc.diskinfo[7807]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:09 Tower rc.diskinfo[8114]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:17 Tower rc.diskinfo[8598]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:24 Tower rc.diskinfo[9008]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:32 Tower rc.diskinfo[9357]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:39 Tower rc.diskinfo[9738]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:46 Tower rc.diskinfo[10176]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:06:48 Tower rc.diskinfo[10318]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:07:03 Tower rc.diskinfo[10980]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      Jan 1 18:07:03 Tower rc.diskinfo[11040]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
      I've looked into what could be causing this, and not found anything yet. I don't currently have any unassigned devices connected. I've also been getting a GUI crash, seemingly due to unassigned devices, when running rsync manually or via the unbalance plugin to move files around. Please see my post here for more info. Could these be causing issues with me not being able to run the parity check as correcting?
  20. OK, and that's achieved by leaving the "Write corrections to parity" box checked before I click the Check button? As I said, that's what I thought I'd done on the one that finished on the 21st. I need to be clear that's the way to do it so I'm not wasting another 2+ days on the wrong type of parity check... Thanks
  21. Here it is: As both of the last ones are showing as Parity-Check, it looks like I didn't do a Sync as intended. Is a parity sync achieved by keeping the "Write corrections to parity" checked before clicking the Check button? (I thought that's what I'd done when I ran the one that ended on the 21st).
  22. I'd originally done a correcting parity sync and got this result: Then was advised to do a non-correcting that finished on the 25th with 2 errors. Won't running a correcting parity check again be doing the same as the one that completed on the 21st?
  23. Just got back from xmas break (haven't managed to get Wireguard working yet) and this was the non-correcting parity check's result: