Jimmeh

Everything posted by Jimmeh

  1. Fair enough, I'll make that change. Thanks again!
  2. Thanks for taking the time to respond. I noticed that the entries corresponding to the ones in my previous screenshot are identified as "recon D5" rather than "check P", so I assume that's where the "Action" column comes from (see the field breakdown sketched after post 9 below).

     config\parity-checks.log:

     2022 Dec 27 14:38:55|101329|177.6MB/s|0|0|check P|131928|2|SCHEDULED Correcting Parity Check
     2023 Jan 31 16:14:10|51244|351.3MB/s|0|0|check P|51244|1|SCHEDULED Correcting Parity Check
     2023 Feb 9 22:54:05|105093|171.3MB/s|0|0|recon D5|105093|1|AUTOMATIC Parity Sync/Data Rebuild
     2023 Feb 27 02:00:22|3|6.0 TB/s|0|0|recon D5|17578328012|3|1|Scheduled Correcting Parity Check
     2023 Mar 1 08:06:32|18361|980.4 MB/s|0|0|recon D5|17578328012|18361|1|Manual Correcting Parity Check
     2023 Mar 30 04:34:41|83202|216.3 MB/s|0|0|recon D5|17578328012|336267|5|Scheduled Correcting Parity-Check

     Requested files attached. Thanks for your help.

     parity.check.tuning.progress.save
     parity-checks.log
  3. Firstly, thanks for this plugin; I have been using it for a while and your work is greatly appreciated. I have a few strange issues which I'm unsure are due to configuration errors on my part, so let me try to give an overview.

     My default parity check options are to trigger a custom parity check on the last Monday of every month, and cumulative parity checks are disabled here as shown. My settings for your plugin are to resume daily at 3:00, pause at 17:30, and pause if the mover gets in the way. I have just now enabled the debugging option to see if that provides any more detail.

     Looking through the syslogs I can see the parity check is resumed as expected at 3:00 and is correctly paused when the mover interferes, but it throws exit status 255 after the mover exits and does not resume (the pause/resume commands involved are sketched after post 9 below):

     Apr 26 03:00:01 medianator Parity Check Tuning: Resumed: Scheduled Correcting Parity-Check
     Apr 26 03:00:01 medianator Parity Check Tuning: Resumed: Scheduled Correcting Parity-Check (71.7% completed)
     Apr 26 03:00:07 medianator kernel: mdcmd (63): check resume
     Apr 26 03:00:07 medianator kernel:
     Apr 26 03:00:07 medianator kernel: md: recovery thread: check P ...
     Apr 26 06:00:24 medianator Parity Check Tuning: Mover running
     Apr 26 06:00:29 medianator kernel: mdcmd (64): nocheck PAUSE
     Apr 26 06:00:29 medianator kernel:
     Apr 26 06:00:29 medianator kernel: md: recovery thread: exit status: -4
     Apr 26 06:00:29 medianator Parity Check Tuning: Paused: Mover running: Scheduled Correcting Parity-Check (82.6% completed)
     Apr 26 06:04:11 medianator crond[1200]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Apr 26 06:06:26 medianator Parity Check Tuning: Mover no longer running
     Apr 26 06:06:31 medianator crond[1200]: exit status 255 from user root /usr/local/emhttp/plugins/parity.check.tuning/parity.check.tuning.php "monitor" &>/dev/null

     Additionally, despite the syslog showing "Scheduled Correcting Parity-Check", the parity operation history now references my last 3 scheduled checks as "Data-Rebuild" rather than a scheduled check. Once the check is complete each month there are no errors and everything looks fine. Does this give you an idea of what could have gone awry? Any suggestions would be welcome.
  4. 5.11.4 was released on July 10th. Will you be pushing an updated docker image or is there something I need to do on my side? https://github.com/pi-hole/pi-hole/releases/tag/v5.11.4 Edit: Thanks
  5. Looks like the repo isn't carrying anything later than 1.0.0-RC1 and we're on RC4 now. Not sure what's going on there, any ideas?
  6. From what I've read it's to do with SMB3 on the new Windows 10 1510 rollup. Here's how to sort it if you aren't prepared to wait for a patch from MS. https://support.microsoft.com/en-us/kb/2696547
  7. I had a similar message when I was trying to remove my lost+found directory after I had copied what I wanted out of it. In the end I deleted it from the command line via telnet. However, the share only disappeared from the UI once I stopped and started the array.
  8. Thanks for the advice; I'm running the rebuild-tree now, so we'll see in a few days what I can get back. It seems to be going very slowly. For future reference, how should I have gone about moving the files?

     Edit: Looks like I've managed to recover the majority if not all of the data, so thank you very much for that. I've seen reference to users manually copying files to a specific disk, rather than to a share and allowing split level to dictate its location, which is basically what I was trying to do (roughly the sequence sketched after post 9 below). /mnt/disk1, /mnt/disk2, etc. appear to be mount points for physical drives, so I'm trying to figure out why copying data from disk1 to disk2 could cause such a catastrophic failure after a reboot. I am very keen to get a handle on my mistake so that I don't repeat it. Any insight would be appreciated.
  9. Hi guys, I'm hoping it's possible to get out of this corner I've backed myself into, but I'm not particularly hopeful. I'll try to give you a bit of background.

     I have a 4-disk array using the latest version 5 beta:

     * 1x 2TB Parity
     * 2x 2TB Data Disks
     * 1x 200GB Cache

     I set up the array initially with only one 2TB data disk, configured the shares, and copied my data across. Once this had completed, I added the second 2TB drive, enabled the parity disk, and ran a full parity check which completed without errors.

     Since all my data was on one drive, I wanted to move some of it to the second disk, since my split levels kept the shares mostly together. From the command line I used rsync (because I like the options) to copy the data from /mnt/disk1/movies to /mnt/disk2/movies, with a mind to deleting the original data on disk1 when the copy had completed. The rsync completed successfully and I verified that a few of the files on disk2 were fine. I then ran the permissions fix to make sure everything was set correctly. Happy that the data had been moved, I tried to run an rm -rf on the original /mnt/disk1 data, but it didn't behave as I expected, even though mount -l showed the disk mounted rw. A bit confused, I stopped the array and rebooted via the web interface.

     After the reboot both my data drives are completely empty but mounted successfully in the array. My shares have been removed. Running ls on /mnt/disk1 and /mnt/disk2 shows that there are no files or folders on them at all. The parity sync started automatically but I stopped it on the off chance I can salvage the situation.

     So my questions are:
     * What the hell could have happened to cause this? As far as I understood it, I had shut down the array cleanly.
     * Is it possible to recover the data using the parity which was completed before any file copy took place?
     * If so, how would I go about that?

     If anyone can shed any light on this it would be greatly appreciated, even if only to confirm recovery is futile. Thanks for your time.
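
A minimal sketch of how the parity-checks.log fields from post 2 seem to break down: the lines are pipe-delimited, with the date in the first field, the action ("check P" vs "recon D5") in the sixth, and the history description in the last. The /boot/config path is an assumption (the post only shows "config\parity-checks.log"), so adjust to suit:

    # Summarise the Action column from the parity history log (path assumed).
    LOG=/boot/config/parity-checks.log
    awk -F'|' '{ print $1 " | " $6 " | " $NF }' "$LOG"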
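
The syslog in post 3 reduces to the following pause/resume commands. This is only a sketch of what the log shows happening, not the plugin's actual code; the mdcmd path and the pgrep check for the mover are assumptions:

    # Pause the check while the mover runs, resume it afterwards (sketch only).
    MDCMD=/usr/local/sbin/mdcmd   # assumed location of Unraid's mdcmd helper

    if pgrep -f /usr/local/sbin/mover >/dev/null; then
        "$MDCMD" nocheck PAUSE    # matches "mdcmd (64): nocheck PAUSE" in the log
    else
        "$MDCMD" check resume     # matches "mdcmd (63): check resume" in the log
    fi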
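
For the disk-to-disk move described in posts 8 and 9, here is a cautious sketch of the same rsync approach, split into copy, verify, and delete steps. The paths match the ones in the posts; the --dry-run and --checksum passes are extra verification, not something the original commands included:

    SRC=/mnt/disk1/movies/
    DST=/mnt/disk2/movies/

    rsync -avh --dry-run "$SRC" "$DST"            # preview what would be copied
    rsync -avh "$SRC" "$DST"                      # copy, preserving attributes
    rsync -avh --checksum --dry-run "$SRC" "$DST" # should show nothing left to transfer
    # Only once the checksum pass comes back clean:
    # rm -rf /mnt/disk1/movies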