Posts posted by mo0oh

  1. Yes, there are also files named "scrub_alpha.cron" and "scrub_bravo.cron".

    The file "scrub_ssd.cron" contained the exact lines that were also present in /etc/cron.d/root I was trying to remove and nothing else.

     

    For now I made a flash backup, deleted the file, and removed the entries in /etc/cron.d/root as well.

    Now I have to wait a week to see if that really cleared all traces of the schedule, but I think that should be it.
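
    In case someone else hits this, the cleanup boiled down to something like the sketch below. The fragment path and the update_cron helper are my assumptions about how Unraid assembles /etc/cron.d/root, so verify both on your own system:

    # Back up the flash drive first (Main -> Flash in the GUI, or a plain copy).

    # Remove the stale schedule fragment; on my system the scrub_*.cron files
    # sat in the dynamix plugin folder on the flash drive (path may differ):
    rm /boot/config/plugins/dynamix/scrub_ssd.cron

    # Delete the matching "zfs_scrub start ssd" lines from /etc/cron.d/root,
    # then (assumption) let Unraid rebuild it from the remaining fragments:
    update_cron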

     

    Will report back if the issue is still present at the next scrub interval.

    For now this seems to be the solution.

     

    Thank you very much!

  2. Until recently my Unraid build consisted of the array and a ZFS mirror pool "ssd".

    I then added a second mirror pool "bravo" and renamed "ssd" to "alpha" via the GUI.

     

    Last week I noticed in the syslog that there still seems to be an active scrub schedule for "ssd" that exits with status 1. I assume that means it fails to run because there is no pool with that name anymore:

     

    Quote

    Feb 12 19:00:01 FelixServer crond[1139]: exit status 1 from user root /usr/local/emhttp/plugins/dynamix/scripts/zfs_scrub start ssd &> /dev/null
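
    (A quick sanity check of that assumption, using standard ZFS tooling: triggering the scrub by hand against the vanished pool fails the same way.)

    # Manually attempt what the cron job runs; "ssd" no longer exists,
    # so zpool refuses with a non-zero exit code.
    zpool scrub ssd
    # cannot open 'ssd': no such pool

    # crond reports this non-zero code as "exit status 1".
    echo $?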

     

    As there is no such pool, I can't deactivate the schedule via the GUI.

     

    After searching the forums for where this cron job might be stored and poking around with Midnight Commander, I found the following entry in /etc/cron.d/root:
     

    Quote

    # Generated zfs scrub ssd schedule:
    0 19 * * 1 /usr/local/emhttp/plugins/dynamix/scripts/zfs_scrub start ssd &> /dev/null

    But even after deleting those lines from the root file the schedule still seems to run: I got the error again today, and the deleted lines also appear to have been restored.
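
    Since the lines come back, something must be regenerating them from another file. One way to search for that source, assuming it lives with the persistent config on the flash drive, is:

    # Search the flash drive (persistent config) and the live cron dir
    # for whatever still references the old "ssd" scrub job:
    grep -r "zfs_scrub start ssd" /boot/config /etc/cron.d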

     

    Is there a way to permanently disable the scrub of "ssd" without the GUI?

    felixserver-diagnostics-20240212-2236.zip
