itimpi (Moderators) · Posts: 20,701 · Days Won: 56
Everything posted by itimpi

  1. At the moment there is no place to sensibly report the speed of each individual increment (although I have all the required information). It does not seem right to add each individual increment to the parity history, which has always recorded the detail for a single run. One thing that occurs to me is that at the moment the History button on the plugin's Settings page just shows the same information as the History button on the Main page. This could be changed (or an extra button added) to display the history in a more detailed form, including individual increment details. The question is what value this would add, to decide whether it is worth investing the time to implement such a feature.
  2. This is implicit if I know how much time each increment was actually running for. When retrospectively analyzing the running of the parity check (as recorded in the file 'parity.check.progress') I was already tracking both the total size of the parity disk and the point reached by each increment. Correcting the speed calculation was just a case of using $reachedSector instead of $totalSectors (a sketch of this calculation follows after this list). I will need to revisit this calculation when/if allowing for partial checks that do not necessarily start from the beginning, to also take the start sector (currently assumed to be 0) into account, but that is not required yet. I am not sure yet whether I want such partial checks recorded in the History - do you have a view on that?
  3. This SHOULD in principle be accurate. The plugin takes into account pause/resume by working out the length of time that the process was actually running when calculating speed. However, the fact you queried it made me check the calculation, and I see the speed calculation is currently always based on the total number of sectors on the parity disk rather than the point reached. This means that at the moment it will be correct if the parity check completes but wrong if it does not. I will fix this. There is also a missing $ on a variable name, which I will fix as well. The effect of that should not be serious, as it is where the plugin is trying to tell the difference between a check that completed before an unclean shutdown occurred and one that was still in progress at the time. The error means that the check will always (possibly incorrectly) be assumed to have been completed and recorded/reported that way. Another thing to fix, though. What is surprising is that this was not flagged up on my system, as I have the PHP warning level turned up on my test systems - but it is a rather obscure code path.
  4. You probably do not want to leave that setting on once you are happy that the problem is solved: it can generate a LOT of log output, and you may not want that information in posted diagnostics as by its very nature it cannot be anonymized. It is normally recommended that appdata is kept on the cache for performance reasons and to stop array disks spinning up unnecessarily. At the moment appdata is spread across disks 1, 2 and 3 and has no files on the cache. If you want appdata on the cache (and have the space on the cache) then you would need to change the Use Cache setting for appdata to "Prefer" to tell mover that the files should be transferred from the array to the cache.
  5. Worth mentioning that this is found under Settings > Scheduler - it may not be obvious to new users.
  6. I have pushed a new version that fixes the csrf issue on the plugin's Settings page on my servers and means the Apply button works again - I would like confirmation that it has done so for others. The only change I made was to make the names of some variables recently added to the code less generic (and sure to be unique to my code), so it looks like the problem was that I had inadvertently used a name that is used elsewhere in existing Unraid code where it has a different meaning. I suspect that this is a coding trap other developers could fall into if they use short variable names (see the second sketch after this list).
  7. Anything reading or writing to the array disks will slow down a parity check. Quite how severely is difficult to determine - it depends on how much activity there is and how far the drive heads have to move each time to switch between the position required for the file access and the position currently reached in the parity check. The interaction between normal file access and parity checks was one of the drivers behind developing the Parity Check Tuning plugin, so that parity checks can be run in increments outside prime time to minimise the impact on users in their daily use of their Unraid server.
  8. I am going to have to back out the most recent changes and introduce them one at a time to see what causes this. The only thing that springs to mind is a variable name collision with something in the standard Unraid code on the Settings page. As I say, it is weird, as there have been no changes to the actual plugin's settings page for weeks.
  9. I have just cleared my browser's cache and the problem has disappeared. See if the same applies to you? That makes some sense, as csrf_token messages are typically seen when you have browser windows open to the Unraid GUI across a server reboot. EDIT: weirder - I now have this happening on one server and not on another!
  10. Weird - I can reproduce that error, but as I did not change anything that relates to the Settings page I have no idea why it should suddenly occur. I did not even think to test that area, I must admit.
  11. Update now available. Hopefully no other regressions surface. I did ensure this version successfully runs the CLI commands that were failing in your earlier report. Please do not hesitate to report any other issues, or even what you think might be just minor anomalies.
  12. I see the problem - I introduced a regression in the action Description function within the code. I will fix and push out an update shortly. It only became obvious because you were using the CLI option. I must add more comprehensive tests of that path to my regular test plan.
  13. I do not believe there is code in place to update share settings if you rename a cache pool. Whether this should be a bug or a feature request is a moot point.
  14. Not that I know of - but it should only take a few minutes, so that is not normally a concern. If it appears to be taking longer then try refreshing the 'Main' tab in case it is just a GUI update issue.
  15. @wjmiller it is not clear to me what you think you need to copy to the unassigned drive in your use case. If you have configured Plex to use the unassigned drive then its working files will go there. The media files used by Plex I would expect to remain on the main array. Am I missing something, or is it just that you are not familiar with using Unraid?
  16. Yes in principle. If you just assign the drives as data drives to an array that is already parity protected then Unraid will carry out a 'Clear' operation on the drive, to ensure parity remains valid, before you can format and use the drive for storing data. This will take about the same time as the 1st phase of pre-clear (about 20 hours in this case). This 'Clear' step is skipped if the disk has been pre-cleared, so you do at least get back some of the time the pre-clear would take. The array will remain available for use while this 'Clear' operation is in progress. This was not true in Unraid v5, which was the original impetus behind the creation of the pre-clear script, as that script could be run as a background task with the array online. Nowadays the only real reason for using pre-clear is to act as a stress test of a drive before adding it to the array, as it is much easier to handle a hardware failure of a drive before it is added to the array. As well as being useful for older drives of unknown provenance/status, this can help pick up 'infant mortality' on brand new drives.
  17. This is the setting that is most likely causing problems. With that value, once a folder has been created within a share, all further content of that folder is forced to the drive where the folder was created, regardless of the other settings for the share, as Split Level always takes precedence when there is contention between the settings for selecting the target drive.
  18. Not all steps are of equal length, so 100 hours will be pessimistic. Basically there are 3 time-consuming steps, with the pre-read and write phases taking about the same time and the post-read being a bit slower. However, as you are seeing, this is definitely not a fast process with the very large sizes now reached by modern drives. This is probably why more people are not bothering to use the pre-clear process, as it is not mandatory (unless you want to do an initial disk stress test).
  19. If you have no files on the cache and nothing configured to use it then you can simply stop the array; unassign the cache drives; and then restart the array. If you have files on the cache, or have dockers/VMs configured to use the cache, then you would first need to take steps to rectify this before removing the cache drives.
  20. Not sure why you think this - it has always been possible to use HDDs in a cache pool. Perhaps you are getting confused with the converse, which is that it is not recommended that SSDs be used in the main array?
  21. @vr2gb are you sure you are on the latest version of the plug-in? Permission issues were a problem at one point, but these have been fixed for a while as far as I know. I am not seeing them on my test systems. If you had the version with permission issues installed you may need to go into the plugin settings; make a nominal change; and then re-apply the settings to get scheduled pause/resume to work correctly again (although thinking about it I may be able to make a change that does that automatically as part of the plug-in installation). EDIT: On checking the code I see that the cron schedules are already being re-generated during plug-in installation.
  22. After using New Config, when you start the array Unraid will recognise any data drives that are in Unraid format, leave their data intact, and build new parity based on the new drive set.
  23. The way to do this is to use the New Config tool and then leave out the drives you want to remove when re-assigning drives. When you now restart the array it will rebuild parity based on the remaining drives. To get the drives back into the array, use the same approach and rebuild parity again. You need to be aware that this leaves your array unprotected during the periods while parity is being rebuilt.
  24. It is worth pointing out that the 2TB disk showing as sdb looks like it is not in a good state: 5 Reallocated_Sector_Ct PO--CK 086 086 010 - 18184. Whether this is contributing to your problems I do not know.
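Regarding the speed calculation discussed in items 2 and 3 above, here is a minimal sketch of the kind of correction being described - it is not the plugin's actual code. Only $reachedSector and $totalSectors are named in the posts; every other name and value ($sectorSize, $startSector, $runningSeconds and the example figures) is assumed purely for illustration.

```php
<?php
// Illustrative sketch only - not the Parity Check Tuning plugin's real code.
$sectorSize     = 512;           // bytes per sector (assumed for the example)
$totalSectors   = 15628053168;   // total sectors on the parity disk (example figure)
$startSector    = 0;             // currently always 0; a partial check would change this
$reachedSector  = 7814026584;    // sector the check had reached when it stopped
$runningSeconds = 36000;         // time the check was actually running (pauses excluded)

// The bug described in item 3: basing the speed on the whole disk means the
// figure is only correct when the check ran right through to the end.
$wrongSpeed = ($totalSectors * $sectorSize) / $runningSeconds;

// The correction described in item 2: use the distance actually covered,
// allowing for a non-zero start sector if partial checks are ever supported.
$speed = (($reachedSector - $startSector) * $sectorSize) / $runningSeconds;

printf("Average speed: %.1f MB/s\n", $speed / 1000000);
?>
```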
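Regarding the variable name collision described in item 6 above, the following hypothetical sketch illustrates the general trap. The posts suggest a plugin's Settings page code can end up sharing names with existing Unraid GUI code; the specific variable names below are invented for the example and are not taken from either codebase.

```php
<?php
// Hypothetical illustration of the trap described in item 6.

// Risky: a short, generic name.  If code included on the same GUI page already
// uses $status with a different meaning, assigning to it here can break that
// code (in item 6 the symptom was a csrf error on the Settings page).
$status = "resumed";

// Safer: prefix plugin variables with something unique to the plugin so a
// clash with names used elsewhere in the Unraid code is effectively impossible.
$parityTuningStatus = "resumed";
?>
```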