Everything posted by itimpi

  1. You do not want to run a correcting check with a failing drive: if you are getting read errors you are likely to end up corrupting parity for the sectors corresponding to those read errors. This is one reason why it is recommended that any periodic scheduled checks be set to be non-correcting. Your best approach is probably to rebuild onto a new drive, but keep the old drive intact in case you need to get any data off it. If, before you rebuild, you start the array with the suspect drive unassigned, Unraid will emulate the missing drive. Whatever shows up as the contents of the emulated drive is what you will end up with after a rebuild, as all the rebuild process does is make a physical drive match the emulated one. Once you have things sorted you should certainly run non-correcting periodic parity checks to monitor array health. You can use the Parity Check Tuning plugin to minimise the impact by making sure the check only runs during idle periods (although you would have to upgrade to at least Unraid 6.7.0 to be able to run the plugin).
  2. Not sure what you mean by this, as there is no such thing as a Parity Write flag. Any time any sector on any drive is written, the corresponding sector on the parity drive(s) is written as part of the same operation, and this happens immediately before the write is deemed to have finished (a small sketch of that parity update appears after this list). If you are constantly changing drives this can lead to excessive head movement, slowing things down.
  3. The syslog reports lots of read errors on disk2, and there is no SMART information for the drive suggesting it has dropped offline for some reason.
  4. That may be overkill. Unraid does not gain any significant performance advantage from using USB3 instead of USB2, as once loaded it runs from RAM, other than occasionally storing small amounts of configuration information back to the USB stick. USB2 drives have proved in practice to be more reliable during the boot process and more likely to be long-lived (probably because they run much cooler).
  5. If you back it up first, it should be easy enough to test by seeing if you can successfully write the expected amount of data to the drive and then also read it back intact (a rough sketch of such a test appears after this list). If it IS a fake, one of those steps will fail.
  6. This can sort of be done with the current increments capability - it is just that the bands are time limits rather than percentages. I feel something like this should be in a different plugin that is specifically geared to testing disks; then it makes sense to have specific sectors specified. Adding it to the current plugin seems a bit off-topic to its general purpose and potentially confusing to many users. In addition, although I can easily start a check at a specified offset, it is not easy to stop at a defined point with any sort of accuracy. I am thinking of adding a column to the parity history that shows what percentage of the disk was checked on each record, so it becomes clearer when a parity check is quickly abandoned or if it gets aborted for any reason (including unclean shutdowns). I do not think that the information required to implement something like this is readily available to the plugin; it feels like it would require support right at the md driver level that is not currently there. Still, something to think about.
  7. In the diagnostics posted you do not have the array set to autostart, which would explain why the only share visible initially is the flash drive.
  8. What I am trying to assess is the pros and cons of providing such a feature, in particular how it might be misused in a way that could lead to data loss. If I DO implement it I would give positions as a percentage rather than a sector number (a sketch of that percentage-to-sector mapping appears after this list).
  9. Now that the plugin is released with restart of array operations capability I am wondering if there is a sensible Use Case for allowing a partial parity check operation starting from an explicitly given point? Perhaps something like run the check from 20%-30% of the normal check range. At the moment I do not intend to implement such a feature, but it is technically possible so I thought I would at least float the idea to see what others thought and what reservations there would be about such a feature being misused. At the moment this is just a thought experiment (and not any sort of commitment) to try and get feedback.
  10. If you are running the latest Unraid 6.9.0 rc release then the Parity Check Tuning plugin can now restart array operations after a shutdown/reboot or array stop/start.
  11. Since some of these drives failed even the SMART short tests I would strongly recommend that you run the extended tests on the remaining ones. The short test is by no means definitive.
  12. I do not think that will work. The option should be added to the ‘extra parameters’ section of the container’s template. When editing the settings for the container you need to switch to the Advanced view using the toggle at the upper right to see that field.
  13. According to your screen shot you have not unassigned disk12 and parity2 so they are shown as 'missing' and disk5-disk8 are showing as assigned but 'wrong' (presumably because their serial numbers are being reported slightly differently). This will need correcting before you can start the array.
  14. Just pushed an update that fixes the permissions issue. I still have not worked out what changed to cause this in the first place, but the 'brute force' fix I have applied should mean it cannot happen again in the future.
  15. Strange - that implies execute permission is not present on files used internally by the plugin. I have just checked and if I remove and then re-install the plugin I can see the permissions are not as expected. Not sure why that is suddenly an issue but I should be able to easily fix it by explicitly setting them as part of the plugin install processing.
  16. You would be better off using the Parity Swap procedure that is documented in the online documentation, available via the ‘Manual’ link at the bottom of the Unraid GUI.
  17. What version of UnRAID? You need to use the 6.9.0 rc to have support for the 2.5 Gbps NIC.
  18. Does not sound hopeful then. The only good thing is that it is better to discover a disk is going to be a problem before you get around to adding it to the array. If it fails after adding it to the array you are then looking at having to carry out a disk rebuild and potentially data recovery actions. That is the main reason people now run pre-clear - to carry out an initial stress test of a drive to give confidence in its reliability.
  19. No. You restored the working files (in appdata) but it sounds like not the actual binaries for each container. The easy way to fix this is to use the “Previous Apps” option on the Apps tab to tick off the ones to be re-installed, which will cause your docker containers to be re-downloaded with all their settings intact.
  20. Since all pre-clear does is read and write to the disk, if it caused it to fail then it was almost certainly going to fail shortly anyway. It might be worth seeing if you can get any SMART information from the disk, and if you can, whether it can pass the extended SMART test (a sketch of both checks appears after this list). Either of those failing means the disk will need replacing.
  21. You can try, but since the workaround I gave is so trivial I cannot see it going anywhere, as it is likely to be quite a lot of work on Limetech’s part to achieve this (assuming they are even interested).
  22. Not as far as I know without paying money to dockerhub. I thought the limits were not expected to impact most users, but maybe this is not the case.
  23. That normally means that you have a docker running that is writing files internally to the image instead of using a location mapped to storage external to the image.
  24. I think you will find it is not the LSI but something running on UnRAID trying to access the drive.
  25. I wonder if this is a side-effect of the new pull limits being implemented on dockerhub?
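
The parity behaviour described in item 2 can be illustrated with a small sketch. Unraid's single parity is a byte-wise XOR across the data drives, so when one data sector is rewritten the new parity sector can be derived from the old parity plus the old and new contents of just that one sector, and both writes complete together. This is only an illustration of the arithmetic; the real work happens on raw device sectors inside the md driver, not in Python.

```python
# Illustration of XOR-based single parity (the scheme used for Unraid parity 1).
# Toy 4-byte "sectors" stand in for real 512-byte/4K device sectors.

def parity_of(sectors):
    """Parity sector = byte-wise XOR of the same sector on every data disk."""
    result = bytearray(len(sectors[0]))
    for sector in sectors:
        for i, b in enumerate(sector):
            result[i] ^= b
    return bytes(result)

def update_parity(old_parity, old_data, new_data):
    """Read-modify-write: new parity follows from old parity plus the old and
    new contents of just the sector being rewritten."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

disks = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = parity_of(disks)

# Rewrite the sector on disk 1; the parity sector is updated as part of the same operation.
new_sector = b"\xde\xad\xbe\xef"
parity = update_parity(parity, disks[1], new_sector)
disks[1] = new_sector

assert parity == parity_of(disks)   # parity still matches the data drives
```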
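
For the capacity test suggested in item 5, the sketch below fills the drive with files of pseudo-random data and then re-reads them to verify their hashes; a fake-capacity stick will either fail the writes or return corrupted data on read-back. The mount point is an assumption, and for a rigorous result the drive should be unmounted and remounted (or caches dropped) between the write and read passes so the data is not served back from RAM. Dedicated tools such as f3 do the same job more thoroughly.

```python
# Hedged sketch: fill a suspect drive with known random data, then read it back and
# compare hashes. MOUNT is a hypothetical mount point - substitute the real one.
import hashlib, os

MOUNT = "/mnt/usbtest"        # hypothetical mount point of the drive under test
CHUNK = 64 * 1024 * 1024      # one 64 MiB file per iteration

def fill_and_hash():
    hashes = []
    i = 0
    while True:
        data = os.urandom(CHUNK)
        path = os.path.join(MOUNT, f"fill_{i:05d}.bin")
        try:
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())    # force it onto the device
        except OSError:                 # drive full, or the write itself failed
            break
        hashes.append((path, hashlib.sha256(data).hexdigest()))
        i += 1
    return hashes

def verify(hashes):
    ok = True
    for path, expected in hashes:
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            print(f"MISMATCH: {path} - drive is likely fake or faulty")
            ok = False
    if ok:
        print(f"All {len(hashes)} files read back intact")
    return ok

verify(fill_and_hash())
```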
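
The percentage-based positions mentioned in items 8 and 9 would ultimately need translating into sector offsets for the md driver. The snippet below is hypothetical arithmetic only, not plugin code, showing one way a 20%-30% band could be mapped onto a sector range.

```python
# Hypothetical mapping from a percentage band to a sector range on the parity device.

def percent_band_to_sectors(total_sectors, start_pct, end_pct, align=8):
    """Map e.g. (20, 30) onto start/end sectors, aligned to 'align'-sector boundaries."""
    if not (0 <= start_pct < end_pct <= 100):
        raise ValueError("percentages must satisfy 0 <= start < end <= 100")
    start = int(total_sectors * start_pct / 100) // align * align          # round down
    end = -(-int(total_sectors * end_pct / 100) // align) * align          # round up
    return start, min(end, total_sectors)

# A 10 TB parity drive has roughly 19,531,250,000 512-byte sectors.
start, end = percent_band_to_sectors(19_531_250_000, 20, 30)
print(f"check sectors {start:,} to {end:,}")
```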
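
Items 11 and 20 both come down to the same two checks: whether any SMART information can be read from the drive at all, and whether it can pass an extended self-test. The sketch below shells out to smartctl (from smartmontools, which is available from an Unraid console); the device path is a placeholder to be replaced with the drive in question, and this is illustrative rather than a polished tool.

```python
# Run the two SMART checks mentioned above via smartctl (run as root).
import subprocess

DEVICE = "/dev/sdX"   # placeholder - replace with the actual device

# 1. Can we read SMART information at all? A drive that has dropped offline will fail here.
info = subprocess.run(["smartctl", "-a", DEVICE], capture_output=True, text=True)
print(info.stdout or info.stderr)

# 2. Start the extended (long) self-test; the result appears later in the
#    self-test log section of another `smartctl -a` run.
subprocess.run(["smartctl", "-t", "long", DEVICE], check=False)
```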