Everything posted by itimpi

  1. What version of Unraid are you running? This command did not exist in v5, so if that is what you are on it could be worth booting with the latest release so that you can get diagnostics (no key needed) and we can get an idea of what is going on.
  2. There is no obvious error showing in the SMART reports, but you could rerun an extended SMART test to be sure. More often than not it is an issue with either the SATA cabling or the power to the drive, so it is worth checking these are well seated. Your syslog is being ‘spammed’ with autofan messages, which makes it very difficult to find anything else useful there. I would suggest you either adjust the autofan settings to get fewer messages in the syslog (or perhaps even disable it completely) and then get new diagnostics so we can see if we can spot anything.
  3. Did the drive show as unmountable before you started the rebuild? Just asking as a rebuild will not clear that state and it is normally better to try to clear the unmountable state before doing the rebuild. Just to clarify - it is not parity that is being rebuilt, but the 'failed' disk is being rebuilt using parity plus the other data drives.
  4. Alternatively try the Manual install method. Unraid does not use GRUB so the message you showed definitely means the flash drive was not correctly created.
  5. The lost+found files/folders are those where the repair process cannot find the directory entries to give them the correct name. It takes manual inspection to determine what they used to be. Normally easier to restore from backups.
  6. Glad to hear that. Have you been getting the array operation restarting correctly as the drives cool down? If you spot any anomalies then please let me know so I can work on resolving them.
  7. At the fine detail level you are correct when you have drives of many different sizes. If, however, you are adding drives of a size you already have, then they will almost certainly not affect the overall time.
  8. You did not go to the current documentation, which is here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  9. xfs_repair always requires the partition number to be supplied, so the device should have been /dev/sdb1 (there is a short command sketch after this list).
  10. The speeds you quote are more than one would normally expect for writing to a parity protected array, so I doubt there is anything wrong. You will never get close to the disk's raw speed because of the way Unraid writes data, as described here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release.
  11. I think the multiple arrays support may have slipped to the 6.14 release with the 6.13 release concentrating on improving ZFS support. I could be wrong about that though.
  12. Note that the number of drives does not affect the speed of parity checks. The length of time a parity check takes is primarily determined by the size of the largest parity drive.
  13. What was the exact command you used? You can get this symptom if the command is not quite right.
  14. Booting in Safe Mode stops any plugins from being loaded.
  15. @Zerginator From your description of your use case you might also find the option to resume an array operation after a restart useful. I personally use it so I can shut down my server overnight when it is not being used and then resume any running array operation from the point previously reached when I restart it the next day.
  16. I cannot reproduce the settings issue. My custom settings appear/disappear as I change the frequency setting. Glad to hear that you correctly got the pause. I guess you now need to wait to see if the resume happens as expected.
  17. As far as I know this works fine. Each User Share can specify which pool is used for caching purposes. What is not supported at the moment is moving files directly between pools, or one pool acting as a cache for another pool. I think both of these are on the roadmap, although I have no idea of the ETA.
  18. Sorry about that - the code that handles temperature issues had not been touched for some time and I assumed that the lack of problem reports meant it was working as I thought it was. I have just pushed an update that seemed to pause and resume as expected in my quick test when disks overheat (without reaching the critical value). Let me know if it now works for you. The critical value should not have been relevant to the pause/resume on temperature; it was intended for the option to shut down the whole server if disks reach that value.
  19. Thanks - that was exactly what I needed. On that line it should be ‘$temp’ rather than simply ‘temp’. At some point the $ must have been accidentally removed, because at one point that code was definitely working (a small illustration of this kind of typo is sketched after this list). It makes sense that it shows up for you, as that is a bit of code that is only executed when drives are detected to actually overheat. I will make a quick fix, do a quick test, and push out an update later today. When I have done so it would be very useful if you could confirm the plugin is now working as expected or if there is something else for me to track down.
  20. It could have meant that the power supply was marginal in some way and changing the parts took it over some limit.
  21. I can see that something has gone wrong in that the script has exited prematurely but not exactly why. If you can go to Tools->PHP Settings and set the error reporting to All Categories then it will hopefully give me the exact line in the code that is causing the problem and why it is a problem.
  22. FYI: There is a good write up of this process and the options here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release.
  23. No - nowadays the Clear runs with the array online. In terms of time it is probably around 2 hours per TB (about the same as the write phase of a pre-clear); a quick worked example is included after this list.
  24. If you add a drive without pre-clearing it then Unraid will Clear it after adding it and starting the array before it is available to be used. The Clear is much faster than a pre-clear but you do have this delay before you can format and use the drive. If you pre-clear then this is done before you add the drive so it becomes immediately available when you finally get around to adding it. Going the pre-clear route is longer in total elapsed time but it does stress test the drive before adding it.
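
For post 9 above, a minimal sketch of the command in question; /dev/sdb1 is taken from that post, and the no-modify first pass is just a cautious habit, not something the post prescribes.

```bash
# Check-only pass first (-n makes no changes), then the actual repair.
# The partition number (the "1" in sdb1) must be included;
# pointing xfs_repair at the whole device (/dev/sdb) will not find the filesystem.
xfs_repair -n /dev/sdb1
xfs_repair /dev/sdb1
```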
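For post 19 above, a hypothetical bash-style illustration of the kind of typo described (the plugin's actual code is not shown in that post): without the leading $ the test uses the literal word temp instead of the variable's value, so the overheat branch never behaves correctly.

```bash
# Hypothetical illustration only - not the plugin's actual code.
temp=52     # example: current drive temperature in Celsius
limit=45    # example: pause threshold in Celsius

# Buggy form: "temp" is a literal string, so the numeric comparison is invalid.
# if [ "temp" -gt "$limit" ]; then ...

# Corrected form: "$temp" expands to the value read from the drive.
if [ "$temp" -gt "$limit" ]; then
    echo "Drive overheating - pause the array operation"
fi
```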
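For post 23 above, a quick back-of-the-envelope calculation using the roughly 2 hours per TB figure quoted there; the 12 TB drive size is only an example.

```bash
# Rough estimate of how long an in-array Clear takes,
# assuming ~2 hours per TB as mentioned in post 23.
size_tb=12                               # example drive size in TB
echo "approx $(( size_tb * 2 )) hours"   # prints: approx 24 hours
```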