itimpi

Moderators
  • Posts

    20,787
  • Joined

  • Last visited

  • Days Won

    57

Everything posted by itimpi

  1. Pending sectors are never a good sign unless they are false positives and subsequently clear themselves (or at the very least get changed to reallocated sectors).
  2. Yes, if you do not have Turbo Write enabled, as described in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. In addition, every forum page has a DOCS link at the top and a Documentation link at the bottom, both of which describe the Unraid write modes. The Unraid OS->Manual section covers most aspects of the current Unraid release.
  3. There are a lot of pending sectors, and the reallocated sector count for that drive is very high, so I would think it really is failing. The reallocation of sectors is internal to the drive, so it seems very unlikely to be the HBA. The only thing I can think of that just might contribute to this would be power related issues. Much more likely, I would think, is that the drive got damaged in transit.
  4. I would suggest using the Dynamix File Manager plugin.
  5. Order is not relevant, as disks are recognised by serial number, not by where they are connected. It is worth checking that it is not something more obvious, such as the HBA no longer being firmly seated in its motherboard slot.
  6. Have you checked that you do not have the Export setting for the shares set to No (which is the default)? You need one of the other settings for them to be visible on the network.
  7. Lots of the array drives started getting errors simultaneously, and they do not show up in the SMART reports, so they went offline. This suggests a problem with something they share, such as power or the HBA they are attached to.
  8. The syslog in the diagnostics does not include the syslog server captured file which needs posting separately.
  9. This is not how the Unraid cache feature works! The purpose of the cache facility on Unraid is to provide a faster perceived write speed when writing to User Shares than you get if you write directly to the parity protected array. Under Unraid, if you have a User Share set up to use a pool for caching purposes, then this is where any new file gets written (space permitting). It later gets moved across to the main array (as long as you have this specified as secondary storage) when mover runs (by default scheduled for overnight). The file only ever exists EITHER on cache OR on array - never on both.
  10. The parity drive has been disabled, which means a write to it failed, so Unraid has stopped using it and is now merely performing a read check on the other drives, so there is probably no reason to continue. The parity drive is showing a lot of reallocated sectors (832), so I suspect it may be a genuine problem with the drive and it will need replacing, rather than our normal first suspect of the power or SATA cabling to the drive. You could try running an extended SMART test on the drive to see if that passes; if it does not, the drive definitely needs replacing.
  11. If you have checksums, that is the way to confirm whether any files are corrupt. If you are using btrfs or ZFS as the format of drives in the array, these formats have built-in check-summing of files. However, XFS is more frequently used on array drives as it is more performant and less prone to file system corruption.
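If you are on XFS and want checksums, a manifest you maintain yourself works fine. A minimal sketch using md5sum (the paths here are placeholders, not anything Unraid creates for you):

```shell
# Placeholder file standing in for data you want to protect
echo "example data" > /tmp/demo.txt

# Record the checksum in a manifest
md5sum /tmp/demo.txt > /tmp/demo.md5

# Later: verify against the manifest; prints "/tmp/demo.txt: OK"
# while the file is unchanged, and "FAILED" if it has been corrupted
md5sum -c /tmp/demo.md5
```

In practice you would run the first step across a whole share and re-run the `-c` verification after any event you are suspicious about.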
  12. I expect this is just the difference between TB and TiB values.
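To illustrate the size of that gap, using a hypothetical 4 TB drive: manufacturers quote decimal terabytes (10^12 bytes) while many tools report binary tebibytes (2^40 bytes), which is roughly a 9% difference:

```shell
# A drive marketed as 4 TB (decimal, 10^12 bytes) shows up as
# about 3.64 TiB (binary, 2^40 bytes) in tools using binary units
awk 'BEGIN { tb_bytes = 4 * 10^12; tib = tb_bytes / 2^40; printf "4 TB = %.2f TiB\n", tib }'
```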
  13. I have had this happen occasionally (particularly after an upgrade) but found that rewriting all the bz* type files fixes it and it then runs fine without further problems.
  14. There is no practical way to find which files might be affected (if any). Since the write order is data drives before parity, then as long as you do not have a hardware error you have to assume it is the parity drives that are out of sync. The remedial action is to run a correcting parity check. This should report the same number of ‘errors’, but it corrects them so that subsequent checks should find 0 errors.
  15. You can get cases where all sticks work fine individually but when you have them all plugged in simultaneously you start getting errors. I assume this is related to the load on the memory controller on the motherboard.
  16. That is one reason why I suggested that you check the cabling. It is quite possible for a disk to fail with no obvious SMART indication. You can try the extended SMART test, as if that fails the drive needs replacing.
  17. What makes you say this? There are lots of errors relating to disk1 in the syslog. They started off as read errors, but eventually you got write errors as well, which would be why the drive was disabled. It could just be a power or SATA cabling issue, so it is worth checking these out.
  18. The problem is that your screen shot shows you have two disabled drives and only single parity. It looks as if at some point in the past you unassigned disk1 and then started the array and have been running unprotected since then. Not sure at the moment of the best way to proceed. Cannot think of any way to recover the data off that drive if it really has failed.
  19. You should make sure that the Minimum Free Space value on the cache is set to be more than the largest file you want cached to avoid it ever completely filling up and causing problems.
  20. Did you run without the -n option so something actually got fixed? The output you posted was from a read-only check.
  21. Yes, as these are ones where the directory entry giving the name could not be found by the repair process, although it can be easier to restore from backups. If you need to go through them manually, then the Linux 'file' command can at least give you the content type for each file.
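A sketch of how 'file' helps with those numerically-named entries. The directory and file name below are stand-ins for illustration, not your actual lost+found path:

```shell
# Stand-in for a lost+found entry that was left with a numeric name
mkdir -p /tmp/lostfound-demo
printf '%%PDF-1.4\n' > /tmp/lostfound-demo/12345

# 'file' inspects the content rather than the name, so it can still
# identify the type (here it reports a PDF document)
file /tmp/lostfound-demo/12345
```

Looping `file` over every entry gives you a quick inventory of what kinds of files the repair recovered, which makes deciding what to keep or rename much faster.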
  22. You might want to check the Minimum Free Space settings for both the pool and your User Shares to see if any of these are larger than the current free space, as this could trigger such messages. You are likely to get better informed feedback if you attach your system’s diagnostics zip file to your next post in this thread so we can check how you have things set up.
  23. You only have the syslog server set up to listen at the moment. You need to set one of the last 2 fields to get it to actually start recording messages from the host server. This is mentioned in the link.