Everything posted by itimpi

  1. I would definitely not bother with 2 parity disks for 1 data drive. Even with the 5 data drives you currently have, 2 parity disks feels like overkill. Another option to consider is to run without any parity disks and leave one of the 20TB disks to do regular backups while keeping the other as a hot spare; that way you have two independent copies of your data. If you move to 2 data drives then 1 parity drive is more than enough. You should always have backups of any important or irreplaceable data, as there are other ways to lose data than a disk failing (which is what parity protects you against).
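     For background on why a single parity drive covers exactly one failed data drive: Unraid's first parity is conceptually a bitwise XOR across the corresponding sectors of all the data drives, so any one missing drive can be recomputed from the others. A toy illustration in Python with made-up byte values (not the actual implementation):

        import functools, operator

        # Pretend each data drive holds one byte at some sector position.
        data_drives = [0b10110010, 0b01011100, 0b11100001]

        # Parity for that position is the XOR of every data drive.
        parity = functools.reduce(operator.xor, data_drives)

        # If drive 1 fails, XOR parity with the survivors to rebuild it.
        rebuilt = parity ^ data_drives[0] ^ data_drives[2]
        assert rebuilt == data_drives[1]
        print(f"rebuilt drive 1 byte: {rebuilt:#010b}")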
  2. Yes. That setting is covered here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI. In addition every forum page has a DOCS link at the top and a Documentation link at the bottom. The Unraid OS->Manual section covers most aspects of the current Unraid release. If you do not set it then I think the current default is based on a percentage of the size of your largest disk.
  3. If this was 1 data drive and 1 parity drive then this is a special case where it just works out that parity is a mirror of the data drive. In any other case one would not expect the parity drive to be mountable. Sounds as if the share you have for backups could have been set up to be on the 'cache' pool? Do you know what file system that was using and whether it was set up for redundancy? Do you by any chance have the Connect plugin installed? If so you may have set that up to make a backup in the cloud.
  4. There are continual messages along the lines of:

        Mar 5 09:37:53 crookserver2 kernel: sd 1:0:5:0: attempting task abort!scmd(0x000000001af04f39), outstanding for 30416 ms & timeout 30000 ms
        Mar 5 09:37:53 crookserver2 kernel: sd 1:0:5:0: [sdg] tag#629 CDB: opcode=0x88 88 00 00 00 00 00 00 02 7f 80 00 00 04 00 00 00
        Mar 5 09:37:53 crookserver2 kernel: scsi target1:0:5: handle(0x000d), sas_address(0x4433221105000000), phy(5)
        Mar 5 09:37:53 crookserver2 kernel: scsi target1:0:5: enclosure logical id(0x500605b003f99080), slot(6)
        Mar 5 09:37:53 crookserver2 kernel: sd 1:0:5:0: task abort: SUCCESS scmd(0x000000001af04f39)

     This implies there is either a problem with sdg (disk1) or with the hardware to which it is attached.
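     If you want a quick feel for how often this is happening and which device it involves, you can scan a saved copy of the syslog for these messages. A minimal sketch in Python (the filename is just an assumption - point it at the syslog extracted from your diagnostics zip):

        import re
        from collections import Counter

        SYSLOG = "syslog.txt"  # assumed path to your saved syslog

        aborts = Counter()  # task aborts per SCSI address
        names = {}          # SCSI address -> kernel device name (e.g. sdg)

        with open(SYSLOG, errors="replace") as f:
            for line in f:
                addr = re.search(r"sd (\d+:\d+:\d+:\d+):", line)
                if not addr:
                    continue
                if "attempting task abort" in line:
                    aborts[addr.group(1)] += 1
                dev = re.search(r"\[(sd[a-z]+)\]", line)
                if dev:
                    names[addr.group(1)] = dev.group(1)

        for addr, n in aborts.most_common():
            print(f"{names.get(addr, addr)}: {n} task aborts")

     If one device dominates the counts, that points at that disk, its cabling, or its controller port.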
  5. What share is it? You might want to check the Minimum Free Space setting for that share as you may find that is larger than the free space on any of your array drives. Working directly with the disk share bypasses that setting.
  6. I do not see how this follows? Once one realises it is going to be difficult to separate bug-fix releases from releases that incorporate new features (the company is probably not big enough to run multiple parallel development streams), one has to pick a timeframe, and 1 year is easy to quantify.
  7. Probably worth pointing out that the minimum recommended memory for everything to function as expected on current Unraid releases is 4GB. Any chance of you increasing the RAM in your system?
  8. Perhaps you should try this in Maintenance mode as that would guarantee there is no file level access going on.
  9. Disk2 and disk3 are still getting a lot of accesses.
  10. Not sure but I think it does continue - someone else will probably chime in with the definitive answer.
  11. The official online documentation is accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The Unraid OS->Manual section in particular covers most features of the current Unraid release. Is that how you have been getting to it? Do you have suggestions for improving the layout?
  12. You can only run with drives OR shares - mixing them can lead to data loss. Click on the leftmost icon of a drive on the Main tab and that will show the contents of that drive. It will allow you to move them to a different drive.
  13. No need to run pre-clear as the parity rebuild will overwrite every sector anyway. Replacing disks is covered here in the online documentation.
  14. Easiest thing is to use Dynamix File Manager plugin. Might as well get used to it as it is going to be built into future Unraid releases.
  15. That looks fine. Building parity onto it will provide a good additional test.
  16. The name is always something cryptic (since the directory entry cannot be found). If it is empty you can simply delete it.
  17. That is always a good sign - means the recovery process did not find anything it could not handle. The results after a rebuild should be identical.
  18. Just to clarify, the /?? part indicates how many bits in the subnet mask should be considered to be 1. /24 is therefore 24 one-bits (3 bytes).
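     If it helps to see the arithmetic spelled out, here is a purely illustrative Python sketch:

        def prefix_to_mask(bits: int) -> str:
            """Dotted-quad mask whose top `bits` bits are 1."""
            mask = (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF
            return ".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

        for p in (8, 16, 24):
            print(f"/{p} -> {prefix_to_mask(p)}")
        # /8  -> 255.0.0.0
        # /16 -> 255.255.0.0
        # /24 -> 255.255.255.0  (three full bytes of 1-bits)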
  19. The drive should now mount when the array is started in Normal mode. You should check whether there is a Lost+Found folder on the drive - this is where the repair process would put any files/folders for which it could not find the directory entry that gives them their correct name.
  20. That implies that the drive that contained them is no longer present (or is not mounting).
  21. Yes. The SMART test would be automatically aborted if you power off to replace the drive.
  22. You should also check whether you have a Lost+Found folder on the drive. That is where the repair process will put any files/folders for which it could not locate the directory entry giving the correct name.
  23. Probably irrelevant. The rebuild process just makes the physical drive match the emulated one.
  24. Those messages are perfectly normal when docker containers start up. If you are getting them frequently then you probably have a container that is crashing - you can check the uptime for your containers to spot a culprit.

     You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs.

     The syslog in the diagnostics is the RAM version that starts afresh every time the system is booted. You should enable the syslog server (probably with the option to Mirror to Flash set) to get a syslog that survives a reboot so we can see what leads up to a crash. The mirror to flash option is the easiest to set up (and if used the file is then automatically included in any diagnostics), but if you are worried about excessive wear on the flash drive you can put your server's address into the remote server field.
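     If you want a quick way to spot a crash-looping container, here is a small sketch that wraps `docker ps` (you can just as easily run `docker ps` directly at the console; the "recently started" heuristic below is mine, not anything official):

        import subprocess

        # List every container with its current status/uptime.
        out = subprocess.run(
            ["docker", "ps", "--all", "--format", "{{.Names}}\t{{.Status}}"],
            capture_output=True, text=True, check=True,
        ).stdout

        for line in out.strip().splitlines():
            name, status = line.split("\t", 1)
            # A container that is "Up 2 minutes" or "Restarting" while the
            # server itself has been up for days is a likely culprit.
            suspect = ("second" in status or "minute" in status
                       or "Restarting" in status)
            print(f"{name:30} {status}" + ("  <-- check this one" if suspect else ""))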