Everything posted by itimpi

  1. The diagnostics show that disk2 IS disabled (that is what the red ‘x’ means). The SMART information for it looks OK. However since the array was not started before taking the diagnostics we cannot see if the emulated disk2 is mounting correctly.
  2. The only way I can think that might happen is if you had a browser window kept open across a reboot. Is that a possibility in your case?
  3. You can change as many disks as you like after using New Config. Any data on existing disks being kept will be left intact. Any data on drives being removed will no longer be there. Shares will remain intact, but you may have to make adjustments if you have set includes/excludes to match the new set of drives.
  4. The naming of pools is whatever the user wants them to be.
  5. Have you rebooted since installing the key?
  6. That suggests you need to run the check again without the -n option so that it can fix the problems found. If it asks for the -L option, give it.
  7. I would suggest you start with the process documented here in the online documentation for handling unmountable disks.
  8. According to the diagnostics posted, many of your configuration files on the flash are 0 length, so it looks like you DO have corruption of some type. You might want to consider putting the flash drive into a Mac/PC to run a check on it, and after doing that deleting (or renaming) any .cfg files in the 'config' folder that are 0 length so that new ones with default values get created when you next boot Unraid.
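The cleanup step described above can be sketched in shell. This is a minimal sketch, not an exact procedure from the post: it uses a throwaway demo directory so it is safe to run anywhere, whereas on a real Unraid server the config folder lives at /boot/config.

```shell
# Demo stand-in for the flash drive's 'config' folder (/boot/config on Unraid)
CONFIG_DIR=$(mktemp -d)
touch "$CONFIG_DIR/network.cfg"                  # simulate a corrupted 0-byte file
printf 'KEY=value\n' > "$CONFIG_DIR/share.cfg"   # a healthy, non-empty file

# List any zero-length .cfg files
find "$CONFIG_DIR" -maxdepth 1 -name '*.cfg' -empty -print

# Rename rather than delete, so you keep a record of what was corrupted;
# Unraid recreates missing .cfg files with defaults on the next boot
for f in "$CONFIG_DIR"/*.cfg; do
    [ -s "$f" ] || mv "$f" "$f.corrupt"
done
```

Renaming to a `.corrupt` suffix (an illustrative choice) rather than deleting means you can still inspect which files were affected afterwards.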
  9. At that time you started getting crashes from the tdarr server. There have been reports that this docker container can cause issues on Unraid so I would suggest you try to stop running that container to see if your problems go away.
  10. Those settings would only make sense in the context of files also being on the main Unraid array. You would use the ‘Only’ option if you wanted all of the share on a ZFS pool.
  11. You can do such a transfer once every 12 months using the automated system without having to contact support. The time you DO have to contact support is if you want to do another transfer less than 12 months after the previous one.
  12. When you create a ZFS pool you specify the RAID level you want, and that automatically builds in parity if you select a redundant level. If you set any share to only be on that pool then there is not much point in trying to cache it. If you are talking about ZFS in the main array, then each drive is its own single-drive pool with no redundancy, and you still need a parity drive if you want to be protected against a drive failing. In this case caching shares DOES make sense, as the main array still has the performance cost of updating parity even though individual drives may be using ZFS.
  13. The first line is when a spindown is issued, and the second one is Unraid trying to read the SMART data because it thinks the drive has just been spun up again. The issue is trying to determine if something IS spinning it up again.
  14. You might want to open this file on the flash drive - it should be a simple text file with 1 line for each entry in the parity history. Sounds as if it might have gotten corrupted. I think if it is deleted it gets recreated next time any check gets run.
  15. You can use the Parity Swap procedure which is designed for exactly this Use Case.
  16. It does not look as though the drives are even being recognised at the hardware level. How are they currently attached?
  17. There is no point in this since adding the old parity drive to the array will cause it to be 'Cleared' which wipes out any format you have just done.
  18. You mention the testing being optional, but since the parity build will test the disk anyway this seems superfluous and probably just a waste of time.
  19. That is not a valid unique GUID. It should not change, as it is meant to be set at the hardware level. Have you tried rebooting the server (possibly with a power cycle involved) to see if your flash drive goes back to reporting a valid GUID? If it continues to report that GUID of all zeroes then there is a problem with the flash drive, and your only option will be to move to a different flash drive that DOES report a valid GUID.
  20. Difficult to tell anything from your diagnostics as the syslog is continually being spammed by error messages of the form:

      Apr 21 16:09:16 Rack kernel: megaraid_sas 0000:02:00.0: 3368374 (735372555s/0x0004/CRIT) - Enclosure PD 21(c Port 0 - 7/p1) hardware error

      You really need to get this fixed. The syslog was enough to tell me that the plugin detected mover running and ending, but not why the check did not resume after mover ended. Perhaps you can let me have a copy of the following files from the flash drive (either post them or PM them to me) so I can check out your configuration and what happened:

      config/plugins/parity.check.tuning/parity.check.tuning.cfg
      config/plugins/parity.check.tuning/parity.check.tuning.progress.save (or, if there is one, the version without a .save extension)

      If I cannot see the issue from examining the above files then it might also be useful to me if you set the logging option in the plugin's settings to Testing mode, select the option to log only to flash (to avoid all the spam in the main syslog), recreate the issue, and then let me have the file config/plugins/parity.check.tuning/parity.check.tuning.log. After doing that you will want to reset the logging option to a lower level to avoid excessive writes to the flash drive.
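When a syslog is flooded like this, filtering out the repeating controller errors makes the remaining messages readable. A minimal sketch, using a tiny sample log built on the spot so it runs anywhere (on a live server you would point grep at the saved syslog instead):

```shell
# Build a small sample log containing spam and one useful line
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Apr 21 16:09:16 Rack kernel: megaraid_sas 0000:02:00.0: hardware error
Apr 21 16:09:17 Rack kernel: mdcmd (55): spindown 3
Apr 21 16:09:18 Rack kernel: megaraid_sas 0000:02:00.0: hardware error
EOF

# Keep only the lines that are not megaraid_sas controller spam
grep -v 'megaraid_sas' "$LOG"
```

The `-v` flag inverts the match, so everything except the spam survives; this only hides the symptom, of course, and the underlying hardware error still needs fixing.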
  21. When you add a new drive to an existing Unraid array it is expected for it to show as unmountable with no file system. At that point Unraid will be offering to format unmountable drives to create an empty file system on it. Sounds like all you did wrong the first time was omit the format step?
  22. You cannot know which files are involved. You can find what sectors were involved by looking at the syslog, but not what drive those sectors were on. Have you had any unclean shutdowns? If so, a smallish number of errors is expected.

      Since Unraid does not know what files are involved, when you run a correcting parity check Unraid updates parity to match the data drives, as it is most likely they are OK and it is parity that is at fault.

      If you use XFS as the array file system then you need to have checksums of your files to identify any that might be corrupt. The File Integrity plugin can help with this going forward. If you are using BTRFS (or ZFS with 6.12) then running a check on a drive will cause any corrupt files to be listed in the syslog, as those file systems have built-in check-summing on all reads and writes to a file.
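The checksum approach for an XFS share can be sketched with standard tools. This is a hand-rolled illustration, not the File Integrity plugin's actual mechanism; it uses demo paths so it runs anywhere, where on a real server the data directory would be something like a share under /mnt/user:

```shell
# Demo stand-in for an XFS array share
DATA_DIR=$(mktemp -d)
SUMFILE=$(mktemp)
printf 'important data\n' > "$DATA_DIR/file1.txt"

# Record a checksum for every file in the share
( cd "$DATA_DIR" && find . -type f -exec sha256sum {} + ) > "$SUMFILE"

# Later: verify the files; any corrupted file is reported as FAILED
( cd "$DATA_DIR" && sha256sum -c "$SUMFILE" )
```

Re-running the verification step after a parity check flags exactly which files no longer match their recorded checksums, which is the information XFS itself cannot provide.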
  23. Glad to have confirmation. It was not the biggest code change, as it simply involved changing the word ‘false’ to ‘true’ as the return value from a function I had written but got the logic backwards.
  24. Have you made sure you have a current backup of the flash drive just in case the flash drive is on its way out and will need replacing?
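Backing up the flash drive before troubleshooting can be sketched as a straight copy of its contents. A minimal sketch with illustrative paths: on a live Unraid server the flash is mounted at /boot, but here demo directories are used so the snippet is safe to run anywhere.

```shell
# Demo stand-ins for the flash drive (/boot on Unraid) and a backup location
FLASH_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)/flash-backup
mkdir -p "$FLASH_DIR/config" "$BACKUP_DIR"
printf 'demo\n' > "$FLASH_DIR/config/super.dat"   # simulated config file

# Copy everything, preserving permissions and timestamps
cp -a "$FLASH_DIR/." "$BACKUP_DIR/"
```

With a copy like this in hand, a failing flash drive can be replaced and the configuration restored onto the new one.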