Everything posted by itimpi

  1. Using the Parity Swap procedure, the system does not start overwriting the contents of the old parity drive until it has successfully copied its contents to the new parity drive. If for any reason you get a failure during this stage, you still have the contents of the old parity drive intact for recovery purposes. As long as the parity copy completes, the new parity drive takes over the task of emulating the failed data drive while that drive rebuilds.
  2. The Parity Swap procedure is specifically designed for the case where a data drive has failed and you want to put in a bigger parity drive and reuse the old parity drive to replace the failed one, all as a single procedure. It works in two phases: first, the contents of the old parity disk are copied to what will become the new parity disk. During this phase the array is offline as there must be no changes to parity at this point. When that completes, the new parity disk is used to rebuild the failed data drive onto the old parity drive. The array is online during this phase, albeit with reduced performance due to disk contention. Just thought I should check - are you actually sure the old array drives failed? It is more common for drives to be disabled by Unraid (because a write failed) due to other factors such as cabling, power, etc. than for the physical drive to have actually failed. You never posted your system’s diagnostics zip file so we could check the SMART information for the drives.
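     For reference, both can also be gathered from a console session; a rough sketch (the sdX device letter is just a placeholder for whatever the Main tab shows for the drive):
         diagnostics              # generates the diagnostics zip (normally saved to the logs folder on the flash drive)
         smartctl -a /dev/sdX     # shows the SMART information for an individual drive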
  3. Yes - an important one. Rebuilding the disk makes the contents of the physical disk match what you see on the emulated disk. Therefore any problems on the emulated disk (e.g. file system corruption) will also apply to the physical disk after a rebuild. Rebuilding parity assumes the physical disk contents are OK and makes parity agree with what is on the physical disk.
  4. I tend to do similar things when I wake up in the middle of the night for some reason and post something then.
  5. Yes, with dual parity. Is one (or both) of the failed drives a parity drive, or is it two data drives? If the latter then you could run the Parity Swap procedure on two drives at once rather than doing them consecutively, although you may prefer to do them one at a time.
  6. Unraid will not allow the process. You can never put a drive larger than either parity drive into the main array. Assuming you mean Parity Swap, then since you have dual parity (which means you can handle two simultaneous drive failures) you CAN use it to put in a 16TB drive as parity and use the 14TB parity drive to replace a failed array drive.
  7. That is indicating that a significant amount of corruption is being found on the emulated drive, which could lead to data loss if you simply rebuild the disk. I would suggest trying to mount the physical drive in read-only mode using Unassigned Devices to see if it mounts OK. If it does, then the best way forward is likely to be to assume the physical disk is OK and rebuild parity instead. If you still have the disabled disk plugged in then you could also try running xfs_repair -n /dev/sdX1 from a console session (where X corresponds to the drive on the Main tab) to see what that reports. What is the state of your backup strategy? You should always have backups (preferably offline) of anything important.
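     As a rough sketch of those checks from a console session (the sdX device letter and the /mnt/test mount point are only placeholders):
         xfs_repair -n /dev/sdX1             # -n reports problems only and makes no changes; run it while the partition is not mounted
         mkdir -p /mnt/test
         mount -o ro /dev/sdX1 /mnt/test     # read-only mount to see whether the file system mounts cleanly
         umount /mnt/test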
  8. Cabling issues (power or SATA) are a much more frequent reason for a disk being disabled than actual disk failure, so there is a good chance the disk is fine. Handling of disks being unmountable is covered here in the online documentation that can be accessed via the Manual link at the bottom of the GUI.
  9. Never heard of it not working if configured correctly. The commonest mistake is to set a Split Level setting that is too restrictive and thus forces files/folders to specific drives.
  10. Why? Level 3 is less restrictive than level 2.
  11. Are all drives spun up? Temperatures are not displayed for drives that are currently spun down.
  12. High Water does not try to keep files together - it is the Split Level setting that controls this. Switching to Most Free will not help as Split Level overrides the Allocation method.
  13. Removing a parity drive is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI. You can then use the standard procedure for adding drives to the array, as documented in the same online documentation.
  14. You are probably confusing the system by having both drives with the same ‘UNRAID’ label?
  15. Because if there is a problem on a drive there is no way to identify which one it might be. With modern drives the assumption is that they will return an error if they do not read a sector successfully, but it is possible that is not always the case. Checksums (either built into the file system or via an add-on) are the only way to be certain of validating this.
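     As an illustrative sketch with standard tools (the paths are only examples), checksums can be recorded once and then verified later:
         find /mnt/disk1/Media -type f -exec md5sum {} + > /boot/checksums-disk1.md5     # record a checksum for every file
         md5sum -c /boot/checksums-disk1.md5                                             # verify later, flagging any file that no longer matches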
  16. When ‘correcting’ parity the assumption is that the data drives are good and parity needs to be updated to match. This is the same whether you have single or dual parity; in neither case is a problem drive identifiable as the cause of a parity error.
  17. You also need to have the docker and VM services disabled to allow the files to be moved to the cache pool by mover as they keep files open (and mover cannot move open files).
  18. I am confused - the cache drive is part of the share.
  19. CRC error counts are nearly always caused by the cabling to the drive. Since they never reset, you need to aim to stop them increasing at any appreciable rate.
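     The running total can be seen in the drive's SMART attributes; for example (the sdX device letter is just a placeholder):
         smartctl -A /dev/sdX | grep -i crc     # shows the UDMA_CRC_Error_Count raw value for the drive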
  20. If you do not think that you have a drive playing up then the only thing you can sensibly do is run a correcting parity check. If you have not rebooted then you can post your diagnostics covering the period in question to see if anyone can spot anything. If you want to know whether you have 'bit rot' on array drives then you need to either be using BTRFS as the file system or be using the File Integrity plugin to maintain checksums.
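     For the BTRFS case, a scrub re-reads the data and verifies it against the stored checksums; a rough sketch (the mount point is only an example):
         btrfs scrub start /mnt/cache      # start verifying data and metadata checksums on the pool
         btrfs scrub status /mnt/cache     # check progress and any checksum errors found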
  21. It would be nice if this were the case but apparently it is not always that easy.
  22. I would not expect it to run natively on Unraid. If there is a docker container for it then that should work.
  23. Nothing has changed in this area for a long time. It could be a coincidence that a Windows update happened around the same time and that is behind the issue?
  24. First time I have heard of such a limitation. I would expect IDE mode to give performance issues.
  25. You could try opening up the syslog in a window before starting the array? At the moment the symptoms feel very much like they must be hardware related in some way. Are you sure the power supply is adequate for the hardware? Building parity is a time when the system will be at its most demanding. Have you tried running memtest (one of the options on the boot menu) to check out the RAM? What speed are you clocking the RAM at? Make sure it is within the rating for your motherboard/CPU combination. Overclocking RAM is a not infrequent cause of crashes.
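     A simple way to do that is to follow the log live from a console or SSH session before starting the array:
         tail -f /var/log/syslog     # /var/log/syslog is where Unraid writes the system log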