Everything posted by itimpi

  1. I do not see any benefit to this, as the moment the data on any of the array disks changes the parity disk contents will no longer be valid. As I said before, what is it you are actually trying to achieve by doing this?
  2. What are you trying to achieve? I cannot see why you would want this.
  3. A much easier approach would be to start with the New Config tool to get the 8TB into the array and then format it. Having done that, you can connect the old 4TB disk via USB and copy its contents onto the 8TB drive you just added to the array (a command sketch follows after this list of posts).
  4. Q1: With one parity disk, if a single array disk fails the system will emulate the failed drive and you can continue operating as if it were there (but with your data no longer protected). Q2: If you have 2 parity drives and 2 drives fail, the system will emulate both of the missing drives and the array continues to operate. If you have only a single parity drive and 2 array disks fail, then Unraid cannot recover any of their data (although rescue utilities outside Unraid might have some success).
  5. When you place a file with the same name into a user share, the existing copy gets overwritten wherever it is located, regardless of whether it is on the main array or a pool, so in that sense you should not end up with a duplicate. One way I can think of to end up with a duplicate unexpectedly is if you are using something like a docker container that bypasses the normal user share system, so that you end up with a manifestation of the behaviour described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI. Another possibility might be using the unBalance plugin and aborting it mid-process.
  6. Mover never overwrites existing files. In normal operation duplicates should not exist, so you must have done something earlier to create such a duplicate in the first place - any idea how that happened?
  7. That share has a very restrictive Split Level setting (1), so you may get Unraid trying to put files on drives without sufficient space for the file in question. As described here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI, the Split Level setting takes precedence over any other setting when deciding where to put a file.
  8. Rsync is included as standard with Unraid. What made you think it was not?
  9. The loading of each plugin is controlled by the .plg files in the ‘config/plugins’ folder on the flash drive. You can rename any you do not want loaded to have a different extension and then reboot in normal mode (see the rename sketch after this list).
  10. In which case you should provide more detail - I would suggest posting your diagnostics to see if we can work out why you have disk shares active.
  11. Under Settings -> Global Share settings. You must have gone there originally to turn them on as they are off by default.
  12. It occurs to me that there is a very faint possibility that something like UFS Explorer on Windows might be able to get data off the rebuilt drive if you have not yet written anything to it despite the fact you did a format.
  13. You might want to read this part of the online documentation accessible via the ‘Manual’ link at the bottom of the GUI.
  14. The URL will not work from the GUI if you have no internet access from your server. Have you tried simply using the URL from a browser to download the key file and then copying the key file into the ‘config’ folder on the flash drive (see the copy sketch after this list)?
  15. Yes - you just need to unassign it and restart the array. Physically removing it is optional.
  16. It was the emulated drive that got formatted - not the physical drive. The subsequent rebuild made the physical drive match the emulated one. Are you sure the original physical drive has really failed? (In most cases a drive being disabled is caused by something other than drive failure.) If not, you can almost certainly get most of the data off it as long as you keep it intact.
  17. There is no ‘right’ answer - it is all about the amount of risk you are prepared to take and how good your backup strategy is. Most people with only a few data drives may prefer to use an additional drive for backup rather than as a second parity. Do not forget that parity is primarily about high availability and simple hardware failures; it does not guarantee you will not lose data. There are lots of ways to lose data other than drive failure, hence the need for a backup strategy that suits your needs. Having 2 parity disks allows for a second drive failing while you are trying to rebuild a failed drive. The likelihood of this is relatively low but goes up as the number of drives and the drive size increase. I know of people with a good backup strategy who run with no parity at all even though they have quite a few drives.
  18. When you attempted to format you would have got a big pop-up telling you that format is never part of data recovery and the result would be an empty disk.
  19. The update I released today should fix all issues I know about, so please report any new anomalies you spot. For those who use a language other than English with Unraid, please note that I have NOT yet updated the plugin’s translations file to include any changes for this version of the plugin. I will start working on this, but if you spot any text unexpectedly coming out in English then please let me know so I can check that specific text against my translation file.
  20. The version of memtest included with Unraid will not detect errors being corrected by ECC. I believe the version you can download from memtest86.com can do this.
  21. The 6.10.0-rc4 release now gets it correct for scheduled checks, but still gets it wrong for Manual or automatic checks. As a result I have left in place the code in the plugin that calculates these figures correctly regardless of the type of check or whether increments were run.
  22. Not really. There is the Parity Swap procedure, but that only applies when simultaneously upgrading parity and replacing a failed data drive with the old parity drive, and since you want to keep all drives it does not apply. If you use @JonathanM's approach but avoid formatting any of the ‘unmountable’ drives until the parity build has finished, the old parity drive will be left unchanged to give you a fallback if something goes wrong. Alternatively you could simply not assign the old parity disk to the array initially, and only assign it after the new parity is built, but that would take longer as the old parity would then need to go through a ‘clear’ operation to zeroize it when adding it to the array.
  23. That does not sound right, but maybe posting a screenshot and new diagnostics might allow us to see why.
  24. That is not actually an error. It is an informative message that I only intended to happen when logging level was set to Testing, but I accidentally set it to occur at all log levels. I have fixed the error you reported, but have found I seem to have also created a regression in the Parity Problems Assistant that I am currently tracking down.
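A minimal sketch of the copy step mentioned in post 3, assuming the old 4TB disk has been mounted (for example via the Unassigned Devices plugin) at /mnt/disks/old4tb and that the destination is a user share called Media - both of those names are placeholders, so adjust them to match your own system:

```bash
# Copy everything from the old 4TB disk onto the array via the user share layer.
# -a preserves permissions and timestamps, -v lists files, -h shows readable sizes.
rsync -avh /mnt/disks/old4tb/ /mnt/user/Media/

# Running the same command a second time is a cheap sanity check:
# it should transfer nothing if the first pass completed cleanly.
```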
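For post 9, a sketch of disabling a plugin by renaming its .plg file, assuming the flash drive is mounted at /boot and using somePlugin as a hypothetical plugin name:

```bash
# Renaming the .plg file stops Unraid loading that plugin at the next boot.
mv /boot/config/plugins/somePlugin.plg /boot/config/plugins/somePlugin.plg.disabled

# To re-enable it later, rename it back and reboot:
# mv /boot/config/plugins/somePlugin.plg.disabled /boot/config/plugins/somePlugin.plg
```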
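For post 14, once the key file has been downloaded in a browser on another machine it just needs to end up in the ‘config’ folder of the flash drive. One way of doing that from a Linux or macOS machine is sketched below, assuming SSH is enabled on the server; the file name Pro.key and the server name tower are assumptions, so substitute your own:

```bash
# Copy the downloaded key file onto the flash drive (mounted at /boot on the server).
scp ~/Downloads/Pro.key root@tower:/boot/config/

# Alternatively copy it into the 'config' folder of the 'flash' SMB share,
# or plug the flash drive into the PC and copy it across directly.
```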