Everything posted by itimpi

  1. I agree there is no problem if the rebuild has worked as expected. That is true if the recovery software you use knows how to recover encrypted volumes; I am not sure this is always the case. unRaid does not use the 2nd parity to identify the failed drive after a parity check fails. Not knowing the algorithms involved, I do not know whether this is because it is computationally too expensive or because there is too much chance of a false positive. I agree that good backups make any discussion about recovery of data almost moot - but many people do not have that sort of backup. The question many people do not think to even ask themselves is why they need encryption in the first place. They simply assume it must be a 'good thing' to have.
  2. It is not that simple. In the case of a parity check detecting that the drives do not agree with parity, unRaid will not know which drive is at fault, and the only action you can take is to update parity to match the drives. If a write to a drive failed but the corresponding parity write worked, unRaid would disable the drive where the write failed and subsequently detect that the parity is out of step with the drive. You then have to decide whether parity is right and rebuild the drive to match parity (the normal action), or rebuild parity to match the drive (which means parity now reflects any file-system-level corruption). You can also get the case where a software or hardware error causes an incorrect sector to be written to both the data drive and the parity drive without any apparent error indication at the hardware level. In such a case parity and the drive are in agreement but the file system is corrupt. Finally, you have the case where more disks fail than you have parity drives. In such a case parity cannot help you, but often the majority of the data can still be recovered off a failing drive as long as it is still working at even a basic level. All of these are low-probability events, but in my view they are still higher probability than the use case where encryption would matter to me.
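     To illustrate why single parity can detect a mismatch but not locate it, here is a toy sketch in plain shell arithmetic (not unRaid's actual code):

     ```bash
     # One byte from each of three data drives, plus the parity byte (XOR of all).
     d1=0xA5; d2=0x3C; d3=0x5F
     parity=$(( d1 ^ d2 ^ d3 ))

     d2=0x3D                             # one drive silently returns a bad byte

     check=$(( d1 ^ d2 ^ d3 ^ parity ))  # parity check: non-zero means a sync error
     echo "sync error detected: $check"
     # The same non-zero result appears whichever of d1/d2/d3/parity is the bad one,
     # so with single parity the only safe correction is to recompute parity itself:
     printf 'corrected parity: 0x%X\n' $(( d1 ^ d2 ^ d3 ))
     ```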
  3. Yes - corruption in any part of the directory structure. That is not that unusual if a write to the drive fails for any reason.
  4. The fact that the key was never installed is irrelevant - it is still bound to the original GUID. The licence transfer will simply count as your one ‘automated’ transfer allowed per year. Only if you need another transfer before a year has elapsed (or you encounter a problem with the transfer) will you need to contact Limetech by email to get them to help.
  5. Option #1 would work and, as you say, is cheap. Option #2 would mean that you do not get the best out of the NVMe drive, as in raid1 you get the performance of the slowest drive. I would personally go with option #3 rather than #2, except that I would use the NVMe drive for VMs and dockers, as they will gain more from the better performance. One advantage of option #3 is that, as they are single-drive pools, you can use XFS as the file system format, which tends to be more resilient against any system crashes (although with any luck you will not get these) than BTRFS-format pools.
  6. The default profile is the btrfs version of raid1. With mixed drive sizes this makes the available disk space equivalent to that of the smaller drive. If you want the space to be additive then you need to switch the profile to Single.
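     For reference, a minimal sketch of the underlying command (assuming a pool mounted at /mnt/cache - adjust to your pool name; the balance option in the pool's GUI settings is the supported route):

     ```bash
     # Convert the data profile from the default raid1 to single so the drive
     # capacities become additive; metadata can stay at raid1.
     btrfs balance start -dconvert=single /mnt/cache
     btrfs filesystem usage /mnt/cache   # confirm the new profile and free space
     ```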
  7. It is a Samba-specific issue, so if it is going to be settable at the share level, that was probably as good a place as any to put the setting.
  8. The release I have just pushed will create syslog entries for pause, resume and completion regardless of whether notifications are enabled. Please let me know if there are any other syslog entries that you think it would be a good idea to have present.
  9. Unfortunately not, as these are folders/files for which no directory entry could be found, so their names are unknown. If you want to try to identify what they are manually, then this has to be done by examination. For folders, the contents can normally give you a good idea. For files, the Linux 'file' command can at least help by indicating what type of content each file contains.
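     If you want to script the examination, something along these lines works (the lost+found path is an example - adjust it to the drive in question):

     ```bash
     # Report the probable content type of every recovered file so it can be
     # identified and renamed by hand.
     cd /mnt/disk1/lost+found
     for f in *; do
         [ -f "$f" ] && file "$f"   # e.g. "Matroska data" or "JPEG image data"
     done
     ```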
  10. I have pushed an update that includes a number of small updates. I have not, however, been able to reproduce the reported problem of a parity check being paused because the drives apparently overheat when they have not really done so. It is possible that I have inadvertently fixed this issue, but I am not optimistic. I would therefore be grateful if someone who is experiencing this issue could:
      • set the plugin to pause/resume a parity check on drive temperature
      • set the plugin's logging level to "Testing"
      • recreate the issue for at least one unexpected pause
      • let me have the syslog (or system diagnostics, as they include the syslog) covering this period so I can get a better idea of what is going wrong
      • set the logging level back to a lower level to avoid filling up your syslog
      • disable pause/resume on drive temperature until I can report the issue has been resolved
  11. Looking at those diagnostics, the USB drive is larger than the maximum officially supported size of 32 GB. I am not sure whether that is the cause of your problems or not.
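     For anyone wanting to check their own drive, the size shows from the console (assuming the stock unRaid flash mount point of /boot):

     ```bash
     df -h /boot   # the Size column should be 32G or less for official support
     ```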
  12. Have you tried going into the Settings, making a nominal change, and then pressing the Apply button? That regenerates the cron entries.
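     If you want to confirm the entries came back: on a stock unRaid system the plugin cron fragments get merged into a single file you can inspect, and if I remember correctly recent releases also ship an update_cron command that forces the merge:

     ```bash
     cat /etc/cron.d/root   # merged cron entries for dynamix plugins
     update_cron            # regenerate without making a settings change
     ```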
  13. What did you do with the .key licence file? It is meant to go into the ‘config’ folder on the flash drive.
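     From the console that is just a copy (assuming the flash is mounted at /boot as standard; the key filename here is only an example):

     ```bash
     cp /path/to/Pro.key /boot/config/
     ls /boot/config/*.key   # confirm the key is in place before rebooting
     ```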
  14. Does the ‘EFI’ folder on the boot drive have a trailing ‘~’ character? If it does, then the drive is set to boot in legacy mode. If it does not, then it is set to support UEFI booting.
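     You can check from the console on a running system (flash mounted at /boot):

     ```bash
     ls -d /boot/EFI*   # "EFI" alone = UEFI enabled; a trailing character = legacy
     # mv "/boot/EFI~" /boot/EFI   # example rename to enable UEFI booting
     ```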
  15. As stated in the syslog message, those are set to “Pass Through”, which means they will not be mounted by UD.
  16. Do you mean easier than using Apps -> Previous Apps to reinstall them?
  17. One con of file encryption is that if you ever get file-system-level corruption, then the chances of a successful data recovery are lower. For me this is the main reason I do not use it on my personal unRaid servers.
  18. Option 1) determines whether such files are even viewable across the network in the first place, and is independent of what you have set at the Windows level. You say that you have tried different permissions (and presumably ownership), but does this mean that you have run Tools -> New Permissions against the share in question?
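     For reference, my understanding is that the tool applies something roughly equivalent to the following (share name is an example; the GUI tool is the supported way to run it):

     ```bash
     chown -R nobody:users /mnt/user/MyShare
     chmod -R u-x,go-rwx,go+u,ugo+X /mnt/user/MyShare   # rw files, rwx directories
     ```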
  19. CRC counts never reset to 0 - they can only stay stable or increase.
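     You can watch the raw value yourself (device name is an example):

     ```bash
     smartctl -A /dev/sdb | grep -i crc
     # A count that stops increasing after reseating/replacing the SATA cable means
     # the problem is cured, even though the number never drops back to 0.
     ```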
  20. Virbr0 will not allow inbound connections as far as I know.
  21. At the moment I still have not managed to recreate this issue in any of my environments. There is no log entry automatically being created on a pause - there probably should have been. There IS one created if you have notifications enabled, but you are correct that there should probably be one on a pause/resume regardless. I have been sorting out some issues I have spotted which can arise if your system is set to use temperatures in Fahrenheit rather than Celsius, but I do not think these are the cause of the issue. I am going to upload a new version soon (probably tomorrow) with all the odds-and-ends I have spotted resolved. While I do not think it will solve the problem (as far as I can see), it will provide a basis against which I can ask someone to recreate the issue with Testing-level logging enabled, to see if I can spot what might be the reason for this behaviour.
  22. It took me a while, but I think I got to the root cause. I was surprised it had suddenly happened when I had not changed anything in the package creation area (for which I run a script). I found it depended on which of my various unRaid environments I used to run the package creation. It still surprises me that the effect was to change the ownership of existing folders rather than just the new ones being added. The reason the problem arose in the first place might be of interest to other developers in case they encounter a similar scenario. I have a Dropbox container using a Dropbox share on my main unRaid server which holds the source of the plugin. I use my Windows machine (which is also running Dropbox) as the main editing environment, as I have better tools there, and Dropbox synchronises the source between all my unRaid environments so that any change made in Windows appears in them within a few seconds. I was mounting this share on the other environments as an Unassigned Devices SMB share. My script that builds the package specifies that the ownership of the files should be root:root, and if the package is built on the main server this is how the ownership ends up. On the environments using the SMB share the ownership was showing as nobody:users, and the command to change it to root:root was being silently ignored. By changing the Dropbox share to be an NFS share, the build script CAN change the ownership to the expected values.
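     A minimal reproduction of the trap, with example mount points (on a CIFS mount the ownership is fixed at mount time, so the chown has no effect and, in my case, reported no error either):

     ```bash
     ls -l /mnt/remotes/TOWER_dropbox/src/plugin.plg   # shows nobody:users
     chown root:root /mnt/remotes/TOWER_dropbox/src/plugin.plg
     ls -l /mnt/remotes/TOWER_dropbox/src/plugin.plg   # still nobody:users, no error
     # Over an NFS mount of the same share the chown behaves as expected.
     ```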
  23. I am afraid that you have run into an oddity due to the fact that your docker containers are working at the Linux level, which is not directly aware of the User Share system. If it thinks source and target are on the same mount point, Linux implements 'move' by first trying a rename, and only if that fails does it do a copy/delete. In this case the rename is working, which leaves the file on the same drive, and because you have the Download share set to Use Cache = No, mover ignores the file. The easiest 'fix' is to set the Media share to Use Cache = Yes so that mover will later move the file to the array.
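     To illustrate with example paths:

     ```bash
     # Both paths under /mnt/user (one mount point): rename(2) succeeds, no data is
     # copied, so the file stays on whichever drive it was originally written to.
     mv /mnt/user/Downloads/film.mkv /mnt/user/Media/film.mkv
     # Different mount points: rename(2) fails with EXDEV and mv falls back to a
     # real copy-then-delete, which is what actually relocates the data.
     mv /mnt/cache/Downloads/film.mkv /mnt/disk1/Media/film.mkv
     ```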
  24. As far as I know, you never get any sort of file system maintenance that is not initiated by the user.
  25. This gets created if files/folders are found for which the directory entry giving the name cannot be found. Unfortunately, when this happens, identifying the original name is a manual process done by examining the contents, although the 'file' command can at least help with identifying the file type.