Everything posted by itimpi

  1. The fastest way to copy data on an initial setup is to not bother with the cache and to leave the parity drive unassigned. You can then add the parity drive and set up how you want the cache drive to be used after this initial copy. Not having parity assigned means copies run significantly faster, but your files are not yet protected, so you have to decide if this suits you.
  2. The default configuration for the cache is the BTRFS version of RAID1.
  3. Nobody else seems to be having this problem so it sounds like it is something specific to your system. The issue is going to be tracking down the cause. Providing any more information you can, plus a copy of your system's diagnostics zip file (obtained via Tools -> Diagnostics), might help with spotting something amiss.
  4. You do not need IOMMU to run Unraid in a VM. The technique described here works fine without IOMMU. It is what I use to run a VM for plugin development/testing.
  5. You have uncovered a bug in the code for pausing on drives over-heating and I will rectify this. When you have activated temperature-related pause/resume you will get a monitor task running every 5 minutes to check temperatures - that is the "Monitor" log entry in your log. If temperature monitoring is not active then this task is not needed, so you get a lot less logged. The other resume/pause log entries are from the standard (not temperature-related) pause/resume handling. What is MEANT to happen is that the Monitor task will list the overheated drives (assuming debug logging is active) each time following the summary message and then pause the parity check that is running. It is this list + pause code that has the bug, so the pause is not taking place. I think the fact you have 19 "cool" phantom drives listed will be because you have 24 slots set on the Main tab (i.e. you have not reduced it to the number of drives you actually have). Can you confirm if this is the case? If so I need to add a check for slots allowed but not currently having a drive assigned so that the count is correct (a sketch of that check follows this list). If that is not the case then I need to do further investigation to determine why the drive count might be wrong in your case. I am in the middle of adding/testing multi-language support (ready for others to add translations to other languages) to the plugin, so it will be at least a few days before I can release the fixes for the issues mentioned above. Hopefully this will not inconvenience you too much.
  6. Once a drive has been marked as ‘disabled’ with a red ‘x’ it always takes manual action to remove this status. You may find this section of the online documentation to be of help.
  7. What tests? Depending on what you actually did it may still be possible to get something off the drive.
  8. This could mess up docker containers that do not want those permissions. It is normally recommended to leave the permissions on appdata alone as far as possible, and if you do make changes limit them as much as is practical.
  9. Are you sure there are still files there? The 11.2GB sounds about right for the space used for the file system control structures.
  10. The file you attached is 0 bytes in size, so not much use.
  11. You might want to read this part of the documentation to get a better idea of how UnRAID parity works.
  12. The parity drive is the only one that holds parity information. The parity is calculated by performing the appropriate calculations across all data drives, so you now have the one parity drive protecting your 3 data drives against any one of them failing (a worked example of the calculation follows this list).
  13. Once we have had a chance to examine your diagnostics to determine if the disk looks healthy, and if you need to rebuild the drive, then guidance on rebuilding drives is here in the online documentation.
  14. CRC errors are connection issues and the operations causing them are normally retried and recovered automatically (see the monitoring sketch after this list). Points to note are:
      • an occasional CRC error is not something to worry about, although if the automatic recovery does not work you could get a read or write error, so you do want to know they are occurring.
      • the CRC error count never resets to 0, so you only need to worry if it keeps increasing.
      • the commonest cause of CRC errors is the SATA/power cabling to a drive.
  15. Because Linux is case sensitive and Samba is not, if you have two different folders with the same spelling but different case then there is a level of pot-luck as to which one gets picked up by Samba to link to a share (a small demonstration follows this list). You will find that you can only see the contents of one of the folders.
  16. Normally Limetech respond to emails reasonably quickly. You might want to check your spam folder in case a reply got missed. Their visits to the forum can be sporadic so you should never rely on a forum message being seen by Limetech.
  17. That drive looks very sick and should be retired ASAP in my opinion. If you want to be sure then run the SMART extended test (not the short one) and see if that completes without error. I would be surprised if it did.
  18. CRC errors are connection issues and are normally recovered automatically. You need to note that the count never gets reset to zero, so you only want to get worried if it keeps increasing. You probably want to click on the orange icon against the drive on the Dashboard and hit the acknowledge option so that you only get further notifications for CRC errors on that drive if the count changes. If the CRC count does keep increasing you want to carefully check the power/SATA cables to the drive, as cabling issues are the commonest cause of CRC errors.
  19. The licence is tied to the GUID of the flash drive used to boot Unraid. You basically have 2 options:
      • copy the contents of the new USB stick back to the old one, overwriting its contents (a good idea to first make a backup just in case), making sure you keep the licence file for the old USB intact.
      • copy the licence file for the old USB stick onto the new one in the config folder and then follow the procedure for transferring the licence to the new one (which will result in the old one being blacklisted).
  20. How are you trying to access the web interface? You would not be able to do it on a directly attached monitor if you have used the vfio method to stop Unraid from using that monitor for its own purposes, but you should still be able to access the GUI via the network.
  21. Parity1 is not affected by drive order, but parity2 is (a worked example follows this list). The safest thing is to not assume parity is valid and to generate new parity, as then it does not matter.
  22. You can get 1 automated replacement a year without having to contact Limetech. If for any reason you need a replacement in less than a year you need to contact Limetech support by email to explain why, giving them enough information to locate your current licence and the GUID of the replacement USB drive you want to use.
  23. Which is the share that contains the files you want to be moved? Mover will not move open files, so you may need to close down the docker and VM services if you want files relating to them to be moved.
  24. It is MEANT to zero the remainder of the parity drive, so this is actually a symptom of a bug. I do not know if the trigger has been identified and whether it is now fixed in the 6.9.x series of releases.
  25. I found this research article to be of great interest as it indicates that a large amount of write amplification is inherent in using the BTRFS file system (a rough measurement sketch follows this list). I guess this raises a few questions worth thinking about:
      • Is there a specific advantage to having the docker image file formatted internally as BTRFS, or could an alternative such as XFS help reduce the write amplification without any noticeable change in capabilities? This amplification is not specific to SSDs, and it is worse for small files (as are typically found in the appdata share).
      • Are there any BTRFS settings that can be applied at the folder level to reduce write amplification? I am thinking here of the 'system' and 'appdata' folders.
      • If you have the CA Backup plugin to provide periodic automated backup of the appdata folder, is it worth having that share on a single-drive pool formatted as XFS to keep amplification to a minimum? The 6.9.0 support for multiple cache pools will help if you need to segregate by file format.
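
Relating to post 5: a minimal Python sketch of the kind of check described, counting only slots that actually have a drive assigned before deciding how many are cool or overheated. The slot data, field names and temperature threshold here are hypothetical illustrations, not the plugin's real data source or defaults.

from __future__ import annotations

PAUSE_THRESHOLD_C = 45  # assumed example threshold, not the plugin's real default

# Example slot data: in a 24-slot array only some slots are populated.
slots = {
    "disk1": {"device": "sdb", "temp": 38},
    "disk2": {"device": "sdc", "temp": 47},
    "disk3": {"device": None,  "temp": None},   # allowed slot with no drive assigned
    # ... further unassigned slots omitted
}

# Only slots with a device assigned should count towards the drive totals.
assigned = {name: s for name, s in slots.items() if s["device"] is not None}
overheated = [name for name, s in assigned.items() if s["temp"] >= PAUSE_THRESHOLD_C]
cool = [name for name in assigned if name not in overheated]

print(f"{len(assigned)} assigned drives: {len(overheated)} hot, {len(cool)} cool")
if overheated:
    print("Would pause the running parity check for:", ", ".join(overheated))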
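Relating to post 12: a worked example of the XOR calculation that single parity performs, and of why any one failed data drive can be reconstructed from the survivors plus parity. The byte values are arbitrary.

from functools import reduce

# Toy example: one byte per drive at the same offset.
data_drives = [0b10110010, 0b01101100, 0b11110000]   # 3 data drives

parity = reduce(lambda a, b: a ^ b, data_drives)     # parity is the bitwise XOR

# If any single data drive is lost, XOR-ing the survivors with parity recovers it.
lost_index = 1
survivors = [d for i, d in enumerate(data_drives) if i != lost_index]
recovered = reduce(lambda a, b: a ^ b, survivors, parity)
assert recovered == data_drives[lost_index]
print(f"parity={parity:08b}, recovered drive {lost_index}={recovered:08b}")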
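Relating to posts 14 and 18: a small monitoring sketch that only raises an alert when the CRC count actually increases, since the raw count never resets. It assumes an ATA drive whose SMART attribute 199 is reported by smartctl as UDMA_CRC_Error_Count; the device path and state-file location are just examples, and smartctl normally needs root.

import json
import pathlib
import subprocess

DEVICE = "/dev/sdb"                               # example device
STATE_FILE = pathlib.Path("/tmp/crc_count.json")  # example location for the last-seen counts

def current_crc_count(device: str) -> int:
    out = subprocess.run(["smartctl", "-A", device], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "UDMA_CRC_Error_Count" in line:
            return int(line.split()[-1])          # RAW_VALUE is the last column
    raise RuntimeError("CRC attribute not found for this drive")

now = current_crc_count(DEVICE)
previous = json.loads(STATE_FILE.read_text()).get(DEVICE, 0) if STATE_FILE.exists() else 0

if now > previous:
    print(f"CRC count on {DEVICE} rose from {previous} to {now}: check SATA/power cabling")
else:
    print(f"CRC count on {DEVICE} unchanged at {now}: nothing to worry about")

STATE_FILE.write_text(json.dumps({DEVICE: now}))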
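Relating to post 15: a small demonstration of the case clash, runnable on any case-sensitive (Linux) filesystem. The folder and file names are made up; the point is that a case-insensitive lookup, which is how Samba matches a share to a folder, finds more than one candidate.

import pathlib
import tempfile

root = pathlib.Path(tempfile.mkdtemp())
(root / "Media").mkdir()
(root / "media").mkdir()                 # legal on Linux, differs only by case
(root / "Media" / "film.mkv").touch()
(root / "media" / "song.mp3").touch()

wanted = "media"                         # the name a Windows client asks for
matches = [p for p in root.iterdir() if p.is_dir() and p.name.lower() == wanted.lower()]
print(f"{len(matches)} folders match '{wanted}' case-insensitively: {[p.name for p in matches]}")
# Samba will link the share to just one of these, so only that folder's
# contents are visible over the network.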
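Relating to post 21: a worked illustration of why parity1 survives a change of drive order but parity2 does not. It assumes parity2 behaves like the standard RAID-6 Q syndrome, where each slot position is weighted by a different GF(2^8) coefficient; that matches the behaviour described even if the exact arithmetic Unraid uses differs in detail.

from functools import reduce

def gf_mul(a: int, b: int) -> int:
    # Multiply in GF(2^8) using the 0x11d polynomial used by RAID-6 style Q parity.
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return result

def parity1(drives):
    # Plain XOR: addition is commutative, so slot order is irrelevant.
    return reduce(lambda x, y: x ^ y, drives)

def parity2(drives):
    # Q-style syndrome: each slot is weighted by a different power of the
    # generator, so the same bytes in a different order give a different result.
    q, coeff = 0, 1
    for d in drives:
        q ^= gf_mul(coeff, d)
        coeff = gf_mul(coeff, 2)
    return q

original = [0x12, 0x34, 0x56]      # one byte from each data drive, in slot order
reordered = [0x56, 0x12, 0x34]     # same bytes, drives assigned in a different order

assert parity1(original) == parity1(reordered)   # parity1 still valid
assert parity2(original) != parity2(reordered)   # parity2 would need rebuilding
print("parity1 unchanged, parity2 changed when drive order changes")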
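Relating to post 25: a very rough sketch of measuring write amplification by comparing the bytes written at the application level with the sectors the device reports as written while creating many small files. The device name and test directory are examples, and background writes will blur the numbers, so treat the result as an indication only.

import os
import pathlib
import time

DEVICE = "sdb"                                              # example device backing the pool
TEST_DIR = pathlib.Path("/mnt/cache/amplification_test")    # example test location

def sectors_written(device: str) -> int:
    # Field 7 of /sys/block/<dev>/stat is sectors written (512-byte units).
    return int(pathlib.Path(f"/sys/block/{device}/stat").read_text().split()[6])

TEST_DIR.mkdir(parents=True, exist_ok=True)
before = sectors_written(DEVICE)

logical = 0
for i in range(1000):                          # lots of small files, as in an appdata share
    payload = b"x" * 4096
    (TEST_DIR / f"file_{i}").write_bytes(payload)
    logical += len(payload)

os.sync()                                      # flush the page cache to the device
time.sleep(10)                                 # give writeback a moment to settle
after = sectors_written(DEVICE)

physical = (after - before) * 512
print(f"logical: {logical} bytes, device: {physical} bytes, "
      f"amplification ~ {physical / max(logical, 1):.1f}x")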