Everything posted by itimpi

  1. If you install the Parity Check Tuning plugin then, even if you do not use its other features, the parity history entries will be enhanced to tell you whether each parity check was correcting or not (and how it was initiated). If the plugin is installed then any messages from it about duration should also be correct. I think the standard Unraid release sometimes gets confused between the last check run and the one before it; I expect this to be fixed in a forthcoming release.
  2. You will need to contact support as issues dealing with licensing cannot be handled in the forum. When you try to contact them you should get an automatic email acknowledging this. If it does not arrive you need to try again via the contact form, as it means the original request did not get through.
  3. This will be disk5. Any time you see a 'md' type device name the number refers to the disk with the corresponding number in the Unraid GUI.
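     As a small illustration of that naming convention (a sketch only; the optional 'p1' suffix seen on some releases is assumed to refer to the same disk):

        import re

        def md_to_disk_slot(devname):
            """Map an md device name (e.g. 'md5' or 'md5p1') to the disk
            number shown in the Unraid GUI.  Returns None for other names."""
            m = re.fullmatch(r"md(\d+)(?:p\d+)?", devname)
            return int(m.group(1)) if m else None

        print(md_to_disk_slot("md5"))    # -> 5, i.e. disk5 on the Main tab
        print(md_to_disk_slot("md5p1"))  # -> 5 as well
        print(md_to_disk_slot("sdb"))    # -> None (a physical device, not an md device)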
  4. You are getting a small number of retries on ata1 and ata4 in the syslog, but I would not have thought they were enough to have much impact on the speed. Slow speeds like this frequently mean that you have other disk activity to array drives going on at the same time, which badly impacts the check. If you provided your system diagnostics we could determine whether this is the case.
  5. It is a standard Linux file system so will be readable on any system that can handle a BTRFS file system. The container binaries and working data are stored wherever you tell Unraid to store them (the 'system' and 'appdata' shares being the default locations). It is normally recommended that these are stored on a pool for performance reasons. The USB drive also stores templates for docker containers you install via the Unraid Apps tab in case you need to edit their settings or reinstall the containers with their previous settings intact. Definitely. The USB stick stores all your Unraid-specific settings in its 'config' folder, and this would be required if you ever need to transfer to a new USB stick. You want a backup made any time you make a significant configuration change. There are currently 3 standard ways available: (1) by clicking on the Boot device on the Main tab and selecting the option to make a backup (which is then downloaded as a ZIP file); (2) by using Unraid Connect to have automated backups made in the cloud on the Limetech servers; (3) by using the "Backup/Restore Appdata" plugin, which has an option to make a backup to a location of your choice when it runs to back up docker containers' working data.
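     For anyone who prefers a scripted equivalent of the manual option, here is a minimal Python sketch: /boot/config is the standard location of the settings mentioned above, while the destination share is purely a hypothetical example.

        # Minimal sketch of a manual flash backup.  The destination path is a
        # hypothetical example and should be changed to a location of your choice.
        import shutil
        from datetime import date

        FLASH_CONFIG = "/boot/config"                      # Unraid settings live here
        DEST = f"/mnt/user/backups/flash-{date.today()}"   # hypothetical backup share

        shutil.make_archive(DEST, "zip", FLASH_CONFIG)     # writes DEST + ".zip"
        print(f"Flash config archived to {DEST}.zip")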
  6. Issues around licensing cannot be handled in the forum; for them you need to contact support.
  7. No. The disks will be Cleared so that parity is unaffected.
  8. If using SATA->SATA power splitters then you do not want to split a SATA connection more than 2 ways, as the host end is limited by the current-carrying capacity of its SATA connector. If using Molex->SATA then you can get away with a 4-way split.
  9. Not sure. In principle you want the fastest drive as parity. Suspect this would mean one of the SAS drives but you would need to check their specs to be sure.
  10. Not sure it would make any difference. When building parity all the data drives are read in parallel so the time is almost completely determined by the size of the parity drive - not by the number of data drives.
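     A rough back-of-envelope sketch of why that is (the 180 MB/s average is an illustrative assumption, not a measured figure):

        PARITY_TB = 18          # size of the parity drive
        AVG_MB_PER_S = 180      # assumed average sustained read/write speed

        seconds = (PARITY_TB * 1_000_000) / AVG_MB_PER_S   # 1 TB = 1,000,000 MB
        print(f"~{seconds / 3600:.1f} hours")   # ~27.8 hours, whether the array
                                                # holds 2 data drives or 20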
  11. When you tried ZFS was that in the array or a pool? ZFS in the main array is known to be slow - it needs to be in a pool to get any decent performance from it.
  12. That would be the normal default way to handle such a combination of drives.
  13. Unraid emulates a drive using the combination of the other drives plus parity. In effect all writes intended for the emulated drive make the same update to the parity drive that would have been made if the physical drive was still there. Your data will be OK as long as you do not get another drive failing so that you have more failed drives than you have parity drives; in that case you could potentially lose all the data on the failed drives. Are you sure the disabled drive has actually failed? More often than not, when a drive is disabled (because a write to it failed), the cause is nothing to do with the drive itself but is a cabling or power issue. Running the extended SMART test on a disabled drive is a good way to determine if it has really failed.
  14. Have you disabled health checks for your docker containers? I believe they can cause excessive logging.
  15. You have no choice except to contact support, as we cannot sort out licence-related issues in the forum and there is no guarantee a Limetech employee will see a message posted here. You should get an automated email acknowledging your request when you try to contact support; if not, then it did not get through and you need to try again via the contact form.
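     A minimal sketch of the idea, assuming the byte-wise XOR scheme that single parity uses (toy 4-byte "drives" only):

        from functools import reduce

        def xor_blocks(blocks):
            # XOR the corresponding bytes of every block together
            return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

        disk1 = b"\x10\x20\x30\x40"
        disk2 = b"\x01\x02\x03\x04"
        disk3 = b"\xaa\xbb\xcc\xdd"
        parity = xor_blocks([disk1, disk2, disk3])

        # disk2 "fails": reading the emulated drive reconstructs it from
        # parity plus the surviving drives
        assert xor_blocks([disk1, disk3, parity]) == disk2

        # a write aimed at the emulated drive only has to update parity
        new_disk2 = b"\x0f\x0e\x0d\x0c"
        parity = xor_blocks([disk1, new_disk2, disk3])
        assert xor_blocks([disk1, disk3, parity]) == new_disk2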
  16. It might be worth posting a screenshot of the syslog server settings just in case there is something you missed?
  17. It would continue to function as normal, although performance might be degraded if there is much other disk activity at the same time due to drive contention. If there is only light disk activity then it may be more convenient to leave everything available running as normal.
  18. All of your options look viable so it will be a time/risk tradeoff. If you can afford the array downtime it would probably be quickest to assign both 18TB drives to parity1 and parity2 and build them in parallel while in Maintenance mode. I mention Maintenance mode to ensure no array updates happen while building the new parity, which means the old 14TB parity remains valid. While this is going on keep the old parity1 intact just in case an array drive has problems, as this gives a fall-back path. On completing the move to 2 x 18TB parity drives the old 14TB drive can then be assigned as a data drive. If you cannot afford the time in Maintenance mode then your third option looks safest, as you would have a valid parity drive throughout. An option you have not mentioned, which is the quickest, is to use Tools->New Config to assign both 18TB drives as parity and the 14TB drive as a new data drive, and then build the new parity based on the new drive configuration. The downside of this is that a failure of a data drive before the new parity is built could lead to data loss. Note that Unraid will not allow you to combine making changes to parity drives and adding data drives in the same operation if you have not gone the New Config route.
  19. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs. I wonder if, as a result of your recent hardware change, the IDs of passed-through hardware have changed and you are now passing the new NIC through to a VM?
  20. Unraid will not let you add a new parity drive and new data drives in a single step. You have to split it into 2 operations - one for parity and the other for the data drives (and you can do them in either order).
  21. The Main tab shows the sd? part against each drive. You need to select the letter appropriate to the drive.
  22. It will forget where it was and want to restart from the beginning.
  23. No. Unraid uses the whole drive when in the array, so only the 4TB drive could be the parity drive and no data drive can be larger than the smallest parity drive.
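     The sizing rule reduces to a one-line check; the drive sizes below are illustrative only:

        def layout_is_valid(parity_sizes_tb, data_sizes_tb):
            # every parity drive must be at least as large as the largest data drive
            return min(parity_sizes_tb) >= max(data_sizes_tb)

        print(layout_is_valid([4], [2, 3]))   # True: the 4TB drive as parity covers both
        print(layout_is_valid([3], [2, 4]))   # False: the 4TB drive cannot be a data drive here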
  24. Have you got the array set as secondary storage? Mover needs that to be set before it will move anything for the share.
  25. This is a known issue if you are using an HBA with a built-in SATA port multiplier to get extra SATA ports, as drives connected to the multiplier part are not seen. It is apparently a Linux kernel bug that should be fixed in the next Unraid release; in the meantime you need to stay on 6.12.8.