Everything posted by itimpi

  1. If using SATA->SATA power splitters then you do not want to split a SATA connection more than 2 ways, as the host end is limited by the current-carrying capacity of its SATA connector. If using Molex->SATA then you can get away with a 4-way split (there is a rough power-budget sketch after this list).
  2. Not sure. In principle you want the fastest drive as parity. Suspect this would mean one of the SAS drives but you would need to check their specs to be sure.
  3. Not sure it would make any difference. When building parity all the data drives are read in parallel, so the time is almost completely determined by the size of the parity drive, not by the number of data drives (see the rough estimate after this list).
  4. When you tried ZFS was that in the array or a pool? ZFS in the main array is known to be slow - it needs to be in a pool to get any decent performance from it.
  5. That would be the normal default way to handle such a combination of drives.
  6. Unraid emulates a drive using the combination of the other drives plus parity (there is a toy illustration of the idea after this list). In effect, all writes intended for the emulated drive make the same update to the parity drive that would have been made if the physical drive was still there. Your data will be OK as long as you do not get another drive failing so that you have more failed drives than you have parity drives; in that case you could potentially lose all the data on the failed drives. Are you sure the disabled drive has actually failed? More often than not the reason a drive gets disabled (a write to it failed) has nothing to do with the drive itself but is caused by cabling or power issues. Running the extended SMART test on a disabled drive is a good way to determine whether it has really failed.
  7. Have you disabled health checks for your docker containers? I believe they can cause excessive logging.
  8. You have no choice except to contact support as we cannot sort out licence-related issues in the forum, and there is no guarantee a Limetech employee will see a forum post. You should get an automated email acknowledging your request when you contact support; if you do not, the request did not get through and you need to try again via the contact form.
  9. It might be worth posting a screenshot of the syslog server settings just in case there is something you missed?
  10. It would continue to function as normal, although performance might be degraded if there is much other disk activity at the same time due to drive contention. If there is only light disk activity then it may be more convenient to leave everything available running as normal.
  11. All of your options look viable, so it is a time/risk tradeoff. If you can afford the array downtime it would probably be quickest to assign both 18TB drives to parity1 and parity2 and build them in parallel while in Maintenance mode. I mention Maintenance mode to ensure no array updates happen while building the new parity, which means the old 14TB parity remains valid. While this is going on keep the old parity1 drive intact just in case an array drive has problems, as this gives a fall-back path. On completing the move to 2 x 18TB parity drives the old 14TB parity drive can then be assigned as a data drive. If you cannot afford the time in Maintenance mode then your third option looks safest as you would have a valid parity drive throughout. An option you have not mentioned, which is the quickest, is to use Tools->New Config to assign both 18TB drives as parity and the 14TB drive as a new data drive, and then build the new parity based on the new drive configuration. The downside of this is that a failure of a data drive before the new parity is built could lead to data loss. Note that Unraid will not allow you to combine changing parity drives and adding data drives in a single step unless you go the New Config route.
  12. You should post your system's diagnostics zip file in your next post in this thread to get more informed feedback. It is always a good idea to post this if your question might involve us seeing how you have things set up or looking at recent logs. I wonder if, as a result of your recent hardware change, the IDs of passed-through hardware have changed and you are now passing the new NIC through to a VM? There is a quick check for this sketched after this list.
  13. Unraid will not let you add a new parity drive and new data drives in a single step. You have to split it into 2 operations - one for parity and the other for the data drives (and you can do them in either order).
  14. The Main tab shows the sd? part against each drive. You need to select the letter appropriate to the drive.
  15. It will forget where it was and want to restart from the beginning.
  16. No. Unraid uses the whole drive when in the array, so only the 4TB drive could be the parity drive and no data drive can be larger than the smallest parity drive.
  17. Have you got the array set as secondary storage? You need that to be allowed to use Mover.
  18. This is a known issue if you are using an HBA with a built-in SATA port multiplier to get extra SATA ports, as drives connected to the multiplier part are not seen. It is apparently a Linux kernel bug that should be fixed in the next Unraid release; in the meantime you need to stay on 6.12.8.
  19. Unraid does not rebuild parity when you add new disks to a parity-protected array - instead it Clears them (writes zeroes across the whole drive) so that parity is unaffected, and it only allows you to format and start using the drives once they have been Cleared. If they have been pre-cleared then this Clear stage is skipped and they are immediately available to be formatted and used when added. The Unraid Clear is functionally equivalent to the write phase of a pre-clear and takes about the same time, but is not as thorough a test of the drive. Normally with pre-clear one tends to run some read phases as well to check the disk more thoroughly, which is why pre-clear typically takes longer, so I have no idea where you heard it was faster (there is a rough timing comparison after this list). Perhaps it was that running multiple pre-clears in parallel can put more stress on the server?
  20. With macvlan you need to DISABLE bridging on eth0 to have stability. Was it a typo when you said enable?
  21. Have you followed the procedure for this as documented here in the online documentation accessible via the Manual link at the bottom of the Unraid GUI?
  22. This is not an option in Unraid. If there is a Samba combination of settings that provides this it would be nice to know as it could then be implemented. Not sure it is even possible on a Windows server although I could be wrong about that.
  23. It is new in the 6.12 series of Unraid releases as described here in the Release Notes.
  24. That USED to always be true, but it is no longer true with User Shares in Exclusive mode which get the same performance as using the disk shares.
  25. Had you actually gotten around to formatting the disk from within Unraid? When you first add a new drive it will show as unmountable until you format it to create an empty file system ready to receive files.
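
Regarding the power-splitter advice in item 1, here is a minimal back-of-the-envelope sketch of the reasoning. All of the figures in it (per-contact rating of a SATA power plug, Molex contact rating, HDD spin-up draw on the 12 V rail) are illustrative assumptions rather than datasheet values, so check the specifications for your own PSU, cables and drives.

```python
# Back-of-the-envelope check of how many HDDs one power plug can feed at spin-up.
# All figures below are illustrative assumptions, not datasheet values.

HDD_SPINUP_12V_A = 2.0      # assumed peak 12 V draw of one 3.5" HDD while spinning up

# Assumed 12 V capacity at the host end of the splitter:
SATA_PLUG_12V_A = 1.5 * 3   # ~1.5 A per contact, 3 contacts on the 12 V rail
MOLEX_PLUG_12V_A = 8.0      # assumed usable rating for the Molex 12 V contact

def drives_per_plug(plug_capacity_a: float) -> int:
    """Whole number of drives whose spin-up draw fits within the plug rating."""
    return int(plug_capacity_a // HDD_SPINUP_12V_A)

print("SATA-fed splitter :", drives_per_plug(SATA_PLUG_12V_A), "drives")   # -> 2
print("Molex-fed splitter:", drives_per_plug(MOLEX_PLUG_12V_A), "drives")  # -> 4
```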
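
On item 3, a quick worked estimate shows why parity-build time tracks the size of the parity drive rather than the number of data drives: all drives are read in parallel, so the elapsed time is roughly the parity size divided by the average throughput. The 150 MB/s average below is an assumed figure for illustration only.

```python
# Rough parity-sync duration: drives are read in parallel, so elapsed time is
# governed by the parity drive size and the average transfer speed.

def parity_sync_hours(parity_tb: float, avg_mb_per_s: float = 150.0) -> float:
    """Estimated hours for a parity build (average speed is an assumed figure)."""
    total_mb = parity_tb * 1_000_000   # TB -> MB, decimal units as drives are sold
    return total_mb / avg_mb_per_s / 3600

for size_tb in (8, 14, 18):
    print(f"{size_tb} TB parity at ~150 MB/s: about {parity_sync_hours(size_tb):.0f} hours")
```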
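
On item 6, the toy model below illustrates the idea of emulating a missing drive from the remaining drives plus parity. It models single (XOR) parity only and is purely conceptual - Unraid's real implementation lives in its md driver, and the second parity drive uses a different calculation.

```python
# Toy model of single (XOR) parity: the parity block is the XOR of the data
# blocks at the same position, so any one missing block can be recomputed
# from the parity block plus the surviving data blocks.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings position by position."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

disk1 = b"hello world!"
disk2 = b"unraid array"
disk3 = b"parity  demo"
parity = xor_blocks([disk1, disk2, disk3])

# Pretend disk2 has failed: "emulate" it from parity plus the surviving disks.
emulated_disk2 = xor_blocks([parity, disk1, disk3])
assert emulated_disk2 == disk2
print(emulated_disk2)   # b'unraid array'
```

For the question of whether the disabled drive has genuinely failed, the extended SMART test can be started from the drive's page in the GUI, or from the console with `smartctl -t long /dev/sdX`, and the results read back later with `smartctl -a /dev/sdX` (substitute the correct device letter).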
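
On item 12, the quick check mentioned there might look something like the sketch below: it simply wraps `lspci -nn` to list network controllers with their PCI addresses and numeric vendor:device IDs, which you can then compare by hand against the <hostdev> entries in the VM's XML. The function name and the text filter are my own assumptions, not anything Unraid provides.

```python
# List network controllers with their PCI addresses and numeric vendor:device
# IDs so they can be compared against a VM's passthrough (<hostdev>) entries.
import subprocess

def network_pci_devices() -> list[str]:
    result = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True)
    return [line for line in result.stdout.splitlines()
            if "Ethernet controller" in line or "Network controller" in line]

for device in network_pci_devices():
    print(device)
```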
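
On item 19, the rough timing comparison below shows why a full pre-clear normally takes longer than Unraid's built-in Clear: the Clear is a single zero-write pass, whereas a typical pre-clear cycle adds a pre-read and a post-read, so roughly three passes over the disk. The drive size and average speed are assumed figures for illustration.

```python
# Rough comparison: Unraid's Clear (one zero-write pass) versus a full
# pre-clear cycle (pre-read + zero-write + post-read, roughly three passes).

def pass_hours(size_tb: float, avg_mb_per_s: float = 150.0) -> float:
    """Time for one full pass over the drive at an assumed average speed."""
    return size_tb * 1_000_000 / avg_mb_per_s / 3600

size_tb = 14                      # example drive size in TB
clear_time = pass_hours(size_tb)
preclear_time = 3 * clear_time    # one cycle with both read phases enabled
print(f"Built-in Clear : ~{clear_time:.0f} hours")
print(f"Full pre-clear : ~{preclear_time:.0f} hours")
```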