Posts posted by gberg

  1. Since upgrading from Unraid version 6.10.3 to 6.12.6, I have been getting array utilization warnings.

     

    I get the warning on three disks. I have set the "Default warning disk utilization threshold (%):" to 99%, and the three disks getting the warning are 10TB with 9.88TB of used space, which means there is more than 1% free.

    [screenshot]

    But the dashboard still shows the disks at 99% utilization?

    [screenshot]

     

    Does this mean I need to have more than 150GB of free space for the utilization not to be rounded up to 99%?

     

    I think it would be nice if it were possible to set the utilization thresholds in GB rather than as a percentage. What do you think?
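    A minimal sketch of the rounding I suspect is happening (the function and its rounding behavior are my assumption about how the dashboard works, not confirmed):

    ```python
    def utilization_pct(used_tb, size_tb):
        # Assumed dashboard behavior: utilization rounded to a whole percent.
        return round(used_tb / size_tb * 100)

    # 9.88 TB used of 10 TB is 98.8%, which rounds up to 99 and trips a 99% threshold.
    print(utilization_pct(9.88, 10.0))   # 99
    # Used space would have to drop below 9.85 TB (more than 150 GB free) to round to 98.
    print(utilization_pct(9.849, 10.0))  # 98
    ```

    That would explain the 150GB figure: anything from 9.85TB used upward rounds to 99%.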

     

  2. I haven't updated my Unraid in a while; I'm currently running 6.10.3.

    I read about the issue with out-of-date plugins in the 6.12.6 release notes, and I have three plugins that I can't update because my Unraid version is too old. The out-of-date plugins are:

    Community Applications (2023.02.17), SNMP (2021.05.21) and Unassigned Devices (2023.02.20)

    Do you think this will cause any issues?

  3. I'm planning to add a new, larger parity drive to my array.

    Today I have all 10TB drives: one parity drive and six data drives. I plan to add a new 18TB parity drive to be able to swap my existing data drives for larger ones in the future, and after the parity check is done I'll add the old parity drive as a data drive.

     

    My question is how the parity check will work once the parity drive is 18TB: will it do an 18TB parity check, taking almost twice as long as before even though the data drives are only 10TB, or will it do a 10TB parity check until I actually add a larger data drive?
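    For intuition on why the question matters: single parity is just a bytewise XOR of the data drives at each offset, so past the end of every 10TB data drive an 18TB parity drive would be all zeros. A sketch (the function is mine; how Unraid actually schedules the check is exactly what I'm asking):

    ```python
    from functools import reduce

    def parity_block(data_blocks):
        # Single parity = bytewise XOR of all data drives at the same offset.
        # A data drive shorter than the parity drive contributes zeros there.
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

    d1, d2 = b"\x0f\x0f", b"\xf0\x01"
    print(parity_block([d1, d2]).hex())            # ff0e
    # Past the end of every data drive, every contribution is zero:
    print(parity_block([b"\x00", b"\x00"]).hex())  # 00
    ```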

  4. If you can't read the flash content from Unraid, I guess you have an issue there. Have you tried another USB port?

    But I'm not really sure how Unraid reacts when you unplug and replug the flash drive while it's running; maybe Unraid needs to be rebooted for the flash drive to be detected?

  5. 14 hours ago, Frank1940 said:

    I use the preclear Docker and I try to get about 70+ hours on the hard drive.  My objective is to make sure that the HD is not going to fail because of infant mortality.  I do this for two reasons. 

     

    1-- I test as soon as I get the disk.  I only purchase from vendors who provide for 'no-hassle' returns within X days of purchase for DOA drives.  This means I don't have to deal with warranty issues with the manufacturer.

     

    2-- I have a policy that I want a cold spare ready for installation if a disk 'red-balls'.  I pull the disk, replace it with the cold spare, and start the rebuild.  When that is complete and the server is working normally again, I can take my time to test and determine what to do with the 'defective' disk...

     

     

    Ok, that seems like a smart strategy. 

     

    For my 10TB WD My Book each pass (pre-read, zeroing, and post-read) takes about 56 hours, so two passes might be more than enough.

    And I have also thought about having a cold spare at hand; maybe not a bad idea.
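    The arithmetic behind "two passes", as a quick sketch (56 hours is my drive's pass time from above, 70 hours is Frank1940's target):

    ```python
    import math

    pass_hours = 56    # one full preclear pass on my 10TB WD My Book
    target_hours = 70  # the burn-in time Frank1940 aims for

    passes = math.ceil(target_hours / pass_hours)
    print(passes, passes * pass_hours)  # 2 passes, 112 hours total
    ```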