Wayne66

Posts posted by Wayne66

  1. I'm jumping on this post as my situation is similar. Do I have an issue here? The status shows 'No balance found'. I have a cache and cache2 drive, both 1TB SSDs. Everything appears to be working, but I saw this and wondered whether I should take action.

    Just wondering if I should click the 'Balance' button, or check something from the command line first (see below the attachments)?

    Balance.JPG

    tower-diagnostics-20201015-1827.zip
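
    For reference, a couple of commands I was thinking of trying to confirm the pool state, assuming the pool is mounted at /mnt/cache (which I believe is the default for the cache pool):

    # Show whether a balance is currently running (or has been run) on the pool
    btrfs balance status /mnt/cache
    # Show how data and metadata are spread across the two cache devices
    btrfs filesystem usage /mnt/cache

    If those look sane, I think 'No balance found' may just mean no balance operation is currently running rather than something being wrong.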

  2. OK, it's rebuilding content now. I'm thinking I'll let it get healthy and then shrink the array, or maybe I'll have a new drive to install so I can return the 8TB. Or I'll just chalk it up to a $7.50 lesson and let the RMA expire. I should have posted here before going down that road. Oh well. Thanks for your help, and I'll let you know the outcome. 😀

     

  3. Thanks @trurl. The first thing I did was shut down all applications that could try to write to the disks and use unbalance to move everything off disk 7. While that was happening, the docker service stopped even though unbalance continued to run normally, so I let that complete today. I have an RMA with WD for the 8TB drive (which has 28 days of warranty left), so I'm inclined to return it and get a new one. But if I can shrink the array instead, I'm OK with that; I can always add space later. It's the second parity drive that concerns me, as I have never done any of this. The "Clear Drive Then Remove Drive" method here (https://wiki.unraid.net/Shrink_array) looks very much like what I would like to do (rough notes on what I think it boils down to below). So for now I'm going to shut down and see if I can even get the emulated drives back after restarting, then I'll post new diagnostics. Maybe I'll be able to see the SMART data after that. Thanks for your quick responses!
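
    If I'm reading the wiki right, the clearing step basically writes zeros to the emptied disk through the parity-protected md device, so parity stays valid while it runs. Something roughly like this, assuming disk 7 maps to /dev/md7 on my box (the script on the wiki page adds safety checks, so I'd follow that rather than run this by hand):

    # Array must be started so writes go through the parity calculation.
    # Zero the emptied disk; parity is updated as the zeros are written.
    dd bs=1M if=/dev/zero of=/dev/md7 status=progress

    Then, as I read it, you do a New Config without that disk and tell unRAID parity is already valid.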

  4. At the same moment, I woke to find that my disk 7 and second parity drives are disabled. I read that if you can move everything off a drive, you can shrink the array without losing parity protection. But in my case I would have to shrink the array by 8TB AND remove the second parity disk. Can I do this, or should I simply do a New Config and rebuild parity, effectively putting my data at risk until it completes? Any help is appreciated.

    tower-diagnostics-20200721-1721.zip

  5. Thanks for the reply. I happened to have two spare drives available to replace the disabled ones, and it is performing a parity sync/rebuild right now, just like you said. But are you saying that I could have left the disabled drives in place and, when the array started back up, it would have rebuilt the data onto them the same way? Even though they show as "disabled"? Is the "disabled" flag not tied to that specific disk, but rather to its slot in the array?

     

    Darn, I should have waited for your response, but I felt compelled to replace the disabled drives, and I haven't lost any data, so I guess it's still a win. I should also be able to add those drives back into the array later for more storage, right? I expect they are just fine.

  6. On 10/12/2019 at 11:46 AM, Can0nfan said:

    Hi @binhex, sorry to be a pest, but there's an important update that fixes a Live TV issue (I only have over-the-air TV for the local news for my mother-in-law and our family). It has been glitching over the last few Plex updates, and I thought it was my HDHomeRun or ATSC antenna, but the issue only happens in Plex, not in the HDHomeRun app.

    Hoping the latest update can be pushed to your container in the next day or so.

    Cheers

     

    I had the same issue, but reverting back to 1.17.0.1841 resolved it, with no loss of functionality. I just have to live with the notification that there's an update, which of course I don't want to install. :) (Quick note below on how I'd pin the version.)
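
    In case it helps: the way I'd keep it on that version is to pin the container to a specific image tag instead of latest. Roughly like this, assuming the binhex/arch-plexpass image (use whichever binhex Plex image you actually run, and check Docker Hub for the exact tag name; the tag below is just a placeholder):

    # Edit the container's Repository field in the unRAID template, e.g.:
    #   binhex/arch-plexpass:<version-tag>
    # or pull the tag manually first to make sure it exists:
    docker pull binhex/arch-plexpass:<version-tag>

    With the template pointing at a fixed tag, the container stays on that Plex version until you switch it back to latest.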

  7. I just bought two 10TB enterprise drives to replace my existing dual parity drives. I was thinking that the 7200 RPM drives would speed up parity checks and ultimately write speed, probably more so as the current 5400 RPM data drives get replaced with faster drives. So I posted about this on another forum, and the general consensus was that I wouldn't see any real performance improvement and that I should leave the 5400 RPM drives as parity and add the new ones as data drives.

     

    It's obviously time-consuming to swap the parity drives. Is it worth it? (My rough back-of-the-envelope numbers are below.)
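
    For what it's worth, my back-of-the-envelope reasoning (made-up round numbers, since every drive differs): a parity check reads every disk end-to-end and can only go as fast as the slowest disk at any given point, so

    10 TB parity at ~150 MB/s average  =  10,000,000 MB / 150 MB/s  =  roughly 18.5 hours
    10 TB parity at ~200 MB/s average  =  10,000,000 MB / 200 MB/s  =  roughly 14 hours

    and even the second figure only happens if the parity drives, rather than the 5400 RPM data drives, were the bottleneck, which seems to be the other forum's point.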

  8. First off, thanks for creating this. The data it provides is fantastic. 

     

    So now here's my issue. I installed an LSI 9211 SAS controller flashed to IT mode. I have one drive connected to it right now, and that drive is not assigned to the array yet. When I run "Benchmark drives" (or benchmark the individual drive), it finishes the scan but then reports that the speed gap is too large and retries. I've let it go up to 100 retries before aborting. I tried checking the box to "disable speed gap detection", but it seems to have no effect. I'm using Google Chrome. I read that the speed gap allowance is supposed to increase by 5 MB with each retry, but mine stays the same (45 MB).

     

    Am I doing something wrong?

     

    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]: Scanning sdf at 8 TB (100%) - Speed Gap of 79.95 MB (max allowed is 45 MB), retrying (22)
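
    To rule out the drive and controller themselves, is there any harm in spot-checking the raw read speed from the console? I was thinking of something like this (sdf being the drive on the LSI card in my case; skip just moves the test window further into the disk):

    # Read 2 GiB from the start of the disk, bypassing the page cache
    dd if=/dev/sdf of=/dev/null bs=1M count=2048 iflag=direct
    # Repeat a few TB in (skip is counted in 1 MiB blocks here)
    dd if=/dev/sdf of=/dev/null bs=1M count=2048 skip=4000000 iflag=direct

    Both runs print an MB/s figure at the end, which should at least show whether the drive itself reads at a consistent speed.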

  9. I had the 9/29/2018 version running and then did the 2018.10.06c update. Now vnstat is no longer running.

     

    I did go to the plugin settings and tried clicking "Start". No luck. A few generic checks I was going to try from the console are below.
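
    (These are just general vnstat checks, nothing specific to the plugin, so I'm not sure how much they'll tell me:)

    # See whether the vnstat daemon process is running at all
    pgrep -a vnstatd
    # Confirm which vnstat version is installed
    vnstat --version
    # Look for vnstat-related errors in the syslog
    grep -i vnstat /var/log/syslog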