Wayne66

Members
  • Posts: 19
  • Joined
  • Last visited

Everything posted by Wayne66

  1. I'm jumping on this post as my situation is similar. Do I have an issue here? The status shows 'No balance found'. I have a cache and cache2 drive, both 1TB SSDs. Everything appears to be working, but I saw this and wondered whether I should take action. Should I just click the 'Balance' button? tower-diagnostics-20201015-1827.zip
  2. The Fix Common Problems plugin reports that an "out of memory" condition exists and that I should post my diagnostics to this forum. I hope someone can tell me what is causing this problem. Thanks in advance! tower-diagnostics-20200918-0731.zip
  3. Ok. It's rebuilding content now. I'm thinking I'm going to let it get healthy and then shrink or maybe I'll have a new drive to install, so I can return the 8tb. Or just chalk it up to a $7.50 lesson and let the rma expire. I should have posted here before going down that road. Oh well. Thanks for your help and I'll let you know the outcome. 😀
  4. Alright, I think you are right. I'm looking at the SMART data and I don't see anything wrong. If I want to rebuild onto the same drives without changing them, how do I get Unraid to re-enable the disabled drives so they will rebuild? Thanks!! tower-diagnostics-20200721-1836.zip
  5. On the unRAID Users & Help Group on Facebook, someone mentioned that my power supply could be the cause. That might explain what I saw yesterday morning... attached.
  6. Thanks @trurl. The first thing I did was shut down all applications that could write to the disks and used unbalance to move everything off disk 7. While that was happening, the docker service stopped even though unbalance continued to run normally, so I let that complete today. I have an RMA for the 8TB drive with WD (it has 28 days of warranty left), so I'm inclined to return it and get a new one. But if I can shrink the array, I'm OK with that; I can always add space later. It's the second parity drive that gives me concerns. I have never done any of this, but the "Clear Drive, then Remove Drive" method here (https://wiki.unraid.net/Shrink_array) looks very much like what I would like to do. So for now I'm going to shut down and see if I can even get some emulated drives after restarting. Then I'll post new diagnostics; maybe I'll be able to see the SMART data after that. Thanks for your quick responses!
  7. Sorry, I guess I should fill in the gaps... I don't have any spare drives to use in place of the disabled ones, at least right now and I just want to get it back up and running. I have moved all data from disk 7 so it is currently empty.
  8. At the same time, I woke to find that my disk 7 and second parity drives are disabled. I read that if you can remove everything from a drive, you can shrink the array without losing parity protection. But in my case I would have to shrink the array by 8TB AND remove the second parity disk. Can I do this, or should I simply do a new config and rebuild parity, effectively putting my data at risk until it completes? Any help is appreciated. tower-diagnostics-20200721-1721.zip
  9. Thank you for an incredible job. I am very happy with the results. You have accomplished what you set out to do, provide an easy way to manage my library. Thank you!!
  10. Thanks for the reply. I just happened to have 2 extra drives available to replace the disabled drives, and it is performing a parity sync/rebuild right now, just like you said. But are you saying that I could have left the disabled drives in place, and when the array started back up it would have rebuilt their data the same way, even though they show as "disabled"? Is the "disabled" flag not tied to that specific disk, but rather to its slot in the array? Darn, I should have waited for your response, but I felt compelled to replace the disabled drives, and I haven't lost any data, so I guess it's still a win. I should also be able to add those drives back into the array later for more storage, right? I expect they are just fine.
  11. This afternoon I had 5 drives start showing read errors with 2 failed. I have no idea what happened or what to do to fix. Since one of the read error drives is my parity drive and the parity 2 has failed, am I screwed and have lost everything on the drives lost? tower-diagnostics-20200131-1746.zip
  12. I had the same issue, but reverting back to 1.17.0.1841 resolved it, with no loss of functionality. I just have to live with the notifications that there is an update, which of course I don't want to install.
  13. OK, Thanks! I changed the cron job to weekly, so hopefully that solved that issue. Guess I can disregard the error. Thanks for your help.
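For anyone making the same change, a weekly schedule in standard cron syntax looks like this (the script path below is hypothetical, not the actual job from the post):

```
# minute hour day-of-month month day-of-week  command
0 3 * * 0 /boot/config/plugins/example/weekly-task.sh
```

The fifth field (day of week, 0 = Sunday) is what turns a daily job into a weekly one; here the script runs Sundays at 03:00.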
  14. Fix Common Problems is reporting a hardware error and I have no idea what might be wrong. Please review my diagnostics file. Thanks in advance. tower-diagnostics-20190811-1624.zip
  15. I just bought two 10TB enterprise drives to replace my existing dual parity drives. I was thinking that the 7200rpm drives would improve parity checks and ultimately write speed, probably more so as the 5400rpm data drives get replaced with faster ones. But I posted about this on another forum, and the general consensus was that I wouldn't see any real performance improvement, and that I should leave the 5400rpm drives as parity and add the new drives as data drives. It's obviously time consuming to change the parity drives. Is it worth it?
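A rough back-of-envelope check on that consensus: a parity check reads every array drive in lockstep, so it runs at the speed of the slowest drive, not the parity drives. The sustained speeds below (~150 MB/s for a 5400rpm drive, ~190 MB/s for a 7200rpm drive) are illustrative assumptions, not measurements:

```python
# Back-of-envelope parity check duration. A check streams the full
# capacity from every drive in parallel, so the slowest drive sets the pace.
def parity_check_hours(capacity_tb: float, slowest_mb_s: float) -> float:
    total_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal)
    return total_mb / slowest_mb_s / 3600

# 10 TB paced by a 5400rpm drive (~150 MB/s) vs a 7200rpm drive (~190 MB/s):
print(round(parity_check_hours(10, 150), 1))  # ≈ 18.5 hours
print(round(parity_check_hours(10, 190), 1))  # ≈ 14.6 hours
```

So while the 5400rpm data drives remain in the array, they stay the bottleneck and faster parity drives barely move the check time, which matches the advice to use the new drives for data instead.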
  16. Thanks for the reply. Sorry, I'm a complete noob when it comes to Linux; I'm guessing it is a command to issue in the terminal. How do I initiate a memtest? Thanks!
  17. This morning I noticed that Fix Common Problems is reporting a hardware error. I don't notice any issue, but it recommended that I install mcelog and post my diagnostics to the forum. I'm hoping someone can shed light on what it found because I have no clue what is going wrong. tower-diagnostics-20190406-1031.zip
  18. First off, thanks for creating this. The data it provides is fantastic. Now here's my issue: I installed an LSI 9211 SAS controller flashed to IT mode. I have one drive connected to it right now, and the drive is not assigned to the array yet. When I run "Benchmark drives" (or benchmark the individual drive), it finishes the scan but then reports that the speed gap is too large and retries. I've let it go up to 100 retries before aborting. I have tried checking the "disable speed gap detection" box, but it seems to have no effect. I'm using Google Chrome. I read that the allowed speed gap is supposed to increase by 5 MB on each retry, but mine stays the same (45 MB). Am I doing something wrong? SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]: Scanning sdf at 8 TB (100%) - Speed Gap of 79.95 MB (max allowed is 45 MB), retrying (22)
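For illustration only (this is not the plugin's actual code), the behavior described above, where the allowed gap grows by 5 MB per retry, would converge like this; the symptom in the post is that the allowance stays pinned at 45 MB instead of growing:

```python
# Sketch of the described retry rule: the allowed speed gap starts at
# 45 MB and is supposed to grow by 5 MB on each retry, so a run stops
# retrying once the allowance reaches the observed gap.
def retries_until_pass(observed_gap_mb: float, base_allowed_mb: float = 45.0,
                       step_mb: float = 5.0) -> int:
    allowed = base_allowed_mb
    retries = 0
    while observed_gap_mb > allowed:
        retries += 1
        allowed += step_mb
    return retries

# The 79.95 MB gap from the log would pass once the allowance hits 80 MB:
print(retries_until_pass(79.95))  # → 7
```

With a growing allowance the 79.95 MB gap would clear after 7 retries; seeing "max allowed is 45 MB" still printed on retry 22 suggests the allowance isn't actually incrementing.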
  19. I had the 9/29/2018 version running and then did the 2018.10.06c update. Now vnstat is no longer running. I went to the plugin settings and tried to click "Start". No luck.