OrangePeel

Members
  • Content Count: 93
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About OrangePeel

  • Rank: Advanced Member

  1. Hi everyone. My Unraid server is not booting now. When I load the ESXi command line, this is the error I keep seeing. It seems like it can't load the drivers, or the card may be shot. Has anyone else seen this before? I am just about to call it quits on this server and move to a different solution, specifically RAID 1 and CloudBerry to Glacier for my most important info. Brandon
  2. Ah, ok. Good news... 0 errors there. Brandon
  3. Never mind... I was looking at this wrong. The read errors seem to be sky-high based on the raw value. Is that what I should be looking at? I'm not entirely sure here. Brandon
  4. Hmmmm... That's disheartening. I do have notifications enabled. When I check the Raw Read Error Rate attribute, it is elevated; it's showing Pre-fail at a value of 117 (see the smartctl sketch after this list). Does this mean the rebuilt data will be corrupted? Brandon
  5. TL;DR: My current parity drive is showing errors and is likely to fail soon. I'd like to use a current data disk as a parity disk. What's the best way to move the data off of that disk, get it out of the array as a data drive, and into the array as a parity drive? More details: Hi everyone, I had a relatively new drive in my array fail on me. I'm sending it in for a warranty replacement, but just after it failed, my parity drive started showing "Current pending sector" and "Offline uncorrectable" errors. I have no idea what these mean, but Google says they're a good indication that failure is coming soon. I have more space than I'll ever use. I used to use this server for movies, but streaming and fast internet have basically eliminated that use, so I don't expect my data to grow by a whole lot. Having said that, I have a 3TB drive that only has 200GB of data currently on it. I'd like to move that data off, remove that drive from the array, and turn that drive into my new parity drive. I'll probably set it up as a second parity drive if my current parity drive is still ticking along after my array rebuilds the bad disk (currently happening). What is the best way to move data that is only located on this drive (see the rsync sketch after this list)? Is this article still relevant? https://blog.linuxserver.io/2013/11/18/removing-a-drive-from-unraid-the-easy-way/ It would make me nervous to build a completely new config, but I guess I shouldn't lose any data if I don't make any mistakes. Any thoughts on the best way to approach this? Thanks! Brandon
  6. OK, awesome. Thank you. I'll upgrade Plop and maybe ESXi, too, and go from there. Brandon
  7. After doing a little digging, I'm wondering if I should try to use one of the VMDKs? Brandon
  8. Yes, all of my disks are directly connected to the M1015. I am using the Plop Boot Manager to boot the standard Unraid USB. I did this about 5 years ago based on the infamous Atlas build. Is there a better way to do it now? I need to upgrade ESXi, too; I just haven't done it yet. That could possibly help, I guess. I'm just having a hard time narrowing this down. Thanks for the replies! Brandon
  9. Thanks for the replies, guys. I apologize for the delay in responding... I forgot to turn notifications on. I added two more network interfaces in ESXi, and then all three of them suddenly showed up and it's working fine. Very strange. And I was able to get my new key, itimpi. Brandon
  10. Hi all, I'm having an issue where one drive always gets shut down and I can't figure out why. I asked in the general forum and they recommended I come here, as it seems to be virtualization-related. This only happens in version 6; version 5 works fine. Here is the original thread: This is running on ESXi 5.1 with an M1015 in HBA mode passed through. It's also an AMD system. Brandon
  11. Hmmm... What should be the next step? Thanks for looking. And for more detail, this is running on an ESXi 5.1 machine with an M1015 passed through. Doesn't have any issues in 5, only in 6. :-/ Brandon
  12. Hi all, I've had an interesting journey lately. The first part of the journey can be found here: I reverted to version 5, rebuilt the one disk, and all was well yesterday... until I upgraded to version 6. Now it is not accepting disk 4 and has disabled it. The disk worked fine in 5 and passes both the short and long SMART tests, but Unraid refuses to use it and I can't figure out why. I'm not sure if version 5 wasn't detecting a bad drive or if version 6 is falsely detecting a bad drive. How can I resolve this? Thanks, Brandon unraid-diagnostics-20180418-2223.zip
  13. Finally getting around to fixing this a year later... Going back to version 5.0 resolved the weird disk errors. I did have an HDD die, though, so I'm currently rebuilding the array with its replacement; then I'll try to upgrade again. Hopefully the new versions will address whatever issue caused this a year ago, assuming it was an incompatibility or something. Brandon
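
For the SMART questions in posts 3 and 4, here is a minimal sketch of pulling the relevant attributes from the Unraid console with smartctl, assuming smartmontools is available and the drive appears as /dev/sdX (a placeholder; substitute the real device):

    # Dump the full SMART attribute table (normalized VALUE/WORST/THRESH plus RAW_VALUE)
    smartctl -A /dev/sdX

    # Show only the attributes discussed above
    smartctl -A /dev/sdX | grep -Ei 'raw_read_error_rate|current_pending_sector|offline_uncorrectable'

As a rule of thumb, the Raw_Read_Error_Rate raw value on many drives (Seagate in particular) is a packed counter that looks enormous even on a healthy disk; the normalized VALUE dropping toward THRESH is the more meaningful signal there. Non-zero raw counts for Current_Pending_Sector and Offline_Uncorrectable are the clearer warning signs.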
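
For the disk-to-parity question in post 5, here is a minimal sketch of moving the data off a single data disk before repurposing it, assuming the emptying disk is disk3, the destination disk1 has enough free space (both hypothetical disk numbers), and the commands are run from the Unraid console:

    # Dry run first: list what would be copied from the emptying disk to another data disk
    rsync -avX --dry-run /mnt/disk3/ /mnt/disk1/

    # Do the actual copy, preserving ownership, permissions, and extended attributes
    rsync -avX --progress /mnt/disk3/ /mnt/disk1/

Once the copy has been verified, the emptied drive can be removed from the array (typically via the New Config tool, keeping the other assignments) and reassigned to the parity slot, after which parity rebuilds. This is only one way to approach it; whether the linked article's method is still current is a separate question.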