autumnwalker

Everything posted by autumnwalker

  1. I set the FS back to reiser and ran a check, but got an error:

         reiserfsck 3.6.27
         Will read-only check consistency of the filesystem on /dev/md4
         Will put log info to 'stdout'
         Failed to open the device '/dev/md4': Unknown code er3k 127
  2. Got it. So flip FS back to reiser and rebuild superblock?
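     For reference, a minimal sketch of the two commands I have in mind, assuming disk 4 maps to /dev/md4 and the array is started in maintenance mode (the device path is my assumption, not confirmed in this thread):

         # read-only consistency check first
         reiserfsck --check /dev/md4

         # rebuild the superblock only if the check recommends it
         reiserfsck --rebuild-sb /dev/md4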
  3. What about formatting and rebuilding (again) from parity? I guess parity has a corrupt FS for this disk ...
  4. Switched to auto - still saying unmountable. Switch back to reiser and rebuild superblock?
  5. Where do I change the FS to auto? I am positive it was reiser. Am I better off flipping to auto or rebuilding the superblock?
  6. After attempting recovery from a drive / controller (PSU?) failure (see link below), I fired up Unraid and let it rebuild parity (disk 4 was marked bad). Now that the parity rebuild is complete it is still saying disk 4 is "Unmountable: No file system" (screenshot below). I assumed the parity rebuild would have taken care of this. Thoughts?
  7. I'll try re-seating the LSI card and power it back on, see if everything mounts ok. The last time it ran for just over a day before going bad again. My fear is that whatever this is, it's now intermittent - it initially powers on fine, but dies after a few hours of uptime. If it powers on ok (disk looks mounted properly) and starts a parity rebuild, but dies mid-rebuild, is my data trashed or am I exactly in the same spot I am now (one "bad" drive)?
  8. I'm not sure. Parity rebuild then? What happens if it's in the middle of a rebuild and goes bad again?
  9. I'll remove / reseat the card to be sure. When powering back on will I need to rebuild the array? Can I "trust" it?
  10. All was fine for a dayish and the same eight drives (controller) went bad again. This time disk 4 was disabled. I suspect the controller is bad (or the PCIe slot) ... should I do anything different this time, since disk 4 was disabled rather than parity? Diagnostics attached: nas01-diagnostics-20190825-1409.zip
  11. Thank you @Frank1940 and @johnnie.black. Parity rebuild just finished 12h 2m, 92.3 MB/s, 0 errors. Safe to pull this out of safe mode now?
  12. Current diagnostics attached. System still in safe mode, array offline, parity disk disabled. nas01-diagnostics-20190822-2050.zip
  13. Before seeing your post I powered it down, removed and reseated all of the SATA cables, and powered it back on. Now all disks are showing; parity is still disabled. Dare I take this out of safe mode and let it attempt to rebuild parity? @Frank1940 I assume the diagnostics are no good now that all drives are visible?
  14. Ok. I opened it up to take a look. Surprisingly little dust. Nothing clogged / blocked. No obvious damage (blown capacitors, scorch marks, etc.). I powered it back up in safe mode. Parity is still disabled and two disks are missing - disk 5 and disk 6. Thoughts?
  15. Agree - I'll take the opportunity to do that now that it's offline.
  16. Ok! So just power it back on and see what happens? (afraid of data loss ... and dealing with restoring 18 TB from CrashPlan)
  17. Ok - once I'm back at the server I'll try powering it back on. Should I force maintenance mode? I wasn't able to check the box when powering down; can I force it with startArray="no" in disk.cfg? I'm concerned about all of the errors on the seven data drives. Are those "false" errors that will clear on power-up (assuming the controller isn't dead), or am I looking at data loss? I have diagnostics and syslogs. Not sure if those are helpful at this point.
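     A sketch of the edit I have in mind, assuming the config lives at the stock /boot/config/disk.cfg location (the path is my assumption, not something from this thread):

         # /boot/config/disk.cfg - keep the array from auto-starting on boot
         startArray="no"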
  18. That was my first thought as well (re: controller problem). The server has not been physically touched in nearly six months, so I'm leaning toward a cooling issue or outright failure rather than something being unseated. I know there was high I/O last night. The server is currently off. Are you suggesting I reboot it after it has had time to cool down and see where I am?
  19. Hello, I woke up this morning to a warning on my dashboard that my parity drive was in a failed state. I also noticed that seven of my other drives were showing errors. Screenshot below. I am running 6.6.5 as I am afraid to upgrade to the 6.7.x series due to the ongoing SQLite corruption bug. Parity and drives 1 - 7 are on a Dell PERC H310; drives 8 - 10 are on the onboard SATA on my motherboard. I'm running a few things in Docker and I have one VM. Otherwise this is just a straight NAS. Thoughts on where I go from here? Did I lose my data across the first seven drives?
  20. Hello, I used to use the Docker Autostart Manager to start certain containers after a period of time, but that plugin has been deprecated in 6.6. I understand that I can now switch to the advanced view and add a "wait" value to each container, but I cannot find documentation on how it works. Is the "wait" value in seconds or minutes? Does "wait" instruct the container with the "wait" to wait, or does it instruct Unraid to wait X before starting the next container? I've figured out that containers start sequentially and can be re-ordered by click + drag. A sketch of my working assumption is below. Thanks!
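     My working assumption, as a shell sketch (the container names are made up, and both the unit and the "delays the next container" behavior are exactly what I'm asking about):

         # if "wait" is in seconds and delays the *next* container,
         # then plex (wait=30) followed by sonarr would behave like:
         docker start plex
         sleep 30            # plex's "wait" value
         docker start sonarr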
  21. Thank you! I appreciate it very much. I'll provide any info you need.
  22. I just installed the plugin as-is and set SNMP-UPS as the driver (from the dropdown). I'm running a Liebert-GXT2U UPS with one of their early "Liberty" SNMP cards - I can walk it via "snmpwalk". The card is not part of the subdrivers in the link, but my understanding is someone (who understands the process) can generate a new driver based on the data from a walk / mib file.
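     For reference, this is roughly how I walk the card (the IP and community string are placeholders for my setup):

         # dump the SNMP card's full OID tree over SNMP v1
         snmpwalk -v1 -c public 192.168.1.50 > liebert-walk.txt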
  23. Thank you @dmacias for taking this on! I have installed your version without issue; unfortunately, my UPS / SNMP card is still not recognized. Are you familiar with NUT? I know you can extend the drivers with sub-drivers (http://networkupstools.org/docs/developer-guide.chunked/ar01s04.html#snmp-subdrivers), but I cannot wrap my head around it.
  24. While I don't disagree about avoiding the meta characters, it's probably worth putting some sort of check / catch into unRAID to prevent this from happening. I simply used a password manager to generate the password. I was not given any instructions or warning and I got locked out.
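     For anyone hitting the same thing, a sketch of how to generate a metacharacter-free password from the shell (the 24-character length is an arbitrary choice):

         # generate an alphanumeric-only password, avoiding shell metacharacters
         LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 24; echo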