mattbr
Members · Content Count: 37 · Community Reputation: 0 Neutral · Rank: Advanced Member
  1. Yeah, figured this wasn't the one to cut my dockerisation newbie teeth on... looks like it's VM time, folks! (And thanks for taking the time to look into it, @Squid.)
  2. Hey guys, would anyone with the knowledge of how to do this be willing to take the time to make an Unraid template for this pfSense log visualiser?
  3. For those with adoption problems, this seems to have done the trick. It'd been mentioned here, but without a walkthrough. For some reason, the GUI approach didn't work for me. TL;DR: SSH into the AP, then run mca-cli. Now issue the set-inform command with the IP address of your Unifi controller [in our case, the IP of your Unraid box. Don't forget to change to the correct port if you're not allocating 8080 to the controller]: set-inform http://192.168.3.2:8080/inform
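For reference, the adoption steps above can be sketched as a short shell session. The AP address below is a made-up example; the controller IP and port are the ones from this post — substitute your own:

```shell
# SSH into the access point (factory Unifi credentials are ubnt/ubnt
# unless the AP was previously adopted):
#   ssh ubnt@192.168.3.50      # hypothetical AP address
#   mca-cli                    # enter the Unifi CLI on the AP

# Build the inform URL from the controller's address. The port here
# assumes the controller container is mapped to 8080, as in this thread.
CONTROLLER_IP="192.168.3.2"
CONTROLLER_PORT="8080"
echo "set-inform http://${CONTROLLER_IP}:${CONTROLLER_PORT}/inform"
```

Running set-inform once points the AP at the controller; after it shows up for adoption in the GUI, you may need to issue it a second time so the setting sticks.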
  4. Figured - and I'd assume assigning the data drives randomly won't change a thing, since parity is going to be rebuilt. (In case someone else runs into something like this: remember to turn Docker off when you restart the array, in case one of the apps starts hitting one of the drives. Looks like that's what happened here, so my (possible) pain is your gain.)
  5. There's a New Config step before that, right? So: New Config → check which drives don't have a filesystem → assign the drives without a FS to parity → start.
  6. Oookay... so, it looks like something happened, either from me hitting the power button or from the USB drive going down. Super.dat won't fly, and one of the drives shows as having no filesystem (I'd assume that was the parity drive, but I was running dual parity, so...). Is there a human-readable version of the previous drive assignments somewhere?
  7. Well, it wouldn't mount this time... but it booted fine from a freshly flashed key. So I guess that's that... I'm a moron who didn't take a screenshot of my drive configuration. Is that stored somewhere, or should I risk booting things up with the old super.dat?
  8. Hi guys, my Unraid box went down yesterday. Started looking into it this morning, and I'm a bit stumped. My feeling is it could be either an LSI controller or a cable gone bad (there's a bunch of drives missing from the LSI BIOS page), but the kernel panic message itself seems to point more towards a system thumbdrive issue (it happens right after there's an attempted mount on it), even though it mounts and copies fine on another machine. Anybody got ideas? X10SDV kernel panic.mp4
  9. Will do! Thanks again for all the help!
  10. Running 6.2.4 and a 950 Pro, it hasn't missed a beat. Those things do tend to run a little warm, though.
  11. Ok, so, back at this (was travelling for a bit), with a nice external drive full of mangled filenames. Judging by the volume of data, there hasn't been much loss, if any, and what I don't have in one of two other arrays I should be able to, erm, rebuild anyway, so it isn't like I'm losing my life's work if I wipe the drive. Just booted the array up with the problematic drive not plugged in. It shows as missing, array stopped, everything looks to be assigned as it should be (minus the problem drive, obviously), and disk prefs are set to start with a stopped array. What's the best course of action from here? New Config, rebuild parity for the good drives, and after that add the problematic one as an empty drive? (In terms of data rebuilding, the plan is to wait for the screwed drive to come back online, then go read the rsync docs to figure out how not to touch anything that's been added recently, to make sure any files that might have been updated on the still-good part of the array don't get stepped back to a previous incarnation. I'd assume du -hs * | sort -h would still work, in this case, to give me the names of the empty folders, right?)
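A small sketch of the bookkeeping described above, with made-up paths. For finding the folders that lost their contents, find -type d -empty is more direct than scanning du -hs * | sort -h for 0-byte entries, and the rsync flags in the comments are the standard ones for not stepping newer files back:

```shell
# Demo tree standing in for the array share (hypothetical paths):
mkdir -p /tmp/array_demo/movies /tmp/array_demo/photos
touch /tmp/array_demo/photos/img001.jpg   # photos has data; movies is empty

# List directories that ended up empty — the candidates for restoring:
find /tmp/array_demo -mindepth 1 -type d -empty

# For the later restore, rsync can be told to leave newer files alone:
#   rsync -av --update /mnt/recovered/ /mnt/user/share/     # skip files newer on the array
#   rsync -av --ignore-existing /mnt/recovered/ /mnt/user/share/  # never overwrite anything
```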
  12. Definitely should've asked for help sooner... I was mostly trying to prevent damage to the other drives in the array, hence pulling the "bad" drive and shutting down the array. Seeing the "bad" drive and parity being written to freaked me out, since I figured it meant certain loss of both - hence the "ok, let's try to minimise risk to the rest of the data, see if anything at all can be salvaged, and take it from there" approach. So, practically, it's "reconnect the drive, press the go button", the array will then boot stopped, New Config, don't trust parity, let it do its thing, and then start getting whatever was on that drive back on, right? No need to pre-clear it again?
  13. Thought I'd done the stop - unassign - start unassigned - stop - reassign dance, though I clearly messed up somewhere and just went "yay! Unassigned Devices says it's mounting and ok <sigh of relief>!" or something stupid. I'm doing the repair from the CLI on another machine - the server is shut down, since I didn't want to risk hosing the other data drives as well.
  14. Hey, yeah, I know I should've asked... but, well, live and learn... Thing is, there was no option to rebuild the disk that I could find, and stopping / reassigning just led to it staying emulated. It's formatted in XFS. The superblock is hosed, says xfs_repair, which is running.
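For anyone landing here with the same superblock error, the usual xfs_repair sequence looks like the sketch below. The device name is a placeholder — substitute the real data partition, and only ever run it against an unmounted filesystem:

```shell
DEV="/dev/sdX1"   # placeholder; substitute the actual partition

# 1) Dry run first: report problems without writing anything.
#      xfs_repair -n "$DEV"
# 2) Real repair; if the primary superblock is hosed, xfs_repair
#    scans the disk for backup superblocks on its own.
#      xfs_repair "$DEV"
# 3) Only if it refuses to run because of a dirty log: zero the log,
#    accepting that any in-flight metadata may be lost.
#      xfs_repair -L "$DEV"
echo "target: $DEV"
```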