NVS1

Members
  • Content Count

    35
  • Joined

  • Last visited

Community Reputation

0 Neutral

About NVS1

  • Rank
    Newbie


  1. Just an update on this. I was able to flash the H710 Mini, the server is now back online, and the system is running through a parity check. Everything seems to be in order, however.
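     A quick, hedged way to confirm the crossflash took from the booted OS (sas2flash is LSI's flashing utility and may not be on hand; lspci ships with Unraid):

         lspci | grep -i sas      # the card should now enumerate as an LSI/Broadcom SAS HBA
         sas2flash -list          # if available: reports firmware/BIOS versions for the flashed controller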
  2. Sorry, guess that's not what I meant... I just mean the rest of my post and what I plan to do seems to be correct. Nothing obvious that I'm wanting to do or trying to do that I'm missing, etc.
  3. Everything else I wrote above seems in line with what I want to do then? Any concerns with losing any data, or should everything fairly gracefully move over? All the same HDDs, parity, and cache drive are installed. Hmm.. this is interesting for sure. I assume worst case scenario, if I happened to brick the H710 doing this, it's just a matter of swapping it out with the H310, right? No other damage to the system, BIOS, etc.
  4. Was doing a bit more reading on this, and it seems that I could go with either the Dell H310 Mini or the Dell H310 PCIe card. The PCIe card would install in the PCIe slots just behind the H710, and it looks like I'd simply swap the cables that are currently plugged into the board for the H710 and plug those into the H310 card. Or I can get a Mini and replace the H710 Mini with that one. I don't think there's really any positives or negatives one way or the other. The two I'm looking at are here: https://www.ebay.ca/itm/Dell-H310-6Gbps-SAS-HBA-w-LSI-9211-8i-P20-IT-Mode-for-ZFS-Fre
  5. So, the past few days or so my UnRaid server was randomly restarting and generally having a fit. I haven't changed anything recently, so this was relatively new. Running a memtest showed no memory errors, and the syslog showed nothing of note: it would just stop recording, then immediately start recording the boot sequence (no obvious kernel errors, panics, etc. from what I can tell)... I assumed it was a power issue, so I went about replacing the PSU with a new spare one I had. After booting, I found that two of my drives were no longer being detected (even in the BIOS), so I assumed…
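     Since the log dies with the machine in cases like this, a minimal sketch for catching whatever precedes the reset; Unraid can also mirror the syslog to the flash drive (Settings → Syslog Server) so entries survive a hard reset:

         tail -f /var/log/syslog | grep -iE 'mce|error|fail|panic'    # watch live from a second SSH session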
  6. So, I've had some success! My latest parity rebuild was finally successful after I ran the memtest (no errors) and re-applied a fresh coat of thermal paste. Not sure if I got lucky, or maybe the CPU was running warm and that was causing issues. That all aside, after the parity check finished, I still had the unmountable disk issue to resolve. I was able to run the xfs_repair tool with the -L flag to zero (destroy) the log and allow it to fix the filesystem. After that was done, I was able to restart the array normally and everything appears to be back…
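     For anyone hitting the same unmountable-disk state, the usual xfs_repair sequence is roughly this (a sketch: start the array in Maintenance mode first, and /dev/md1 is only an example; use the md device for the affected slot rather than the raw /dev/sdX so parity stays in sync):

         xfs_repair -n /dev/md1    # dry run: report problems, change nothing
         xfs_repair /dev/md1       # actual repair
         xfs_repair -L /dev/md1    # last resort when it refuses over a dirty log; -L zeroes (destroys) the log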
  7. I suspected a hardware issue of sorts, but I'm unsure what exactly could be the cause of it.. I do have a spare PSU I could toss in. Can't recall what size of PSU I have in there right now, but I know it's SeaSonic, and if I had to guess it'd be a 550W Gold. I'll have to take a look in the morning as it's currently set up out in my detached garage. I did end up running a memtest, which it passed without issues, and I ended up putting some new thermal paste on as well, as it had been several years since it was applied. We'll see if it continues to run overnight.. if not, I'll double check…
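     To rule the thermal angle in or out, it helps to watch temperatures while the box is under load (assuming lm-sensors is present; the Dynamix System Temperature plugin exposes the same readings in the Unraid GUI):

         watch -n 5 sensors    # refresh CPU/motherboard temperatures every 5 seconds during a rebuild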
  8. @itimpi I did, but I think the notification details were stripped down, so it was just giving me generic details with no real information... Running into another issue now though... During this rebuild process I've had my system do a hard reset about 5x now. Each time it does this, it completely resets all rebuild progress and starts back over with an estimated 16 hour rebuild time and 0% progress... Just recently it was at about 55%, and then I got a notification about the restart, and sure enough it's back to 0% and has only been running for 5 minutes. Now after this…
  9. I think I just figured out what was happening. I have Parity Check Tuning installed, and it was configured to pause the parity check when a drive's temperature came within 3 degrees below the warning disk temperature threshold. The warning threshold was at 45°C, and you can see even in my screenshot the parity drive was at 41°C. Makes sense, since these two drives have been running nonstop for the past 30 hours. I've inched that warning temp up to 50°C, and will keep an eye on it and see where it goes from there. Edit: Just grabbed the syslog file and I see parity drive temp warnings…
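     A quick way to spot-check drive temps from the console while a check runs (/dev/sdb is a placeholder for whichever array or parity disk you're watching):

         smartctl -A /dev/sdb | grep -i temperature    # current drive temperature from SMART attributes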
  10. So, I'm reviving this topic a bit here... I've done the parity swap; it just finished copying the contents of the one parity drive over to the next, and I've now started the array and it's going through attempting to do a Parity-Sync/Data-Rebuild... the problem is that it keeps pausing itself every ~2 minutes. I have no idea why it pauses. I can click Resume, and it'll continue for another couple minutes, then pause again. I really don't want to be sitting here for 20 hours clicking Resume every couple of minutes.
  11. Gotcha. That makes sense then. As for the parity swap... so if I'm reading this right, even given my current situation with a failed drive, I can still upgrade my parity and replace the dead disk in one go? I was under the impression that, more or less, I'd need to fix this dead disk issue first and only then upgrade my parity, rather than being able to do both at the same time... Currently, my setup looks like this: I did start the array and then stopped it after I saw it was emulating the drive and not attempting to do…
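     For later readers: the parity-swap procedure referred to here does handle both in one pass. A rough sketch of the flow, matching the behaviour described in these posts:

         1. Stop the array.
         2. Assign the new, larger disk to the parity slot.
         3. Assign the old parity disk to the failed disk's slot.
         4. Start the copy: Unraid first copies parity onto the new disk, then runs a
            Parity-Sync/Data-Rebuild to reconstruct the failed disk onto the old parity drive.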
  12. No, I can't, because that drive has failed. It's inoperable (it doesn't even show in the BIOS). I understand that normally you'd swap a failed drive out for a fresh drive, but I don't have that ability at this time. My only option is to install a 12TB or 14TB, but because my parity is only 10TB, that drive would be capped to 10TB. So since I have ~8TB of free unused space on my array (prior to this failure), I was thinking it'd just be easier to remove the 3TB drive altogether. I'll still have space, and then my parity can rebuild the array with only 3 data drives (instead of the 4…
  13. I've tried doing a bit of research on this already, and I think I understand what I need to do, but I'm also a bit paranoid about doing something wrong and losing all my data. I currently have a system with a single parity drive (10TB), 4 data drives (2x10TB, 6TB, and 3TB), and a cache drive. Just this morning I woke up to find my array was down, and it appears that my 3TB drive has failed. Although I have an extra 12TB and a 14TB drive that I've yet to install, I can't install those yet, as I was originally planning on tossing the 14TB in as a new parity drive…
  14. I know you didn't speak directly to my post, but I looked at that and saw similar errors in my logs... turns out it was the VPN provider (PIA)... Not sure what's going on, but the CA locations don't appear to support port forwarding anymore, and their France location must be having issues. Resorted to de-berlin, and everything is back online now.
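     In case it helps anyone landing here: assuming this is the binhex-delugevpn container (a guess; the container name and path below are hypothetical defaults), switching PIA endpoints is just a matter of swapping the .ovpn file the container reads and restarting:

         ls /mnt/user/appdata/binhex-delugevpn/openvpn/   # the .ovpn sitting here selects the endpoint
         # replace it with a port-forward-capable config (e.g. Berlin) from PIA's bundle, then:
         docker restart binhex-delugevpn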
  15. So.. somehow the iptable_mangle module issue seems to have resolved itself, but I'm still unable to get my WebUI to load. It's still going through and displaying this output in the log: … So I still suspect it has to do with the "UDP link local not bound" warning there. That aside, I'm at a loss as to what to try next. Edit: I was looking in Sonarr to see if it could even communicate with Deluge, and it's showing errors that Deluge is inaccessible. When I try and test the connection I get: …
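     For the record, a minimal sketch for checking the iptable_mangle side of this from the Unraid console (the container name is hypothetical; substitute your own Deluge VPN container):

         lsmod | grep iptable_mangle              # confirm the kernel module is actually loaded
         modprobe iptable_mangle                  # load it manually if the grep comes back empty
         docker logs --tail 50 binhex-delugevpn   # re-check the container's startup warnings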