NVS1

Members · 36 posts

  1. Did you ever get an answer to this? It seems as though I'm encountering this issue too... the only solution right now is to restart my container, and then everything clears up. After a couple of days, my trackers will start showing HnRs and such, and they won't see any of my torrents being seeded.
  2. Just an update on this. I was able to flash the H710 Mini, the server is now back online, and the system is running through a parity check. Everything seems to be in order so far, though.
  3. Sorry, I guess that's not what I meant... I just meant that the rest of my post and what I plan to do seems correct. Nothing obvious missing in what I want to do or am trying to do, etc.
  4. Everything else I wrote above seems in line with what I want to do, then? Any concerns about losing any data, or should everything move over fairly gracefully? All the same HDDs, parity, and cache drive are installed. Hmm, this is interesting for sure. I assume the worst-case scenario, if I happened to brick the H710 doing this, is that it's just a matter of swapping it out for the H310, right? No other damage to the system, BIOS, etc.
  5. I was doing a bit more reading on this, and it seems that I could go with either the Dell H310 Mini or the Dell H310 PCI card. The PCI card would install in the PCI bays just behind the H710, and it looks like I'd simply swap the cables that are currently plugged into the board for the H710 and plug those into the H310 card. Or I can get a Mini and replace the H710 Mini with that one. I don't think there are really any positives or negatives one way or the other. The two I'm looking at are here: https://www.ebay.ca/itm/Dell-H310-6Gbps-SAS-HBA-w-LSI-9211-8i-P20-IT-Mode-for-ZFS-FreeNAS-unRAID/162834659601?hash=item25e9b3b911:g:UxcAAOSwWrxcPvQv https://www.ebay.ca/itm/Dell-H310-mini-monolithic-K09CJ-with-LSI-9211-8i-P20-IT-Mode-ZFS-FreeNAS-unRAID/163672512791?epid=2024756319&hash=item261ba45917:g:STwAAOSwi2xcyU5B Hopefully I'm on the right path here.
  6. So, for the past few days my Unraid server was randomly restarting and generally having a fit. I haven't changed anything recently, so this was relatively new. Running a memtest showed no memory errors, and the syslog showed nothing of note: it would just stop recording, then immediately start recording the boot sequence again (no obvious kernel errors, panics, etc. from what I can tell). I assumed it was a power issue, so I went about replacing the PSU with a new spare one I had. After booting, I found that two of my drives were no longer being detected (even in the BIOS), so I assumed it had to be an issue with my motherboard at this point. I know the first option is to swap out the SATA cables, and I thought I had some, but I can't find any spares.

So where I'm at now is that I've pulled all the drives out and placed them into the Dell R720. While I was busy removing and installing drives, I did boot the system into Unraid just to confirm it would boot up without issues. It did, with no immediate issues (though no drives were actually installed at the time). After plugging all my drives into the various bays, I booted it up and was greeted with a big old warning from Unraid showing that no drives were detected. I could see the system had at least detected the drives, since the lights for each bay were lit up, but regardless Unraid didn't see any. I believe the issue may be my hardware RAID controller... I have not set up a hardware RAID, nor do I want to (I don't want to lose any data on these drives), and I was hoping that Unraid would just come online and see everything. From what I can read, however, it seems that I have a PERC H710 RAID controller (I think it's a Mini?), which does not play well with Unraid. I'm fairly new to the Dell server (my previous build was just consumer parts: Asus motherboard, Intel i5, etc.), so I want to make sure I'm getting the right hardware for this.

Moving forward, I believe all I would need to get is a Dell H310 controller, right? Once I get that, I can replace the H710, and I still shouldn't need to bother creating any hardware RAID configuration, right? For what it's worth, this is my server here: And if I'm correct, this is the H710 Mini? I guess my other concern is how to go about swapping it... it looks easy enough, but I assume that if this is the H710 Mini, I need to replace it with an H310 Mini that installs in the same spot, right?

Another question of less importance, but I may as well ask... my cache drive is an SSD. It appears to be installed well enough in the drive bay (the bay light comes on), but the bays aren't really designed for an SSD. Ideally, I'd like to get some sort of PCI card/adapter that would fit inside the R720 and let me mount the SSD internally, which would still give me a usable cache drive while freeing up the drive bay. Alternatively, I could go with an NVMe drive if there were a similar adapter I could install. Anyways, any help is greatly appreciated.
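In case it helps anyone checking the same thing: a quick sketch of how to confirm what the OS actually sees after swapping in an IT-mode HBA (commands are standard Linux tools; the fallbacks are mine so this stays harmless on a box with no SAS card):

```shell
# With an IT-mode HBA (e.g. an H310 flashed to LSI 9211-8i IT firmware),
# the controller should show up as a plain SAS HBA rather than a RAID
# device, and every drive should appear as an ordinary /dev/sdX device.
lspci | grep -i 'sas' || echo "no SAS HBA visible"
ls /dev/sd? 2>/dev/null || echo "no drives visible"
```

If the drives show up as plain /dev/sdX devices here, Unraid will see them too; no hardware RAID configuration is needed (or wanted) with an IT-mode card.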
  7. So, I've had some success! My latest parity rebuild finally completed successfully after I ran the memtest (no errors) and re-applied a fresh coat of thermal paste. Not sure if I got lucky, or maybe the CPU was running warm and that was causing issues. That aside, after the parity check finished, I still had the unmountable-disk issue to resolve. I was able to run the xfs_repair tool with the -L flag to zero (destroy) the log and let it fix the filesystem. After that was done, I was able to restart the array normally, and everything appears to be back up and running. Thanks everyone for your help and suggestions as I worked through this. Hope everyone has a Merry Christmas!
  8. I suspected a hardware issue of sorts, but I'm unsure what exactly could be the cause... I do have a spare PSU I could toss in. I can't recall what size of PSU I have in there right now, but I know it's a SeaSonic, and if I had to guess it'd be a 550 W Gold. I'll have to take a look in the morning, as it's currently set up out in my detached garage. I did run a memtest, which passed without issues, and I put on some new thermal paste as well, since it had been several years since it was applied. We'll see if it continues to run overnight... if not, I'll double-check the PSU, but I don't think I've ever purchased anything less than 550 W. For the record, my system from what I recall is the following: Gigabyte H97N-WIFI, Intel i5-4460 @ 3.2 GHz (not overclocked), 16 GB DDR3 memory. You can already see my drive configuration in the details above.

That aside... I've been meaning to set up a new system, and I have an R720 that is currently sitting unused... it has everything except drives. How does Unraid handle hardware swaps like that? If I were to yank all the drives, the USB flash drive, and all that fun stuff, could I just toss it all into the R720 and make sure the drive configuration is the same when I boot it all up? I know Windows would throw a fit, and I'm actually unsure how Linux would handle something like that. Unsure what to expect with Unraid.

Edit: I know this is just an estimate and far from real-world numbers, but I tossed all that into PCPartPicker and it comes up with an estimated 217 W usage. Well below the likely 550 W PSU that I have, and still well below even a 450 W one. I don't have a video card or sound card, don't run VMs, etc.
  9. @itimpi I did, but I think the notification details were stripped down, so it was just giving me generic details with no real information... I'm running into another issue now, though. During this rebuild process, my system has done a hard reset about 5x now. Each time it does this, it completely resets all rebuild progress and starts back over with an estimated 16-hour rebuild time and 0% progress... Just recently it was at about 55%, then I got a notification about the restart, and sure enough it's back to 0% and has only been running for 5 minutes. After this latest reset I see the following: I've not seen that warning about an unmountable disk before... I assume it's because that disk was the one being written to and getting in sync, and now it's corrupted due to the restart, but I have no idea. Unfortunately, the syslog gets completely cleared after restarts, so I'm not sure how to tell what's going on. I had no issues with restarts or anything until this one disk failed, and that just seems to have caused a rolling shitshow of one issue after another.
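For anyone else losing logs across a crash: newer Unraid builds have a built-in answer (Settings → Syslog Server, which I believe includes a "Mirror syslog to flash" option), and any plain Linux box running rsyslog can receive the logs remotely. A generic sender-side rsyslog fragment (the IP and port are examples, not from this thread):

```
# /etc/rsyslog.d/90-remote.conf — forward everything to a remote
# syslog server over UDP so the log survives a hard reset
*.* @192.168.1.50:514
```

With either approach, the log of whatever happens right before the reset is preserved somewhere the crash can't wipe it.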
  10. I think I just figured out what was happening. I have Parity Check Tuning installed, and it was configured to pause the parity check when a drive was within 3 degrees below the warning disk temperature threshold. The warning threshold was at 45, and you can see even in my screenshot that the parity drive was at 41c. Makes sense, since these two drives have been running nonstop for the past 30 hours. I've inched that warning temp up to 50, and will keep an eye on it and see where it goes from there. Edit: Just grabbed the syslog file, and I see parity drive temp warnings in there and the pause state being enabled. Sure looks to be it.
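The pause rule described above comes down to simple arithmetic. A sketch (`should_pause` is my own name for it, not anything from the plugin; temperatures in whole degrees C):

```shell
# Pause the parity check once a drive is within `margin` degrees
# below the warning threshold; otherwise keep running.
should_pause() {
  local temp=$1 warning=$2 margin=${3:-3}
  if [ "$temp" -ge $((warning - margin)) ]; then
    echo pause
  else
    echo run
  fi
}

should_pause 43 45   # 43 >= 45-3, so the check pauses
should_pause 43 50   # raising the threshold to 50 gives headroom: runs
```

This also shows why bumping the warning threshold from 45 to 50 stops the constant pausing: the pause point moves from 42 up to 47.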
  11. So, I'm reviving this topic a bit here... I've done the parity swap; it recently finished copying the contents of the one parity drive over to the next, and I've now started the array, and it's attempting a Parity-Sync/Data-Rebuild... the problem is that it keeps pausing itself every ~2 minutes. I have no idea why. I can click Resume and it'll continue for another couple of minutes, then pause again. I really don't want to be sitting here for 20 hours clicking Resume every couple of minutes.
  12. Gotcha, that makes sense then. As for the parity swap... if I'm reading this right, even given my current situation with a failed drive, I can still upgrade my parity and replace the dead disk in one go? I was under the impression that I'd need to fix the dead-disk issue first, and only then upgrade my parity, but that I wouldn't be able to do both at the same time... Currently, my setup looks like this: I did start the array, and then stopped it after I saw it was emulating the drive and not attempting any rebuild. So it seems I've covered the first 6 steps from that link you provided.

From there, I'm just supposed to toss my 14tb drive into the bay that previously housed the 3tb, and boot up the system... it says to stop the array, but I believe Unraid won't even start the array in this situation, will it? If so, I guess I'll stop it anyway. From here, I'm going to unassign my existing parity drive, assign the new 14tb drive as the lone parity drive, and then assign my existing 10tb parity as the Disk 3 data drive. From there I can follow the copy command and so forth...

I know this sounds redundant, but before I start stripping out my parity and doing all of that, I just want to make sure the whole idea behind this is that I don't lose any data whatsoever. Following these steps (as I laid out / as laid out in that link) will simply upgrade my existing parity from 10tb -> 14tb while replacing the 3tb with a 10tb drive, rebuild the entire parity, and I should have no data loss anywhere, right?
  13. No, I can't, because that drive has failed. It's inoperable (it doesn't even show in the BIOS). I understand that normally you'd swap a failed drive out for a fresh drive, but I don't have that ability at this time. My only option is to install a 12tb or 14tb, but because my parity is only 10tb, that drive will be capped to 10tb. So since I had around ~8tb of free, unused space on my array (prior to this failure), I was thinking it'd just be easier to remove the 3tb drive altogether. I'll still have space, and then my parity can rebuild the array with only 3 data drives (instead of 4). Again, I don't want to lose whatever data is on the 3tb. I was hoping that parity would be able to rebuild the missing data and spread it across the remaining space on the other 3 drives.
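The size constraint behind this is worth spelling out: as I understand it, Unraid requires parity to be at least as large as every data drive. A tiny sketch of that rule (sizes in TB; the function name is mine):

```shell
# Can a data drive of size $1 TB sit behind a parity drive of size $2 TB?
# Unraid's rule: no data drive may be larger than the parity drive.
fits_behind_parity() {
  if [ "$1" -le "$2" ]; then echo yes; else echo no; fi
}

fits_behind_parity 14 10   # a 14 TB data drive can't go behind 10 TB parity
fits_behind_parity 10 10   # equal sizes are fine
```

Which is why the 12tb/14tb spares can't simply be dropped in as data drives until parity is upgraded first (or via the parity-swap procedure discussed above).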
  14. I've tried doing a bit of research on this already, and I think I understand what I need to do, but I'm also a bit paranoid about doing something wrong and losing all my data. I currently have a system with a single parity drive (10tb), 4 data drives (2x10tb, 6tb, and 3tb), and a cache drive. Just this morning I woke up to find my array was down, and it appears that my 3tb drive has failed. Although I have an extra 12tb and a 14tb drive that I've yet to install, I can't install those yet, as I was originally planning on tossing the 14tb in as a new parity drive, and I obviously can't do that right now. I have enough space on my system that, without the 3tb drive being replaced, I would still have room to spare.

I've physically removed the drive and disconnected it from the system, and upon booting up, the Unraid system still believes it's there, of course. I can force-start the array, and it just tells me the disk is emulated, but again it notifies me that the disk is missing. It appears that I need to run the New Config tool and tell it to preserve my parity and cache drives, while not preserving my data drives (I hope this is correct?). I haven't come across any videos or images of what the next step looks like... at least not on this version of Unraid (v6.8.3). From what I can tell, the next step will let me pick which drives I wish to preserve (or do I need to tell the system to preserve all parity, data, and cache drives?). I can then choose my 2x10tb and the 6tb and just forget about the 3tb. From there it'll rebuild parity without the 3tb, and it'll be back to business as usual.

If there's any other important information that might be needed, let me know, but I'm posting this mostly as a sanity check before I dive in and potentially destroy my array/data. Thanks!
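One caveat worth checking before running New Config: the emulated disk's contents aren't moved anywhere automatically, so the usual advice is to copy them off to the remaining drives first, then shrink the array. The capacity math from the post above, as plain arithmetic (sizes in TB, not Unraid commands):

```shell
# ~8 TB was free across the array before the failure; the failed drive
# holds at most 3 TB of emulated data, so it fits with room to spare.
free_before=8
emulated=3
if [ "$emulated" -le "$free_before" ]; then
  echo "emulated data fits on the remaining drives"
fi
```

Only after that copy completes would dropping the 3tb via New Config and letting parity rebuild be safe with respect to that drive's data.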
  15. I know you didn't speak directly to my post, but I looked at that and saw similar errors in my logs... it turns out it was the VPN provider (PIA). Not sure what's going on, but the CA locations don't appear to support port forwarding anymore, and their France location must be having issues. I switched to de-berlin, and everything is back online now.