PanteraGSTK

Members
  • Content Count: 99
  • Joined

  • Last visited

Community Reputation

0 Neutral

About PanteraGSTK

  • Rank
    Advanced Member

  1. Same here. Been using Sophos UTM for about 5 years or so. Fantastic product, and pretty amazing what you get for the free license. It does have a learning curve, though. I've had it running on an i3 with 8GB of RAM and it hardly uses any resources with my config. I also run a Pi-hole alongside it and the combo is fantastic.
  2. Marking this solved...for the third time. I'm pretty confident this time that CPU0 is to blame. The memory all tests fine, but the memory controller issue I got when I first got the CPU/mobo/RAM combo makes it pretty obvious that's the case. It's possible that socket 0 is the culprit, but I'll find that out when I move CPU1 to socket 0. I'll also move the confirmed-working RAM. If it all checks out I may grab another Xeon 2670 off eBay and hope for the best. I was able to get through a parity check plus checks of multiple drives using the File Integrity plugin. That plugin was an easy test because, when I tried to use it in the past, a reboot was pretty much always triggered on my 4TB drives. Another test that passed was downloading large files via Usenet while also remuxing a file using MakeMKV. That's quite a lot of operations at once, especially considering my kids were watching movies/TV via Plex at the same time. No reboot. If that didn't make it reboot, nothing will. Hopefully. Thanks for the help.
  3. While removing two sticks of RAM seems to have helped, having that RAM in netted the same inconsistent behavior: sometimes it would be fine with no reboots, sometimes the reboots would happen quickly. Removing CPU0 and its associated RAM allowed an additional parity check to complete. No issues. I'll continue to test, but the fact that I got memory controller errors with CPU0 when I first installed it, and that they stopped when I moved the RAM around the board, points to CPU0 being the culprit. My PSU is a Corsair RMX850 and it's less than 5 years old. I tend to lean toward the PSU in these situations too, but with all the other factors I'm confident that CPU0 is most likely the issue.
  4. Removing the memory may have helped, but it didn't fix it. I got through an entire parity check (with 2 parity drives, at about 50 MB/s). However, when I checked this morning it had rebooted again. Now to remove CPU0 and its memory.
  5. It does sound like a normal power cycle. That's the strange part. I've not been able to capture anything that points me in any specific direction. When I first got the board and CPUs, I had an issue where some memory wasn't recognized. I put those DIMMs in alternate slots and forgot about it for a few years. I pulled those DIMMs and am testing again. If it reboots again, I'll pull the associated CPU and the remaining DIMMs and just leave CPU1 and its memory.
  6. Yeah, I have. Many times. It's in a closet, but I'm 6 feet away from the actual server. I'll pull some memory, but it's a dual proc board so I'll have to be careful.
  7. Man, I really thought the new SAS card was going to fix this. I recently removed my SAS2LP card in favor of an LSI HBA, but during a parity check the system rebooted at 85%. I'm not finding anything in the logs that tells me where to look. Any ideas on where to start?
  8. The 3ware card has 4 ports that control 4 drives each on a single card. The 2 LSI cards each have 2 ports that control 4 drives each. I've got it down: I've got a breakout cable and a SATA cage with a fan. I'll pull one drive at a time until 4 are done, then I can swap a cable from the 3ware card to the LSI, then repeat until complete.
  9. I think we're talking about the same thing now. I've got two LSI cards for a total of 16 drives. These completely replace the 16-drive 3ware card. I was thinking the same: put the disks in an external cage and rebuild one by one. Once that's complete, remove the 3ware controller since it won't have any drives connected any longer. Thanks for the confirmation.
  10. Right now I can pull individual drives and move them to the new LSI cards one by one. It will be a pain, but it's possible. Then I can rebuild them one by one, since each removed drive becomes its own replacement drive. Your method would be me expanding the array with new disks, moving the data over, then shrinking the array. Am I correct?
  11. Hello, I just wanted to validate the method I have in mind for migrating from my current 3ware card to the new LSI HBA (Fujitsu-branded) cards I recently flashed to IT mode (thanks @johnny for the guide and batch files). I pulled out my SAS2LP and replaced it with one of the LSI cards and (as expected) had no issues at all; the array started without issue. However, when I pulled my 3ware card and plugged the drives into the LSI cards, I got missing disks (as expected, since 3ware doesn't fully pass through the drive info or size). I then went to do a new config and kept the drives in their slots (because none of the slots changed from the SAS2LP to the LSI card). What I wasn't expecting was that the 16 drives that came from the 3ware card now needed to be formatted in order to be used. I noped out of there real quick, put the drives back on the 3ware card, and reverted to my previously saved config. Booted up and now everything is back to normal. What I'm not sure how to do is the migration without it causing issues. My thoughts are: 1. Keep the LSI cards in place and get a breakout cable. 2. Plug that cable into a storage cage and pull one drive at a time and rebuild. 3. Once that's complete for all 16 drives, pull the 3ware card and put my current drives back into the SAS backplane. This is really the only logical way I can think to do this. When I take a drive from the 3ware card and try to read it on another computer, the file system isn't accessible for some reason: it sees the XFS partition, but my Linux reader software can't read it. I don't have another Linux PC to test on. Is there a better/smarter way to do what I'm proposing? 16 drives (2-4TB) is going to take quite a while. What do you guys think?
  12. It would seem everything is OK. 24 hours of uptime, stress testing done, and no issues so far. Thanks again for the help.
  13. I will let you know either way.
  14. Thanks for the tip. I did that and didn't see any errors, but I'm not all that familiar with xfs_repair. I let it repair as needed, and all the checks finished very quickly; it only took a few seconds per disk. Not sure if that's good or bad. (A rough sketch of running a read-only check pass is at the end of this list.)
  15. That did not solve the problem (I've got more space now, though). I got through a 17-hour parity check without any issues, but as soon as I start downloading with NZBGet the server restarts after about 10 minutes or so. Very odd. I've started using the syslog function, so I've attached that log file (syslog). The reboot happened around 6pm (18:00). (A small sketch for pulling the syslog lines from just before the reboot is at the end of this list.)
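
Regarding item 14: a minimal sketch, assuming the array disks are exposed as /dev/md1, /dev/md2, ... (as Unraid typically does) and that xfs_repair is on the PATH, of how a check-only pass could be scripted before letting xfs_repair modify anything. The device list is a placeholder, not taken from the posts above.

#!/usr/bin/env python3
# Sketch: run xfs_repair in no-modify mode against a few array disks and
# report what it finds, before deciding whether to run a real repair.
import subprocess

DISKS = ["/dev/md1", "/dev/md2", "/dev/md3"]   # hypothetical Unraid data disks

for dev in DISKS:
    # -n = check only, never write; the disk should not be mounted read-write.
    result = subprocess.run(["xfs_repair", "-n", dev],
                            capture_output=True, text=True)
    print(f"--- {dev}: exit code {result.returncode} ---")
    print(result.stdout)
    if result.returncode != 0:
        # A non-zero exit means problems were found (or the check could not run);
        # only then would a repair pass without -n be worth considering.
        print(result.stderr)

A clean, fast -n pass on every disk is consistent with what item 14 describes: nothing wrong to fix, so the checks finish in seconds.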
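Regarding item 15: a small sketch for narrowing down the attached syslog, assuming classic "Mon DD HH:MM:SS" timestamps and a local file named ./syslog. It prints the lines written in the 15 minutes before the roughly 18:00 reboot so the last things the server logged are easy to spot. The path, placeholder year, and window size are assumptions, not details from the post.

#!/usr/bin/env python3
# Sketch: show the syslog entries logged shortly before an unexpected reboot.
from datetime import datetime, timedelta

SYSLOG = "./syslog"                        # assumed location of the attached log
REBOOT_AT = datetime(2019, 1, 1, 18, 0)    # ~6pm; the date is a placeholder since
                                           # classic syslog lines carry no year
WINDOW = timedelta(minutes=15)             # show the 15 minutes leading up to it

with open(SYSLOG, errors="replace") as f:
    for line in f:
        try:
            # Classic syslog timestamps look like "Jan  1 17:52:03" (15 chars).
            ts = datetime.strptime(line[:15], "%b %d %H:%M:%S")
            ts = ts.replace(year=REBOOT_AT.year)
        except ValueError:
            continue                       # skip lines without that timestamp format
        if REBOOT_AT - WINDOW <= ts <= REBOOT_AT:
            print(line.rstrip())

If the output simply stops mid-activity with no kernel messages, that points to a hard power-cycle rather than a software crash, which matches what item 5 describes.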