FunkadelicRelic

Members
  • Posts: 13
  • Joined
  • Last visited


FunkadelicRelic's Achievements

Noob (1/14)

Reputation: 0

  1. Agreed, though in this case they aren't leaking, just years of crud buildup. As mentioned, I don't think they were the cause of the issue, but they certainly do look nice and clean now
  2. So I seem to have resolved this, although I'm not entirely sure what was causing the issue as I changed quite a few things in the process. To summarise, it must have been environmental. I switched the PSU and changed out the CPU fan and TIM. Still crashed. In the end, I had a spare 4U case of identical make, so I decided to lift and shift the components into the spare case, backplane included. When moving the motherboard, I noticed the board had quite a lot of thick black gunk (I thought it was scorch marks to start with!) around some of the capacitors. I gave it a good clean with 99% isopropyl alcohol and it cleaned right up. Reassembled and everything seems to be OK. It's now been up for over 24 hours, so fingers crossed. If I were to guess, I'd imagine it was a fan cable or one of the case headers causing a short. Anyway - appreciate everyone's help trying to diagnose this. Regards, Tom.
  3. Unfortunately not. I'm working now on the assumption that something is shorting out. The rackmount chassis has been in place for years so I wouldn't imagine it's the motherboard - but I do have questions about the backplane and disk trays. I'm going to swap into a spare case with no BP or caddies and see what happens. Guess that's my weekend project sorted.
  4. Well - the server just lost power again with the new PSU. Must have stayed up for an hour or so this time. Anyone else have any other suggestions?
  5. OK - new PSU is in. Replaced with a spare 450W EVGA CX modular PSU. First power-up is fine; parity check in progress. I'll find out soon enough if that was the culprit. Keep you posted.
  6. Thanks everyone for the help so far. Going to try the PSU - fairly unanimous responses so far so we'll see how that goes.
  7. That's interesting to know - it certainly seems like a dirty shutdown then. Thermals are no probs - IPMI monitoring of temps and event logs indicates no issues (see the IPMI polling sketch after this list). It's also cold in the UK at the moment - I reckon I could run this thing passively with no problems. Joking aside, that was my first hunch, but I am certain that it's thermally sound. I'm going to try the PSU as you and others have mentioned. I'll keep you all posted!
  8. I can certainly give it a go - have a spare kicking around here so no harm in trying. I'll post back with my findings.
  9. Full power off. What is interesting is that when I power it back up the first time, the server doesn't boot from the USB key - I get the 'no OS' warning. Power off and then on again, and it boots fine. This is new since 6.6.3. I thought it could be a bad stick, which is why I rebuilt a new one. I've got the FCP app - I'll try and get it into troubleshooting mode - I haven't checked that option yet!
  10. It's a fair suggestion, but unless a cat could get into my garage, open my locked rack, climb up about 30U and power it off, then I'm going to say no. Although I don't ever trust a cat. In seriousness though, it's deffo not someone powering it down. It is locked away and secure. The GUI shutdown is interesting, but I don't think it could be that: I'm behind a firewall which I monitor, and the server itself is in a constrained VLAN with limited connectivity, so I think I'm OK from that standpoint - strong password too. The parity check seems to kick in on reboot. Most times it completes with zero issues, but as I mentioned in my last reply, this latest stop happened after about 30 minutes, so it will never have finished. Oh - forgot to mention - four 2TB HDDs, two of which are parity, and one 500GB SSD cache.
  11. Quite possibly; however, during the Memtest, which I ran for over a full day, there were no outages. Since typing the message above, it went again! It was up for about 30 minutes this time. The PSU is a 500W EVGA BQ modular, if I recall correctly. Graphics - none; all VM functionality is disabled. I'm very much using this as a traditional NAS server. Nothing fancy.
  12. Hi all. I was wondering if someone might be able to help me troubleshoot an issue I have been seeing since upgrading from 6.5.3. Every now and then (usually after a day or two of uptime), my server shuts itself down. It was running just fine for weeks, and when 6.6.x released I upgraded using the GUI. Since then, the shutdowns have been occurring all the time. I've been upgrading as new releases have come out, but the issue persists. I've most recently deployed a new USB key straight onto version 6.6.3, but still the same thing. What's odd is that it seems to be doing a shutdown or a hard crash, as the server will be in a powered-off state, not sitting at a console or panic screen. I've done everything I can think of to rule out hardware: temp monitoring through IPMI and BIOS, Memtest, reset BIOS settings, reseated power/reset headers on the motherboard, checked the event log in IPMI, etc. Nothing appears to be wrong. I should note that this hardware ran NAS4Free for about four years with no issues prior to migrating to unRAID a month or two ago. Are there any diagnostics I can perform on the OS? The diagnostics in the web GUI seem to reset after power loss, so they don't show anything meaningful (a sketch for persisting logs across reboots is included after this list). Here are the stats for my server. I'm running very minimal Docker instances only (Plex, Sonarr etc.) and CPU/mem utilisation is very low.
      Model: Custom
      M/B: Supermicro X9SCL/X9SCM
      CPU: Intel® Xeon® CPU E3-1230 V2 @ 3.30GHz
      HVM: Enabled
      IOMMU: Enabled
      Cache: 256 kB, 1024 kB, 8192 kB
      Memory: 32 GB single-bit ECC (max. installable capacity 32 GB)
      Network: eth0: 1000 Mb/s, full duplex, MTU 1500; eth3: 1000 Mb/s, full duplex, MTU 1500
      Kernel: Linux 4.18.15-unRAID x86_64
      OpenSSL: 1.1.0i
      Uptime: 0 days, 00:06:45
  13. Hi all. I'm in the process of testing out Unraid after having switched over from NAS4Free. My primary use case is Docker containers. My Unraid server has two network adapters - one in the LAN (eth0) and one in a separate DMZ network (eth1). My network is firewalled and routed using OPNsense, and the connectivity between the network adapters is fine. I have both network adapters configured in Unraid with static IPs and gateways defined, with bridging and bonding disabled. I want to be able to set containers to use either of these networks. However, this is where I'm having problems. I can't seem to get the networks to appear in the Docker config. I see them both listed, but the DMZ network does not display a gateway. When I try to create a container, I only see eth0 (LAN). If I turn on bridging for both networks, I see br0 and br1 listed in the Docker config, but when I create a container, only br0 shows in the list, despite both br interfaces being ticked in the Docker settings. Ultimately, I would like to be able to run containers on each of the host interfaces (see the macvlan sketch after this list). If this means I can share the host addresses with containers, great, but if not I don't mind using individual addresses. The thought behind using shared host addresses is that I use HAProxy to reverse-proxy into all my services, so access would be a bit easier to configure by only having to remember a couple of IP addresses as opposed to one for each individual container. Am I doing something wrong here, or do others see similar behaviour? Any help would be greatly appreciated.
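
On the IPMI temperature monitoring mentioned in post 7: below is a minimal Python sketch of how that polling could be scripted so a temperature trail survives an unexpected power-off. It is an illustration, not anything from the original posts - it assumes ipmitool is installed on the host and that the unRAID flash key is mounted at /boot; the log path and interval are placeholders.

```python
# Sketch: poll IPMI temperature sensors and append them to a log on the
# flash key so a temperature trail survives an unexpected power-off.
# Assumes ipmitool is installed; path and interval are placeholders.
import subprocess
import time
from datetime import datetime

LOG_PATH = "/boot/ipmi-temps.log"  # /boot = unRAID flash mount (persistent)
INTERVAL_SECONDS = 60

def read_temperatures() -> str:
    # 'ipmitool sdr type temperature' lists the BMC's temperature sensors
    result = subprocess.run(
        ["ipmitool", "sdr", "type", "temperature"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    try:
        readings = read_temperatures()
    except subprocess.CalledProcessError as exc:
        readings = f"ipmitool failed: {exc}\n"
    with open(LOG_PATH, "a") as log:
        log.write(f"--- {stamp} ---\n{readings}")
    time.sleep(INTERVAL_SECONDS)
```

Writing to /boot matters here because unRAID keeps /var/log in RAM, so anything logged there disappears with the power.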
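Post 12 asks whether any on-OS diagnostics survive the power loss. Since unRAID holds /var/log in RAM, one workaround is to snapshot the syslog to persistent storage on a timer. The sketch below is a hedged illustration under that assumption - the /boot/logs destination, interval, and retention count are all placeholders, not an official unRAID mechanism.

```python
# Sketch: snapshot /var/log/syslog to the flash key on a timer so the
# lines leading up to a crash survive the reboot. Destination, interval
# and retention count are placeholders.
import shutil
import time
from datetime import datetime
from pathlib import Path

SYSLOG = Path("/var/log/syslog")
DEST_DIR = Path("/boot/logs")  # persistent storage on the flash key
INTERVAL_SECONDS = 120
KEEP = 10  # retain only the newest N snapshots

def snapshot() -> None:
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    shutil.copy2(SYSLOG, DEST_DIR / f"syslog-{stamp}.txt")
    # prune old snapshots so the flash key doesn't fill up
    for old in sorted(DEST_DIR.glob("syslog-*.txt"))[:-KEEP]:
        old.unlink()

while True:
    snapshot()
    time.sleep(INTERVAL_SECONDS)
```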
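For the two-NIC container setup described in post 13, one common approach outside the Unraid GUI is a macvlan Docker network per physical interface. The sketch below uses the docker Python SDK; the interface names, subnets, and gateways are placeholders loosely matching the post's description (eth0 = LAN, eth1 = DMZ) and must be adapted.

```python
# Sketch: one macvlan Docker network per host NIC, via the docker SDK
# (pip install docker). All names, subnets and gateways are placeholders.
import docker
from docker.types import IPAMConfig, IPAMPool

client = docker.from_env()

NETWORKS = {
    # name: (parent NIC, subnet, gateway) - adjust to your topology
    "lan": ("eth0", "192.168.1.0/24", "192.168.1.1"),
    "dmz": ("eth1", "192.168.2.0/24", "192.168.2.1"),
}

for name, (parent, subnet, gateway) in NETWORKS.items():
    ipam = IPAMConfig(pool_configs=[IPAMPool(subnet=subnet, gateway=gateway)])
    client.networks.create(
        name,
        driver="macvlan",
        options={"parent": parent},  # bind the network to that physical NIC
        ipam=ipam,
    )

# A container can then be attached to either segment at creation time:
client.containers.run("nginx", name="dmz-test", detach=True, network="dmz")
```

One known macvlan caveat relevant to the HAProxy plan: by default, containers on a macvlan network cannot reach the host's own IP on that same interface, so sharing the host address with containers this way will not work; each container gets its own address on the segment.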