rodan5150

Members
  • Content Count: 23
  • Joined
  • Last visited

Community Reputation
  1 Neutral

About rodan5150
  • Rank: Newbie


  1. I'm worried about this, but I'm trying to stay hopeful the second NIC will be the band-aid for me for now. Uptime is almost 14 days, and no issues to speak of thus far, knock on wood...
  2. Going the VLAN route did not work for me. I ended up with a call trace and subsequent kernel panic after a number of days. So far, creating a second interface (br2 in my case) on the unused onboard NIC seems to be the "fix" for me at this point (a rough CLI equivalent of that setup is sketched after this list). I don't want to muddy the waters any more than they have to be, but since it could be something external to Unraid on the network (multicast, who knows...) that Unraid is choking on and causing the issue, perhaps it would also help to mention a brief summary of the network gear used? Maybe a common denominator will surface that can be...
  3. In my experience, an MTU of 9000 (9014 in my case, matching the Intel driver setting on my Windows box; same card in my Unraid server, so I set it identically) is necessary for 10GbE to reach full bandwidth, though IIRC I still saw much faster than 1GbE speeds with a 1500 MTU. A quick way to verify jumbo frames end to end is sketched after this list. edit: For reference, I have the same 10GbE switch as you, and the Unraid server is on a DAC cable to the switch. The Windows box is over copper CAT6A in the walls, a run of 20 m or more, and I get at or near the theoretical max for 10GbE in both directions. NICs are Intel X540-T2s in both machines.
  4. I was close to doing this, but I figured I'd tough it out and give 6.9.x a shot. So far, the br2 network for the containers I want to have their own IPs has been working well. No call traces yet, and certainly no kernel panics. Of course, it has barely been a week since I made that change. If I can still say this a month out, then I will consider it good to go.
  5. Yeah, I reverted the C-states change back to default. The only thing I have set now is the power supply idle control. So far, what has me "fixed" is that I've moved all of my Docker containers that needed a custom network (static IP) over to a separate NIC (br2). I also disabled VLANs in Unraid, since I wasn't using them anymore. No kernel panics or anything, yet anyway. It's been over a week now. Fingers are crossed!
  6. Thanks for the reply, JorgeB. I'm going to give the second-NIC assignment a shot. I had been trying to do all of this through a single 10GbE connection. I've got several 1GbE ports open on my main switch, so it's not a huge deal to just assign the containers to a second NIC. With any luck, this will solve it.
  7. Bad news: the kernel panic is back. I thought I had figured it out by moving everything from br0 to br0.x, but it looks like there has to be another issue going on, causing the call traces that ultimately end in a kernel panic. What else could I be missing? (A quick way to pull the relevant trace lines out of a syslog like this is sketched after this list.) Darkhelmet syslog 5-21.txt
  8. I looked it over; I'm no expert, but you do have quite a few errors and warnings. I'm not sure which are critical or would cause hangups/crashes. Anyway, I took your CSV file, sorted it in descending order by date and time stamp, then exported it as a tab-delimited txt file (that step is sketched in Python after this list). Maybe this will help others interpret it better. All_2021-4-28-10_7_11_tab delimited.zip
  9. The PassMark one is the one I've used to test ECC memory. Be sure to boot the Memtest stick in UEFI mode, not the traditional blue-screen Memtest86 that boots via legacy/BIOS; the stick will have both. The UEFI one is the one that did the trick for me when testing ECC; I think the legacy one does not.
  10. That's exactly where it was. I enabled the C-states option, and then set the idle current to typical instead of low. I've also created a Docker-specific VLAN and moved everything off br0 over to br0.x (the interface setup is sketched after this list), so hopefully that will keep my call traces and kernel panics at bay. I will update if anything changes. So far so good, but it has only been about 18 hours. The longest it has gone in the past was ten-ish days, so if I can hit two-plus weeks I'll consider it a win.
  11. Awesome, thanks for letting me know. I will revert the global C-state setting, and dig around and see if I can find the idle current setting.
  12. Hello all, I've been fighting a few issues since I "upgraded" to a Ryzen-based system from an old dual-Xeon Dell T410. The new build is a Ryzen 3600 on a B450 chipset. It was getting unresponsive after a few days, which seems to have gone away after disabling global C-states in the BIOS. The latest issue is a kernel panic after a week or so of uptime. I have syslog enabled and was able to capture it just before the panic. I also got a pic of the screen before I rebooted that tells the rest of the story after the syslog dropped. Looks network-related, maybe macvlan?
  13. I had a very similar situation this morning. I pulled the trigger today on replacing a "bad" flash disk, a 32GB USB 2.0 SanDisk Cruzer Fit. I had replaced an old/cheap drive back in June of last year that was actually bad, so I had to go through support to get reinstated. Not as hassle-free, but they were pretty quick to respond and get me fixed up! Anyway, I was getting read errors, and I thought for sure the flash drive was dead: when I plugged it into my Windows machine to at least yank the config file, it hosed things up proper. Windows Explorer crashed, I could...
  14. I just installed the plugin for the first time. It will not let me sign in, I just get the perpetual spectra wave of death, regardless of browser or pop-up blocker settings. If it is down for others, that may explain it. I still have access to my server, so no big deal.
  15. 10-4. I'll give that a shot and report back. Disk6 is coincidentally right next to the slot where I replaced the parity drive, so there is a decent chance I disturbed the cable. It's almost done with a SMART extended test (the smartctl commands are sketched after this list). If it passes that with no issues, then I'll have higher confidence in it. Thanks JorgeB.
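
For reference on the second-interface setup in posts 2, 4, 5, and 6: Unraid builds the custom Docker network itself when you pick br2 in a container's template, but a rough plain-Docker equivalent looks like the sketch below. The subnet, gateway, network name, image, and static IP are all illustrative assumptions, not values from the posts.

    import subprocess

    # Rough plain-Docker equivalent of Unraid's "custom: br2" network.
    # Subnet, gateway, names, and IPs are illustrative assumptions.
    subprocess.run([
        "docker", "network", "create", "-d", "macvlan",
        "--subnet=192.168.1.0/24", "--gateway=192.168.1.1",
        "-o", "parent=br2", "br2net",
    ], check=True)

    # Attach a container with its own static IP on that interface.
    subprocess.run([
        "docker", "run", "-d", "--name=pihole",
        "--network=br2net", "--ip=192.168.1.50", "pihole/pihole",
    ], check=True)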
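
A quick end-to-end jumbo-frame check for the MTU discussion in post 3, as a minimal sketch: the interface name and peer address are placeholders, and 8972 bytes of ICMP payload plus 28 bytes of headers probes a 9000-byte path with fragmentation disallowed. The switch and the far NIC have to be set for jumbo frames as well.

    import subprocess

    IFACE = "eth0"     # placeholder interface name
    PEER = "10.0.0.2"  # placeholder 10GbE peer address

    # Set a 9000-byte MTU locally (needs root).
    subprocess.run(["ip", "link", "set", IFACE, "mtu", "9000"], check=True)

    # 8972-byte payload + 8-byte ICMP header + 20-byte IP header = 9000
    # on the wire; "-M do" forbids fragmentation, so any hop with a
    # smaller MTU makes the ping fail instead of silently fragmenting.
    result = subprocess.run(
        ["ping", "-M", "do", "-s", "8972", "-c", "3", PEER],
        capture_output=True, text=True,
    )
    print(result.stdout)
    print("jumbo frames OK" if result.returncode == 0 else "path MTU < 9000")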
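
For the call traces in posts 7 and 12, a small sketch for pulling the suspicious lines out of a saved syslog. The filename is a placeholder, and the keyword list is only an assumption about what tends to appear around macvlan traces.

    # Print likely-relevant lines from a saved syslog, with line numbers.
    # Filename is a placeholder; keywords are assumptions, not definitive.
    KEYWORDS = ("call trace", "macvlan", "kernel panic", "rcu", "workqueue")

    with open("syslog.txt", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if any(k in line.lower() for k in KEYWORDS):
                print(f"{lineno:6}: {line.rstrip()}")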
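
The sort-and-export step from post 8, sketched in Python. The input filename, the timestamp column name, and its format are assumptions to adjust against the actual export.

    import csv
    from datetime import datetime

    TS_COL = "Date/Time"          # assumed column name
    TS_FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format

    with open("errors_export.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    fieldnames = list(rows[0].keys()) if rows else []

    # Descending by date and time stamp, as described in post 8.
    rows.sort(key=lambda r: datetime.strptime(r[TS_COL], TS_FMT), reverse=True)

    with open("errors_sorted.txt", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, delimiter="\t")
        writer.writeheader()
        writer.writerows(rows)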
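
Post 10's br0.x move is done through Unraid's network settings (enabling VLANs and adding a VLAN number), which creates the tagged sub-interface; the raw iproute2 equivalent, with an assumed VLAN ID of 3, is roughly the following. Note that post 2 later walks this approach back.

    import subprocess

    # Raw equivalent of Unraid creating a tagged sub-interface on br0.
    # VLAN ID 3 is an assumption; the switch port must carry that tag.
    subprocess.run(["ip", "link", "add", "link", "br0", "name", "br0.3",
                    "type", "vlan", "id", "3"], check=True)
    subprocess.run(["ip", "link", "set", "br0.3", "up"], check=True)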
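
And for the extended SMART test in post 15, the usual smartctl invocations, with the device path as a placeholder:

    import subprocess

    DEV = "/dev/sdX"  # placeholder; substitute the actual disk device

    # Kick off the extended (long) self-test; it runs inside the drive
    # and the command returns immediately.
    subprocess.run(["smartctl", "-t", "long", DEV], check=True)

    # Later, "-a" prints all SMART data, including the self-test log.
    report = subprocess.run(["smartctl", "-a", DEV],
                            capture_output=True, text=True)
    print(report.stdout)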