NNate's Posts
  1. Looks exactly like the problem I had. I followed this guide to deal with my Dockers running with specified IPs, and I haven't had a crash since putting them in their own VLAN.
  2. https://www.spinics.net/lists/kernel/msg3457429.html This seems to suggest keeping the intel_pstate status as passive, but using the schedutil governor. That's what I'm going to go with for the time being.
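The passive-plus-schedutil setup above can be sketched as a small shell helper. This is a hedged sketch, not anything from the linked thread: `set_governor` and the `CPU_ROOT` override are my own naming, the sysfs paths are the standard cpufreq ones, and the writes need root on a real system.

```shell
# Sketch: keep intel_pstate in passive mode and put every core on the
# requested governor (e.g. schedutil). CPU_ROOT is overridable so the
# loop can be dry-run against a scratch directory; on a real system
# leave it at the default and run as root.
set_governor() {
    gov_name="$1"
    root="${CPU_ROOT:-/sys/devices/system/cpu}"
    # passive mode hands frequency selection to the generic cpufreq governors
    if [ -w "$root/intel_pstate/status" ]; then
        echo passive > "$root/intel_pstate/status"
    fi
    for f in "$root"/cpu*/cpufreq/scaling_governor; do
        if [ -w "$f" ]; then
            echo "$gov_name" > "$f"
        fi
    done
}

# usage (as root): set_governor schedutil
```

The governor write only applies while intel_pstate is passive; in active mode the driver exposes its own built-in "powersave"/"performance" policies instead of the generic governors.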
  3. I'm torn, hard to know what's the best approach: https://wiki.archlinux.org/index.php/CPU_frequency_scaling#Scaling_governors
  4. Yes, that would solve it as well. From what I've read, I don't think "On Demand" is as efficient with Intel's P-states as "Power Save".
  5. @jowe I haven't looked to see if your CPU is impacted, but this really helped me:
  6. See the post above yours. It gets loaded directly into ram. Speaking as someone who doesn't have a nvidia card, I personally don't want my ram used up for something I don't have.
  7. Oh yeah, wow, "echo active > /sys/devices/system/cpu/intel_pstate/status" really gave my system a kick in the pants. HUGE difference. This will hurt a lot of people's performance pre-Skylake; being stuck at the lowest supported P-state is painful. How can I make sure this change sticks post-reboot? Anything that can be done for others beyond manually making those changes?
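One way to make the change stick post-reboot is to re-apply the write from Unraid's startup script. A hedged sketch follows: /boot/config/go is Unraid's stock boot script, but `persist_pstate_active` is my own helper name, and the grep guard is just there so repeated runs don't append the line twice.

```shell
# Sketch: append the pstate switch to Unraid's /boot/config/go so it
# re-runs on every boot. The file path is an argument so the helper can
# be tried against a scratch file first.
persist_pstate_active() {
    go_file="${1:-/boot/config/go}"
    line='echo active > /sys/devices/system/cpu/intel_pstate/status'
    # only append if an intel_pstate line is not already present
    if ! grep -qF 'intel_pstate/status' "$go_file" 2>/dev/null; then
        printf '%s\n' "$line" >> "$go_file"
    fi
}

# usage on Unraid: persist_pstate_active
```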
  8. This looks very suspect: https://bugzilla.kernel.org/show_bug.cgi?id=209085 When I run `cat /proc/cpuinfo | grep "MHz"`, it basically only shows 800MHz. `cat /sys/devices/system/cpu/intel_pstate/status` = passive; `cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor` = powersave. These findings line up with the linked bug reports/reddit page. Sounds like a very significant issue if you're running an Intel CPU older than Skylake (i.e., pre-6th-gen).
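The three checks above can be bundled into one quick report. A hedged sketch: `pstate_report` is my own name, the paths are the standard sysfs/procfs ones, and any file that's missing on a given box just prints as "n/a" or an empty field.

```shell
# Sketch: dump the three data points from the post in one go - the
# intel_pstate mode, the governors in use, and the distinct core clocks
# reported by /proc/cpuinfo.
pstate_report() {
    echo "intel_pstate status: $(cat /sys/devices/system/cpu/intel_pstate/status 2>/dev/null || echo n/a)"
    echo "governors in use:    $(cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null | sort -u | tr '\n' ' ')"
    echo "core clocks (MHz):   $(grep -i 'MHz' /proc/cpuinfo 2>/dev/null | awk '{print $4}' | sort -un | tr '\n' ' ')"
}
```

On a machine hit by this bug, the last line would show only values at or near the minimum P-state (e.g. 800) even under load.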
  9. Initial findings are that disabling those mitigations won't make much difference.
  10. I do have the "Disable Security Mitigations" plugin installed. I don't mind turning off the mitigations and running again. Currently rebooting for those to take effect and then will test again once my system has time to stabilize after startup.
  11. Yeah, 1 core (it hopped around to different cores along the way) was pegged at 100% during the entire check. I know it's not the fastest CPU out there, but I'd think an i5-4690k would have the muscle to power through. So that was certainly surprising, but I guess I never really paid attention to the CPU usage during past parity checks.
  12. OK, it finally finished after 1 day, 4 hrs, 41 min at 77.5MB/s, vs. previously a consistent 18 hrs, 50 min at 118MB/s. That's nearly 10 hrs (about 50%) slower - that's crazy. I have no idea what's gone wrong.
  13. I'll keep waiting, but it's now been the 18 hrs 50 min it has historically taken, and I'm only at 62% complete. Things have sped up slightly (82-ish MB/s), but overall still a long way from previous runs. I can update when it completes, but it seems the original estimate will be pretty close.
  14. Update: The slow parity check is a symptom of a 5.8 kernel bug where pre-Skylake CPUs get stuck at the minimum P-state frequency (often 800MHz). Scroll down in this thread for links to the associated kernel bug and discussion, or this link will jump you to my updated post in this thread.

      I've been noticing that 6.9.0 has been slower than 6.8.3 for disk access when using dockers (also the high amount of disk reads on my cache even after making them 1MB aligned, but that's another issue). I did my first parity check and it's markedly slower than historical runs. Normally a parity check completes in under 19 hours at 118MB/s. The current run has been going for 9.5 hours and is only 30% done at 74MB/s, with another 21 hours estimated to remain. If that estimate holds (and I know things slow down toward the end), that'll be over 30 hours vs 19 hours - 10 more hours than history shows.

      I haven't made any hardware changes since 6.8.3. I've attached my diagnostics as well.
  15. I installed Beta 30 last night (coming from the stable release). My log file is full of avahi errors (one every minute), which makes it very difficult to find meaningful information when they bury everything else:

      Oct 9 08:52:01 Server avahi-daemon[11275]: Record [Ricoh\032Color\032Laser\032Printer\032\064\032Server._ipp._tcp.local#011IN#011TXT "txtvers=1" "qtotal=1" "rp=printers/RicohSPC250DN" "ty=RICOH SP C250DN PS" "adminurl=https://Server.local:631/printers/RicohSPC250DN" "note=Office" "priori
      Oct 9 08:52:01 Server avahi-daemon[11275]: Record [AirPrint\032RicohSPC250DN\032\064\032Server._ipp._tcp.local#011IN#011TXT "txtvers=1" "qtotal=1" "Transparent=T" "URF=none" "rp=printers/RicohSPC250DN" "note=Ricoh Color Laser Printer" "product=(GPL Ghostscript)" "printer-state=3" "printer-type=0
      Oct 9 08:52:01 Server avahi-daemon[11275]: Record [_printer._tcp.local#011IN#011PTR Ricoh\032Color\032Laser\032Printer\032\064\032Server._printer._tcp.local ; ttl=4500] not fitting in legacy unicast packet, dropping.
      Oct 9 08:52:01 Server avahi-daemon[11275]: Record [Ricoh\032Color\032Laser\032Printer\032\064\032Server._printer._tcp.local#011IN#011TXT ; ttl=4500] not fitting in legacy unicast packet, dropping.
      Oct 9 08:52:01 Server avahi-daemon[11275]: Record [Ricoh\032Color\032Laser\032Printer\032\064\032Server._printer._tcp.local#011IN#011SRV 0 0 0 Server.local ; ttl=120] not fitting in legacy unicast packet, dropping.

      I'm running a CUPS Docker that has AirPrint, but I never saw these messages until the 6.9 Beta. I'd love to hide them or get rid of them so I can better find meaningful info in the logs.
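One way to quiet these at the log rather than at avahi is an rsyslog property-based filter. This is a hedged sketch, assuming the syslog daemon is rsyslog and picks up drop-in config files; the file name is my own, and the match text must be the part common to every repeated message.

```
# /etc/rsyslog.d/01-avahi-mute.conf  (hypothetical drop-in name)
# drop any message containing the repeated avahi "legacy unicast" text
:msg, contains, "not fitting in legacy unicast packet" stop
```

This suppresses the log lines only; avahi's mDNS behavior (and the CUPS/AirPrint announcements) is unchanged, and rsyslog has to be restarted for the filter to load.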