kaiguy

Everything posted by kaiguy

  1. Ever since upgrading to 6.10-rc1, I've seen these errors recurring in the syslog:
     Aug 10 05:03:05 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:05 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:06 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:06 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:06 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:07 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:07 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:08 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:09 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Preceding many of these wsdd errors, I'm seeing log entries relating to IPv6 addresses (even though only IPv4 is enabled in my network settings). It also seems to occur around the times when the Docker network and Avahi log entries are chatty. (A quick IPv6 check is sketched after this list.) Diagnostics attached. titan-diagnostics-20210810-0628.zip
  2. Right. As I mentioned, I originally did have containers (unifi, adguard) with fixed IP addresses, but in the process of troubleshooting I changed the network settings. Through further troubleshooting I then realized that host access to custom networks causes the trace even without those fixed IP addresses (and it's easily and quickly repeatable), so that's what I've been focusing on. Here you go! Thanks.
  3. Sure. Attached. Since I started experiencing these issues, I removed unifi, homebridge, and adguard from service, but I didn't fully delete the containers in case I want to bring them back in the future (unifi and adguard were originally assigned static IPs, but I removed even that from these unused container configs). When you say special network setup, what do you mean? Aside from using a defined network for containers, I don't believe I've made any other changes. Attaching a network config screenshot as well.
  4. Unfortunately it looks like the macvlan trace happened for me immediately after I enabled host access to custom networks. I posted diagnostics in the appropriate bug report thread.
  5. Mirrored syslog to flash, enabled host access to custom networks. Within a few minutes I got a call trace, but not a hard hang (expected). I went ahead and captured a diagnostics, turned off that setting again, and rebooted (as experience shows that it will ultimately hang in hours or days once I experience this call trace). Hope this helps. Happy to try other procedures to aid the cause.
     Apr 8 12:35:55 titan kernel: ------------[ cut here ]------------
     Apr 8 12:35:55 titan kernel: WARNING: CPU: 1 PID: 20324 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
     Apr 8 12:35:55 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb x86_pkg_temp_thermal intel_powerclamp i915 wmi_bmof coretemp ipmi_ssif kvm_intel kvm iosf_mbi drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm crypto_simd cryptd intel_gtt glue_helper mpt3sas agpgart i2c_i801 syscopyarea sysfillrect rapl i2c_algo_bit i2c_smbus sysimgblt raid_class i2c_core intel_cstate acpi_ipmi
     Apr 8 12:35:55 titan kernel: fb_sys_fops nvme scsi_transport_sas wmi intel_uncore nvme_core video ahci ie31200_edac ipmi_si intel_pch_thermal backlight libahci thermal acpi_power_meter fan acpi_pad button [last unloaded: igb]
     Apr 8 12:35:55 titan kernel: CPU: 1 PID: 20324 Comm: kworker/1:0 Not tainted 5.10.28-Unraid #1
     Apr 8 12:35:55 titan kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U, BIOS L2.34 12/23/2020
     Apr 8 12:35:55 titan kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Apr 8 12:35:55 titan kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
     Apr 8 12:35:55 titan kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
     Apr 8 12:35:55 titan kernel: RSP: 0018:ffffc90000178d38 EFLAGS: 00010202
     Apr 8 12:35:55 titan kernel: RAX: 0000000000000188 RBX: 0000000000004d65 RCX: 00000000124a1cea
     Apr 8 12:35:55 titan kernel: RDX: 0000000000000000 RSI: 0000000000000338 RDI: ffffffffa02b3ee0
     Apr 8 12:35:55 titan kernel: RBP: ffff88819acbca00 R08: 000000004bf757e0 R09: 0000000000000000
     Apr 8 12:35:55 titan kernel: R10: 0000000000000158 R11: ffff8882a39b5e00 R12: 000000000000bb38
     Apr 8 12:35:55 titan kernel: R13: ffffffff8210b440 R14: 0000000000004d65 R15: 0000000000000000
     Apr 8 12:35:55 titan kernel: FS: 0000000000000000(0000) GS:ffff88903f440000(0000) knlGS:0000000000000000
     Apr 8 12:35:55 titan kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Apr 8 12:35:55 titan kernel: CR2: 0000150c678fc718 CR3: 000000000400a006 CR4: 00000000003706e0
     Apr 8 12:35:55 titan kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     Apr 8 12:35:55 titan kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     Apr 8 12:35:55 titan kernel: Call Trace:
     Apr 8 12:35:55 titan kernel: <IRQ>
     Apr 8 12:35:55 titan kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
     Apr 8 12:35:55 titan kernel: nf_hook_slow+0x39/0x8e
     Apr 8 12:35:55 titan kernel: nf_hook.constprop.0+0xb1/0xd8
     Apr 8 12:35:55 titan kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
     Apr 8 12:35:55 titan kernel: ip_local_deliver+0x49/0x75
     Apr 8 12:35:55 titan kernel: ip_sabotage_in+0x43/0x4d [br_netfilter]
     Apr 8 12:35:55 titan kernel: nf_hook_slow+0x39/0x8e
     Apr 8 12:35:55 titan kernel: nf_hook.constprop.0+0xb1/0xd8
     Apr 8 12:35:55 titan kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
     Apr 8 12:35:55 titan kernel: ip_rcv+0x41/0x61
     Apr 8 12:35:55 titan kernel: __netif_receive_skb_one_core+0x74/0x95
     Apr 8 12:35:55 titan kernel: process_backlog+0xa3/0x13b
     Apr 8 12:35:55 titan kernel: net_rx_action+0xf4/0x29d
     Apr 8 12:35:55 titan kernel: __do_softirq+0xc4/0x1c2
     Apr 8 12:35:55 titan kernel: asm_call_irq_on_stack+0xf/0x20
     Apr 8 12:35:55 titan kernel: </IRQ>
     Apr 8 12:35:55 titan kernel: do_softirq_own_stack+0x2c/0x39
     Apr 8 12:35:55 titan kernel: do_softirq+0x3a/0x44
     Apr 8 12:35:55 titan kernel: netif_rx_ni+0x1c/0x22
     Apr 8 12:35:55 titan kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
     Apr 8 12:35:55 titan kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
     Apr 8 12:35:55 titan kernel: process_one_work+0x13c/0x1d5
     Apr 8 12:35:55 titan kernel: worker_thread+0x18b/0x22f
     Apr 8 12:35:55 titan kernel: ? process_scheduled_works+0x27/0x27
     Apr 8 12:35:55 titan kernel: kthread+0xe5/0xea
     Apr 8 12:35:55 titan kernel: ? __kthread_bind_mask+0x57/0x57
     Apr 8 12:35:55 titan kernel: ret_from_fork+0x1f/0x30
     Apr 8 12:35:55 titan kernel: ---[ end trace 57d37c5af5277fb5 ]---
     titan-diagnostics-20210408-1236.zip
  6. @limetech @bonienl Is mirroring to flash necessary if I already have a local syslog server back to an unraid SSD pool?
  7. With the drop of 6.9.2, I'm going to go ahead and toggle host access to custom networks later today and see if I immediately get a hang. I know my experience is slightly different than others in this thread, but it is the single setting that causes my kernel panics so I think it could be related. Will probably have an update within the next 24 hours.
  8. Well that was quick. Already got a call trace about 2 or 3 minutes after I made that setting. Relevant logging below.
     Mar 29 11:32:19 titan kernel: ------------[ cut here ]------------
     Mar 29 11:32:19 titan kernel: WARNING: CPU: 1 PID: 14815 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
     Mar 29 11:32:19 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables igb i915 x86_pkg_temp_thermal intel_powerclamp ipmi_ssif coretemp kvm_intel wmi_bmof iosf_mbi kvm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm mpt3sas crypto_simd intel_gtt cryptd raid_class agpgart syscopyarea glue_helper scsi_transport_sas i2c_i801 input_leds sysfillrect nvme video ahci rapl led_class i2c_algo_bit i2c_smbus sysimgblt nvme_core i2c_core wmi intel_cstate backlight intel_pch_thermal fb_sys_fops intel_uncore libahci acpi_ipmi ie31200_edac ipmi_si
     Mar 29 11:32:19 titan kernel: acpi_power_meter thermal button acpi_pad fan [last unloaded: igb]
     Mar 29 11:32:19 titan kernel: CPU: 1 PID: 14815 Comm: kworker/1:0 Not tainted 5.10.21-Unraid #1
     Mar 29 11:32:19 titan kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U, BIOS L2.34 12/23/2020
     Mar 29 11:32:19 titan kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Mar 29 11:32:19 titan kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6
     Mar 29 11:32:19 titan kernel: Code: e8 64 f9 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 d5 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 5d f3 ff ff e8 30 f6 ff ff e9 22 01
     Mar 29 11:32:19 titan kernel: RSP: 0018:ffffc90000178d38 EFLAGS: 00010202
     Mar 29 11:32:19 titan kernel: RAX: 0000000000000188 RBX: 0000000000007ca3 RCX: 000000000bdce55a
     Mar 29 11:32:19 titan kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff82009d78
     Mar 29 11:32:19 titan kernel: RBP: ffff8882c2482780 R08: 0000000016e41bac R09: ffff88810185de80
     Mar 29 11:32:19 titan kernel: R10: 0000000000000158 R11: ffff888252b3aa00 R12: 000000000000514e
     Mar 29 11:32:19 titan kernel: R13: ffffffff8210db40 R14: 0000000000007ca3 R15: 0000000000000000
     Mar 29 11:32:19 titan kernel: FS: 0000000000000000(0000) GS:ffff88903f440000(0000) knlGS:0000000000000000
     Mar 29 11:32:19 titan kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Mar 29 11:32:19 titan kernel: CR2: 000014e336c24718 CR3: 000000000400c003 CR4: 00000000003706e0
     Mar 29 11:32:19 titan kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     Mar 29 11:32:19 titan kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     Mar 29 11:32:19 titan kernel: Call Trace:
     Mar 29 11:32:19 titan kernel: <IRQ>
     Mar 29 11:32:19 titan kernel: nf_conntrack_confirm+0x2f/0x36
     Mar 29 11:32:19 titan kernel: nf_hook_slow+0x39/0x8e
     Mar 29 11:32:19 titan kernel: nf_hook.constprop.0+0xb1/0xd8
     Mar 29 11:32:19 titan kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
     Mar 29 11:32:19 titan kernel: ip_local_deliver+0x49/0x75
     Mar 29 11:32:19 titan kernel: ip_sabotage_in+0x43/0x4d
     Mar 29 11:32:19 titan kernel: nf_hook_slow+0x39/0x8e
     Mar 29 11:32:19 titan kernel: nf_hook.constprop.0+0xb1/0xd8
     Mar 29 11:32:19 titan kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
     Mar 29 11:32:19 titan kernel: ip_rcv+0x41/0x61
     Mar 29 11:32:19 titan kernel: __netif_receive_skb_one_core+0x74/0x95
     Mar 29 11:32:19 titan kernel: process_backlog+0xa3/0x13b
     Mar 29 11:32:19 titan kernel: net_rx_action+0xf4/0x29d
     Mar 29 11:32:19 titan kernel: __do_softirq+0xc4/0x1c2
     Mar 29 11:32:19 titan kernel: asm_call_irq_on_stack+0xf/0x20
     Mar 29 11:32:19 titan kernel: </IRQ>
     Mar 29 11:32:19 titan kernel: do_softirq_own_stack+0x2c/0x39
     Mar 29 11:32:19 titan kernel: do_softirq+0x3a/0x44
     Mar 29 11:32:19 titan kernel: netif_rx_ni+0x1c/0x22
     Mar 29 11:32:19 titan kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
     Mar 29 11:32:19 titan kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
     Mar 29 11:32:19 titan kernel: process_one_work+0x13c/0x1d5
     Mar 29 11:32:19 titan kernel: worker_thread+0x18b/0x22f
     Mar 29 11:32:19 titan kernel: ? process_scheduled_works+0x27/0x27
     Mar 29 11:32:19 titan kernel: kthread+0xe5/0xea
     Mar 29 11:32:19 titan kernel: ? __kthread_bind_mask+0x57/0x57
     Mar 29 11:32:19 titan kernel: ret_from_fork+0x1f/0x30
     Mar 29 11:32:19 titan kernel: ---[ end trace b73556de35a696bd ]---
     I went ahead and generated a diagnostics, then went to disable the host access setting. While I was toggling Docker off (in order to access that setting), network connectivity went out. I was able to access the console via IPMI to ultimately make that switch, and connectivity came back up. In the past, though, when I haven't actively intervened, by the time I realize I have no connectivity, the IPMI console is also hung. Going ahead and generating another diagnostics that may have captured what I just described above. It is attached. Edit: I should mention, IPv6 is disabled on Unraid and on my network, yet this call trace seems to explicitly call out IPv6... titan-diagnostics-20210329-1140.zip
  9. I too have been without call traces/kernel panics for a while (11 days uptime), but I will re-enable host access to custom networks and keep an eye out. In my case, I get a build-up of call traces that ultimately results in a full-blown hang, but I will capture a diagnostics before that happens (and I still have the logging server going). I anticipate a call trace within the next few hours, and possibly a hang in the early AM tomorrow if I don't intervene by then. FYI, I already transitioned all containers off br0, but it still happens for me, so I'm not even sure if my issue is the same as everyone else's in this thread at this point. Or maybe it is. But hopefully it will lead to some answers.
  10. Disabling host access to custom networks has helped eliminate my kernel panics. Is this something you’d like me to re-enable for the cause?
  11. This board has a reputation for erroneous motherboard temp readings. I'm pretty much always rocking an 80+ mobo temp via IPMI.
  12. Update: I seem to have narrowed this issue down to networking: some combination of using br0 and enabling "host access to custom networks." Even with containers not using br0, I get the kernel panic/hang when I have the host access option enabled under Docker settings. Something very strange going on. Disabled that setting and I've been good for 4 days. This report seems to have more action.
  13. Made the change back to host access and my server locked up sometime after 2:30am this morning. Looks like (at least for me) that's the primary culprit.
  14. Possibly. I still did get one when I removed them from br0 and turned off those containers entirely, but I don't recall if I rebooted between events. I'll maybe try re-enabling host access and see if it happens.
  15. Not sure whether the fix was disabling host access to custom networks or migrating the two containers that had static IPs on br0 over to a Raspberry Pi, but I'm no longer getting these syslog errors/locks. I would prefer to keep everything on the server, so my next project will be setting up a Docker VLAN on my pfSense and TP-Link smart switch. Not once did I see them before 6.9.x, so I am hopeful that whatever is going on in this kernel gets corrected.
  16. I have also been getting this pretty constantly since upgrading to 6.9.x, and it eventually results in a system lock for me. Been trying to troubleshoot over the last few days. Older threads suggested it had to do with Docker assignments on br0, so I removed all references to any containers using br0 (in fact, I moved 3 apps from Unraid Docker to a Raspberry Pi to try to fix this). Does not seem to help. @CorneliousJD When you enabled the VLAN for br0.10, did you keep the setting for allowing host access to custom networks? Any chance you can share how you configured it? I'd like to give it a try, but truth be told my understanding of VLANs is pretty basic. I do have the ability to run VLANs, however. (A rough sketch of the underlying VLAN/macvlan mechanics is included after this list.)
  17. Not directed to me, but I can share my experience with this CPU. It runs hot. Airflow and the HSF will play a big role in keeping temps down, but this processor just has a high TDP. Even an idle Windows 10 VM seems to raise my overall CPU utilization by 5-15%, which subsequently increases temps. Using the best HSF that your case can support would be my recommendation. I was limited by case clearance, so the HSF I use is a little small for my liking, but it still does an admirable job. Personally I wouldn't be concerned with the temp you mentioned, just based on my own averages. A more meaningful data point would be your temps at full load.
  18. Truth be told I'm not 100% sure this is due to 6.9.x, but the issues did start when I upgraded, so I figured I'd give it a shot in here. Prior to 6.9.x, I had zero locks since deploying my current hardware a year ago. The only material config change since the upgrade is that I began using my NVMe drive as my primary cache drive, whereas before it was sitting idle in my server via Unassigned Devices. Also, I no longer use Unassigned Devices, just single- and multi-disk pools. At least one of the prior locks showed a CPU_CATERR event via my IPMI, but the last one did not (the forum thread on my motherboard had others running into this with prior BIOS versions and Intel Turbo Boost enabled, but I never experienced it). I just lost all connectivity and was unable to access the system via IPMI remote control. After one of the prior locks, I began running a local syslog server on the Unraid server. This is what was captured prior to the lock: https://pastebin.com/raw/afA6Wd5a The only thing that stands out to me is the "CPU tainted" warnings. It does look like something similar happened earlier in the day, but the system continued to function. When this lock occurred, I believe there were writes occurring to the cache drive. I have since disabled Turbo Boost in the Tips and Tweaks plugin and removed some plugins that are not commonly used. Diagnostics attached. titan-diagnostics-20210311-1331.zip
  19. Welp, after a couple days of success, it looks like I got another hard lock. I set up a syslog server (back to Unraid), so I think I was able to capture some stuff, but I don't really know how to read this aside from noticing that I have a bunch of kernel warnings about the CPU being tainted. Anyone here have more experience reading this type of logging? Looks like something similar happened earlier in the day (but the server was still up), then it happened again and took everything down. I think. https://pastebin.com/raw/afA6Wd5a I have since disabled Turbo Boost. I should probably run a memtest (even though it's ECC), but I think that means I need to disable EFI boot, as when I try it at bootup the system just reboots. Ugh. Everything was running so smoothly for me before updating to 6.9.0/6.9.1... Edit: Well, disabling Turbo Boost seems to have helped--no system locks since. Crazy how this was something others experienced with older BIOS versions and I never did, then with the upgrade to 6.9.x I'm suddenly getting it. It probably has nothing to do with it, but I decided to remove the graphite thermal pad I was using and replaced it with Arctic MX-5 compound. Going to run the system a few more days with Turbo Boost disabled, then try re-enabling it. My thought process is that one of the cores may not have been getting enough contact with the pad, and that may have been causing my lockups. Complete guess, but I'm going with it.
  20. @Hoopster I have a few VMs, but by default they aren't running. I have had my system on Power Save with Turbo Boost since I started using this mobo. I also think the lock happened during the mover process, and the only change is that I started using an NVMe for my cache instead of one of my SSDs when I switched to 6.9.0, so in retrospect perhaps that's the culprit. In Performance mode, the CPU is at max frequency the whole time, right? Think that's a material increase in power consumption? Heat?
  21. Are you still running with Turbo Boost disabled, @Hoopster? I never had to disable it before, but I wonder if I'm now being plagued by that same system lock. Running grep MHz /proc/cpuinfo while the system runs the parity check (triggered by the system lock) gives:
     cpu MHz : 3500.338
     cpu MHz : 3500.505
     cpu MHz : 3500.004
     cpu MHz : 3500.169
     cpu MHz : 3500.220
     cpu MHz : 3500.430
     cpu MHz : 3500.135
     cpu MHz : 3499.397
     cpu MHz : 3499.443
     cpu MHz : 3500.380
     cpu MHz : 3500.302
     cpu MHz : 3502.510
     cpu MHz : 3500.185
     cpu MHz : 3500.332
     cpu MHz : 3500.376
     cpu MHz : 3500.164
     but as I recall it was showing similar frequencies when idle... not sure if it's normal for it to be at such a high frequency all the time. (A quick way to check the active cpufreq governor is sketched after this list.) Hoping for no more locks, but I'm not confident.
  22. Well, at 4:11am this morning my server had its first hard lockup. I am not home but VPN'd in via my router and saw that IPMI logged a Run-time Critical Stop. Nothing has changed with my hardware setup, so I have to assume something has changed with 6.9.0 that has exposed some instabilities with my system. So strange. Hopefully no one else starts experiencing issues.
  23. After not encountering it since pretty much the first few weeks of building this new server, I got the IPMI motherboard temp warning tonight. I didn't realize this was still a thing. Running the latest L2.34 BIOS. Are others still getting this from time to time?
  24. Upgraded to 6.9.0 recently. Changed around some of my SSD/NVMe uses and started using single-disk pools instead of Unassigned Devices. Decided to just run a test on my NVMe, which is now my cache drive (formatted XFS with the new 1MiB partition alignment). The result was really disappointing compared to when I first ran a test. A follow-up test showed a similar curve. My other SSDs are fairly consistent with prior tests, albeit slightly slower (by about 40 MB/s). There is data on the NVMe now compared to when I first ran the test, and a trim did run this morning, but this is not an expected result, right? Any suggestions on what I can do to troubleshoot? Could this be a result of 6.9.0? (A quick cross-check with fio is sketched below.) Thanks for any insight!
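
A minimal sketch for the wsdd2/IPv6 errors in post 1: these two commands show whether the host still carries IPv6 addresses (link-local fe80:: addresses often remain even when IPv6 is turned off in the GUI) and whether the kernel has IPv6 disabled globally. A missing usable IPv6 address is one plausible reason wsdd2's IPv6 multicast sends fail with "Cannot assign requested address"; treat this as a diagnostic starting point, not a confirmed cause.

     # List any IPv6 addresses still present on the host's interfaces.
     ip -6 addr show

     # 1 means the kernel has IPv6 disabled globally, 0 means it is enabled.
     cat /proc/sys/net/ipv6/conf/all/disable_ipv6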
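
For the Docker VLAN question in posts 15 and 16, here is a rough sketch of the mechanics underneath that setup. On Unraid this is normally done through Network Settings (add a VLAN) and the Docker page (create a custom network on the new interface) rather than by hand, and the interface name eth0, VLAN ID 10, and 192.168.10.0/24 addressing below are illustrative assumptions, not anyone's actual config.

     # Hypothetical example: create a tagged sub-interface for VLAN 10 on the physical NIC.
     ip link add link eth0 name eth0.10 type vlan id 10
     ip link set eth0.10 up

     # Back a Docker macvlan network with that tagged interface so containers get
     # their own addresses on the VLAN, separated from the host's untagged LAN.
     docker network create -d macvlan \
       --subnet=192.168.10.0/24 \
       --gateway=192.168.10.1 \
       -o parent=eth0.10 \
       br0.10

The switch port and the pfSense interface would also need to carry VLAN 10 as a tagged VLAN for containers on that network to reach their gateway.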
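
For the frequency observations in posts 20 and 21, a quick way to see which cpufreq driver and governor are actually in effect; with the intel_pstate driver the reported frequency can sit near the maximum even at light load, so readings like those in post 21 are not necessarily abnormal. Paths assume the standard Linux sysfs layout.

     # Scaling driver and governor currently in effect (shown here for CPU 0).
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

     # Current frequency versus the hardware limits, all in kHz.
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
     cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
     cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq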
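
For the disappointing benchmark in post 24, one way to cross-check the NVMe outside the benchmarking plugin is a read-only fio pass against the raw device. The device path, block size, and runtime below are placeholders to adjust for the actual system; --readonly keeps fio from writing anything.

     # Sequential read test straight off the raw NVMe device (read-only).
     fio --name=seqread --filename=/dev/nvme0n1 --readonly \
         --rw=read --bs=1M --direct=1 --ioengine=libaio --iodepth=32 \
         --runtime=60 --time_based

If the raw sequential read rate still looks healthy, the drop is more likely related to the filesystem, trim state, or how the plugin reads through the pool than to the drive itself.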