xorinzor

Everything posted by xorinzor

  1. Hm, haven't had any new call traces so far since the reboot. I wonder what changed that it worked fine before but started causing issues after this update. Either way, it looks like it's solved, thanks!
  2. A few days ago I updated Unraid to 6.12.8, and this morning I woke up only to find that my server had crashed overnight. Unfortunately, because the log is stored in RAM, I have no idea what caused it. Now, however, I see a call trace appearing in my logs and I'm not sure whether it has anything to do with macvlan. I've always kept macvlan enabled and never experienced issues before. I'm hoping someone can help shed some light on what this call trace could imply and where I should start looking. I'm also 99.99% confident it isn't related to memory issues, since I have (multi-bit) ECC memory.

     Feb 21 09:17:27 STORAGE kernel: WARNING: CPU: 1 PID: 0 at net/netfilter/nf_conntrack_core.c:1210 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
     Feb 21 09:17:27 STORAGE kernel: Modules linked in: udp_diag xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle iptable_mangle vhost_net tun vhost vhost_iotlb tap macvlan veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat xt_addrtype br_netfilter xfs xt_MASQUERADE ip6table_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod tcp_diag inet_diag i915 drm_buddy video i2c_algo_bit ttm drm_display_helper drm_kms_helper drm backlight intel_gtt agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops nct6775 nct6775_core hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs bridge 8021q garp mrp stp llc bonding tls igc atlantic zfs(PO) amd64_edac zunicode(PO) edac_mce_amd edac_core intel_rapl_msr intel_rapl_common zzstd(O) iosf_mbi kvm_amd zlua(O) zavl(PO) kvm icp(PO) crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 sha256_ssse3 sha1_ssse3 ftdi_sio aesni_intel
     Feb 21 09:17:27 STORAGE kernel: zcommon(PO) i2c_piix4 znvpair(PO) spl(O) crypto_simd cryptd intel_wmi_thunderbolt wmi_bmof asus_ec_sensors rapl thunderbolt i2c_core usbserial nvme k10temp input_leds ccp led_class ahci nvme_core libahci wmi tpm_crb tpm_tis tpm_tis_core tpm button acpi_cpufreq unix [last unloaded: igc]
     Feb 21 09:17:27 STORAGE kernel: CPU: 1 PID: 0 Comm: swapper/1 Tainted: P O 6.1.74-Unraid #1
     Feb 21 09:17:27 STORAGE kernel: Hardware name: ASUS System Product Name/ProArt X570-CREATOR WIFI, BIOS 1002 02/03/2023
     Feb 21 09:17:27 STORAGE kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
     Feb 21 09:17:27 STORAGE kernel: Code: 44 24 10 e8 e2 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 7e e6 ff ff 84 c0 75 a2 48 89 df e8 9b e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 18 dd ff ff e8 93 e3 ff ff e9 72 01
     Feb 21 09:17:27 STORAGE kernel: RSP: 0018:ffffc9000007c8d0 EFLAGS: 00010202
     Feb 21 09:17:27 STORAGE kernel: RAX: 0000000000000001 RBX: ffff888623b42400 RCX: 2e6724b971b32715
     Feb 21 09:17:27 STORAGE kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff888623b42400
     Feb 21 09:17:27 STORAGE kernel: RBP: 0000000000000001 R08: 6a7b5487d6e2824c R09: 8915cd28b50ea8d6
     Feb 21 09:17:27 STORAGE kernel: R10: 896dab7467ec32a1 R11: ffffc9000007c898 R12: ffffffff82a14d00
     Feb 21 09:17:27 STORAGE kernel: R13: 0000000000000bef R14: ffff8881080de300 R15: 0000000000000000
     Feb 21 09:17:27 STORAGE kernel: FS: 0000000000000000(0000) GS:ffff88900e840000(0000) knlGS:0000000000000000
     Feb 21 09:17:27 STORAGE kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Feb 21 09:17:27 STORAGE kernel: CR2: 000000c000242000 CR3: 000000016c28c000 CR4: 0000000000750ee0
     Feb 21 09:17:27 STORAGE kernel: PKRU: 55555554
     Feb 21 09:17:27 STORAGE kernel: Call Trace:
     Feb 21 09:17:27 STORAGE kernel: <IRQ>
     Feb 21 09:17:27 STORAGE kernel: ? __warn+0xab/0x122
     Feb 21 09:17:27 STORAGE kernel: ? report_bug+0x109/0x17e
     Feb 21 09:17:27 STORAGE kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
     Feb 21 09:17:27 STORAGE kernel: ? handle_bug+0x41/0x6f
     Feb 21 09:17:27 STORAGE kernel: ? exc_invalid_op+0x13/0x60
     Feb 21 09:17:27 STORAGE kernel: ? asm_exc_invalid_op+0x16/0x20
     Feb 21 09:17:27 STORAGE kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
     Feb 21 09:17:27 STORAGE kernel: ? nf_nat_inet_fn+0xc0/0x1a8 [nf_nat]
     Feb 21 09:17:27 STORAGE kernel: nf_conntrack_confirm+0x25/0x54 [nf_conntrack]
     Feb 21 09:17:27 STORAGE kernel: nf_hook_slow+0x3d/0x96
     Feb 21 09:17:27 STORAGE kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     Feb 21 09:17:27 STORAGE kernel: NF_HOOK.constprop.0+0x79/0xd9
     Feb 21 09:17:27 STORAGE kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     Feb 21 09:17:27 STORAGE kernel: ip_sabotage_in+0x52/0x60 [br_netfilter]
     Feb 21 09:17:27 STORAGE kernel: nf_hook_slow+0x3d/0x96
     Feb 21 09:17:27 STORAGE kernel: ? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
     Feb 21 09:17:27 STORAGE kernel: NF_HOOK.constprop.0+0x79/0xd9
     Feb 21 09:17:27 STORAGE kernel: ? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
     Feb 21 09:17:27 STORAGE kernel: __netif_receive_skb_one_core+0x77/0x9c
     Feb 21 09:17:27 STORAGE kernel: netif_receive_skb+0xbf/0x127
     Feb 21 09:17:27 STORAGE kernel: br_handle_frame_finish+0x43a/0x474 [bridge]
     Feb 21 09:17:27 STORAGE kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Feb 21 09:17:27 STORAGE kernel: br_nf_hook_thresh+0xe5/0x109 [br_netfilter]
     Feb 21 09:17:27 STORAGE kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Feb 21 09:17:27 STORAGE kernel: br_nf_pre_routing_finish+0x2c1/0x2ec [br_netfilter]
     Feb 21 09:17:27 STORAGE kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Feb 21 09:17:27 STORAGE kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
     Feb 21 09:17:27 STORAGE kernel: br_nf_pre_routing+0x236/0x24a [br_netfilter]
     Feb 21 09:17:27 STORAGE kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
     Feb 21 09:17:27 STORAGE kernel: br_handle_frame+0x27a/0x2e0 [bridge]
     Feb 21 09:17:27 STORAGE kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
     Feb 21 09:17:27 STORAGE kernel: __netif_receive_skb_core.constprop.0+0x4fd/0x6e9
     Feb 21 09:17:27 STORAGE kernel: ? inet_gro_receive+0x23b/0x25b
     Feb 21 09:17:27 STORAGE kernel: __netif_receive_skb_list_core+0x8a/0x11e
     Feb 21 09:17:27 STORAGE kernel: netif_receive_skb_list_internal+0x1d2/0x20b
     Feb 21 09:17:27 STORAGE kernel: gro_normal_list+0x1d/0x3f
     Feb 21 09:17:27 STORAGE kernel: napi_complete_done+0x7b/0x11a
     Feb 21 09:17:27 STORAGE kernel: aq_vec_poll+0x13c/0x187 [atlantic]
     Feb 21 09:17:27 STORAGE kernel: __napi_poll.constprop.0+0x2b/0x124
     Feb 21 09:17:27 STORAGE kernel: net_rx_action+0x159/0x24f
     Feb 21 09:17:27 STORAGE kernel: __do_softirq+0x129/0x288
     Feb 21 09:17:27 STORAGE kernel: __irq_exit_rcu+0x5e/0xb8
     Feb 21 09:17:27 STORAGE kernel: common_interrupt+0x9b/0xc1
     Feb 21 09:17:27 STORAGE kernel: </IRQ>
     Feb 21 09:17:27 STORAGE kernel: <TASK>
     Feb 21 09:17:27 STORAGE kernel: asm_common_interrupt+0x22/0x40
     Feb 21 09:17:27 STORAGE kernel: RIP: 0010:cpuidle_enter_state+0x11d/0x202
     Feb 21 09:17:27 STORAGE kernel: Code: 91 f4 9f ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 84 b0 a4 ff fb 0f 1f 44 00 00 <45> 85 e4 0f 88 ba 00 00 00 48 8b 04 24 49 63 cc 48 6b d1 68 49 29
     Feb 21 09:17:27 STORAGE kernel: RSP: 0018:ffffc90000177e98 EFLAGS: 00000246
     Feb 21 09:17:27 STORAGE kernel: RAX: ffff88900e840000 RBX: ffff888108e56400 RCX: 0000000000000000
     Feb 21 09:17:27 STORAGE kernel: RDX: 0000040ae22edcb4 RSI: ffffffff820d8766 RDI: ffffffff820d8c6f
     Feb 21 09:17:27 STORAGE kernel: RBP: 0000000000000002 R08: 0000000000000002 R09: 0000000000000002
     Feb 21 09:17:27 STORAGE kernel: R10: 0000000000000020 R11: 00000000000001f2 R12: 0000000000000002
     Feb 21 09:17:27 STORAGE kernel: R13: ffffffff823237a0 R14: 0000040ae22edcb4 R15: 0000000000000000
     Feb 21 09:17:27 STORAGE kernel: ? cpuidle_enter_state+0xf7/0x202
     Feb 21 09:17:27 STORAGE kernel: cpuidle_enter+0x2a/0x38
     Feb 21 09:17:27 STORAGE kernel: do_idle+0x18d/0x1fb
     Feb 21 09:17:27 STORAGE kernel: cpu_startup_entry+0x2a/0x2c
     Feb 21 09:17:27 STORAGE kernel: start_secondary+0x101/0x101
     Feb 21 09:17:27 STORAGE kernel: secondary_startup_64_no_verify+0xce/0xdb
     Feb 21 09:17:27 STORAGE kernel: </TASK>
     Feb 21 09:17:27 STORAGE kernel: ---[ end trace 0000000000000000 ]---
  3. Definitely Intel Arc GPU support
  4. ZFS now has support for kernel 6.2 too 🎉
  5. Using only "<smbios mode='host'/>" works for me too with Sniper Elite 5 (which uses EAC). For anyone unsure where that element goes, see the sketch below.
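     A minimal sketch of where it sits in the VM's libvirt domain XML; the arch and machine type here are illustrative placeholders, not values from my actual VM:

         <domain type='kvm'>
           <os>
             <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
             <!-- Pass the host's SMBIOS tables through to the guest -->
             <smbios mode='host'/>
           </os>
         </domain>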
  6. I can reproduce this whenever I change the "Use cache pool" setting of shares. If I change the "Export" field, for example, it resolves itself. Edit: I am on 6.11.5.
  7. Ah, I wasn't really tracking it haha. Just looked and saw the updated stable version. Do you have any idea if this could mean we'll see this kernel come to Unraid in a soon-ish update? Or would it realistically take a while longer?
  8. Linux kernel 6.2.2 is in stable now!
  9. I think that, given that Unraid is intended for servers, it's highly unlikely that a computer running it would have a battery installed. This sounds more like a case for a custom plugin than something you'd include in the base Unraid distribution.
  10. I'm using a pretty beefy server that runs pretty much everything: web servers, Plex, a Windows VM, Pi-hole, a UniFi controller, etc., so reinstalling and reconfiguring all the VLANs, Docker networks, and so on would be quite a hassle for me 😅 I do make backups, but I'm unsure how much they will help if the Unraid packages get messed up. Since my CPU (Ryzen 5900X) is powerful enough to transcode (if needed), I have no issue with just waiting for an official release.
  11. Ah okay, good to know. Still a perfect card for me, though. Before this I had my RTX 2070 shared with multiple Docker containers, including Plex, but I wanted to move that to a dedicated VM and thus needed a new GPU for transcoding. I have that exact same card, brand new, with the same behaviour, so I guess it's just normal for this card; hopefully they'll improve that in future updates. My RTX 2070's fans don't spin either, but it doesn't get anywhere near as warm at idle as the A380.
  12. This makes me a happy man haha. I purchased the Arc A380 with the sole intention of using it as a dedicated card for Plex, so this will work perfectly. Do your card's temps also drop significantly on that kernel version? I'm noticing that my GPU is getting quite hot for a card that's not doing anything, which I'm attributing to the current lack of drivers.
  13. Awesome. This is useful information, thanks! I'll wait for that kernel to become available then, just in case. I can survive a week without utilizing the Arc A380 haha. Do you know where these kernels can be found / tracked as they become available? I'll just keep an eye on that then.
  14. @GRRRRRRR you need a serious reality check. Also, tone down that attitude. I really do not care how highly you think of yourself; there is absolutely no reason to act the way you are right now. Either stick to the topic at hand, or just don't respond at all. Nobody gains anything from these nonsensical posts.
  15. You are aware that the Arc A380 has both an AV1 encoder and decoder, right? Are you seriously insulting victims of an ongoing disaster right now? You should stop taking this topic off-track. It's a simple feature request; nobody is asking for different distros, and if you are, you should open your own request. Anything not directly related to this topic should be put in its own topic. Will this have any effect on future stable-branch updates? As in, I understand the kernel will be overwritten, but would there be any actions I'd have to take before doing any updates?
  16. Have there been any updates on this?
  17. Just gonna add this here for others to find if they're googling for it. I tried playing Halo: The Master Chief Collection (MCC) on Steam Headless, but the game would randomly freeze in the main menu, often when trying to change the settings, while audio kept playing. After a bit of debugging, it turned out the game was trying to open too many files. Adding the following to the extra parameters in the Docker container configuration fixed it: --ulimit nofile=100000:200000 Disclaimer: I just picked arbitrarily high values for the soft and hard limits because I wanted to test whether it would resolve my issue. I don't know what implications this may have; use at your own risk. (A quick way to verify the new limits is shown below.)
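     A minimal sanity check, assuming your container is named steam-headless (a placeholder; substitute your actual container name):

         # Print the soft and hard open-file limits as seen inside the container
         docker exec steam-headless sh -c 'ulimit -Sn; ulimit -Hn'
         # With the flag above, this should print 100000 and then 200000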
  18. Found some info on that back then, but wasn't able to find it again. I did, however, stumble upon another solution using KeePassXC (tutorial here, if others are interested). This can easily be done via the init.d script and wouldn't require any modifications to the Docker container. It would still be nice to have a variable for the Docker container to add a NoVNC password or (preferably) disable NoVNC entirely.
  19. "You can add your own container init scripts to install any keyring software that you need" Yes, but it's a real hassle to unlock them automatically due to the auto-login. There's a way to do it, but it requires changing config files that won't be persistent. Additionally, it'd be a nice addition to have a boolean variable in the Docker config that enables password-protected access to NoVNC (as well as another option to disable NoVNC entirely).
  20. @Josh.5 Any chance you can also implement a password keyring that unlocks automatically? Reason being, I also want to install the Minecraft Launcher, but it saves its login via the GNOME keyring. Or, if there's a better way, I'd be happy to hear that too.
  21. Ah okay, so once my server has updated to 6.10 this should no longer happen? That's great news, thanks. For now I'll manually edit the XMLs that are problematic.
  22. Tried this method, but only after enabling Template Authoring mode and not finding the TemplateURL field did I continue reading the thread, only to learn that that field was removed (does Template Authoring mode even serve a purpose now?). It'd be really great if you could re-implement some way for us to take manual control, or just allow empty fields (which are then ignored entirely).
  23. @Squid Have you been able to think of anything to facilitate / fix this issue?
  24. I agree with @Squid here. Well, for Docker containers, my use case isn't really all that weird. It might not be all that common among Unraid users, but it's quite a normal way to use Docker containers. Which isn't an issue, but I do think there's room for improvement in this particular case. Overall, my experience with managing Docker containers in Unraid has been really positive.