sonic6

Everything posted by sonic6

  1. Here we are again:
     May 6 07:25:01 Unraid-1 kernel: ------------[ cut here ]------------
     May 6 07:25:01 Unraid-1 kernel: WARNING: CPU: 7 PID: 93 at net/netfilter/nf_conntrack_core.c:1211 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
     May 6 07:25:01 Unraid-1 kernel: Modules linked in: cmac cifs asn1_decoder cifs_arc4 cifs_md4 dns_resolver tls nft_chain_nat xt_owner nft_compat nf_tables xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle iptable_mangle vhost_net tun vhost vhost_iotlb tap macvlan xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat xt_addrtype br_netfilter veth xfs xt_MASQUERADE ip6table_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod tcp_diag inet_diag af_packet kvmgt mdev i915 iosf_mbi drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper drm intel_gtt agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops nct6775 nct6775_core hwmon_vid efivarfs wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables 8021q garp mrp bridge stp llc x86_pkg_temp_thermal intel_powerclamp
     May 6 07:25:01 Unraid-1 kernel: coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 aesni_intel crypto_simd mei_hdcp mei_pxp i2c_i801 cryptd rapl intel_cstate nvme i2c_smbus wmi_bmof mei_me mpt3sas cp210x intel_uncore e1000e nvme_core i2c_core intel_pch_thermal mei joydev usbserial raid_class scsi_transport_sas tpm_crb video tpm_tis tpm_tis_core wmi tpm backlight intel_pmc_core acpi_pad acpi_tad button unix
     May 6 07:25:01 Unraid-1 kernel: CPU: 7 PID: 93 Comm: kworker/u24:3 Tainted: G U 6.1.27-Unraid #1
     May 6 07:25:01 Unraid-1 kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H470M-ITX/ac, BIOS L1.22 12/07/2020
     May 6 07:25:01 Unraid-1 kernel: Workqueue: events_unbound macvlan_process_broadcast [macvlan]
     May 6 07:25:01 Unraid-1 kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
     May 6 07:25:01 Unraid-1 kernel: Code: 44 24 10 e8 f4 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 76 e6 ff ff 84 c0 75 a2 48 89 df e8 ad e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 2a dd ff ff e8 8b e3 ff ff e9 72 01
     May 6 07:25:01 Unraid-1 kernel: RSP: 0018:ffffc900002a0d98 EFLAGS: 00010202
     May 6 07:25:01 Unraid-1 kernel: RAX: 0000000000000001 RBX: ffff8881cb970700 RCX: da30c29a3d7bac1d
     May 6 07:25:01 Unraid-1 kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8881cb970700
     May 6 07:25:01 Unraid-1 kernel: RBP: 0000000000000001 R08: 581d1522fa78d77e R09: bdb524e053fbe669
     May 6 07:25:01 Unraid-1 kernel: R10: 1d850690a35a977f R11: ffffc900002a0d60 R12: ffffffff82a0e440
     May 6 07:25:01 Unraid-1 kernel: R13: 00000000000021ac R14: ffff888102d85000 R15: 0000000000000000
     May 6 07:25:01 Unraid-1 kernel: FS: 0000000000000000(0000) GS:ffff88883f7c0000(0000) knlGS:0000000000000000
     May 6 07:25:01 Unraid-1 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     May 6 07:25:01 Unraid-1 kernel: CR2: 000000c004f76000 CR3: 000000000420a005 CR4: 00000000007706e0
     May 6 07:25:01 Unraid-1 kernel: PKRU: 55555554
     May 6 07:25:01 Unraid-1 kernel: Call Trace:
     May 6 07:25:01 Unraid-1 kernel: <IRQ>
     May 6 07:25:01 Unraid-1 kernel: ? nf_nat_inet_fn+0x123/0x1a8 [nf_nat]
     May 6 07:25:01 Unraid-1 kernel: nf_conntrack_confirm+0x25/0x54 [nf_conntrack]
     May 6 07:25:01 Unraid-1 kernel: nf_hook_slow+0x3a/0x96
     May 6 07:25:01 Unraid-1 kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     May 6 07:25:01 Unraid-1 kernel: NF_HOOK.constprop.0+0x79/0xd9
     May 6 07:25:01 Unraid-1 kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     May 6 07:25:01 Unraid-1 kernel: __netif_receive_skb_one_core+0x77/0x9c
     May 6 07:25:01 Unraid-1 kernel: process_backlog+0x8c/0x116
     May 6 07:25:01 Unraid-1 kernel: __napi_poll.constprop.0+0x28/0x124
     May 6 07:25:01 Unraid-1 kernel: net_rx_action+0x159/0x24f
     May 6 07:25:01 Unraid-1 kernel: __do_softirq+0x126/0x288
     May 6 07:25:01 Unraid-1 kernel: do_softirq+0x7f/0xab
     May 6 07:25:01 Unraid-1 kernel: </IRQ>
     May 6 07:25:01 Unraid-1 kernel: <TASK>
     May 6 07:25:01 Unraid-1 kernel: __local_bh_enable_ip+0x4c/0x6b
     May 6 07:25:01 Unraid-1 kernel: netif_rx+0x52/0x5a
     May 6 07:25:01 Unraid-1 kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
     May 6 07:25:01 Unraid-1 kernel: ? _raw_spin_unlock+0x14/0x29
     May 6 07:25:01 Unraid-1 kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
     May 6 07:25:01 Unraid-1 kernel: process_one_work+0x1a8/0x295
     May 6 07:25:01 Unraid-1 kernel: worker_thread+0x18b/0x244
     May 6 07:25:01 Unraid-1 kernel: ? rescuer_thread+0x281/0x281
     May 6 07:25:01 Unraid-1 kernel: kthread+0xe4/0xef
     May 6 07:25:01 Unraid-1 kernel: ? kthread_complete_and_exit+0x1b/0x1b
     May 6 07:25:01 Unraid-1 kernel: ret_from_fork+0x1f/0x30
     May 6 07:25:01 Unraid-1 kernel: </TASK>
     May 6 07:25:01 Unraid-1 kernel: ---[ end trace 0000000000000000 ]---
     unraid-1-diagnostics-20230506-0727.zip
  2. So, the server was running for about 4 days. The last reboot was on Tuesday at 7:14 AM. Logs from the syslog server are attached. Pulling a diagnostic wasn't possible; web UI and SSH are not reachable. I will now install RC5.3 and see what happens. download-2023.5.6_6.48.56-pi-(pi-3b).tar.gz
  3. So, DHCP for IPv6 is deactivated and the router should use SLAAC. Unraid used IPv4+IPv6, br0 and host access: unraid-1-diagnostics-20230502-2137.zip
  4. Looks like it should: my Pi-hole is reachable over IPv6 at fd00::99
  5. I am not sure. To be fair, I am not an IPv6 professional. But if I'm right, it should be enough to deactivate the DHCPv6 function in my router: https://en.avm.de/service/knowledge-base/dok/FRITZ-Box-7590/573_Configuring-IPv6-in-the-FRITZ-Box/ Should I use no DHCPv6 server at all, or use the M- and/or O-flag? I need br0 for IP-based traffic and port management. And I'm using a Debian container for remote access: when I disable host access, I can't reach all my containers from the Debian container.
  6. In this case I used the "IPv4 only" mode, like you suggested. But my LXC container was still pingable over IPv6... I was just wondering. I am using both in my setup, br0 and host access.
  7. Oh damn... geoblock. Sorry, never mind. It is only the kernel section cut out of the syslog. It must also be in the diagnostics, right?
  8. So, after enabling "host access" at 16:13:44 it appeared again: https://pastebin.cloud-becker.de/900ba9bc266a unraid-1-diagnostics-20230502-1626.zip
  9. Okay, this time IPv4 without host access. But what I noticed: Unraid itself and the Docker containers aren't pingable over IPv6, but the LXC container is:
     PS C:\Users\domib> ping [fd00::99]
     Pinging fd00::99 with 32 bytes of data:
     Reply from fd00::99: time<1ms
     Reply from fd00::99: time<1ms
     Reply from fd00::99: time<1ms
     Reply from fd00::99: time<1ms
     Ping statistics for fd00::99:
         Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
     Approximate round trip times in milliseconds:
         Minimum = 0ms, Maximum = 0ms, Average = 0ms
     PS C:\Users\domib> ping wpad.fritz.box -6
     Pinging wpad.fritz.box [fd00::99] with 32 bytes of data:
     Reply from fd00::99: time<1ms
     Reply from fd00::99: time<1ms
     Reply from fd00::99: time<1ms
     Reply from fd00::99: time<1ms
     Ping statistics for fd00::99:
         Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
     Approximate round trip times in milliseconds:
         Minimum = 0ms, Maximum = 0ms, Average = 0ms
     PS C:\Users\domib> ping piholelxc.fritz.box
     Pinging piholelxc.fritz.box [192.168.0.11] with 32 bytes of data:
     Reply from 192.168.0.11: bytes=32 time<1ms TTL=64
     Reply from 192.168.0.11: bytes=32 time<1ms TTL=64
     Reply from 192.168.0.11: bytes=32 time<1ms TTL=64
     Reply from 192.168.0.11: bytes=32 time<1ms TTL=64
     Ping statistics for 192.168.0.11:
         Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
     Approximate round trip times in milliseconds:
         Minimum = 0ms, Maximum = 0ms, Average = 0ms
     PS C:\Users\domib> ping piholelxc.fritz.box -6
     Ping request could not find host "piholelxc.fritz.box". Please check the name and try again.
     Inside the LXC runs keepalived with the "virtual_ipaddress" 192.168.0.99 and fd00::99 unraid-1-diagnostics-20230502-1554.zip
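     For reference, a keepalived setup like the one described (a dual-stack VIP inside the LXC container) could be sketched roughly like this. Only the two virtual addresses come from my post; the interface name, VRID, and priority are made-up examples, and depending on the keepalived version the IPv6 address may need to go into a separate VRRP instance or `virtual_ipaddress_excluded` instead:

     ```conf
     # /etc/keepalived/keepalived.conf -- minimal sketch, values are examples
     vrrp_instance VI_1 {
         state MASTER
         interface eth0          # assumed interface name
         virtual_router_id 51    # assumed VRID
         priority 100
         advert_int 1
         virtual_ipaddress {
             192.168.0.99        # IPv4 VIP from the post
             fd00::99            # IPv6 VIP from the post
         }
     }
     ```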
  10. Booting with IPv4 only, but with host access: https://pastebin.cloud-becker.de/976631d6fe43 I will also try IPv4 without host access. unraid-1-diagnostics-20230502-1537.zip
  11. I rebooted my server for RC5. Before the reboot I deactivated "Host access to custom networks". Got this again: https://pastebin.cloud-becker.de/11a75f37551e The diagnostic is attached. It seems that "host access" isn't the trigger. unraid-1-diagnostics-20230502-0713.zip
  12. Okay, I did. If this doesn't help, I can mirror the syslog to my flash drive as the next step.
  13. So, a short update. @JorgeB this morning my server crashed. The macvlan call trace had appeared right after booting two days before; the server then ran without any problems for two days. When I tried opening the web GUI, I got messages from Uptime-Kuma and Uptime-Robot that the server had become unresponsive. SSH and monitor/keyboard also didn't work. Anything to do next?
  14. In combination with "host access", the port forwarding etc. got corrupted. AVM was contacted, and they said that multiple IPs from the same MAC address aren't supported. To be fair, I am not very experienced with macvlan/ipvlan/Linux, so I don't know whether I explained the problem to AVM correctly. Maybe contact from the Unraid side, developer-to-developer, could bring clarity into this? In their eyes I am only "one of a thousand users with a special, unique problem".
  15. Don't get me wrong, I don't want to be one of those whining users who blame the devs, but I think this is a huge problem, especially for German users. The AVM Fritzbox is very common in Germany, and I think the German Unraid community is one of the biggest. I am very thankful for Unraid and the (community) devs. But there are only "workarounds", and we need a fix for this:
      - Don't use the custom bridge br0: I need a dedicated IP for some containers (Docker/LXC) for traffic management and port forwarding.
      - Don't use host access: also needed for bridged containers (maybe I can handle it by putting all bridged containers into a "custom user network").
      - Use ipvlan: isn't supported by my router (Fritzbox).
      - Use a second NIC: can be a solution for me, but most users don't have a second NIC onboard or the space for one.
      Again, please don't get me wrong; all in all I am very thankful for the development of Unraid, which worked flawlessly until 6.12.x for me.
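      The "custom user network" idea from the list above could be sketched with the Docker CLI like this; the network name and subnet are invented examples, not anything from my setup:

      ```shell
      # Create a user-defined bridge network (name and subnet are examples)
      docker network create --driver bridge --subnet 172.30.0.0/24 containers-net

      # Attach an existing container to it instead of br0 / the default bridge
      docker network connect containers-net my-container
      ```

      Containers on the same user-defined bridge can reach each other by container name, which is what would replace host access for container-to-container traffic.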
  16. Hello, I got the following output in my syslog: https://pastebin.cloud-becker.de/5ede1251c0f8 (diagnostic attached). I know the general fix for this is using "ipvlan" instead of "macvlan". But in my case (and other people's), this isn't an option: the AVM Fritzbox (7595 in my case) isn't compatible with ipvlan. I came from the latest 6.11.x stable without any problems, the same for 6.10.x. @alturismo has the same problem with 6.12.x, even though he was also problem-free on 6.11.x. Maybe he can post some more details about his setup. So I hope it is fixable, especially since earlier versions ran without this problem. unraid-1-diagnostics-20230429-1014.zip
  17. Hi, thanks for supporting v6.12.x with your plugin. Is a global exclusion list possible? I want to exclude files like cache.db, .DS_Store, .log, .log., .tmp completely.
  18. Oh, you are right. I had both threads opened and closed the wrong one before I posted.
  19. Hi, thanks for supporting v6.12.x with your plugin. Is a global exclusion list possible? I want to exclude files like cache.db, .DS_Store, .log, .log., .tmp completely.
  20. Seriously? To put it short and blunt (sorry for that): Unraid Connect is a simple cloud service and isn't made for self-hosting. I don't understand people like you... it is a simple service for beginners, no more, no less. If you are a beginner, use it. If you are a skilled user, look for another self-hosted solution.
  21. You have. Set up your own reverse proxy and run your own service. Google will also never publish a self-hosted "Google Drive", but you can use other, similar services self-hosted. It is the same with Unraid Connect: it isn't for self-hosting, but you will find a similar solution for yourself. I got what you meant, but Unraid Connect isn't what you expected. My point is: if you are someone skilled enough to host a VPS, then you can find your own solution and don't need Unraid Connect.
  22. That should surely work with a loop in the automation. Maybe in combination with a helper entity that saves the socket's state beforehand? No idea whether I'm overcomplicating this?
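      A rough Home Assistant sketch of that idea (all entity and scene IDs are invented for illustration): snapshot the socket's state into a scene first, run the loop, then restore the snapshot.

      ```yaml
      # Sketch only -- entity IDs and the trigger are made-up examples
      automation:
        - alias: "Loop with socket state restore"
          trigger:
            - platform: time
              at: "22:00:00"
          action:
            # Helper step: remember the socket's current state
            - service: scene.create
              data:
                scene_id: socket_snapshot
                snapshot_entities:
                  - switch.example_socket
            # The loop: repeat the desired actions
            - repeat:
                count: 3
                sequence:
                  - service: switch.toggle
                    target:
                      entity_id: switch.example_socket
                  - delay: "00:00:10"
            # Restore the remembered state afterwards
            - service: scene.turn_on
              target:
                entity_id: scene.socket_snapshot
      ```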
  23. I think you made the whole point: if you are one of those people, you don't need Unraid Connect and can host your own services; there are many ways to do this. It looks like Unraid Connect is for people who want a quick overview or don't have the skills to host services like this. That is one of the great things about OSes like Unraid: you can do it your own way.
  24. But why was I able to benchmark my two SATA SSDs from my btrfs RAID1 pool in the past?
  25. Exactly, I skipped the output from the browser upload. The posted output was from the Nextcloud Windows app. *edit* It could be due to this: Source: https://docs.nextcloud.com/desktop/3.2/advancedusage.html?highlight=chunk But I currently have no way to verify that.