Report Comments posted by sonic6
-
-
3 minutes ago, JorgeB said:
or try limiting its RAM usage.
This is the limitation, isn't it?
I didn't add this command; it is part of the template by default.
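For context, a RAM limit for a container on Unraid is usually just an extra docker run flag carried in the template. A hedged sketch of what such a limit typically looks like (the values are placeholders, not taken from this thread), which would sit in the template's "Extra Parameters" field:

```
# Hypothetical example: cap the container at 4 GiB and prevent swap growth
--memory=4g --memory-swap=4g
```

If the flag is already in the template by default, lowering the value there is the usual way to tighten the limit.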
-
30 minutes ago, JorgeB said:
I believe I tested without docker and still got the call trace, but not 100% sure, I'll retest.
Maybe it is something triggered by docker, or a specific container triggers it faster than it would appear without any container running?
But okay... hard to find.
-
For me, it feels like it has something to do with docker:
when I start my server, the call trace doesn't appear.
After I start docker, the call traces come immediately, as soon as all containers have started.
-
IPv6 is more and more common in Germany. fd00:: addresses are static addresses for LAN communication: while the prefix of the IPv6 addresses you get from your ISP changes from time to time, the ULA address stays static.
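As a quick illustration, ULA addresses live in the reserved fc00::/7 block (in practice fd00::/8), so they can be told apart from the ISP-assigned global prefix programmatically. A minimal Python sketch using only the stdlib `ipaddress` module, with the two addresses quoted later in this thread:

```python
import ipaddress

# ULA space per RFC 4193 is fc00::/7; assignments in practice use fd00::/8.
ula_space = ipaddress.ip_network("fc00::/7")

isp_addr = ipaddress.ip_address("2003:c0:cf2c:cd00:aaa1:59ff:fe2b:ccfd")  # prefix changes with the ISP
lan_addr = ipaddress.ip_address("fd00::aaa1:59ff:fe2b:ccfd")              # stays static on the LAN

print(isp_addr in ula_space)  # False: global unicast from the ISP
print(lan_addr in ula_space)  # True: unique local address (ULA)
```

The interface identifier (the `aaa1:59ff:fe2b:ccfd` part) is the same in both; only the prefix differs.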
-
-
5 minutes ago, pkoci said:
I've got the same problem.
☝️ Please post diagnostics.
-
1 minute ago, JorgeB said:
any idea where the second one is coming from?
2003:... is the public one from the ISP.
fd00::... is the internal ULA one (https://en.wikipedia.org/wiki/Unique_local_address).
There must also be a third one... (the link-local address).
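The three kinds of addresses on one interface can be sorted apart programmatically. A minimal Python sketch (stdlib `ipaddress` only; the fe80:: address here is illustrative, not one posted in this thread):

```python
import ipaddress

addrs = [
    "2003:c0:cf2c:cd00:aaa1:59ff:fe2b:ccfd",  # public, assigned by the ISP
    "fd00::aaa1:59ff:fe2b:ccfd",              # ULA, static on the LAN
    "fe80::aaa1:59ff:fe2b:ccfd",              # link-local (illustrative)
]

for a in addrs:
    ip = ipaddress.ip_address(a)
    if ip.is_link_local:
        kind = "link-local"
    elif ip in ipaddress.ip_network("fc00::/7"):
        kind = "unique local (ULA)"
    elif ip.is_global:
        kind = "global (ISP)"
    else:
        kind = "other"
    print(a, "->", kind)
```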
-
4 minutes ago, JorgeB said:
Probably not, but also post output of
1 hour ago, alturismo said:
cat /etc/ntp.conf | grep interface
to see if there's something else there
root@Unraid-1:~# cat /etc/ntp.conf | grep interface
interface ignore wildcard
interface listen 192.168.0.50 # eth0
interface listen fd00::aaa1:59ff:fe2b:ccfd # eth0
interface listen 2003:c0:cf2c:cd00:aaa1:59ff:fe2b:ccfd # eth0
root@Unraid-1:~#
No, there is only keepalived inside an LXC container on my server.
-
-
On 7/2/2024 at 9:56 AM, itimpi said:
Have you gone into the scheduler settings and made a change to the parity check ones and clicked "Apply"?
On 7/2/2024 at 9:59 AM, JorgeB said:
you will need to do a dummy change to create the corrected cron entry.
Oh, I didn't know that, so I am sorry.
I will do the dummy change and test it again.
Thank you!
-
-
10 hours ago, Kilrah said:
Seems VM related here
Same for me, but the VM service is disabled on my machine.
10 hours ago, JorgeB said:
Did you start docker when you booted in safe mode? If it was docker related it should do the same.
Good point... if I remember right, I started the docker service at the end, and there wasn't a call trace.
-
Just a short test, but the call traces only appear when I start the docker service, like I did at 09:48 (time in my diagnostics).
-
11 minutes ago, JorgeB said:
Tips and Tweaks plugin
That is one of the most installed plugins; there must be many more servers with "call traces".
-
On 6/26/2024 at 11:09 PM, JorgeB said:
but please try rebooting in safe mode to rule out any plugins
I did a short reboot in safe mode and the call trace does not appear.
-
Same for me:
Jun 26 22:41:30 Unraid-1 kernel: ------------[ cut here ]------------
Jun 26 22:41:30 Unraid-1 kernel: Can't encode file handler for inotify: 255
Jun 26 22:41:30 Unraid-1 kernel: WARNING: CPU: 0 PID: 56416 at fs/notify/fdinfo.c:55 show_mark_fhandle+0x79/0xe8
Jun 26 22:41:30 Unraid-1 kernel: Modules linked in: tun nft_chain_nat xt_owner nft_compat nf_tables xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat xt_addrtype br_netfilter bridge xt_MASQUERADE ip6table_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod zfs(PO) bluetooth ecdh_generic ecc spl(O) tcp_diag inet_diag af_packet kvmgt mdev i915 drm_buddy ttm i2c_algo_bit drm_display_helper drm_kms_helper drm intel_gtt agpgart nct6775 nct6775_core hwmon_vid wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs 8021q garp mrp stp llc macvtap macvlan tap intel_rapl_common iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 sha256_ssse3 sha1_ssse3 mei_hdcp aesni_intel mei_pxp crypto_simd cryptd
Jun 26 22:41:30 Unraid-1 kernel: mei_me rapl wmi_bmof intel_cstate nvme tpm_crb e1000e intel_uncore mei i2c_i801 i2c_smbus nvme_core i2c_core input_leds led_class joydev tpm_tis tpm_tis_core video tpm backlight wmi ahci libahci acpi_pad button acpi_tad intel_pch_thermal
Jun 26 22:41:30 Unraid-1 kernel: CPU: 0 PID: 56416 Comm: lsof Tainted: P U O 6.8.12-Unraid #3
Jun 26 22:41:30 Unraid-1 kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H470M-ITX/ac, BIOS L1.22 12/07/2020
Jun 26 22:41:30 Unraid-1 kernel: RIP: 0010:show_mark_fhandle+0x79/0xe8
Jun 26 22:41:30 Unraid-1 kernel: Code: ff 00 00 00 89 c1 74 04 85 c0 79 22 80 3d 0a 40 2c 01 00 75 5e 89 ce 48 c7 c7 4b 4a 27 82 c6 05 f8 3f 2c 01 01 e8 23 28 d8 ff <0f> 0b eb 45 89 44 24 0c 8b 44 24 04 48 89 ef 31 db 48 c7 c6 89 4a
Jun 26 22:41:30 Unraid-1 kernel: RSP: 0018:ffffc90004eafc30 EFLAGS: 00010282
Jun 26 22:41:30 Unraid-1 kernel: RAX: 0000000000000000 RBX: ffff8881006fb680 RCX: 0000000000000027
Jun 26 22:41:30 Unraid-1 kernel: RDX: 0000000082440510 RSI: ffffffff82258ed4 RDI: 00000000ffffffff
Jun 26 22:41:30 Unraid-1 kernel: RBP: ffff888107aa5b40 R08: 0000000000000000 R09: ffffffff82440510
Jun 26 22:41:30 Unraid-1 kernel: R10: 00007fffffffffff R11: 0000000000000000 R12: ffff888107aa5b40
Jun 26 22:41:30 Unraid-1 kernel: R13: ffff888107aa5b40 R14: ffffffff812f1e37 R15: ffff888102588c78
Jun 26 22:41:30 Unraid-1 kernel: FS: 000014b4fe7f5e40(0000) GS:ffff88883f600000(0000) knlGS:0000000000000000
Jun 26 22:41:30 Unraid-1 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 26 22:41:30 Unraid-1 kernel: CR2: 00000000004e2088 CR3: 0000000342d1e003 CR4: 00000000007706f0
Jun 26 22:41:30 Unraid-1 kernel: PKRU: 55555554
Jun 26 22:41:30 Unraid-1 kernel: Call Trace:
Jun 26 22:41:30 Unraid-1 kernel: <TASK>
Jun 26 22:41:30 Unraid-1 kernel: ? __warn+0x99/0x11a
Jun 26 22:41:30 Unraid-1 kernel: ? report_bug+0xdb/0x155
Jun 26 22:41:30 Unraid-1 kernel: ? show_mark_fhandle+0x79/0xe8
Jun 26 22:41:30 Unraid-1 kernel: ? handle_bug+0x3c/0x63
Jun 26 22:41:30 Unraid-1 kernel: ? exc_invalid_op+0x13/0x60
Jun 26 22:41:30 Unraid-1 kernel: ? asm_exc_invalid_op+0x16/0x20
Jun 26 22:41:30 Unraid-1 kernel: ? __pfx_inotify_fdinfo+0x10/0x10
Jun 26 22:41:30 Unraid-1 kernel: ? show_mark_fhandle+0x79/0xe8
Jun 26 22:41:30 Unraid-1 kernel: ? __pfx_inotify_fdinfo+0x10/0x10
Jun 26 22:41:30 Unraid-1 kernel: ? seq_vprintf+0x33/0x49
Jun 26 22:41:30 Unraid-1 kernel: ? seq_printf+0x53/0x6e
Jun 26 22:41:30 Unraid-1 kernel: ? preempt_latency_start+0x2b/0x46
Jun 26 22:41:30 Unraid-1 kernel: inotify_fdinfo+0x83/0xaa
Jun 26 22:41:30 Unraid-1 kernel: show_fdinfo.isra.0+0x63/0xab
Jun 26 22:41:30 Unraid-1 kernel: seq_show+0x151/0x172
Jun 26 22:41:30 Unraid-1 kernel: seq_read_iter+0x16e/0x353
Jun 26 22:41:30 Unraid-1 kernel: ? do_filp_open+0x8e/0xb8
Jun 26 22:41:30 Unraid-1 kernel: seq_read+0xe2/0x109
Jun 26 22:41:30 Unraid-1 kernel: vfs_read+0xa3/0x197
Jun 26 22:41:30 Unraid-1 kernel: ? __do_sys_newfstat+0x35/0x5c
Jun 26 22:41:30 Unraid-1 kernel: ksys_read+0x76/0xc2
Jun 26 22:41:30 Unraid-1 kernel: do_syscall_64+0x6c/0xdc
Jun 26 22:41:30 Unraid-1 kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
Jun 26 22:41:30 Unraid-1 kernel: RIP: 0033:0x14b4fea835cd
Jun 26 22:41:30 Unraid-1 kernel: Code: 41 48 0e 00 f7 d8 64 89 02 b8 ff ff ff ff eb bb 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 80 3d 59 cc 0e 00 00 74 17 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 5b c3 66 2e 0f 1f 84 00 00 00 00 00 48 83 ec
Jun 26 22:41:30 Unraid-1 kernel: RSP: 002b:00007ffdb23757d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
Jun 26 22:41:30 Unraid-1 kernel: RAX: ffffffffffffffda RBX: 000000000043f600 RCX: 000014b4fea835cd
Jun 26 22:41:30 Unraid-1 kernel: RDX: 0000000000000400 RSI: 0000000000449850 RDI: 0000000000000007
Jun 26 22:41:30 Unraid-1 kernel: RBP: 000014b4feb67230 R08: 0000000000000001 R09: 0000000000000000
Jun 26 22:41:30 Unraid-1 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000014b4feb670e0
Jun 26 22:41:30 Unraid-1 kernel: R13: 0000000000001000 R14: 0000000000000000 R15: 000000000043f600
Jun 26 22:41:30 Unraid-1 kernel: </TASK>
Jun 26 22:41:30 Unraid-1 kernel: ---[ end trace 0000000000000000 ]---
-
4 hours ago, Mainfrezzer said:
Anything above the basic "point Client to Server" or "point Client to Server Network" requires manual configuration.
Okay, and how can I do manual configuration without the GUI reverting my changes when I hit the "Apply" button?
-
1 hour ago, Mainfrezzer said:
For every client that's only supposed to see and talk to the server, remote access to server is the right choice.
That's what I did... just with an addition, so that 10.253.3.0/27 (which is .3.1 to .3.30) is also allowed.
1 hour ago, Mainfrezzer said:
For every client that's supposed to talk to the server and vpn clients, you're looking at a hub and spoke setting.
That is what I chose for the peer with the IP range from .3.2 to .3.30.
My "report" shouldn't be about a specific or complex setup.
It is about the changes not being applied when I hit the "Apply" button.
I am not able to set up "complex" setups on my own.
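For what it's worth, the usable host range of that /27 can be double-checked with a short Python snippet (stdlib `ipaddress` only):

```python
import ipaddress

# A /27 has 32 addresses; hosts() drops the network (.0) and broadcast (.31).
net = ipaddress.ip_network("10.253.3.0/27")
hosts = list(net.hosts())

print(hosts[0], "-", hosts[-1])  # 10.253.3.1 - 10.253.3.30
print(len(hosts))                # 30
```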
-
Okay, the case is:
10.253.3.31-10.253.3.100 should be connected to the server, but not see each other or any other clients on that VPN.
10.253.3.1-10.253.3.30 should be connected to the server and all other VPN clients.
So what would be the "right" preset for that case?
I think my choice is right, with the manual addition to "AllowedIPs", or how else should I handle that?
Btw, I manually added the "AllowedIPs" on both sides after I imported the config, and it works like it should.
I don't know if I am right, but those "presets" are for the IP tables and WebUI; they shouldn't remove manually added values, especially when those additions are working.
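To make the manual addition concrete, here is a hypothetical sketch of a WireGuard client config for this case. Everything here (keys, endpoint, the client's own address) is a placeholder; only the AllowedIPs values reflect the subnets discussed above:

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.253.3.40/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# The "remote access to server" preset would only allow the server itself;
# the /27 is the manual addition so this client can also reach 10.253.3.1-.30:
AllowedIPs = 10.253.3.1/32, 10.253.3.0/27
```

On the client side, AllowedIPs controls which destinations are routed through the tunnel, which is why adding the /27 there is enough to reach the hub-and-spoke peers.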
-
It worked as intended when I manually edited the client config.
The peer "kvm-isg" can only reach the server, but the peers in 10.253.3.1 - 10.253.3.30 are also allowed to communicate with it.
-
On 1/27/2024 at 12:28 PM, JorgeB said:
but LT is aware of the issue and it should be fixed soon.
The bug is still present in 6.12.9.
-
Changed Status to Open
-
Thank you @Squid.
So this thread can be closed?
-
7 minutes ago, Squid said:
Why would you want to have a container path that isn't mapped to a host path?
I didn't "want" it empty. I loaded that template from the CA and had no reason to use that path at the moment. I didn't notice it before, because the container was built with that empty path on a previous Unraid version.
9 minutes ago, Squid said:
Docker silently failed on .6 as it wouldn't create any container path. Now they're calling it an error instead of silently failing.
Okay, thank you for explaining that case to me.
9 minutes ago, JonathanM said:
What happens if you properly fill out the export path?
The same as when I deleted that path: it works.
9 minutes ago, JonathanM said:
I don't think 6.12.6 would have accepted it either.
It did... like Squid said before.
I reported it because I think I won't be the last one who runs into that "problem".
Just wanted to try to help.
-
Changed Status to Solved
[7.0.0-beta.2] Out of memory from nowhere
-
in Prereleases
Posted
@JorgeB
Okay thanks, I will try this.