  • [7-0-0-beta1] Some call traces


    Kilrah
    • Minor



    User Feedback

    Recommended Comments



    Same for me:

     

    Jun 26 22:41:30 Unraid-1 kernel: ------------[ cut here ]------------
    Jun 26 22:41:30 Unraid-1 kernel: Can't encode file handler for inotify: 255
    Jun 26 22:41:30 Unraid-1 kernel: WARNING: CPU: 0 PID: 56416 at fs/notify/fdinfo.c:55 show_mark_fhandle+0x79/0xe8
    Jun 26 22:41:30 Unraid-1 kernel: Modules linked in: tun nft_chain_nat xt_owner nft_compat nf_tables xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat xt_addrtype br_netfilter bridge xt_MASQUERADE ip6table_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod zfs(PO) bluetooth ecdh_generic ecc spl(O) tcp_diag inet_diag af_packet kvmgt mdev i915 drm_buddy ttm i2c_algo_bit drm_display_helper drm_kms_helper drm intel_gtt agpgart nct6775 nct6775_core hwmon_vid wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs 8021q garp mrp stp llc macvtap macvlan tap intel_rapl_common iosf_mbi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 sha256_ssse3 sha1_ssse3 mei_hdcp aesni_intel mei_pxp crypto_simd cryptd
    Jun 26 22:41:30 Unraid-1 kernel: mei_me rapl wmi_bmof intel_cstate nvme tpm_crb e1000e intel_uncore mei i2c_i801 i2c_smbus nvme_core i2c_core input_leds led_class joydev tpm_tis tpm_tis_core video tpm backlight wmi ahci libahci acpi_pad button acpi_tad intel_pch_thermal
    Jun 26 22:41:30 Unraid-1 kernel: CPU: 0 PID: 56416 Comm: lsof Tainted: P     U     O       6.8.12-Unraid #3
    Jun 26 22:41:30 Unraid-1 kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H470M-ITX/ac, BIOS L1.22 12/07/2020
    Jun 26 22:41:30 Unraid-1 kernel: RIP: 0010:show_mark_fhandle+0x79/0xe8
    Jun 26 22:41:30 Unraid-1 kernel: Code: ff 00 00 00 89 c1 74 04 85 c0 79 22 80 3d 0a 40 2c 01 00 75 5e 89 ce 48 c7 c7 4b 4a 27 82 c6 05 f8 3f 2c 01 01 e8 23 28 d8 ff <0f> 0b eb 45 89 44 24 0c 8b 44 24 04 48 89 ef 31 db 48 c7 c6 89 4a
    Jun 26 22:41:30 Unraid-1 kernel: RSP: 0018:ffffc90004eafc30 EFLAGS: 00010282
    Jun 26 22:41:30 Unraid-1 kernel: RAX: 0000000000000000 RBX: ffff8881006fb680 RCX: 0000000000000027
    Jun 26 22:41:30 Unraid-1 kernel: RDX: 0000000082440510 RSI: ffffffff82258ed4 RDI: 00000000ffffffff
    Jun 26 22:41:30 Unraid-1 kernel: RBP: ffff888107aa5b40 R08: 0000000000000000 R09: ffffffff82440510
    Jun 26 22:41:30 Unraid-1 kernel: R10: 00007fffffffffff R11: 0000000000000000 R12: ffff888107aa5b40
    Jun 26 22:41:30 Unraid-1 kernel: R13: ffff888107aa5b40 R14: ffffffff812f1e37 R15: ffff888102588c78
    Jun 26 22:41:30 Unraid-1 kernel: FS:  000014b4fe7f5e40(0000) GS:ffff88883f600000(0000) knlGS:0000000000000000
    Jun 26 22:41:30 Unraid-1 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Jun 26 22:41:30 Unraid-1 kernel: CR2: 00000000004e2088 CR3: 0000000342d1e003 CR4: 00000000007706f0
    Jun 26 22:41:30 Unraid-1 kernel: PKRU: 55555554
    Jun 26 22:41:30 Unraid-1 kernel: Call Trace:
    Jun 26 22:41:30 Unraid-1 kernel: <TASK>
    Jun 26 22:41:30 Unraid-1 kernel: ? __warn+0x99/0x11a
    Jun 26 22:41:30 Unraid-1 kernel: ? report_bug+0xdb/0x155
    Jun 26 22:41:30 Unraid-1 kernel: ? show_mark_fhandle+0x79/0xe8
    Jun 26 22:41:30 Unraid-1 kernel: ? handle_bug+0x3c/0x63
    Jun 26 22:41:30 Unraid-1 kernel: ? exc_invalid_op+0x13/0x60
    Jun 26 22:41:30 Unraid-1 kernel: ? asm_exc_invalid_op+0x16/0x20
    Jun 26 22:41:30 Unraid-1 kernel: ? __pfx_inotify_fdinfo+0x10/0x10
    Jun 26 22:41:30 Unraid-1 kernel: ? show_mark_fhandle+0x79/0xe8
    Jun 26 22:41:30 Unraid-1 kernel: ? __pfx_inotify_fdinfo+0x10/0x10
    Jun 26 22:41:30 Unraid-1 kernel: ? seq_vprintf+0x33/0x49
    Jun 26 22:41:30 Unraid-1 kernel: ? seq_printf+0x53/0x6e
    Jun 26 22:41:30 Unraid-1 kernel: ? preempt_latency_start+0x2b/0x46
    Jun 26 22:41:30 Unraid-1 kernel: inotify_fdinfo+0x83/0xaa
    Jun 26 22:41:30 Unraid-1 kernel: show_fdinfo.isra.0+0x63/0xab
    Jun 26 22:41:30 Unraid-1 kernel: seq_show+0x151/0x172
    Jun 26 22:41:30 Unraid-1 kernel: seq_read_iter+0x16e/0x353
    Jun 26 22:41:30 Unraid-1 kernel: ? do_filp_open+0x8e/0xb8
    Jun 26 22:41:30 Unraid-1 kernel: seq_read+0xe2/0x109
    Jun 26 22:41:30 Unraid-1 kernel: vfs_read+0xa3/0x197
    Jun 26 22:41:30 Unraid-1 kernel: ? __do_sys_newfstat+0x35/0x5c
    Jun 26 22:41:30 Unraid-1 kernel: ksys_read+0x76/0xc2
    Jun 26 22:41:30 Unraid-1 kernel: do_syscall_64+0x6c/0xdc
    Jun 26 22:41:30 Unraid-1 kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
    Jun 26 22:41:30 Unraid-1 kernel: RIP: 0033:0x14b4fea835cd
    Jun 26 22:41:30 Unraid-1 kernel: Code: 41 48 0e 00 f7 d8 64 89 02 b8 ff ff ff ff eb bb 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 80 3d 59 cc 0e 00 00 74 17 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 5b c3 66 2e 0f 1f 84 00 00 00 00 00 48 83 ec
    Jun 26 22:41:30 Unraid-1 kernel: RSP: 002b:00007ffdb23757d8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
    Jun 26 22:41:30 Unraid-1 kernel: RAX: ffffffffffffffda RBX: 000000000043f600 RCX: 000014b4fea835cd
    Jun 26 22:41:30 Unraid-1 kernel: RDX: 0000000000000400 RSI: 0000000000449850 RDI: 0000000000000007
    Jun 26 22:41:30 Unraid-1 kernel: RBP: 000014b4feb67230 R08: 0000000000000001 R09: 0000000000000000
    Jun 26 22:41:30 Unraid-1 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000014b4feb670e0
    Jun 26 22:41:30 Unraid-1 kernel: R13: 0000000000001000 R14: 0000000000000000 R15: 000000000043f600
    Jun 26 22:41:30 Unraid-1 kernel: </TASK>
    Jun 26 22:41:30 Unraid-1 kernel: ---[ end trace 0000000000000000 ]---

     

    unraid-1-diagnostics-20240626-2252.zip
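
    For reference, the warning above is emitted by show_mark_fhandle() while lsof (the Comm shown in the trace) reads /proc/<pid>/fdinfo for an inotify file descriptor. Below is a minimal Python sketch of that same read path, useful to see which processes hold inotify watches; the anon_inode:inotify link target and the "inotify wd:" fdinfo lines are standard procfs output, but the script itself is only an illustration, not part of the original report.

    # Minimal sketch (illustration only): list the inotify descriptors that
    # tools like `lsof` read via /proc/<pid>/fdinfo. Reading these fdinfo
    # files is what makes the kernel walk each watch in show_mark_fhandle(),
    # the function that issues the WARNING above.
    import os

    def inotify_fdinfo():
        for pid in filter(str.isdigit, os.listdir("/proc")):
            fd_dir = f"/proc/{pid}/fd"
            try:
                fds = os.listdir(fd_dir)
            except OSError:
                continue  # process exited or not readable
            for fd in fds:
                try:
                    if os.readlink(f"{fd_dir}/{fd}") != "anon_inode:inotify":
                        continue
                    with open(f"/proc/{pid}/fdinfo/{fd}") as f:
                        yield pid, fd, f.read()
                except OSError:
                    continue

    if __name__ == "__main__":
        for pid, fd, info in inotify_fdinfo():
            # each watch appears as a line starting with "inotify wd:"
            print(f"pid {pid} fd {fd}: {info.count('inotify wd:')} watches")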

    Link to comment

    I saw that call trace a few times during the beta, on one server only, just once per boot, and without any apparent issues. Please try rebooting in safe mode to rule out any plugins; I see there are some common ones between these two reports and my server.

     

    Edit: for me, with that server the call trace usually takes a few days to appear; this last time it took 6 days.

    Link to comment

    I think it may be related to a plugin, but my affected server takes days to show the first call trace, so if anyone else gets the call trace immediately after a reboot, please try to confirm whether it's a plugin.

     

    One possible suspect is the Tips and Tweaks plugin, just because it is installed in all the diags posted and on my server, and it does have settings related to inotify.
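
    If you want to see what those settings actually changed, the inotify limits are plain sysctls under /proc/sys/fs/inotify. A minimal sketch to print them (the sysctl names are the standard kernel ones; nothing here is taken from the plugin itself):

    # Minimal sketch: print the inotify limits a tweaks plugin would
    # typically adjust. The names are standard kernel sysctls.
    from pathlib import Path

    INOTIFY = Path("/proc/sys/fs/inotify")
    for name in ("max_user_instances", "max_user_watches", "max_queued_events"):
        print(f"fs.inotify.{name} = {(INOTIFY / name).read_text().strip()}")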

    Link to comment
    11 minutes ago, JorgeB said:

    Tips and Tweaks plugin

    That is one of the most widely installed plugins; there must be many more servers with these call traces.

    Link to comment
    32 minutes ago, sonic6 said:

    there must be many more servers with these call traces

    True, but if some are like mine, i.e. it takes a week to show the call trace, it won't have happened to most yet, and it may not cause the call traces for everyone. I don't have a concrete reason to suspect that plugin though, it's mostly a wild guess based on the diags posted above.

    Link to comment
    33 minutes ago, sonic6 said:

    Just a short test, but the call traces only appear when I start the docker service.

    Did you start docker when you booted in safe mode? If it was docker related it should do the same.

     

     

    Link to comment

    Seems VM related here; the first start or stop of a VM after a reboot may trigger it. I thought I could reproduce it quite consistently, but no: I didn't see it in safe mode, and now I can't reproduce it in normal mode anymore either.

    Link to comment
    11 minutes ago, Kilrah said:

    Seems VM related here

    The server where it is happening to me doesn't have any VMs (the VM service is disabled), so possibly not that, unless there are multiple causes.

     

     

    Link to comment
    10 hours ago, Kilrah said:

    Seems VM related here

    Same for me, the VM service is disabled on my machine.

     

    10 hours ago, JorgeB said:

    Did you start docker when you booted in safe mode? If it was docker related it should do the same.

    Good point... if I remember right, I started the docker service at the end, but there wasn't a call trace.

    Link to comment

    Mover plugin maybe?

     

    I get those call traces, too.... Everything is working except the docker tab.

     

    But before, I had a separate NIC dedicated to docker. Maybe I'll go back to that configuration...

    Link to comment
    14 minutes ago, enJOyIT said:

    Mover plugin maybe?

    Can't be, since I don't use it. It also can't be the Tips and Tweaks plugin, since I don't have it on one of my servers. I'm not even sure the cause is a plugin; it's difficult to test for me because it takes days after a reboot to get the call trace, and then it doesn't repeat.

    Link to comment
    enJOyIT

    Posted (edited)

    My network and docker settings:

     

    [Screenshots of the network and docker settings attached]

     

    That should be the right config, like mentioned in the how-to? I'm asking because I always had a separate docker network before, but I wanted host access to my VMs and that's the only way to get it, right?

    Edited by enJOyIT
    Link to comment

    I'm trying to confirm whether this is plugin related or not by running one of my servers in safe mode, but the call trace can take a few days; I'll post an update when I know the answer.

    Link to comment

    Strange... I reverted back to a dedicated NIC for my docker containers, but I still got the call trace?!

     

    Wtf?

     

    My current settings:

    [Screenshots of the current network and docker settings attached]

     

    And all of my containers are pointing to eth1. Some are using "Bridge" as they don't need an extra IP.

    [Screenshot attached]

     

    This configuration worked without issues in 6.12.10.

     

    What am I missing? A dedicated NIC should be working?!?

     

    This is so frustrating.... 

     

    Edited by enJOyIT
    Link to comment
    3 hours ago, enJOyIT said:

    Strange... I reverted back to a dedicated NIC for my docker containers, but I still got the call trace?!

    Not sure what you mean; this thread is about the inotify call traces, it has nothing to do with docker. Maybe you are thinking of the macvlan call traces?
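
    If the trace text is gone, one way to tell the two apart is to search the syslog for the warning lines: the inotify traces in this thread mention show_mark_fhandle and "Can't encode file handler for inotify", while the macvlan ones reference macvlan. A minimal sketch, assuming the default /var/log/syslog location:

    # Minimal sketch: print syslog lines that indicate which kind of call
    # trace was logged (inotify vs macvlan). /var/log/syslog is assumed.
    markers = ("show_mark_fhandle", "Can't encode file handler", "macvlan")

    with open("/var/log/syslog", errors="replace") as log:
        for line in log:
            if any(m in line for m in markers):
                print(line.rstrip())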

    Link to comment

    Sorry, I only read about call traces and didn't know there are other call trace issues... yes, I'm talking about the macvlan call traces.

     

    Edit: I thought it was related to the macvlan call traces, but looking closer (I didn't take a screenshot of the call trace message) I think my issue is about the inotify call traces, too. 🙂

    Edited by enJOyIT
    Link to comment
    1 hour ago, enJOyIT said:

    I think my issue is about the inotify call traces, too.

    Those are being investigated, but they appear to be harmless.

    Link to comment

    Ok, then I really have another issue, because when the call traces happen, the docker page can no longer be loaded and stopping the array/docker is impossible; I have to reboot the server via the physical power/reset button.

     

    But the docker/VMs are still working.

    Link to comment
    40 minutes ago, enJOyIT said:

    Ok, then I really have another issue, because when the call traces happen, the docker page can no longer be loaded and stopping the array/docker is impossible; I have to reboot the server via the physical power/reset button.

     

    I don't see any diagnostics from you, but if you are using a docker folder with zfs, it could be this issue:

    https://forums.unraid.net/bug-reports/prereleases/700-beta1-kernel-bug-r3068/?do=findComment&comment=28812

     

    Link to comment
    On 7/11/2024 at 7:11 PM, JorgeB said:

     

    I don't see any diagnostics from you, but if you are using a docker folder with zfs, it could be this issue:

    https://forums.unraid.net/bug-reports/prereleases/700-beta1-kernel-bug-r3068/?do=findComment&comment=28812

     

     

    Ah, I know I read that previously and still decided it would be smart to move my docker folder to my new ZFS cache setup... was wondering why docker was flaky. I'll move back to an image file for now.

    Link to comment





