  • [7-0-0-beta1] Some call traces


    Kilrah
    • Minor



    User Feedback

    Recommended Comments



    Any news on this? 

    I also got these:

    Jul 20 18:14:42 Unraid-Server kernel: ------------[ cut here ]------------
    Jul 20 18:14:42 Unraid-Server kernel: Can't encode file handler for inotify: 255
    Jul 20 18:14:42 Unraid-Server kernel: WARNING: CPU: 5 PID: 387498 at fs/notify/fdinfo.c:55 show_mark_fhandle+0x79/0xe8
    Jul 20 18:14:42 Unraid-Server kernel: Modules linked in: tls tcp_diag udp_diag inet_diag af_packet macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo ip6table_nat iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter dm_crypt dm_mod md_mod zfs(PO) spl(O) nct6775 nct6775_core hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs bridge stp llc intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel i915 kvm iosf_mbi drm_buddy ttm i2c_algo_bit drm_display_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel drm_kms_helper sha512_ssse3 sha256_ssse3 sha1_ssse3 aesni_intel crypto_simd cryptd drm mei_pxp mei_hdcp rapl wmi_bmof intel_wmi_thunderbolt mxm_wmi intel_cstate intel_uncore mpt3sas nvme intel_gtt mei_me e1000e i2c_i801 agpgart i2c_smbus input_leds nvme_core ahci mei joydev led_class
    Jul 20 18:14:42 Unraid-Server kernel: raid_class libahci i2c_core scsi_transport_sas intel_pch_thermal tpm_crb video tpm_tis backlight tpm_tis_core tpm wmi button acpi_pad acpi_tad
    Jul 20 18:14:42 Unraid-Server kernel: CPU: 5 PID: 387498 Comm: lsof Tainted: P     U     O       6.8.12-Unraid #3
    Jul 20 18:14:42 Unraid-Server kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z390 Taichi Ultimate, BIOS P4.30 11/26/2019
    Jul 20 18:14:42 Unraid-Server kernel: RIP: 0010:show_mark_fhandle+0x79/0xe8
    Jul 20 18:14:42 Unraid-Server kernel: Code: ff 00 00 00 89 c1 74 04 85 c0 79 22 80 3d 0a 40 2c 01 00 75 5e 89 ce 48 c7 c7 4b 4a 27 82 c6 05 f8 3f 2c 01 01 e8 23 28 d8 ff <0f> 0b eb 45 89 44 24 0c 8b 44 24 04 48 89 ef 31 db 48 c7 c6 89 4a
    Jul 20 18:14:42 Unraid-Server kernel: RSP: 0018:ffffc9000eaffc30 EFLAGS: 00010282
    Jul 20 18:14:42 Unraid-Server kernel: RAX: 0000000000000000 RBX: ffff8881093e33c8 RCX: 0000000000000027
    Jul 20 18:14:42 Unraid-Server kernel: RDX: 0000000082440510 RSI: ffffffff82258ed4 RDI: 00000000ffffffff
    Jul 20 18:14:42 Unraid-Server kernel: RBP: ffff888101b1b7f8 R08: 0000000000000000 R09: ffffffff82440510
    Jul 20 18:14:42 Unraid-Server kernel: R10: 00007fffffffffff R11: 0000000000000000 R12: ffff888101b1b7f8
    Jul 20 18:14:42 Unraid-Server kernel: R13: ffff888101b1b7f8 R14: ffffffff812f1e37 R15: ffff88810c061478
    Jul 20 18:14:42 Unraid-Server kernel: FS:  000014d2da6a8f00(0000) GS:ffff88907ef40000(0000) knlGS:0000000000000000
    Jul 20 18:14:42 Unraid-Server kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Jul 20 18:14:42 Unraid-Server kernel: CR2: 00000000004c4198 CR3: 00000001a2c3a002 CR4: 00000000003706f0
    Jul 20 18:14:42 Unraid-Server kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Jul 20 18:14:42 Unraid-Server kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Jul 20 18:14:42 Unraid-Server kernel: Call Trace:
    Jul 20 18:14:42 Unraid-Server kernel: <TASK>
    Jul 20 18:14:42 Unraid-Server kernel: ? __warn+0x99/0x11a
    Jul 20 18:14:42 Unraid-Server kernel: ? report_bug+0xdb/0x155
    Jul 20 18:14:42 Unraid-Server kernel: ? show_mark_fhandle+0x79/0xe8
    Jul 20 18:14:42 Unraid-Server kernel: ? handle_bug+0x3c/0x63
    Jul 20 18:14:42 Unraid-Server kernel: ? exc_invalid_op+0x13/0x60
    Jul 20 18:14:42 Unraid-Server kernel: ? asm_exc_invalid_op+0x16/0x20
    Jul 20 18:14:42 Unraid-Server kernel: ? __pfx_inotify_fdinfo+0x10/0x10
    Jul 20 18:14:42 Unraid-Server kernel: ? show_mark_fhandle+0x79/0xe8
    Jul 20 18:14:42 Unraid-Server kernel: ? __pfx_inotify_fdinfo+0x10/0x10
    Jul 20 18:14:42 Unraid-Server kernel: ? seq_vprintf+0x33/0x49
    Jul 20 18:14:42 Unraid-Server kernel: ? seq_printf+0x53/0x6e
    Jul 20 18:14:42 Unraid-Server kernel: ? preempt_latency_start+0x2b/0x46
    Jul 20 18:14:42 Unraid-Server kernel: inotify_fdinfo+0x83/0xaa
    Jul 20 18:14:42 Unraid-Server kernel: show_fdinfo.isra.0+0x63/0xab
    Jul 20 18:14:42 Unraid-Server kernel: seq_show+0x151/0x172
    Jul 20 18:14:42 Unraid-Server kernel: seq_read_iter+0x16e/0x353
    Jul 20 18:14:42 Unraid-Server kernel: ? do_filp_open+0x8e/0xb8
    Jul 20 18:14:42 Unraid-Server kernel: seq_read+0xe2/0x109
    Jul 20 18:14:42 Unraid-Server kernel: vfs_read+0xa3/0x197
    Jul 20 18:14:42 Unraid-Server kernel: ? __do_sys_newfstat+0x35/0x5c
    Jul 20 18:14:42 Unraid-Server kernel: ksys_read+0x76/0xc2
    Jul 20 18:14:42 Unraid-Server kernel: do_syscall_64+0x6c/0xdc
    Jul 20 18:14:42 Unraid-Server kernel: entry_SYSCALL_64_after_hwframe+0x78/0x80
    Jul 20 18:14:42 Unraid-Server kernel: RIP: 0033:0x14d2da9375cd
    Jul 20 18:14:42 Unraid-Server kernel: Code: 41 48 0e 00 f7 d8 64 89 02 b8 ff ff ff ff eb bb 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 80 3d 59 cc 0e 00 00 74 17 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 5b c3 66 2e 0f 1f 84 00 00 00 00 00 48 83 ec
    Jul 20 18:14:42 Unraid-Server kernel: RSP: 002b:00007ffe5fef6378 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
    Jul 20 18:14:42 Unraid-Server kernel: RAX: ffffffffffffffda RBX: 000000000043f600 RCX: 000014d2da9375cd
    Jul 20 18:14:42 Unraid-Server kernel: RDX: 0000000000000400 RSI: 0000000000448ae0 RDI: 0000000000000007
    Jul 20 18:14:42 Unraid-Server kernel: RBP: 000014d2daa1b230 R08: 0000000000000001 R09: 0000000000000000
    Jul 20 18:14:42 Unraid-Server kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000014d2daa1b0e0
    Jul 20 18:14:42 Unraid-Server kernel: R13: 0000000000001000 R14: 0000000000000000 R15: 000000000043f600
    Jul 20 18:14:42 Unraid-Server kernel: </TASK>
    Jul 20 18:14:42 Unraid-Server kernel: ---[ end trace 0000000000000000 ]---

     

    while trying to use tdarr QSV (i915) on a 9900K

    Edited by NewDisplayName
    Link to comment
    1 hour ago, NewDisplayName said:

    while trying to use tdarr QSV (i915) on a 9900K

    Nothing to do with this, possibly a plugin. I have a server running in safe mode for over a week with no call trace so far; in normal boot it usually happens in 1 to 3 days.

    Link to comment
    27 minutes ago, JorgeB said:

    Nothing to do with this, possibly a plugin. I have a server running in safe mode for over a week with no call trace so far; in normal boot it usually happens in 1 to 3 days.

    I believe the File Activity plugin uses this. Has something changed in the kernel to upset the inotify binary?

    Link to comment

    I got one straight away after updating to beta2, but don't get them in safe mode either so most likely a plugin.

    Link to comment
    6 minutes ago, NewDisplayName said:

    Is /dev/dri (even if not used) forwarded to any VM or container?

    Nope, and as mentioned, this issue has nothing to do with iGPUs or tdarr.

    Link to comment

    I wonder how you can know that. I'm pretty sure it happened and then tdarr failed. I also saw it while using Plex...

    Link to comment
    11 minutes ago, NewDisplayName said:

    I wonder how you can know that. I'm pretty sure it happened and then tdarr failed. I also saw it while using Plex...

    inotify is monitoring for file system changes; each of those apps would be changing files, etc.
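
    For anyone wanting to see what that looks like on their own box, here is a minimal sketch (Linux-only, plain ctypes against libc; the IN_CREATE mask 0x100 and the temp directory are just illustrative). It creates an inotify watch the way these apps do, then reads the same /proc fdinfo entry that lsof was parsing when the show_mark_fhandle warning above fired:

```python
import ctypes
import os
import tempfile

# Linux-only sketch: create an inotify instance and add one watch,
# the same kind of watch Plex/tdarr/File Activity would hold open.
libc = ctypes.CDLL(None, use_errno=True)
fd = libc.inotify_init()
assert fd >= 0, os.strerror(ctypes.get_errno())

watch_dir = tempfile.mkdtemp()            # illustrative path to watch
wd = libc.inotify_add_watch(fd, watch_dir.encode(), 0x100)  # 0x100 = IN_CREATE
assert wd >= 0, os.strerror(ctypes.get_errno())

# This is the /proc file lsof reads; the kernel's show_mark_fhandle()
# formats the "inotify wd:... fhandle-bytes:..." lines seen here.
with open(f"/proc/self/fdinfo/{fd}") as f:
    fdinfo = f.read()
print(fdinfo)

os.close(fd)
os.rmdir(watch_dir)
</imports>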

    Link to comment
    6 minutes ago, SimonF said:

    inotify is monitoring for file system changes; each of those apps would be changing files, etc.

    I know. As Linux handles devices "like files", it might have something to do with the iGPU going crazy and crashing, probably creating more inotify events? idk.

    Link to comment
    1 hour ago, SimonF said:

    I believe the File Activity plugin uses this. Has something changed in the kernel to upset the inotify binary?

    I'm not quite sure about that, Kilrah doesn't have that plugin.

    Link to comment
    4 minutes ago, Mainfrezzer said:

    I'm not quite sure about that, Kilrah doesn't have that plugin.

    System cache or file integrity?

    Link to comment

    I don't think it's a plugin now. I haven't had a call trace in safe mode, but I think it sometimes takes longer, and it's not practical to keep the server in safe mode for weeks. I've basically ruled out every plugin, so I think it's another issue, but it still looks harmless.

    Link to comment

    For me, it feels like it has something to do with Docker?

    When I start my server, the call trace doesn't appear.

    After I start Docker, the call traces come immediately once all containers have started.

    Link to comment
    1 hour ago, sonic6 said:

    For me, it feels like it has something to do with Docker?

    I believe I tested without docker and still got the call trace, but not 100% sure, I'll retest.

    Link to comment
    30 minutes ago, JorgeB said:

    I believe I tested without docker and still got the call trace, but not 100% sure, I'll retest.

    Maybe something triggered by Docker, or by a specific container, happens faster than without any containers running?
    But okay... hard to find.

    Link to comment
    On 7/20/2024 at 8:10 PM, JorgeB said:

    in normal boot it usually happens in 1 to 3 days.

    Maybe something to do with RAM?

     

    I had just 32GB RAM until I upgraded two days ago. On 32GB the call trace came instantly.

     

    Here I marked the call trace at 15:28. After a reboot this morning there wasn't any call trace yet.

    [screenshot of the syslog with the call trace marked]

    Link to comment

    Hmm, I wonder, did anyone use "mount -o remount,size=30G /dev/shm" or similar? (I use it at first startup only.)

     

    I have 64GB and only 30% is used while transcoding.

    Edited by NewDisplayName
    Link to comment

    You can, but unrelated, and why?

    /dev/shm would be 32GB by default if you have 64, so what's the point of resizing it to almost the same?
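
    For context, a tmpfs mount like /dev/shm defaults to half of physical RAM, which is the point above. A quick check (standard Linux interfaces only, nothing Unraid-specific) comparing the two on the running machine:

```python
import os

# Physical RAM from sysconf (page size * page count).
ram_bytes = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

# Size of the /dev/shm tmpfs mount; with the default mount options
# this should be about half of ram_bytes unless it was remounted.
shm = os.statvfs("/dev/shm")
shm_bytes = shm.f_blocks * shm.f_frsize

print(f"RAM: {ram_bytes / 2**30:.1f} GiB, /dev/shm: {shm_bytes / 2**30:.1f} GiB")
```

So on a 64GB box, remounting /dev/shm to 30G actually shrinks it slightly rather than growing it.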

     

    Edited by Kilrah
    Link to comment



