• Crashes since updating to v6.11.x for qBittorrent and Deluge users


    JorgeB
    • Closed

EDIT: the issue was traced to libtorrent 2.x; it's not an Unraid problem. More info in this post:

     

    https://forums.unraid.net/bug-reports/stable-releases/crashes-since-updating-to-v611x-for-qbittorrent-and-deluge-users-r2153/?do=findComment&comment=21671

     

     

    Original Post:

     

I'm creating this to better track an issue that some users have been reporting where Unraid started crashing after updating to v6.11.x (it happens with both 6.11.0 and 6.11.1). There's a very similar call trace logged in all cases, e.g.:

     

    Oct 12 04:18:27 zaBOX kernel: BUG: kernel NULL pointer dereference, address: 00000000000000b6
    Oct 12 04:18:27 zaBOX kernel: #PF: supervisor read access in kernel mode
    Oct 12 04:18:27 zaBOX kernel: #PF: error_code(0x0000) - not-present page
    Oct 12 04:18:27 zaBOX kernel: PGD 0 P4D 0
    Oct 12 04:18:27 zaBOX kernel: Oops: 0000 [#1] PREEMPT SMP PTI
    Oct 12 04:18:27 zaBOX kernel: CPU: 4 PID: 28596 Comm: Disk Tainted: P     U  W  O      5.19.14-Unraid #1
    Oct 12 04:18:27 zaBOX kernel: Hardware name: Gigabyte Technology Co., Ltd. Z390 AORUS PRO WIFI/Z390 AORUS PRO WIFI-CF, BIOS F12 11/05/2021
    Oct 12 04:18:27 zaBOX kernel: RIP: 0010:folio_try_get_rcu+0x0/0x21
    Oct 12 04:18:27 zaBOX kernel: Code: e8 8e 61 63 00 48 8b 84 24 80 00 00 00 65 48 2b 04 25 28 00 00 00 74 05 e8 9e 9b 64 00 48 81 c4 88 00 00 00 5b c3 cc cc cc cc <8b> 57 34 85 d2 74 10 8d 4a 01 89 d0 f0 0f b1 4f 34 74 04 89 c2 eb
    Oct 12 04:18:27 zaBOX kernel: RSP: 0000:ffffc900070dbcc0 EFLAGS: 00010246
    Oct 12 04:18:27 zaBOX kernel: RAX: 0000000000000082 RBX: 0000000000000082 RCX: 0000000000000082
    Oct 12 04:18:27 zaBOX kernel: RDX: 0000000000000001 RSI: ffff888757426fe8 RDI: 0000000000000082
    Oct 12 04:18:27 zaBOX kernel: RBP: 0000000000000000 R08: 0000000000000028 R09: ffffc900070dbcd0
    Oct 12 04:18:27 zaBOX kernel: R10: ffffc900070dbcd0 R11: ffffc900070dbd48 R12: 0000000000000000
    Oct 12 04:18:27 zaBOX kernel: R13: ffff88824f95d138 R14: 000000000007292c R15: ffff88824f95d140
    Oct 12 04:18:27 zaBOX kernel: FS:  000014ed38204b38(0000) GS:ffff8888a0500000(0000) knlGS:0000000000000000
    Oct 12 04:18:27 zaBOX kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Oct 12 04:18:27 zaBOX kernel: CR2: 00000000000000b6 CR3: 0000000209854005 CR4: 00000000003706e0
    Oct 12 04:18:27 zaBOX kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Oct 12 04:18:27 zaBOX kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Oct 12 04:18:27 zaBOX kernel: Call Trace:
    Oct 12 04:18:27 zaBOX kernel: <TASK>
    Oct 12 04:18:27 zaBOX kernel: __filemap_get_folio+0x98/0x1ff
    Oct 12 04:18:27 zaBOX kernel: ? _raw_spin_unlock_irqrestore+0x24/0x3a
    Oct 12 04:18:27 zaBOX kernel: filemap_fault+0x6e/0x524
    Oct 12 04:18:27 zaBOX kernel: __do_fault+0x2d/0x6e
    Oct 12 04:18:27 zaBOX kernel: __handle_mm_fault+0x9a5/0xc7d
    Oct 12 04:18:27 zaBOX kernel: handle_mm_fault+0x113/0x1d7
    Oct 12 04:18:27 zaBOX kernel: do_user_addr_fault+0x36a/0x514
    Oct 12 04:18:27 zaBOX kernel: exc_page_fault+0xfc/0x11e
    Oct 12 04:18:27 zaBOX kernel: asm_exc_page_fault+0x22/0x30
    Oct 12 04:18:27 zaBOX kernel: RIP: 0033:0x14ed3a0ae7b5
    Oct 12 04:18:27 zaBOX kernel: Code: 8b 48 08 48 8b 32 48 8b 00 48 39 f0 73 09 48 8d 14 08 48 39 d6 eb 0c 48 39 c6 73 0b 48 8d 14 0e 48 39 d0 73 02 0f 0b 48 89 c7 <f3> a4 66 48 8d 3d 59 b7 22 00 66 66 48 e8 d9 d8 f6 ff 48 89 28 48
    Oct 12 04:18:27 zaBOX kernel: RSP: 002b:000014ed38203960 EFLAGS: 00010206
    Oct 12 04:18:27 zaBOX kernel: RAX: 000014ed371aa160 RBX: 000014ed38203ad0 RCX: 0000000000004000
    Oct 12 04:18:27 zaBOX kernel: RDX: 000014c036530000 RSI: 000014c03652c000 RDI: 000014ed371aa160
    Oct 12 04:18:27 zaBOX kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 000014ed38203778
    Oct 12 04:18:27 zaBOX kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
    Oct 12 04:18:27 zaBOX kernel: R13: 000014ed38203b40 R14: 000014ed384fe940 R15: 000014ed38203ac0
    Oct 12 04:18:27 zaBOX kernel: </TASK>
    Oct 12 04:18:27 zaBOX kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle vhost_net vhost vhost_iotlb tap tun veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xfs md_mod kvmgt mdev i915 iosf_mbi drm_buddy i2c_algo_bit ttm drm_display_helper intel_gtt agpgart hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables af_packet 8021q garp mrp bridge stp llc bonding tls ipv6 nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) x86_pkg_temp_thermal intel_powerclamp drm_kms_helper btusb btrtl i2c_i801 btbcm coretemp gigabyte_wmi wmi_bmof intel_wmi_thunderbolt mxm_wmi kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd
    Oct 12 04:18:27 zaBOX kernel: btintel rapl intel_cstate intel_uncore e1000e i2c_smbus bluetooth drm nvme nvme_core ahci i2c_core libahci ecdh_generic ecc syscopyarea sysfillrect input_leds sysimgblt led_class joydev nzxt_kraken2 intel_pch_thermal fb_sys_fops thermal fan video tpm_crb wmi tpm_tis backlight tpm_tis_core tpm acpi_pad button unix
    Oct 12 04:18:27 zaBOX kernel: CR2: 00000000000000b6
    Oct 12 04:18:27 zaBOX kernel: ---[ end trace 0000000000000000 ]---

     

    Another example with very different hardware:

    Oct 11 21:32:08 Impulse kernel: BUG: kernel NULL pointer dereference, address: 0000000000000056
    Oct 11 21:32:08 Impulse kernel: #PF: supervisor read access in kernel mode
    Oct 11 21:32:08 Impulse kernel: #PF: error_code(0x0000) - not-present page
    Oct 11 21:32:08 Impulse kernel: PGD 0 P4D 0
    Oct 11 21:32:08 Impulse kernel: Oops: 0000 [#1] PREEMPT SMP NOPTI
    Oct 11 21:32:08 Impulse kernel: CPU: 1 PID: 5236 Comm: Disk Not tainted 5.19.14-Unraid #1
    Oct 11 21:32:08 Impulse kernel: Hardware name: System manufacturer System Product Name/ROG STRIX B450-F GAMING II, BIOS 4301 03/04/2021
    Oct 11 21:32:08 Impulse kernel: RIP: 0010:folio_try_get_rcu+0x0/0x21
    Oct 11 21:32:08 Impulse kernel: Code: e8 8e 61 63 00 48 8b 84 24 80 00 00 00 65 48 2b 04 25 28 00 00 00 74 05 e8 9e 9b 64 00 48 81 c4 88 00 00 00 5b e9 cc 5f 86 00 <8b> 57 34 85 d2 74 10 8d 4a 01 89 d0 f0 0f b1 4f 34 74 04 89 c2 eb
    Oct 11 21:32:08 Impulse kernel: RSP: 0000:ffffc900026ffcc0 EFLAGS: 00010246
    Oct 11 21:32:08 Impulse kernel: RAX: 0000000000000022 RBX: 0000000000000022 RCX: 0000000000000022
    Oct 11 21:32:08 Impulse kernel: RDX: 0000000000000001 RSI: ffff88801e450b68 RDI: 0000000000000022
    Oct 11 21:32:08 Impulse kernel: RBP: 0000000000000000 R08: 000000000000000c R09: ffffc900026ffcd0
    Oct 11 21:32:08 Impulse kernel: R10: ffffc900026ffcd0 R11: ffffc900026ffd48 R12: 0000000000000000
    Oct 11 21:32:08 Impulse kernel: R13: ffff888428441cb8 R14: 00000000000028cd R15: ffff888428441cc0
    Oct 11 21:32:08 Impulse kernel: FS:  00001548d34fa6c0(0000) GS:ffff88842e840000(0000) knlGS:0000000000000000
    Oct 11 21:32:08 Impulse kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Oct 11 21:32:08 Impulse kernel: CR2: 0000000000000056 CR3: 00000001a3fe6000 CR4: 00000000003506e0
    Oct 11 21:32:08 Impulse kernel: Call Trace:
    Oct 11 21:32:08 Impulse kernel: <TASK>
    Oct 11 21:32:08 Impulse kernel: __filemap_get_folio+0x98/0x1ff
    Oct 11 21:32:08 Impulse kernel: filemap_fault+0x6e/0x524
    Oct 11 21:32:08 Impulse kernel: __do_fault+0x30/0x6e
    Oct 11 21:32:08 Impulse kernel: __handle_mm_fault+0x9a5/0xc7d
    Oct 11 21:32:08 Impulse kernel: handle_mm_fault+0x113/0x1d7
    Oct 11 21:32:08 Impulse kernel: do_user_addr_fault+0x36a/0x514
    Oct 11 21:32:08 Impulse kernel: exc_page_fault+0xfc/0x11e
    Oct 11 21:32:08 Impulse kernel: asm_exc_page_fault+0x22/0x30
    Oct 11 21:32:08 Impulse kernel: RIP: 0033:0x1548dbc04741
    Oct 11 21:32:08 Impulse kernel: Code: 48 01 d0 eb 1b 0f 1f 40 00 f3 0f 1e fa 48 39 d1 0f 82 73 28 fc ff 0f 1f 00 f3 0f 1e fa 48 89 f8 48 83 fa 20 0f 82 af 00 00 00 <c5> fe 6f 06 48 83 fa 40 0f 87 3e 01 00 00 c5 fe 6f 4c 16 e0 c5 fe
    Oct 11 21:32:08 Impulse kernel: RSP: 002b:00001548d34f9808 EFLAGS: 00010202
    Oct 11 21:32:08 Impulse kernel: RAX: 000015480c010d30 RBX: 000015480c018418 RCX: 00001548d34f9a40
    Oct 11 21:32:08 Impulse kernel: RDX: 0000000000004000 RSI: 000015471f8cd50f RDI: 000015480c010d30
    Oct 11 21:32:08 Impulse kernel: RBP: 0000000000000000 R08: 0000000000000003 R09: 0000000000000000
    Oct 11 21:32:08 Impulse kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
    Oct 11 21:32:08 Impulse kernel: R13: 00001548d34f9ac0 R14: 0000000000000003 R15: 0000154814013d10
    Oct 11 21:32:08 Impulse kernel: </TASK>
    Oct 11 21:32:08 Impulse kernel: Modules linked in: xt_connmark xt_comment iptable_raw wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha xt_mark xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs md_mod ip6table_filter ip6_tables iptable_filter ip_tables x_tables af_packet 8021q garp mrp bridge stp llc ipv6 mlx4_en mlx4_core igb i2c_algo_bit edac_mce_amd edac_core kvm_amd kvm wmi_bmof mxm_wmi asus_wmi_sensors crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel mpt3sas aesni_intel crypto_simd nvme cryptd ahci i2c_piix4 raid_class rapl k10temp i2c_core nvme_core ccp scsi_transport_sas libahci wmi button acpi_cpufreq unix [last unloaded: mlx4_core]
    Oct 11 21:32:08 Impulse kernel: CR2: 0000000000000056
    Oct 11 21:32:08 Impulse kernel: ---[ end trace 0000000000000000 ]---

     

So they always start with this (the end address will change):

     

    Oct 11 05:02:02 Cogsworth kernel: BUG: kernel NULL pointer dereference, address: 0000000000000076

     

    and always have this:

     

    Oct 11 05:02:02 Cogsworth kernel: Call Trace:
    Oct 11 05:02:02 Cogsworth kernel: <TASK>
    Oct 11 05:02:02 Cogsworth kernel: __filemap_get_folio+0x98/0x1ff
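
If you want to check whether your own server is hitting the same trace, grepping the syslog for these signature lines works; a minimal sketch, assuming the default /var/log/syslog location:

grep -A2 "kernel NULL pointer dereference" /var/log/syslog # the BUG line plus some context
grep __filemap_get_folio /var/log/syslog # the common frame from the call trace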

     

The fact that it's happening to various users with very different hardware, both Intel and AMD, makes me think it's not a hardware/firmware issue, so we can try to find out if they are running anything in common. These are the plugins I've found in common between the 4 or 5 cases seen so far; they are some of the most used plugins, so it's not surprising they are installed in all of them, but it's also easy to rule them out:

     

    ca.backup2.plg - 2022.07.23  (Up to date)
    community.applications.plg - 2022.09.30  (Up to date)
    dynamix.active.streams.plg - 2020.06.17  (Up to date)
    file.activity.plg - 2022.08.19  (Up to date)
    fix.common.problems.plg - 2022.10.09  (Up to date)
    unassigned.devices.plg - 2022.10.03  (Up to date)
    unassigned.devices-plus.plg - 2022.08.19  (Up to date)

     

So anyone having this issue, please try temporarily uninstalling/disabling these plugins to see if there's any difference (a quick way to check what you have installed is sketched below).
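
To compare your own install against that list, the .plg files can be listed from the console; a quick sketch, with /boot/config/plugins being the standard Unraid plugin location:

ls -1 /boot/config/plugins/*.plg # installed plugin files on the flash drive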




    User Feedback

    Recommended Comments



    3+ days torrent uptime, no crashes in 6.11.5.

     

    I'm satisfied with the current resolution that libtorrent v2 is to blame.

     

    If you all want to get back on v2, I suggest following the open issue on the libtorrent tracker to see when they correctly support transparent hugepages.

     

     


So I ran the commands @binhex posted for the libtorrent/transparent hugepages issue on the day he posted them. Rebooted and made sure hugepages were still disabled. Well, today I crashed again. Same issue: dead Docker, unresponsive Unraid web GUI. Still running the latest version of binhex-delugevpn.

     

    Quote

    Nov 27 16:23:28 Impulse kernel: BUG: kernel NULL pointer dereference, address: 00000000000000b6
    Nov 27 16:23:28 Impulse kernel: #PF: supervisor read access in kernel mode
    Nov 27 16:23:28 Impulse kernel: #PF: error_code(0x0000) - not-present page
    Nov 27 16:23:28 Impulse kernel: PGD 0 P4D 0 
    Nov 27 16:23:28 Impulse kernel: Oops: 0000 [#1] PREEMPT SMP NOPTI
    Nov 27 16:23:28 Impulse kernel: CPU: 1 PID: 9932 Comm: Disk Tainted: P           O      5.19.17-Unraid #2
    Nov 27 16:23:28 Impulse kernel: Hardware name: System manufacturer System Product Name/ROG STRIX B450-F GAMING II, BIOS 4301 03/04/2021
    Nov 27 16:23:28 Impulse kernel: RIP: 0010:folio_try_get_rcu+0x0/0x21
    Nov 27 16:23:28 Impulse kernel: Code: e8 9d fd 67 00 48 8b 84 24 80 00 00 00 65 48 2b 04 25 28 00 00 00 74 05 e8 c1 35 69 00 48 81 c4 88 00 00 00 5b e9 ef 59 a6 00 <8b> 57 34 85 d2 74 10 8d 4a 01 89 d0 f0 0f b1 4f 34 74 04 89 c2 eb
    Nov 27 16:23:28 Impulse kernel: RSP: 0000:ffffc9000133fcc0 EFLAGS: 00010246
    Nov 27 16:23:28 Impulse kernel: RAX: 0000000000000082 RBX: 0000000000000082 RCX: 0000000000000082
    Nov 27 16:23:28 Impulse kernel: RDX: 0000000000000001 RSI: ffff8885a9af6238 RDI: 0000000000000082
    Nov 27 16:23:28 Impulse kernel: RBP: 0000000000000000 R08: 0000000000000028 R09: ffffc9000133fcd0
    Nov 27 16:23:28 Impulse kernel: R10: ffffc9000133fcd0 R11: ffffc9000133fd48 R12: 0000000000000000
    Nov 27 16:23:28 Impulse kernel: R13: ffff88804c3f1538 R14: 0000000000000c6b R15: ffff88804c3f1540
    Nov 27 16:23:28 Impulse kernel: FS:  000014ce677fb6c0(0000) GS:ffff88881e840000(0000) knlGS:0000000000000000
    Nov 27 16:23:28 Impulse kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Nov 27 16:23:28 Impulse kernel: CR2: 00000000000000b6 CR3: 0000000023602000 CR4: 00000000003506e0
    Nov 27 16:23:28 Impulse kernel: Call Trace:
    Nov 27 16:23:28 Impulse kernel: <TASK>
    Nov 27 16:23:28 Impulse kernel: __filemap_get_folio+0x98/0x1ff
    Nov 27 16:23:28 Impulse kernel: ? _raw_spin_unlock+0x14/0x29
    Nov 27 16:23:28 Impulse kernel: filemap_fault+0x6e/0x524
    Nov 27 16:23:28 Impulse kernel: __do_fault+0x30/0x6e
    Nov 27 16:23:28 Impulse kernel: __handle_mm_fault+0x9a5/0xc7d
    Nov 27 16:23:28 Impulse kernel: handle_mm_fault+0x113/0x1d7
    Nov 27 16:23:28 Impulse kernel: do_user_addr_fault+0x36a/0x514
    Nov 27 16:23:28 Impulse kernel: exc_page_fault+0xfc/0x11e
    Nov 27 16:23:28 Impulse kernel: asm_exc_page_fault+0x22/0x30
    Nov 27 16:23:28 Impulse kernel: RIP: 0033:0x14ce8076c741
    Nov 27 16:23:28 Impulse kernel: Code: 48 01 d0 eb 1b 0f 1f 40 00 f3 0f 1e fa 48 39 d1 0f 82 73 28 fc ff 0f 1f 00 f3 0f 1e fa 48 89 f8 48 83 fa 20 0f 82 af 00 00 00 <c5> fe 6f 06 48 83 fa 40 0f 87 3e 01 00 00 c5 fe 6f 4c 16 e0 c5 fe
    Nov 27 16:23:28 Impulse kernel: RSP: 002b:000014ce677fa808 EFLAGS: 00010202
    Nov 27 16:23:28 Impulse kernel: RAX: 000014ce5c006200 RBX: 000014ce5c00fcf8 RCX: 000014ce677faa40
    Nov 27 16:23:28 Impulse kernel: RDX: 0000000000004000 RSI: 000014c42fc6bd64 RDI: 000014ce5c006200
    Nov 27 16:23:28 Impulse kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
    Nov 27 16:23:28 Impulse kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
    Nov 27 16:23:28 Impulse kernel: R13: 000014ce677faac0 R14: 0000000000000044 R15: 000014ce5c005190
    Nov 27 16:23:28 Impulse kernel: </TASK>
    Nov 27 16:23:28 Impulse kernel: Modules linked in: drm backlight xt_connmark xt_comment iptable_raw wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha xt_mark xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs md_mod ip6table_filter ip6_tables iptable_filter ip_tables x_tables af_packet 8021q garp mrp bridge stp llc igb i2c_algo_bit edac_mce_amd edac_core kvm_amd kvm wmi_bmof mxm_wmi crct10dif_pclmul crc32_pclmul asus_wmi_sensors crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd mpt3sas nvme i2c_piix4 ahci raid_class nvme_core k10temp i2c_core libahci rapl ccp scsi_transport_sas wmi button acpi_cpufreq unix [last unloaded: i2c_algo_bit]
    Nov 27 16:23:28 Impulse kernel: CR2: 00000000000000b6
    Nov 27 16:23:28 Impulse kernel: ---[ end trace 0000000000000000 ]---
    Nov 27 16:23:28 Impulse kernel: RIP: 0010:folio_try_get_rcu+0x0/0x21
    Nov 27 16:23:28 Impulse kernel: Code: e8 9d fd 67 00 48 8b 84 24 80 00 00 00 65 48 2b 04 25 28 00 00 00 74 05 e8 c1 35 69 00 48 81 c4 88 00 00 00 5b e9 ef 59 a6 00 <8b> 57 34 85 d2 74 10 8d 4a 01 89 d0 f0 0f b1 4f 34 74 04 89 c2 eb
    Nov 27 16:23:28 Impulse kernel: RSP: 0000:ffffc9000133fcc0 EFLAGS: 00010246
    Nov 27 16:23:28 Impulse kernel: RAX: 0000000000000082 RBX: 0000000000000082 RCX: 0000000000000082
    Nov 27 16:23:28 Impulse kernel: RDX: 0000000000000001 RSI: ffff8885a9af6238 RDI: 0000000000000082
    Nov 27 16:23:28 Impulse kernel: RBP: 0000000000000000 R08: 0000000000000028 R09: ffffc9000133fcd0
    Nov 27 16:23:28 Impulse kernel: R10: ffffc9000133fcd0 R11: ffffc9000133fd48 R12: 0000000000000000
    Nov 27 16:23:28 Impulse kernel: R13: ffff88804c3f1538 R14: 0000000000000c6b R15: ffff88804c3f1540
    Nov 27 16:23:28 Impulse kernel: FS:  000014ce677fb6c0(0000) GS:ffff88881e840000(0000) knlGS:0000000000000000
    Nov 27 16:23:28 Impulse kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Nov 27 16:23:28 Impulse kernel: CR2: 00000000000000b6 CR3: 0000000023602000 CR4: 00000000003506e0

     

Made sure to capture that transparent hugepages were still disabled before rebooting.

     

[screenshot: terminal output showing hugepages still disabled]

     

    Going to downgrade to a version with v1 libtorrent since we know the issue is in v2.
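
For anyone else doing the same, pulling one of the libtorrent 1.x image tags mentioned in this thread looks like this from the console (tags are copied from other posts here, so double-check they still exist before switching):

docker pull linuxserver/deluge:2.1.1-r3-ls179 # Deluge 2.1.1 built against libtorrent 1.x
docker pull lscr.io/linuxserver/qbittorrent:libtorrentv1 # qBittorrent builds tracking libtorrent 1.x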


    @binhex Looks like you have a new build of qbittorrent out on the same day that qbittorrent themselves released v4.5.0 with their comment:

    Quote

    NOTE: The default builds for all OSs switched to libtorrent 1.2.x from 2.0.x. Builds for libtorrent 2.0.x are also offered and are tagged with lt20. The switch happened due to user demand and perceived performance issues. If until now you didn't experience any performance issue then go ahead and use the lt20 builds.

     

    Assuming your latest build is leveraging their 4.5 release, should those of us on your release be relatively secure in reverting back to "latest"?

    1 hour ago, sundown said:

    Looks like you have a new build of qbittorrent out on the same day that qbittorrent themselves released v4.5.0

no coincidence, my builds are auto-triggered by upstream releases.

     

    1 hour ago, sundown said:

    Assuming your latest build is leveraging their 4.5 release, should those of us on your release be relatively secure in reverting back to "latest"?

    no, the upstream build is still using libtorrent 2.x as can be seen from the qbittorrent web ui:-

[screenshot: qBittorrent Web UI showing libtorrent 2.x]

    On 11/29/2022 at 3:15 PM, binhex said:

    no, the upstream build is still using libtorrent 2.x as can be seen from the qbittorrent web ui:-

That's odd; the qBittorrent build config appears to require the addition of a flag to include libtorrent 2.x, otherwise it will default to building with 1.2.18?


    Hi all,

     

    I have been watching this page for a few weeks now, as I was also having the same issue as everyone else in this thread. I see that the current workaround is to use libtorrent v1. I am currently on unRAID 6.10.3.

     

With that said, I am currently using binhex's DelugeVPN, and am not entirely sure which version I need to downgrade to in order to use libtorrent v1. I do see the versions for lsio's containers, but not the specific one I am using.

     

    Thank you all for your help!


Hey there @pr85. Assuming you want to stay on Deluge and have the VPN option, the easiest workaround I found was to install the linuxserver.io Deluge 2.1.1 libtorrent v1 image @JesterEE posted about.

     

    On 11/18/2022 at 11:30 AM, JesterEE said:

     

And then I also installed binhex-privoxyvpn alongside that, since it has all the same VPN options his binhex-delugevpn has. Then all you do is route the Deluge container's network through binhex-privoxyvpn. Easier said than done if it's your first time doing that. I followed this guide and got it working after a bit of user struggle. If you need more help with that let me know.

     

One thing I will say is make sure you have Privoxy running first, or else Deluge will use your IP and not the VPN IP. Not sure why, when its network is routed through the Privoxy container. I set a start wait on Deluge (Docker > Advanced View > fill in the wait on autostart) to fix this on any reboots. Also, if you're running "CA Backup / Restore Appdata", that messes with this whole setup, since it restarts all your docker containers at the same time when it's doing its backup, and Deluge has used my IP every time instead of the VPN. So I've disabled it for now.

    8 hours ago, ShadyDeth said:

Assuming you want to stay on Deluge and have the VPN option, the easiest workaround I found was to install the linuxserver.io Deluge 2.1.1 libtorrent v1 image @JesterEE posted about.

     

I checked the repo again just now; this is still the latest LSIO release on libtorrent v1.

     

    8 hours ago, ShadyDeth said:

One thing I will say is make sure you have Privoxy running first, or else Deluge will use your IP and not the VPN IP. Not sure why, when its network is routed through the Privoxy container. I set a start wait on Deluge (Docker > Advanced View > fill in the wait on autostart) to fix this on any reboots. Also, if you're running "CA Backup / Restore Appdata", that messes with this whole setup, since it restarts all your docker containers at the same time when it's doing its backup, and Deluge has used my IP every time instead of the VPN. So I've disabled it for now.

     

I use the Gluetun container for my VPN and I've never seen this issue. Actually, just the opposite: I intentionally test this from time to time to see if I'm leaking my IP, and when the VPN is off the container does not revert to the default internet connection (essentially a built-in kill switch). I do not create a custom docker network as this write-up has shown. Instead, in the template for the container you want to use the VPN network, I set:

     

    Network Type: None

     

    and add  

     

     --network=container:VPN_CONTAINER_NAME

     

    on the extra parameters line.  I'm pretty sure this is essentially doing the same thing except without naming the network, so I'm not sure why we have different experiences with dropped connections.
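
For anyone curious what that template setting translates to, a rough docker-run equivalent would be something like this (the names are placeholders and the VPN container would still need its provider environment variables, so treat it as a sketch):

docker run -d --name vpn --cap-add=NET_ADMIN qmcgaw/gluetun # VPN container started normally
docker run -d --name deluge --network=container:vpn lscr.io/linuxserver/deluge # client shares the VPN container's network stack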

     

What is important to note: doing it this way will require the client containers to be rebuilt when the VPN container is updated. This is because Docker needs to point the clients (Deluge, etc.) at the new endpoint, since a new hash is associated with the updated VPN container. Since Unraid 6.9 (I think), when you update your VPN container via the WebUI, the OS has been smart enough to rebuild the attached containers automatically, and after a minute or so of rebuilding and restarting the client containers, all is well.

     

However, if the VPN container gets updated automatically by the Auto Update Applications plugin, the rebuild will not be triggered (since this rebuild control is implemented in the Docker WebUI PHP code), and all clients will lose their network connection. This still won't leak my IP by reverting to the default network; the client containers will just have no network connectivity. So, in the Auto Update Applications settings, I turn autoupdate off for the VPN container and do that one manually from time to time.

     

    Hope this helps!


ShadyDeth and JesterEE,

     

Thank you both for the information. I am currently attempting to get PrivoxyVPN set up, and everything works as expected, but I do have one question regarding port forwarding.

     

Currently, DelugeVPN handles the forwarded port automatically with PIA. When using PrivoxyVPN, how can I specify that I need a port forward? I am assuming that once I figure that out, I will be able to write that port out to a file, then map that file to Deluge so that it uses it every time both containers spin up.
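
The "write the port out to a file" idea could look roughly like this, assuming the VPN container drops the forwarded port into a file on a shared volume (the file path and the deluge-console syntax are assumptions, so verify both against your setup):

PORT=$(cat /shared/forwarded_port) # path is an assumption; depends on your VPN container
deluge-console "config --set listen_ports [$PORT,$PORT]" # syntax may differ between Deluge versions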

     

    Thanks again for all your help!


This is probably nowhere near the right way to do it, but this is how I got it to work.

     

On the Deluge container I removed all the port settings, and in the WebUI network settings I set the inbound and outbound ports to random.

     

[screenshots: Deluge template with port settings removed; WebUI ports set to random]

     

And for Privoxy I added Deluge's WebUI port to Key13 and Key14, added a new port setting for Deluge, and then it started working. If you want to use specific ports, I would assume you add them to Key13 and Key14, and set them in the Deluge WebUI.

     

[screenshot: PrivoxyVPN template with Deluge's WebUI port added to Key13/Key14]


Disabling transparent hugepages (per binhex's instructions above) has resolved the crashing for me. I'm running Unraid 6.11.5 and the latest binhex-delugevpn container. Before (since moving to 6.11.x) I was crashing at least every other day; now I have had over 6 days of uptime. Finally!

     

    If anyone else having this issue doesn't want to go through the hassle of changing docker images to one that is using libtorrent 1.x, I'd give this a try first. Haven't noticed any major performance differences, but I didn't do any formal performance benchmarking.

    On 12/4/2022 at 3:10 PM, rocketeer55 said:

Disabling transparent hugepages (per binhex's instructions above) has resolved the crashing for me. I'm running Unraid 6.11.5 and the latest binhex-delugevpn container. Before (since moving to 6.11.x) I was crashing at least every other day; now I have had over 6 days of uptime. Finally!

     

    If anyone else having this issue doesn't want to go through the hassle of changing docker images to one that is using libtorrent 1.x, I'd give this a try first. Haven't noticed any major performance differences, but I didn't do any formal performance benchmarking.

     

Thank you so much for your update; however, that has not been my experience. I recently updated to 6.11.5 and was able to follow binhex's commands and add them to my go file. When I woke up this morning, Unraid was not responsive. So it was not even 12 hours before my system crashed.

     

@ShadyDeth, thank you so much for those instructions, but that is just how to allow your Deluge to use the VPN container, and also to allow you access to the WebUI. Since I seed a lot, what I am looking for are instructions on how to map a VPN-forwarded port to my Deluge client so that I am able to continue seeding files.

     

Currently, when I start up DelugeVPN, a port gets forwarded through the VPN and automatically assigned as the port in the Deluge UI, as the image shows.

     

     

[screenshot: Deluge UI showing the automatically assigned forwarded port]


Is this still an issue? I upgraded to Unraid 6.11.5 a few days ago, and my server has started going unresponsive. I use the binhex qBittorrent image, but the "Software Used" info from the web UI indicates libtorrent 1.2.15.0.

     

    I haven't had time to read this whole thread, so sorry if I have missed something important.

[screenshot: qBittorrent "Software Used" panel showing libtorrent 1.2.15.0]
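
For anyone who doesn't want to dig through the UI, qBittorrent's Web API can report the same thing; the host/port here are assumptions, and this only works unauthenticated if "Bypass authentication for clients on localhost" is enabled:

curl http://localhost:8080/api/v2/app/buildInfo # returns JSON including the bundled libtorrent version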

    On 11/24/2022 at 12:45 AM, binhex said:

An interesting read regarding THP, which looks to be triggering the crash:- https://blog.nelhage.com/post/transparent-hugepages/

     

if you are feeling brave then you can try the following to disable THP which SHOULD then prevent the crash without the need to downgrade libtorrent:-
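
For reference, a minimal sketch of what disabling THP typically looks like, using the standard sysfs knobs (these may not be binhex's exact commands; they could go in /boot/config/go to apply on every boot):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo 0 > /proc/sys/vm/nr_hugepages # zero static hugepages too, matching the checks quoted below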


Thank you for this fix! I was still crashing daily, sometimes within hours, even after downgrading to your 4.3.9 version, but disabling THP completely fixed it.

    Anyone have an idea when this would be permanently fixed?


Just jumping on this thread. This is finally getting to me. binhex-DelugeVPN was my culprit. Downgraded to linuxserver/deluge:2.1.1-r3-ls179. I just set up PrivoxyVPN, but I don't think port forwarding is working right with PIA WireGuard. Anyone have any good tutorials or advice on connectivity with Deluge? Sorry in advance about this not really being about this thread.

     

     

[screenshots: PrivoxyVPN and Deluge settings]


    @ruablack2

     

This is the same issue I was running into. I am successfully able to use PrivoxyVPN with Deluge libtorrent v1; however, I am unable to dynamically forward the PIA port to the Deluge container. That means I cannot seed, which is currently not an option for me. If I was simply leeching, this solution would work great.


    My linuxserver/deluge:2.1.1-r3-ls179 is seeding just fine through PrivoxyVPN.

     

    Deluge docker settings

[screenshot]

     

    PrivoxyVPN docker settings. I did change the PIA Wireguard settings to a network location closer to me.

[screenshot]

     

And my Deluge network settings and proof of seeding. (Please don't judge my horrible ISP upload speeds.)

[screenshots: Deluge network settings and active seeding]

    On 11/23/2022 at 3:15 PM, binhex said:
    grep -i HugePages_Total /proc/meminfo # output should be 0
    cat /proc/sys/vm/nr_hugepages # output should be 0

     

from the article linked above this MAY actually increase your performance! Or, at the very worst, you may see a 10% drop in performance (depends on usage).

     

keep in mind the above is a temporary hack; libtorrent/kernel/app will no doubt resolve this issue at some point. I'm simply posting this as a workaround for now, so you will then need to remove the above from your go file and reboot to reverse this hack.
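
A quick way to confirm the THP state itself after rebooting (standard kernel path):

cat /sys/kernel/mm/transparent_hugepage/enabled # the bracketed entry is the active mode; [never] means THP is off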

     

That's interesting. I haven't run any command to disable this,
but I ran your commands to see if it is disabled.

Has this been fixed in version 6.11.5?

     

[screenshot: output of the check commands]

     

PS! I have been running this version (lscr.io/linuxserver/qbittorrent:libtorrentv1-release-4.4.5_v1.2.18-ls4) for almost 28 days without any crashes. Just upgraded to lscr.io/linuxserver/qbittorrent:libtorrentv1 to get the latest qBittorrent version, 4.5.0.
     

    On 12/26/2022 at 5:10 AM, CiscoCoreX said:

     

That's interesting. I haven't run any command to disable this,
but I ran your commands to see if it is disabled.

Has this been fixed in version 6.11.5?

 

PS! I have been running this version (lscr.io/linuxserver/qbittorrent:libtorrentv1-release-4.4.5_v1.2.18-ls4) for almost 28 days without any crashes. Just upgraded to lscr.io/linuxserver/qbittorrent:libtorrentv1 to get the latest qBittorrent version, 4.5.0.

Hello,

Any updates on running this version, lscr.io/linuxserver/qbittorrent:libtorrentv1?

I have been a Deluge (binhex/arch-deluge) user for as long as I can remember and am considering switching to qBittorrent for reliability. I have been having crashes every 2-3 days because of this.

Thanks in advance.

     

    Cheers!

    On 12/16/2022 at 4:22 PM, ShadyDeth said:

    My linuxserver/deluge:2.1.1-r3-ls179 is seeding just fine through PrivoxyVPN.

     


     

Any update on the stability of the linuxserver/deluge:2.1.1-r3-ls179 container?

Thanks in advance.

    49 minutes ago, Shomil Saini said:

     

Any update on the stability of the linuxserver/deluge:2.1.1-r3-ls179 container?

Thanks in advance.

     

    Been running it continuously since my 11/18/2022 post. No issues.

     

    53 minutes ago, Shomil Saini said:

     

Any update on the stability of the linuxserver/deluge:2.1.1-r3-ls179 container?

Thanks in advance.

     

    I have had zero issues running this version since my posts were made.

    On 2/5/2023 at 7:41 PM, Shomil Saini said:

    Hello,

Any updates on running this version, lscr.io/linuxserver/qbittorrent:libtorrentv1?

I have been a Deluge (binhex/arch-deluge) user for as long as I can remember and am considering switching to qBittorrent for reliability. I have been having crashes every 2-3 days because of this.

Thanks in advance.

     

    Cheers!

    Hi,

     

I have been running this version lscr.io/linuxserver/qbittorrent:libtorrentv1 since December 2022. Uptime over 35 days.
For fun, I installed binhex's latest qBittorrent version, and the problem came back :(.

    Back to lscr.io/linuxserver/qbittorrent:libtorrentv1 

     

    On 2/6/2023 at 3:47 PM, JesterEE said:

     

    Been running it continuously since my 11/18/2022 post. No issues.

     

     

    On 2/6/2023 at 3:54 PM, ShadyDeth said:

     

    I have had zero issues running this version since my posts were made.

     

    5 hours ago, CiscoCoreX said:

    Hi,

     

    I have been running this version lscr.io/linuxserver/qbittorrent:libtorrentv1 since December 2022. Uptime over 35 days.
For fun, I installed binhex's latest qBittorrent version, and the problem came back :(.

    Back to lscr.io/linuxserver/qbittorrent:libtorrentv1 

     

    Thanks everyone, I am migrating from binhex-deluge to linuxserver/deluge:2.1.1-r3-ls179 today. Fingers crossed.

     

    Cheers!





