DuzAwe
Report Comments posted by DuzAwe
-
@JorgeB That appears to be fixed. 😁
-
-
@sunbear Still hunting. So far Plex and the Arr suite have a PASS from me. I have quite a few Dockers, so it will be a number of days before I have gone through them all. I have also ruled out all my plugins.
@JorgeB When I click on the first drive in the ZFS array/cluster, it shows an incomplete interface like the one above. If I go to any other disk in the machine (Unraid array, BTRFS, and the other ZFS disks) I get the normal interface, i.e. scrub options, SMART info, and the free-space setting. But for the first disk in the ZFS pool, all of these options are missing.
-
@JorgeB Should I spin up another thread for the missing GUI for my ZFS pool?
-
So, if I don't have a crash today, is the course of action to add one thing back at a time, or something else?
-
Safe Mode no Docker
Jun 9 11:02:38 TheLibrary shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'ingress/Back Up' |& logger
-
Those are images I got from separate instances of lock-up. I just had another one and managed to get an image and diagnostics out.
I have to roll back to rc6 at this point.
-
OK, so I had one lock-up without Docker or plugins: the GUI was dead, but ssh and network mounts worked.
After the reboot, RAM started low but hit 80% quite quickly. Network mounts still worked.
The control panel for ZFS was gone when I went into those drives.
-
Yes, though not as quickly. In Safe Mode on rc7, RAM is in the 60% range when I log in and seems to climb steadily over the hours, in tandem with CPU usage. On rc6, after 9 hours of usage, RAM sits in the 40% range and CPU usage is nominal.
With NetData installed as a Docker, it reports completely different usage numbers from the Unraid GUI on rc7, and matches the GUI on rc6.
When using top, I see dockerd and shfs trading the top places almost exclusively on rc7; on rc6, first place is more fluid, as is to be expected.
I had been running rc6 since almost its release and have had no issues with it to date. On rc7 I haven't had more than a few hours of uptime, as whatever is happening also kills all my Dockers and network mounts.
-
I have 100% CPU usage and 80% RAM usage again. I can't keep my box up at all on rc7; it's maxing out every few hours and becoming unresponsive. I managed to get one diagnostics out during my multiple reboots today.
-
Update: I thought I had found the issue in Dynamix Cache Directories. I reinstalled everything and disabled Dynamix Cache Directories, in the hope that it may be patched at a later date. I woke up this morning to 100% CPU usage and 80% RAM usage and was unable to get anything out of the server. I had to reboot it via ssh again.
-
Got a reboot, not a clean one, but it looks like things are behaving again. Guess I now play that whack-a-plugin game.
-
I should have specified that I am sshing into the box. The reboot crashed with SIGKILL.
-
Is there a way to do this from the CLI? The GUI has become unusable with this issue.
-
Changed Status to Open
Changed Priority to Minor
-
How do we enable AMD_PSTATE?
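For context, a hedged sketch of how this is usually done on a mainline kernel, assuming the Unraid kernel was built with the amd_pstate driver; the accepted parameter values vary by kernel version, and the boot-entry layout below is illustrative, not copied from a real syslinux.cfg:

```shell
# Illustrative boot-config fragment for /boot/syslinux/syslinux.cfg.
# amd_pstate is the mainline AMD P-state scaling driver; whether this
# Unraid release ships it, and which values the kernel accepts here
# (e.g. "passive" vs "active"), are assumptions, not confirmed for Unraid.
#
#   label Unraid OS
#     kernel /bzimage
#     append amd_pstate=active initrd=/bzroot
#
# After a reboot, check which scaling driver is actually in use:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
```

If the parameter took effect, that file should report `amd-pstate` rather than `acpi-cpufreq`.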
-
Had a random reboot this morning; there was an error in the logs as it was coming back up. Diags attached. I have lost communication to my UPS (USB cable).
Aug 25 08:29:38 TheLibrary kernel: mce: [Hardware Error]: Machine check events logged
Aug 25 08:29:38 TheLibrary kernel: mce: [Hardware Error]: CPU 6: Machine Check: 0 Bank 0: bc002800000c0135
Aug 25 08:29:38 TheLibrary kernel: mce: [Hardware Error]: TSC 0 ADDR 6400ef000 MISC d012000000000000 IPID b000000000
Aug 25 08:29:38 TheLibrary kernel: mce: [Hardware Error]: PROCESSOR 2:870f10 TIME 1629876550 SOCKET 0 APIC c microcode 8701021
-
So, a bit of a curveball: I ran macvlan on br0 on the new rc for 24 hours without issue.
I hadn't realised that I hadn't enabled ipvlan; I have now enabled ipvlan and it's still all good.
But I had not been able to run for 24 hours on macvlan with 6.9.
-
Managed to catch this today. With the lightest of Google searches, it looks like it may be a bug/regression in the kernel. I could very well be wrong, but maybe?
https://www.spinics.net/lists/linux-nfs/msg78091.html
https://www.spinics.net/lists/amd-gfx/msg48596.html
Jun 12 14:00:01 thelibrary kernel:
Jun 12 14:00:44 thelibrary kernel: general protection fault, probably for non-canonical address 0x1090000ffffff76: 0000 [#1] SMP NOPTI
Jun 12 14:00:44 thelibrary kernel: CPU: 6 PID: 10937 Comm: qbittorrent-nox Tainted: P S W O 5.10.28-Unraid #1
Jun 12 14:00:44 thelibrary kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X470D4U, BIOS L4.21 04/15/2021
Jun 12 14:00:44 thelibrary kernel: RIP: 0010:nf_nat_setup_info+0x129/0x6aa [nf_nat]
Jun 12 14:00:44 thelibrary kernel: Code: ff 48 8b 15 ef 6a 00 00 89 c0 48 8d 04 c2 48 8b 10 48 85 d2 74 80 48 81 ea 98 00 00 00 48 85 d2 0f 84 70 ff ff ff 8a 44 24 46 <38> 42 46 74 09 48 8b 92 98 00 00 00 eb d9 48 8b 4a 20 48 8b 42 28
Jun 12 14:00:44 thelibrary kernel: RSP: 0018:ffffc90000338700 EFLAGS: 00010202
Jun 12 14:00:44 thelibrary kernel: RAX: ffff88818b422f06 RBX: ffff888108b21a40 RCX: 0000000000000000
Jun 12 14:00:44 thelibrary kernel: RDX: 01090000ffffff76 RSI: 000000003f50ed19 RDI: ffffc90000338720
Jun 12 14:00:44 thelibrary kernel: RBP: ffffc900003387c8 R08: 0000000098f45bae R09: ffff88813dd40620
Jun 12 14:00:44 thelibrary kernel: R10: 0000000000000348 R11: ffffffff815cbe4b R12: 0000000000000000
Jun 12 14:00:44 thelibrary kernel: R13: ffffc90000338720 R14: ffffc900003387dc R15: ffffffff8210b440
Jun 12 14:00:44 thelibrary kernel: FS: 0000146c98419700(0000) GS:ffff88881e980000(0000) knlGS:0000000000000000
Jun 12 14:00:44 thelibrary kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 12 14:00:44 thelibrary kernel: CR2: 0000150d3d688320 CR3: 000000020149a000 CR4: 0000000000350ee0
Jun 12 14:00:44 thelibrary kernel: Call Trace:
Jun 12 14:00:44 thelibrary kernel: <IRQ>
Jun 12 14:00:44 thelibrary kernel: ? fq_enqueue+0x25b/0x4e8
Jun 12 14:00:44 thelibrary kernel: ? igb_xmit_frame_ring+0x7c5/0x8fd [igb]
Jun 12 14:00:44 thelibrary kernel: ? __ksize+0x15/0x64
Jun 12 14:00:44 thelibrary kernel: ? krealloc+0x26/0x7a
Jun 12 14:00:44 thelibrary kernel: nf_nat_masquerade_ipv4+0x10b/0x131 [nf_nat]
Jun 12 14:00:44 thelibrary kernel: masquerade_tg+0x44/0x5e [xt_MASQUERADE]
Jun 12 14:00:44 thelibrary kernel: ? __qdisc_run+0x21d/0x3c9
Jun 12 14:00:44 thelibrary kernel: ipt_do_table+0x51a/0x5c0 [ip_tables]
Jun 12 14:00:44 thelibrary kernel: ? __dev_queue_xmit+0x4d9/0x501
Jun 12 14:00:44 thelibrary kernel: ? fib_validate_source+0xb0/0xda
Jun 12 14:00:44 thelibrary kernel: nf_nat_inet_fn+0xe9/0x183 [nf_nat]
Jun 12 14:00:44 thelibrary kernel: nf_nat_ipv4_out+0xf/0x88 [nf_nat]
Jun 12 14:00:44 thelibrary kernel: nf_hook_slow+0x39/0x8e
Jun 12 14:00:44 thelibrary kernel: nf_hook+0xab/0xd3
Jun 12 14:00:44 thelibrary kernel: ? __ip_finish_output+0x146/0x146
Jun 12 14:00:44 thelibrary kernel: ip_output+0x7d/0x8a
Jun 12 14:00:44 thelibrary kernel: ? __ip_finish_output+0x146/0x146
Jun 12 14:00:44 thelibrary kernel: ip_forward+0x3f1/0x420
Jun 12 14:00:44 thelibrary kernel: ? ip_check_defrag+0x18f/0x18f
Jun 12 14:00:44 thelibrary kernel: ip_sabotage_in+0x43/0x4d [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: nf_hook_slow+0x39/0x8e
Jun 12 14:00:44 thelibrary kernel: nf_hook.constprop.0+0xb1/0xd8
Jun 12 14:00:44 thelibrary kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
Jun 12 14:00:44 thelibrary kernel: ip_rcv+0x41/0x61
Jun 12 14:00:44 thelibrary kernel: __netif_receive_skb_one_core+0x74/0x95
Jun 12 14:00:44 thelibrary kernel: netif_receive_skb+0x79/0xa1
Jun 12 14:00:44 thelibrary kernel: br_handle_frame_finish+0x30d/0x351
Jun 12 14:00:44 thelibrary kernel: ? br_pass_frame_up+0xda/0xda
Jun 12 14:00:44 thelibrary kernel: br_nf_hook_thresh+0xa3/0xc3 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: ? br_pass_frame_up+0xda/0xda
Jun 12 14:00:44 thelibrary kernel: br_nf_pre_routing_finish+0x23d/0x264 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: ? br_pass_frame_up+0xda/0xda
Jun 12 14:00:44 thelibrary kernel: ? br_handle_frame_finish+0x351/0x351
Jun 12 14:00:44 thelibrary kernel: ? nf_nat_ipv4_pre_routing+0x1e/0x4a [nf_nat]
Jun 12 14:00:44 thelibrary kernel: ? br_nf_forward_finish+0xd0/0xd0 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: ? br_handle_frame_finish+0x351/0x351
Jun 12 14:00:44 thelibrary kernel: NF_HOOK+0xd7/0xf7 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: ? br_nf_forward_finish+0xd0/0xd0 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: br_nf_pre_routing+0x229/0x239 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: ? br_nf_forward_finish+0xd0/0xd0 [br_netfilter]
Jun 12 14:00:44 thelibrary kernel: br_handle_frame+0x25e/0x2a6
Jun 12 14:00:44 thelibrary kernel: ? br_pass_frame_up+0xda/0xda
Jun 12 14:00:44 thelibrary kernel: __netif_receive_skb_core+0x335/0x4e7
Jun 12 14:00:44 thelibrary kernel: __netif_receive_skb_one_core+0x3d/0x95
Jun 12 14:00:44 thelibrary kernel: process_backlog+0xa3/0x13b
Jun 12 14:00:44 thelibrary kernel: net_rx_action+0xf4/0x29d
Jun 12 14:00:44 thelibrary kernel: __do_softirq+0xc4/0x1c2
Jun 12 14:00:44 thelibrary kernel: asm_call_irq_on_stack+0x12/0x20
Jun 12 14:00:44 thelibrary kernel: </IRQ>
Jun 12 14:00:44 thelibrary kernel: do_softirq_own_stack+0x2c/0x39
Jun 12 14:00:44 thelibrary kernel: do_softirq+0x3a/0x44
Jun 12 14:00:44 thelibrary kernel: __local_bh_enable_ip+0x3b/0x43
Jun 12 14:00:44 thelibrary kernel: ip_finish_output2+0x2ec/0x31f
Jun 12 14:00:44 thelibrary kernel: ? ipv4_mtu+0x3d/0x64
Jun 12 14:00:44 thelibrary kernel: __ip_queue_xmit+0x2a3/0x2df
Jun 12 14:00:44 thelibrary kernel: __tcp_transmit_skb+0x845/0x8ba
Jun 12 14:00:44 thelibrary kernel: tcp_connect+0x76d/0x7f4
Jun 12 14:00:44 thelibrary kernel: tcp_v4_connect+0x3fc/0x455
Jun 12 14:00:44 thelibrary kernel: __inet_stream_connect+0xd3/0x2b6
Jun 12 14:00:44 thelibrary kernel: inet_stream_connect+0x34/0x49
Jun 12 14:00:44 thelibrary kernel: __sys_connect+0x62/0x9d
Jun 12 14:00:44 thelibrary kernel: ? __sys_bind+0x78/0x9f
Jun 12 14:00:44 thelibrary kernel: __x64_sys_connect+0x11/0x14
Jun 12 14:00:44 thelibrary kernel: do_syscall_64+0x5d/0x6a
Jun 12 14:00:44 thelibrary kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jun 12 14:00:44 thelibrary kernel: RIP: 0033:0x146c9bdec53b
Jun 12 14:00:44 thelibrary kernel: Code: 83 ec 18 89 54 24 0c 48 89 34 24 89 7c 24 08 e8 bb fa ff ff 8b 54 24 0c 48 8b 34 24 41 89 c0 8b 7c 24 08 b8 2a 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 2f 44 89 c7 89 44 24 08 e8 f1 fa ff ff 8b 44
Jun 12 14:00:44 thelibrary kernel: RSP: 002b:0000146c98416f20 EFLAGS: 00000293 ORIG_RAX: 000000000000002a
Jun 12 14:00:44 thelibrary kernel: RAX: ffffffffffffffda RBX: 0000146c90d18c60 RCX: 0000146c9bdec53b
Jun 12 14:00:44 thelibrary kernel: RDX: 0000000000000010 RSI: 0000146c90e4ac94 RDI: 000000000000004b
Jun 12 14:00:44 thelibrary kernel: RBP: 0000146c98417160 R08: 0000000000000000 R09: 0000146c98418258
Jun 12 14:00:44 thelibrary kernel: R10: 0000146c9841710c R11: 0000000000000293 R12: 0000146c90e4ac94
Jun 12 14:00:44 thelibrary kernel: R13: 0000000000000000 R14: 0000146c98418258 R15: 0000146c90003310
Jun 12 14:00:44 thelibrary kernel: Modules linked in: nvidia_uvm(PO) xt_nat xt_tcpudp veth macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper drm backlight agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops nvidia(PO) ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb i2c_algo_bit ipmi_ssif amd64_edac_mod edac_mce_amd kvm_amd wmi_bmof kvm crct10dif_pclmul crc32_pclmul crc32c_intel mpt3sas ghash_clmulni_intel aesni_intel i2c_piix4 crypto_simd cryptd raid_class i2c_core nvme ahci scsi_transport_sas nvme_core wmi glue_helper acpi_ipmi ccp k10temp rapl libahci button ipmi_si acpi_cpufreq [last unloaded: i2c_algo_bit]
Jun 12 14:00:44 thelibrary kernel: ---[ end trace 98e92523c69e7e44 ]---
Jun 12 14:00:44 thelibrary kernel: RIP: 0010:nf_nat_setup_info+0x129/0x6aa [nf_nat]
Jun 12 14:00:44 thelibrary kernel: Code: ff 48 8b 15 ef 6a 00 00 89 c0 48 8d 04 c2 48 8b 10 48 85 d2 74 80 48 81 ea 98 00 00 00 48 85 d2 0f 84 70 ff ff ff 8a 44 24 46 <38> 42 46 74 09 48 8b 92 98 00 00 00 eb d9 48 8b 4a 20 48 8b 42 28
Jun 12 14:00:44 thelibrary kernel: RSP: 0018:ffffc90000338700 EFLAGS: 00010202
Jun 12 14:00:44 thelibrary kernel: RAX: ffff88818b422f06 RBX: ffff888108b21a40 RCX: 0000000000000000
Jun 12 14:00:44 thelibrary kernel: RDX: 01090000ffffff76 RSI: 000000003f50ed19 RDI: ffffc90000338720
Jun 12 14:00:44 thelibrary kernel: RBP: ffffc900003387c8 R08: 0000000098f45bae R09: ffff88813dd40620
Jun 12 14:00:44 thelibrary kernel: R10: 0000000000000348 R11: ffffffff815cbe4b R12: 0000000000000000
Jun 12 14:00:44 thelibrary kernel: R13: ffffc90000338720 R14: ffffc900003387dc R15: ffffffff8210b440
Jun 12 14:00:44 thelibrary kernel: FS: 0000146c98419700(0000) GS:ffff88881e980000(0000) knlGS:0000000000000000
Jun 12 14:00:44 thelibrary kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jun 12 14:00:44 thelibrary kernel: CR2: 0000150d3d688320 CR3: 000000020149a000 CR4: 0000000000350ee0
Jun 12 14:00:44 thelibrary kernel: Kernel panic - not syncing: Fatal exception in interrupt
-
-
I am in the rather unusual position of being able to build an almost exact clone of my setup (the RAM will be different).
I could set up a VPN and provide Limetech with access for testing.
-
If it is, I have this disabled at the BIOS level as well as in my boot.cfg, and I have still had lock-ups with macvlan.
-
Same issues as K1ng0011, also with an ASRock Rack board on the same X470 base. I have had a great number of system lock-ups since December; it started for me with rc2. I am not in a position to create new VLANs, so I have instead removed all my Dockers that used br0, as I was having issues even with them stopped.
1 hour ago, K1ng0011 said:
Current Unraid Version: 6.9.2
Original Motherboard: MSI Pro Carbon X370
Current Motherboard: ASRockRack X470D4U2-2T
-
I believe I have the same issue on 6.9.2.
-
Thanks, I'll try that should it happen again.
Large number of signal 9 (SIGKILL) on pool www since rc7 install
-
in Prereleases
For me it was Unraid & TDAR. Unraid was just using higher-than-normal resources, per the issues above, which was fine most of the time. When TDAR did anything, though, it would just die. All my other Dockers and plugins did nothing to the overall usage.