Dephcon

Members
  • Posts: 601
  • Days Won: 1

Posts posted by Dephcon

  1. +1  I recently added an NVMe drive and different RAM to my server and all of a sudden I was getting macvlan traces, so on the advice of a community member I switched to ipvlan.

     

    I'm having the same issue: with ipvlan and "Host access to custom networks: Enabled", my unraid server and any "bridge" containers can no longer route externally.

     

    Disabling "Host access to custom networks" seems to have "fixed" the issue, but I'm not sure what I'm losing here.  I recall turning it on for a good reason.

  2. I've removed the NIC bonding and the tagged VLAN network and I'm still seeing the same behavior.  It's really frustrating because the containers on the br0 interface with their own IPs work just fine; it's the containers in bridge mode that fail, along with unraid OS itself.

  3. Seems the problem persists with the docker service running but all the containers stopped.  It's possible ipvlan doesn't play well with VLAN-tagged networks or LACP bonding.  Do you have any idea why macvlan was causing my system to kernel panic?

  4. @JorgeB since switching to ipvlan, my unraid server seems to have problems with DNS/routing now.  It's a bit hard to rationalize...

    On boot / with the array stopped: everything is fine, I can resolve websites using Cloudflare DNS (1.1.1.1/1.0.0.1) and ping them

    On array/docker start: same behavior at first, but after about 3-4 minutes I can no longer ping/route to external addresses like the Cloudflare DNS servers, so nothing resolves

    If I disable the docker service, everything works again
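
    For reference, this is roughly what I'm running from the unraid console at each step (the hostname in the second ping is just an example):

    ping -c 3 1.1.1.1       # straight to the Cloudflare resolver by IP, no DNS involved
    ping -c 3 google.com    # same thing but through a DNS lookup against that resolver
    ip route                # sanity-check that the default gateway is still present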

     

    This was not an issue while running macvlan

    vault13-diagnostics-20221026-1109.zip

  5. I added an NVMe drive and replaced 4x4GB DIMMs with 2x16GB a few months ago.  It seemed fine for a while, but the last month or two it's kernel panicked almost weekly.  Unfortunately I didn't have remote syslog enabled, so this is the first time I've gotten anything useful.  I'll include the diagnostics from after the reboot, but they don't contain anything pre-panic.

     

    I did verify the RAM with memtest for 24 hours.

     

    Oct 23 20:51:24 vault13 kernel: ------------[ cut here ]------------
    Oct 23 20:51:24 vault13 kernel: WARNING: CPU: 5 PID: 0 at net/netfilter/nf_nat_core.c:594 nf_nat_setup_info+0x73/0x7b1 [nf_nat]
    Oct 23 20:51:24 vault13 kernel: Modules linked in: tcp_diag udp_diag inet_diag macvlan veth xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs md_mod k10temp hwmon_vid fam15h_power efivarfs wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge 8021q garp mrp stp llc bonding tls ipv6 e1000e i915 x86_pkg_temp_thermal intel_powerclamp iosf_mbi drm_buddy coretemp i2c_algo_bit ttm kvm_intel drm_display_helper kvm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel mxm_wmi intel_wmi_thunderbolt crypto_simd cryptd rapl drm intel_cstate nvme input_leds intel_uncore mpt3sas intel_gtt led_class i2c_i801 i2c_smbus agpgart nvme_core i2c_core ahci raid_class syscopyarea libahci scsi_transport_sas
    Oct 23 20:51:24 vault13 kernel: sysfillrect sysimgblt fb_sys_fops intel_pch_thermal fan video thermal wmi backlight acpi_pad button unix [last unloaded: e1000e]
    Oct 23 20:51:24 vault13 kernel: CPU: 5 PID: 0 Comm: swapper/5 Tainted: G        W         5.19.14-Unraid #1
    Oct 23 20:51:24 vault13 kernel: Hardware name: MSI MS-7998/Z170A SLI PLUS (MS-7998), BIOS 1.E0 06/15/2018
    Oct 23 20:51:24 vault13 kernel: RIP: 0010:nf_nat_setup_info+0x73/0x7b1 [nf_nat]
    Oct 23 20:51:24 vault13 kernel: Code: 48 8b 87 80 00 00 00 48 89 fb 49 89 f4 76 04 0f 0b eb 0e 83 7c 24 1c 00 75 07 25 80 00 00 00 eb 05 25 00 01 00 00 85 c0 74 07 <0f> 0b e9 6a 06 00 00 48 8b 83 88 00 00 00 48 8d 73 58 48 8d 7c 24
    Oct 23 20:51:24 vault13 kernel: RSP: 0018:ffffc900001fc7b8 EFLAGS: 00010202
    Oct 23 20:51:24 vault13 kernel: RAX: 0000000000000080 RBX: ffff88826d04cf00 RCX: ffff8881063ce3c0
    Oct 23 20:51:24 vault13 kernel: RDX: 0000000000000000 RSI: ffffc900001fc89c RDI: ffff88826d04cf00
    Oct 23 20:51:24 vault13 kernel: RBP: ffffc900001fc880 R08: 00000000cf00510a R09: 0000000000000000
    Oct 23 20:51:24 vault13 kernel: R10: 0000000000000158 R11: 0000000000000000 R12: ffffc900001fc89c
    Oct 23 20:51:24 vault13 kernel: R13: 00000000cf005100 R14: ffffc900001fc978 R15: 0000000000000000
    Oct 23 20:51:24 vault13 kernel: FS:  0000000000000000(0000) GS:ffff88884ed40000(0000) knlGS:0000000000000000
    Oct 23 20:51:24 vault13 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Oct 23 20:51:24 vault13 kernel: CR2: 000000c000353010 CR3: 000000000400a002 CR4: 00000000003706e0
    Oct 23 20:51:24 vault13 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Oct 23 20:51:24 vault13 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Oct 23 20:51:24 vault13 kernel: Call Trace:
    Oct 23 20:51:24 vault13 kernel: <IRQ>
    Oct 23 20:51:24 vault13 kernel: ? xt_write_recseq_end+0xf/0x1c [ip_tables]
    Oct 23 20:51:24 vault13 kernel: ? __local_bh_enable_ip+0x56/0x6b
    Oct 23 20:51:24 vault13 kernel: ? ipt_do_table+0x57a/0x5bf [ip_tables]
    Oct 23 20:51:24 vault13 kernel: ? xt_write_recseq_end+0xf/0x1c [ip_tables]
    Oct 23 20:51:24 vault13 kernel: ? __local_bh_enable_ip+0x56/0x6b
    Oct 23 20:51:24 vault13 kernel: __nf_nat_alloc_null_binding+0x66/0x81 [nf_nat]
    Oct 23 20:51:24 vault13 kernel: nf_nat_inet_fn+0xc0/0x1a8 [nf_nat]
    Oct 23 20:51:24 vault13 kernel: nf_nat_ipv4_local_in+0x2a/0xaa [nf_nat]
    Oct 23 20:51:24 vault13 kernel: nf_hook_slow+0x3a/0x96
    Oct 23 20:51:24 vault13 kernel: ? ip_protocol_deliver_rcu+0x164/0x164
    Oct 23 20:51:24 vault13 kernel: NF_HOOK.constprop.0+0x79/0xd9
    Oct 23 20:51:24 vault13 kernel: ? ip_protocol_deliver_rcu+0x164/0x164
    Oct 23 20:51:24 vault13 kernel: ip_sabotage_in+0x47/0x58 [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: nf_hook_slow+0x3a/0x96
    Oct 23 20:51:24 vault13 kernel: ? ip_rcv_finish_core.constprop.0+0x3b7/0x3b7
    Oct 23 20:51:24 vault13 kernel: NF_HOOK.constprop.0+0x79/0xd9
    Oct 23 20:51:24 vault13 kernel: ? ip_rcv_finish_core.constprop.0+0x3b7/0x3b7
    Oct 23 20:51:24 vault13 kernel: __netif_receive_skb_one_core+0x68/0x8d
    Oct 23 20:51:24 vault13 kernel: netif_receive_skb+0xbf/0x127
    Oct 23 20:51:24 vault13 kernel: br_handle_frame_finish+0x476/0x4b0 [bridge]
    Oct 23 20:51:24 vault13 kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Oct 23 20:51:24 vault13 kernel: br_nf_hook_thresh+0xe2/0x109 [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Oct 23 20:51:24 vault13 kernel: br_nf_pre_routing_finish+0x2c1/0x2ec [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Oct 23 20:51:24 vault13 kernel: ? NF_HOOK.isra.0+0xe4/0x140 [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: br_nf_pre_routing+0x226/0x23a [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
    Oct 23 20:51:24 vault13 kernel: br_handle_frame+0x27c/0x2e7 [bridge]
    Oct 23 20:51:24 vault13 kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Oct 23 20:51:24 vault13 kernel: __netif_receive_skb_core.constprop.0+0x4f6/0x6e3
    Oct 23 20:51:24 vault13 kernel: ? slab_post_alloc_hook+0x4d/0x15e
    Oct 23 20:51:24 vault13 kernel: ? __alloc_skb+0xb2/0x15e
    Oct 23 20:51:24 vault13 kernel: ? __kmalloc_node_track_caller+0x1ae/0x1d9
    Oct 23 20:51:24 vault13 kernel: ? udp_gro_udphdr+0x1c/0x40
    Oct 23 20:51:24 vault13 kernel: __netif_receive_skb_list_core+0x8a/0x11e
    Oct 23 20:51:24 vault13 kernel: netif_receive_skb_list_internal+0x1d7/0x210
    Oct 23 20:51:24 vault13 kernel: gro_normal_list+0x1d/0x3f
    Oct 23 20:51:24 vault13 kernel: napi_complete_done+0x7b/0x11a
    Oct 23 20:51:24 vault13 kernel: e1000e_poll+0x9e/0x23e [e1000e]
    Oct 23 20:51:24 vault13 kernel: __napi_poll.constprop.0+0x28/0x124
    Oct 23 20:51:24 vault13 kernel: net_rx_action+0x159/0x24f
    Oct 23 20:51:24 vault13 kernel: ? e1000_intr_msi+0x114/0x120 [e1000e]
    Oct 23 20:51:24 vault13 kernel: __do_softirq+0x126/0x288
    Oct 23 20:51:24 vault13 kernel: __irq_exit_rcu+0x79/0xb8
    Oct 23 20:51:24 vault13 kernel: common_interrupt+0x9b/0xc1
    Oct 23 20:51:24 vault13 kernel: </IRQ>
    Oct 23 20:51:24 vault13 kernel: <TASK>
    Oct 23 20:51:24 vault13 kernel: asm_common_interrupt+0x22/0x40
    Oct 23 20:51:24 vault13 kernel: RIP: 0010:cpuidle_enter_state+0x11b/0x1e4
    Oct 23 20:51:24 vault13 kernel: Code: e4 0f a2 ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 0e bf a6 ff fb 0f 1f 44 00 00 <45> 85 ed 0f 88 9e 00 00 00 48 8b 04 24 49 63 cd 48 6b d1 68 49 29
    Oct 23 20:51:24 vault13 kernel: RSP: 0018:ffffc90000107e98 EFLAGS: 00000246
    Oct 23 20:51:24 vault13 kernel: RAX: ffff88884ed40000 RBX: 0000000000000004 RCX: 0000000000000000
    Oct 23 20:51:24 vault13 kernel: RDX: 0000000000000005 RSI: ffffffff81ec95aa RDI: ffffffff81ec9a8a
    Oct 23 20:51:24 vault13 kernel: RBP: ffff88884ed75300 R08: 0000000000000002 R09: 0000000000000002
    Oct 23 20:51:24 vault13 kernel: R10: 0000000000000020 R11: 0000000000000221 R12: ffffffff821156c0
    Oct 23 20:51:24 vault13 kernel: R13: 0000000000000004 R14: 0000af79da1529b8 R15: 0000000000000000
    Oct 23 20:51:24 vault13 kernel: ? cpuidle_enter_state+0xf5/0x1e4
    Oct 23 20:51:24 vault13 kernel: cpuidle_enter+0x2a/0x38
    Oct 23 20:51:24 vault13 kernel: do_idle+0x187/0x1f5
    Oct 23 20:51:24 vault13 kernel: cpu_startup_entry+0x1d/0x1f
    Oct 23 20:51:24 vault13 kernel: start_secondary+0xeb/0xeb
    Oct 23 20:51:24 vault13 kernel: secondary_startup_64_no_verify+0xce/0xdb
    Oct 23 20:51:24 vault13 kernel: </TASK>
    Oct 23 20:51:24 vault13 kernel: ---[ end trace 0000000000000000 ]---

     

    vault13-diagnostics-20221024-0933.zip

  6. 10 hours ago, flyize said:

    Tone mapping works now.


    Sorry, can you elaborate on that a bit?  I was about to pull the trigger on 11th gen, but would prefer 12th.

    So, with 12th gen:

    1. It's stable with unraid in general
    2. It does Plex docker hardware H.264 encode/decode and H.265 decode
    3. It does Plex docker HDR tone mapping while hardware encode/decode is enabled (and actually working)

    thanks!

  7. On 9/24/2022 at 9:35 AM, KarlMeyer said:

    I have my intel 12500 now successfully working to hardware transcode Plex content (though tone mapping must be turned off in the Plex settings because it's still borked). I assume it would work in tDarr because on a hardware level, hardware transcoding works for 12 gen now. Makes me want to throw a party.

     

    Is the tone mapping an issue with the driver included in unraid, or a software issue with Plex?  I just started getting 4K/HDR content only to find out my 7th gen doesn't support tone mapping at all, and I need a replacement =\

  8. I've been having some issues lately with my fairly new NVMe cache disk.  I've had a couple of kernel panics and the cache drive going read-only.

     

    This is the first time I've been able to access the system to capture a diag, attached.  This stood out immediately:

    Sep 20 20:23:15 vault13 kernel: XFS (nvme0n1p1): Metadata corruption detected at xfs_dir3_leaf_check_int+0x93/0xc3 [xfs], xfs_dir3_leaf1 block 0x3e6f0818 
    Sep 20 20:23:15 vault13 kernel: XFS (nvme0n1p1): Unmount and run xfs_repair
    Sep 20 20:23:15 vault13 kernel: XFS (nvme0n1p1): First 128 bytes of corrupted metadata buffer:
    Sep 20 20:23:15 vault13 kernel: 00000000: 00 00 00 00 00 00 00 00 3d f1 00 00 04 3e 49 e8  ........=....>I.
    Sep 20 20:23:15 vault13 kernel: 00000010: 00 00 00 00 3e 6f 08 18 00 00 00 25 00 01 b3 b7  ....>o.....%....
    Sep 20 20:23:15 vault13 kernel: 00000020: c9 5c 71 28 56 fd 49 6f 91 d3 34 33 ac a8 ba 20  .\q(V.Io..43... 
    Sep 20 20:23:15 vault13 kernel: 00000030: 00 00 00 00 40 36 d5 c0 00 a9 00 1a 00 00 00 00  ....@6..........
    Sep 20 20:23:15 vault13 kernel: 00000040: 00 00 00 2e 00 00 00 08 00 00 17 2e 00 00 00 0a  ................
    Sep 20 20:23:15 vault13 kernel: 00000050: 00 f7 cc bb 00 00 02 2b 01 0f ac 85 00 00 00 00  .......+........
    Sep 20 20:23:15 vault13 kernel: 00000060: 06 6c 50 1c 00 00 00 48 06 76 1a 37 00 00 03 58  .lP....H.v.7...X
    Sep 20 20:23:15 vault13 kernel: 00000070: 07 bc cb 9b 00 00 02 b0 0c 77 81 da 00 00 00 00  .........w......
    Sep 20 20:23:15 vault13 kernel: XFS (nvme0n1p1): Corruption of in-memory data (0x8) detected at __xfs_buf_submit+0xdd/0x172 [xfs] (fs/xfs/xfs_buf.c:1514).  Shutting down filesystem.
    Sep 20 20:23:15 vault13 kernel: XFS (nvme0n1p1): Please unmount the filesystem and rectify the problem(s)
    Sep 20 20:23:15 vault13 kernel: blk_update_request: I/O error, dev loop2, sector 35378856 op 0x0:(READ) flags 0x80700 phys_seg 2 prio class 0

     

    The "corruption of in-memory data" bit caught my eye, last week I did run a 24hours memtest86 to ensure my also fairly new 4x16GB RAM kits wasn't bad.

     

    Based on some other threads, it seems I need to boot into maintenance mode to be able to run 'Check Filesystem Status'.  I had to hard reboot to recover, so is there anything else I can do while my parity check is running?
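
    For reference, my understanding is that check ultimately just runs xfs_repair against the cache device; something like this from maintenance mode (device name taken from the log above, -n first for a read-only dry run):

    xfs_repair -n /dev/nvme0n1p1   # report problems only, change nothing
    xfs_repair /dev/nvme0n1p1      # actual repair, only with the filesystem unmounted / array in maintenance mode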

     

    Thanks!

    vault13-diagnostics-20220920-2025.zip

  9. 19 hours ago, alturismo said:

     

    exactly the same, just add the /dev/dri as device in the container(s), btw, gvt-g is for a VM ... not for the docker(s), this is just a sweet benefit ;)

    Oh snap!  For some reason I assumed /dev/dri was an exclusive-access type thing.  Thanks!
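
    For anyone else finding this later, a minimal sketch of what that ends up looking like on the docker run side (image names are just examples; the point is both containers get the same --device flag):

    docker run -d --name plex --device=/dev/dri lscr.io/linuxserver/plex
    docker run -d --name jellyfin --device=/dev/dri lscr.io/linuxserver/jellyfin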

  10. How do the virtual GPUs work for multiple containers?  Currently I have the bare-metal iGPU passed through to my Plex container with:

    --device=/dev/dri


    I'd like to attach a virtual GPU to both the Plex and Jellyfin containers.

  11. Unfortunately, the amount of counterfeiting on Amazon of SanDisk/Samsung SD cards and now apparently USB keys is brutal.  Typically the most popular brands are targeted because of their higher volume.

     

    For those curious, Amazon warehouses store their own stock and stock from third-party resellers in the same bins, so counterfeit items with the same SKU dilute Amazon's legit stock.

  12. 53 minutes ago, trurl said:

    This is exactly what I do (and I suspect most people do). The warning is saying it could overwrite files in the destination path. If the destination path specifies a subfolder of a user share, it won't go up a level and overwrite other things.

    Great, because having appdata and appdata_backup was really cramping my auto-complete game in the CLI.

  13. On 7/28/2018 at 4:19 PM, CaptainTivo said:

    2. I am concerned about this warning: 

     

    I have a dedicated share for all backups from all my computers called Backup1, which is limited to a specific disk.  Within that share are a bunch of directories with different backup types:

     

    Backup1\

       backups\

          PC1\

          PC2\

      Game backups\

      Pictures\

     

    etc.  Can I simply make a dedicated folder for this backup?  i.e.:

     

    Destination Share:   Backup1\backups\mariadb

     

    The warning seems to imply that this could write over files/folders at the Backup1 directory  level.

     

    Thanks.

     

    I'm also curious about this.  I'd prefer to store my appdata backups at /mnt/user/backup/appdata_backup instead of a dedicated share.

  14. Anyone had any luck using inputs.bond?

     

    I get this in my log:

    2020-07-15T18:18:00Z E! [inputs.bond] Error in plugin: error inspecting '/rootfs/proc/net/bonding/bond0' interface: open /rootfs/proc/net/bonding/bond0: no such file or directory

    Then I set the path in the conf with 'host_proc = "/proc"' and got this:

    2020-07-15T18:19:00Z E! [inputs.bond] Error in plugin: error inspecting '/proc/net/bonding/bond0' interface: open /proc/net/bonding/bond0: no such file or directory
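
    For what it's worth, I suspect the underlying problem is that the host's /proc isn't actually visible inside the telegraf container.  What I'm going to try next is mounting it in and pointing the plugin at that mount; a sketch, assuming the official telegraf docker image (config path and container name are just examples):

    # telegraf.conf -- read bond stats from the host's /proc as mounted inside the container
    [[inputs.bond]]
      host_proc = "/rootfs/proc"
      bond_interfaces = ["bond0"]

    # run with the host's /proc mounted read-only at /rootfs/proc
    docker run -d --name telegraf \
      -v /proc:/rootfs/proc:ro \
      -v /mnt/user/appdata/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
      telegraf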