vcxpz · Members · 18 posts

Posts posted by vcxpz

  1. Following some guides on YouTube, I'm trying to set up some more security measures, ending with putting containers in different networks. Running

    docker network create proxy

     

    creates the proxy network, but a container attached to it can still access all the other containers and the host. Here's what happens when I use nc to probe the ports:

     

    Port 443 is a Docker container set to host networking, 8443 is the Unraid web UI, and 7443 is another container set to bridge. Is this normal behaviour? I thought that creating another network would give the containers in it no access to any local networks, just the internet? (A rough workaround sketch follows the nc output below.)

     

    root@55f012343cc3:/# nc -v 192.168.1.2 443
    Connection to 192.168.1.2 443 port [tcp/https] succeeded!
    ^C
    root@55f012343cc3:/# nc -v 192.168.1.2 8443
    Connection to 192.168.1.2 8443 port [tcp/*] succeeded!
    ^C
    root@55f012343cc3:/# nc -v 192.168.1.2 7443
    Connection to 192.168.1.2 7443 port [tcp/*] succeeded!
    ^C
    root@55f012343cc3:/# 
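
    For reference, here's a rough sketch of one way to get the behaviour I expected (internet access but no LAN), using iptables and the DOCKER-USER chain. The 172.18.0.0/16 subnet is just a guess at what Docker assigned to the proxy network, and this only covers forwarded traffic to other LAN hosts, not connections aimed at the host's own IP like 8443:

    # find the subnet Docker actually assigned to the proxy network
    docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' proxy

    # drop forwarded traffic from that subnet to the LAN; DOCKER-USER is the
    # chain Docker reserves for user rules, so Docker won't flush it
    iptables -I DOCKER-USER -s 172.18.0.0/16 -d 192.168.1.0/24 -j DROP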

     

  2. I have acquired an i7-13700K for my Unraid machine (in addition to 64GB of 3600MHz DDR4). The upgrade went fine, except the temperatures were exceeding 70 degrees and the system would crash under any load. Since then I have bought the biggest CPU cooler that fits in my case and temps are fine, but I'm still getting some errors:

     

    Mar  2 21:50:01 Discovery kernel: Modules linked in: af_packet ipvlan nvidia_uvm(PO) veth xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xfs ip6table_nat md_mod efivarfs iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge 8021q garp mrp stp llc mlx4_en gigabyte_wmi wmi_bmof x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd intel_cstate intel_uncore nvidia_drm(PO) nvidia_modeset(PO) i2c_i801 i915 i2c_smbus mlx4_core nvidia(PO) iosf_mbi drm_buddy i2c_algo_bit ttm ahci drm_display_helper libahci drm_kms_helper input_leds
    Mar  2 21:50:01 Discovery kernel: joydev led_class drm cp210x usbserial btusb btrtl intel_gtt btbcm btintel agpgart bluetooth i2c_core nvme syscopyarea sysfillrect sysimgblt ecdh_generic ecc nvme_core fb_sys_fops fan thermal wmi video backlight tpm_crb tpm_tis tpm_tis_core tpm acpi_tad acpi_pad button unix
    Mar  2 21:50:01 Discovery kernel: CPU: 12 PID: 11662 Comm: shfs Tainted: P      D W  O      5.19.17-Unraid #2
    Mar  2 21:50:01 Discovery kernel: Hardware name: Gigabyte Technology Co., Ltd. Z690 GAMING X DDR4/Z690 GAMING X DDR4, BIOS F22 12/07/2022
    Mar  2 21:50:01 Discovery kernel: RIP: 0010:do_exit+0x39/0x8e5
    Mar  2 21:50:01 Discovery kernel: Code: 89 fd 53 48 83 ec 28 65 48 8b 04 25 28 00 00 00 48 89 44 24 20 31 c0 65 48 8b 1c 25 c0 bb 01 00 48 83 bb a0 07 00 00 00 74 02 <0f> 0b 48 8b bb c8 06 00 00 e8 b7 c0 7c 00 48 8b 83 c0 06 00 00 83
    Mar  2 21:50:01 Discovery kernel: RSP: 0018:ffffc90000cc3ee0 EFLAGS: 00010282
    Mar  2 21:50:01 Discovery kernel: RAX: 0000000000000000 RBX: ffff8882094b6000 RCX: 0000000000000000
    Mar  2 21:50:01 Discovery kernel: RDX: 0000000000000000 RSI: ffffffff820d7be1 RDI: 000000000000000b
    Mar  2 21:50:01 Discovery kernel: RBP: 000000000000000b R08: 0000000000000000 R09: 0000000000aaaaaa
    Mar  2 21:50:01 Discovery kernel: R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000000
    Mar  2 21:50:01 Discovery kernel: R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
    Mar  2 21:50:01 Discovery kernel: FS:  0000150f46ca36c0(0000) GS:ffff88907f900000(0000) knlGS:0000000000000000
    Mar  2 21:50:01 Discovery kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar  2 21:50:01 Discovery kernel: CR2: 0000150fdd400000 CR3: 00000001ce180006 CR4: 0000000000770ee0
    Mar  2 21:50:01 Discovery kernel: PKRU: 55555554
    Mar  2 21:50:01 Discovery kernel: Call Trace:
    Mar  2 21:50:01 Discovery kernel: <TASK>
    Mar  2 21:50:01 Discovery kernel: ? ksys_pread64+0x64/0x84
    Mar  2 21:50:01 Discovery kernel: make_task_dead+0xba/0xba
    Mar  2 21:50:01 Discovery kernel: rewind_stack_and_make_dead+0x17/0x17
    Mar  2 21:50:01 Discovery kernel: RIP: 0033:0x150f476b3657
    Mar  2 21:50:01 Discovery kernel: Code: 08 89 3c 24 48 89 4c 24 18 e8 f5 4a f8 ff 4c 8b 54 24 18 48 8b 54 24 10 41 89 c0 48 8b 74 24 08 8b 3c 24 b8 11 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 04 24 e8 45 4b f8 ff 48 8b
    Mar  2 21:50:01 Discovery kernel: RSP: 002b:0000150f46ca2a10 EFLAGS: 00000297 ORIG_RAX: 0000000000000011
    Mar  2 21:50:01 Discovery kernel: RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 0000150f476b3657
    Mar  2 21:50:01 Discovery kernel: RDX: 0000000000020000 RSI: 0000150f30039000 RDI: 00000000000001ad
    Mar  2 21:50:01 Discovery kernel: RBP: 0000000000000000 R08: 0000000000000001 R09: 0000150f46ca2bd8
    Mar  2 21:50:01 Discovery kernel: R10: 000000008091b000 R11: 0000000000000297 R12: 0000150f46ca2bd8
    Mar  2 21:50:01 Discovery kernel: R13: 0000000000000000 R14: 0000150f30012928 R15: 0000000000000000
    Mar  2 21:50:01 Discovery kernel: </TASK>
    Mar  2 21:50:01 Discovery kernel: ---[ end trace 0000000000000000 ]---
    Mar  2 21:53:04 Discovery kernel: md: recovery thread: P corrected, sector=106723928

    (last entry is caused by the array shutting down uncleanly)

     

    After this the Unraid UI is unresponsive (does not load), though SSH works. A shutdown halts and does nothing. This is really all the information I have. I'm going to try reverting back to the 12700F that was in it previously; otherwise, do I have a dead or unsupported CPU?

     

    I will try to get a diagnostics report.
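
    A rough sketch of pulling one over SSH while the web UI is down, assuming the standard diagnostics command still responds (the /boot/logs location is from memory):

    # run on the Unraid console/SSH session; bundles syslog, config and SMART data into a zip
    diagnostics
    ls -lh /boot/logs/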

    Edit: there are no diagnostics reports that were made for the 13700K. I swapped back to the 12700F and everything is fine. What are the chances of this CPU being dead? If anyone can look at the logs above, please help me determine whether the CPU is bad or whether I should put it in another machine for further testing.

     

    Edit 2:

    I put the 13700K into another system and did a test with IPDT; it passed on "Instant 6GHz" mode. NGL, I'm lost. Here are the system specs if that helps:

    Z690 GAMING X DDR4 (the same motherboard model I tested the CPU with in the other system)

    Corsair Vengeance LPX 64GB (4x16GB) (3600MHz)

    GTX 1660 Super

     

    I can rule out power delivery problems; it's an 850W PSU, the same as in the other system I tested this CPU with, and that one has a 4070 Ti.

     

    Edit 100:

    Still, with the old 12700F I now get a crash :(

    Mar  2 23:27:19 Discovery kernel: Sending NMI from CPU 9 to CPUs 16:
    Mar  2 23:27:19 Discovery kernel: NMI backtrace for cpu 16
    Mar  2 23:27:19 Discovery kernel: CPU: 16 PID: 0 Comm: swapper/16 Tainted: P      D    O      5.19.17-Unraid #2
    Mar  2 23:27:19 Discovery kernel: Hardware name: Gigabyte Technology Co., Ltd. Z690 GAMING X DDR4/Z690 GAMING X DDR4, BIOS F22 12/07/2022
    Mar  2 23:27:19 Discovery kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x16e/0x1d0
    Mar  2 23:27:19 Discovery kernel: Code: f6 48 05 00 ce 02 00 48 03 04 f5 e0 6a 16 82 48 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc 0f 0d 0e 8b 03 <66> 85 c0 74 04 f3 90 eb f5 89 c7 66 31 ff 39 f9 74 0a 48 85 f6 c6
    Mar  2 23:27:19 Discovery kernel: RSP: 0018:ffffc900004d8e38 EFLAGS: 00000002
    Mar  2 23:27:19 Discovery kernel: RAX: 0000000000040101 RBX: ffff888104a65570 RCX: 0000000000440000
    Mar  2 23:27:19 Discovery kernel: RDX: ffff88907fc2ce00 RSI: 0000000000000000 RDI: ffff888104a65570
    Mar  2 23:27:19 Discovery kernel: RBP: 0000000000000010 R08: 0000000000000000 R09: 0000000000000200
    Mar  2 23:27:19 Discovery kernel: R10: 0000000000000000 R11: ffffc900004d8ff8 R12: ffff88907fc2ce00
    Mar  2 23:27:19 Discovery kernel: R13: 0000000000000000 R14: 0000000000000202 R15: ffff8881bcb59b00
    Mar  2 23:27:19 Discovery kernel: FS:  0000000000000000(0000) GS:ffff88907fc00000(0000) knlGS:0000000000000000
    Mar  2 23:27:19 Discovery kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar  2 23:27:19 Discovery kernel: CR2: 000000c00010f010 CR3: 000000000420a003 CR4: 0000000000770ee0
    Mar  2 23:27:19 Discovery kernel: PKRU: 55555554
    Mar  2 23:27:19 Discovery kernel: Call Trace:
    Mar  2 23:27:19 Discovery kernel: <IRQ>
    Mar  2 23:27:19 Discovery kernel: do_raw_spin_lock+0x14/0x1a
    Mar  2 23:27:19 Discovery kernel: _raw_spin_lock_irqsave+0x2c/0x37
    Mar  2 23:27:19 Discovery kernel: end_request+0x158/0x184 [md_mod]
    Mar  2 23:27:19 Discovery kernel: blk_update_request+0x22c/0x2e2
    Mar  2 23:27:19 Discovery kernel: scsi_end_request+0x27/0xf0
    Mar  2 23:27:19 Discovery kernel: scsi_io_completion+0x15f/0x466
    Mar  2 23:27:19 Discovery kernel: blk_complete_reqs+0x3e/0x4c
    Mar  2 23:27:19 Discovery kernel: __do_softirq+0x126/0x288
    Mar  2 23:27:19 Discovery kernel: __irq_exit_rcu+0x79/0xb8
    Mar  2 23:27:19 Discovery kernel: common_interrupt+0x9b/0xc1
    Mar  2 23:27:19 Discovery kernel: </IRQ>
    Mar  2 23:27:19 Discovery kernel: <TASK>
    Mar  2 23:27:19 Discovery kernel: asm_common_interrupt+0x22/0x40
    Mar  2 23:27:19 Discovery kernel: RIP: 0010:cpuidle_enter_state+0x11b/0x1e4
    Mar  2 23:27:19 Discovery kernel: Code: 5b fa a1 ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 9d a9 a6 ff fb 0f 1f 44 00 00 <45> 85 ed 0f 88 9e 00 00 00 48 8b 04 24 49 63 cd 48 6b d1 68 49 29
    Mar  2 23:27:19 Discovery kernel: RSP: 0018:ffffc900001fbe98 EFLAGS: 00000246
    Mar  2 23:27:19 Discovery kernel: RAX: ffff88907fc00000 RBX: 0000000000000004 RCX: 0000000000000000
    Mar  2 23:27:19 Discovery kernel: RDX: 0000000000000010 RSI: ffffffff820d7be1 RDI: ffffffff820d80c1
    Mar  2 23:27:19 Discovery kernel: RBP: ffff88907fc35600 R08: 0000000000000000 R09: 0000000000000000
    Mar  2 23:27:19 Discovery kernel: R10: 0000000000000020 R11: 0000000000000100 R12: ffffffff82315740
    Mar  2 23:27:19 Discovery kernel: R13: 0000000000000004 R14: 0000031093627f79 R15: 0000000000000000
    Mar  2 23:27:19 Discovery kernel: ? cpuidle_enter_state+0xf5/0x1e4
    Mar  2 23:27:19 Discovery kernel: cpuidle_enter+0x2a/0x38
    Mar  2 23:27:19 Discovery kernel: do_idle+0x187/0x1f5
    Mar  2 23:27:19 Discovery kernel: cpu_startup_entry+0x1d/0x1f
    Mar  2 23:27:19 Discovery kernel: start_secondary+0xeb/0xeb
    Mar  2 23:27:19 Discovery kernel: secondary_startup_64_no_verify+0xce/0xdb
    Mar  2 23:27:19 Discovery kernel: </TASK>

    I will swap the old RAM back in and see... (this NEVER happened before I upgraded the CPU and RAM)
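
    A quick in-place sanity check of the new RAM might look like the sketch below; memtester is an assumption here (it is not part of stock Unraid and would need installing separately), and a full Memtest86+ pass from the boot menu is more thorough:

    # assumption: a memtester binary has been installed; locks and tests 4 GiB for 2 passes
    memtester 4096M 2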

  3. I'm trying to get my Bluetooth 5 dongle to pass through to Docker on Unraid 6.11.0-rc2. It looks like the driver is trying to load its firmware automatically, but the load is failing:

    [   57.803956] Bluetooth: Core ver 2.22
    [   57.804111] NET: Registered PF_BLUETOOTH protocol family
    [   57.804258] Bluetooth: HCI device and connection manager initialized
    [   57.804406] Bluetooth: HCI socket layer initialized
    [   57.804553] Bluetooth: L2CAP socket layer initialized
    [   57.804698] Bluetooth: SCO socket layer initialized
    [   58.867186] Bluetooth: hci0: RTL: examining hci_ver=0a hci_rev=000b lmp_ver=0a lmp_subver=8761
    [   58.879191] Bluetooth: hci0: RTL: rom_version status=0 version=1
    [   58.879351] Bluetooth: hci0: RTL: loading rtl_bt/rtl8761bu_fw.bin
    [   58.879532] bluetooth hci0: Direct firmware load for rtl_bt/rtl8761bu_fw.bin failed with error -2
    [   58.879689] Bluetooth: hci0: RTL: firmware file rtl_bt/rtl8761bu_fw.bin not found

     

  4. 2 minutes ago, johnnie.black said:

    Yep, df is also reporting the wrong free space, so it's a btrfs issue. df does report the correct used space, unlike Unraid, but that's a known issue, since Unraid currently calculates the used space by subtracting free space from total capacity.

    So this is a bug? Should I be worried about it? Or should I ignore it until it’s been patched? (Assuming it’ll be patched in the next update)
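
    For cross-checking, btrfs' own tools report space in an allocation-aware way that df can't; a minimal sketch, assuming the pool is mounted at /mnt/cache:

    # overall used/free including unallocated space
    btrfs filesystem usage /mnt/cache
    # per-profile (data/metadata/system) breakdown
    btrfs filesystem df /mnt/cache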

  5. 19 hours ago, johnnie.black said:

    If you don't need the extra space, remove it: one less point of failure. Also, array writes can be faster since it's basically RAID1. But as long as you have system notifications enabled and do regular parity checks, it's unlikely to fail "silently".

    Thanks! From what you suggested, I think I will leave it for now.

  6. Hey,
    I have 3 WD Red HDDs (2x 8TB, 1x 4TB), with one of the 8TB drives being parity. I bought the two 8TB drives new, and the 4TB drive was taken out of a WD NAS, so two brand new ones and one old one (~3 years of power-on hours according to SMART). As the 4TB drive is old and has 0% usage, should I remove it from the array so there is only 1 data disk and 1 parity? What if the 4TB drive silently fails and then my main 8TB drive fails?

     

    I have attached a SMART report of the ~3 year old disk; from what I see it's healthy, for now.

    skynet-smart-20200426-2211.zip
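
    For anyone repeating the check, a minimal sketch of pulling the same SMART data from the console (/dev/sdb is a placeholder for the 4TB drive's device node):

    smartctl -a /dev/sdb                      # full SMART attribute dump (placeholder device)
    smartctl -A /dev/sdb | grep -i power_on   # just the power-on hours attribute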
