Everything posted by LammeN3rd

  1. Is there a compelling reason that there is a red error notification when there is an update for a plugin? From my perspective this should not be red, since it's just a plugin update, not something really bad... There have been quite a few updates in the last couple of weeks and I still jump every time I see a red error 😬
  2. Great to hear! I've switched my UniFi Docker back to macvlan and will report back if the issue comes up again.
  3. To be honest, I don't think it makes real sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs. And SSDs don't have any speed difference between positions of used flash when doing a 100% read-speed test. For a spinning disk that makes total sense, but from a flash perspective a read workload performs the same everywhere, as long as there is data there.
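     Here's a quick Python sketch of such a partial read test, just my own illustration (the device path is a placeholder; note it reads through the page cache, so drop caches first for honest numbers):

         import os
         import time

         DEVICE = "/dev/sdX"      # placeholder, point this at the drive under test
         CHUNK = 1024 * 1024      # 1 MiB sequential reads
         FRACTION = 0.10          # only test the first 10% of the device

         fd = os.open(DEVICE, os.O_RDONLY)
         try:
             size = os.lseek(fd, 0, os.SEEK_END)   # block devices report size via seek-to-end
             os.lseek(fd, 0, os.SEEK_SET)
             target = int(size * FRACTION)
             done = 0
             start = time.monotonic()
             while done < target:
                 buf = os.read(fd, min(CHUNK, target - done))
                 if not buf:
                     break
                 done += len(buf)
             elapsed = time.monotonic() - start
             print(f"read {done / 1e9:.1f} GB in {elapsed:.1f} s "
                   f"({done / elapsed / 1e6:.0f} MB/s)")
         finally:
             os.close(fd)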
  4. You could have a look at the used space on a drive level, but that's not that easy when drives are used in a BTRFS RAID other than two drives in RAID 1. NVMe drives usually report namespace utilization, so looking at that number and testing only the namespace 1 utilization would do the trick. This is the graph from one of my NVMe drives: and this is the used space (274 GB):
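     If you want to script that check, something along these lines should work with nvme-cli (a sketch: the device node is an example, and the JSON field names can differ a bit between nvme-cli versions; run as root):

         import json
         import subprocess

         DEV = "/dev/nvme0n1"   # example namespace device node

         # 'nvme id-ns' reports nsze/ncap/nuse in logical blocks.
         ns = json.loads(subprocess.run(
             ["nvme", "id-ns", DEV, "-o", "json"],
             check=True, capture_output=True, text=True).stdout)

         # The active LBA format (low nibble of flbas) gives the block size as 2^ds.
         lba_size = 2 ** ns["lbafs"][ns["flbas"] & 0xF]["ds"]
         used = ns["nuse"] * lba_size
         total = ns["nsze"] * lba_size
         print(f"namespace 1 utilization: {used / 1e9:.0f} GB of {total / 1e9:.0f} GB")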
  5. Just to make sure I understand it right: the flat line that basically indicates the max interface throughput is trimmed (empty) space on the SSD?

     Yes, and the controller or interface is probably the bottleneck.
  6. That's the result of TRIM. When data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased; the controller does that and marks those blocks / pages as zeroes. When you try to read from those blocks, the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the Unraid parity array cannot be used with TRIM, since that will invalidate the parity.
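     A toy example of why (my own sketch, nothing Unraid-specific): single parity is just the byte-wise XOR of the data blocks, so if a trimmed block suddenly reads back as zeroes, the stored parity no longer matches:

         def xor_parity(blocks):
             """Byte-wise XOR of equal-sized blocks, like single-parity RAID."""
             parity = bytearray(len(blocks[0]))
             for blk in blocks:
                 for i, b in enumerate(blk):
                     parity[i] ^= b
             return bytes(parity)

         d1, d2 = b"\xAA\xBB", b"\x11\x22"
         parity = xor_parity([d1, d2])       # what the parity disk holds

         # After a TRIM the SSD returns zeroes for d2, but parity was never updated.
         d2_trimmed = b"\x00\x00"

         # Rebuilding d1 from parity + surviving data now gives the wrong bytes.
         rebuilt = xor_parity([parity, d2_trimmed])
         print(rebuilt == d1)                # False: the parity is invalid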
  7. Hi, I would recommend against this workaround. Besides the complications and performance impact of running this VM, the main reason not to do this and use a router in bridge mode is that you lose access to your Unraid server if anything at all goes wrong: that could be the routing VM or anything with Unraid / the hardware... It sounds like a too-complicated solution for a simple problem...
  8. Please add this to the help notes in Unraid. When I started with Unraid I googled around for this info; having it in the help text will help a lot of new users.
  9. Squid is 50!

     Happy birthday Squid!!
  10. Found a few more places with the same issue. These are all the places I found:
      • opening the log from the top right corner
      • opening the log of an individual VM
      • Settings --> VM Manager --> View Libvirt Log
      • opening the Console from a Docker (both from the Docker tab and the Dashboard)

      The following places do not have the issue and work as normal:
      • opening the Terminal from the top right corner
      • opening a Docker log from the Docker tab

      If I find any others I will update.
  11. The system log under the Tools tab works fine, so no reason to switch browsers just yet.
  12. In Safari, opening the log window (top right corner) shows the login screen; the terminal and/or the logs from Docker containers work as normal. In Firefox the log window opens and shows the log like in 6.7.2.
  13. I suggest looking at your BIOS! From the above log it seems you are running a 2011 version; the latest version available is from Feb 2019: https://www.dell.com/support/home/nl/nl/nlbsdt1/drivers/driversdetails?driverid=0f4yy&oscode=ws8r2&productcode=poweredge-r710 For my PowerEdge T630 the BIOS from Feb 2019 fixed my issue!
  14. Quick update: I've done a lot of testing, removing plugins and disabling virtual machines and Docker, before I rolled back the firmware of my Intel NIC (i350) and the BIOS that I had updated recently. After the BIOS downgrade to 2.9.1 I've not seen the issue for days; upgrading to 2.10.5 brings back the crashes! I've not reinstalled my plugins, so I can't be 100% sure it's the BIOS, but if anyone else has this issue I highly recommend looking at your BIOS!!
  15. Before Unraid 6.7.x I used AFP but had a lot of issues; with the introduction of Unraid 6.7.x, Time Machine over SMB works flawlessly!
  16. I've been noticing the same kind of kernel warnings; the system keeps running though. (I also run Unraid on a Dell PowerEdge T630.) I've been running 6.7.2 since its release and these errors are more recent, so it can't be due to a core Unraid change; only updated plugins or Dockers are suspect.

      Oct 17 06:09:47 unRAID kernel: WARNING: CPU: 18 PID: 0 at net/netfilter/nf_nat_core.c:420 nf_nat_setup_info+0x6b/0x5fb [nf_nat]
      Oct 17 06:09:47 unRAID kernel: Modules linked in: xt_CHECKSUM veth ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 xt_nat iptable_mangle ip6table_filter ip6_tables macvlan vhost_net tun vhost tap ipt_MASQUERADE iptable_nat nf_nat_ipv4 iptable_filter ip_tables nf_nat nfsd lockd grace sunrpc md_mod ipmi_devintf igb i2c_algo_bit sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ipmi_ssif intel_cstate intel_uncore nvme intel_rapl_perf i2c_core mxm_wmi megaraid_sas nvme_core wmi ipmi_si acpi_power_meter acpi_pad pcc_cpufreq button [last unloaded: i2c_algo_bit]
      Oct 17 06:09:47 unRAID kernel: CPU: 18 PID: 0 Comm: swapper/18 Tainted: G W 4.19.56-Unraid #1
      Oct 17 06:09:47 unRAID kernel: Hardware name: Dell Inc. PowerEdge T630/0NT78X, BIOS 2.10.5 08/07/2019
      Oct 17 06:09:47 unRAID kernel: RIP: 0010:nf_nat_setup_info+0x6b/0x5fb [nf_nat]
      Oct 17 06:09:47 unRAID kernel: Code: 48 89 fb 48 8b 87 80 00 00 00 49 89 f7 41 89 d6 76 04 0f 0b eb 0b 85 d2 75 07 25 80 00 00 00 eb 05 25 00 01 00 00 85 c0 74 07 <0f> 0b e9 ac 04 00 00 48 8b 83 90 00 00 00 4c 8d 64 24 30 48 8d 73
      Oct 17 06:09:47 unRAID kernel: RSP: 0018:ffff88903fa83808 EFLAGS: 00010202
      Oct 17 06:09:47 unRAID kernel: RAX: 0000000000000080 RBX: ffff88a0256ba640 RCX: 0000000000000000
      Oct 17 06:09:47 unRAID kernel: RDX: 0000000000000000 RSI: ffff88903fa838f4 RDI: ffff88a0256ba640
      Oct 17 06:09:47 unRAID kernel: RBP: ffff88903fa838e0 R08: ffff88a0256ba640 R09: 0000000000000000
      Oct 17 06:09:47 unRAID kernel: R10: 0000000000000158 R11: ffffffff81e8e001 R12: ffff88859e47ff00
      Oct 17 06:09:47 unRAID kernel: R13: 0000000000000000 R14: 0000000000000000 R15: ffff88903fa838f4
      Oct 17 06:09:47 unRAID kernel: FS: 0000000000000000(0000) GS:ffff88903fa80000(0000) knlGS:0000000000000000
      Oct 17 06:09:47 unRAID kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Oct 17 06:09:47 unRAID kernel: CR2: 000000c42017a000 CR3: 0000000001e0a001 CR4: 00000000003626e0
      Oct 17 06:09:47 unRAID kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      Oct 17 06:09:47 unRAID kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Oct 17 06:09:47 unRAID kernel: Call Trace:
      Oct 17 06:09:47 unRAID kernel: <IRQ>
      Oct 17 06:09:47 unRAID kernel: ? ipt_do_table+0x58e/0x5db [ip_tables]
      Oct 17 06:09:47 unRAID kernel: nf_nat_alloc_null_binding+0x6f/0x86 [nf_nat]
      Oct 17 06:09:47 unRAID kernel: nf_nat_inet_fn+0xa0/0x192 [nf_nat]
      Oct 17 06:09:47 unRAID kernel: nf_hook_slow+0x37/0x96
      Oct 17 06:09:47 unRAID kernel: ip_local_deliver+0xa9/0xd7
      Oct 17 06:09:47 unRAID kernel: ? ip_sublist_rcv_finish+0x53/0x53
      Oct 17 06:09:47 unRAID kernel: ip_sabotage_in+0x38/0x3e
      Oct 17 06:09:47 unRAID kernel: nf_hook_slow+0x37/0x96
      Oct 17 06:09:47 unRAID kernel: ip_rcv+0x8e/0xbe
      Oct 17 06:09:47 unRAID kernel: ? ip_rcv_finish_core.isra.0+0x2e2/0x2e2
      Oct 17 06:09:47 unRAID kernel: __netif_receive_skb_one_core+0x4d/0x69
      Oct 17 06:09:47 unRAID kernel: netif_receive_skb_internal+0x9f/0xba
      Oct 17 06:09:47 unRAID kernel: br_pass_frame_up+0x123/0x145
      Oct 17 06:09:47 unRAID kernel: ? br_port_flags_change+0x29/0x29
      Oct 17 06:09:47 unRAID kernel: br_handle_frame_finish+0x330/0x375
      Oct 17 06:09:47 unRAID kernel: ? ipt_do_table+0x58e/0x5db [ip_tables]
      Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
      Oct 17 06:09:47 unRAID kernel: br_nf_hook_thresh+0xa3/0xc3
      Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
      Oct 17 06:09:47 unRAID kernel: br_nf_pre_routing_finish+0x239/0x260
      Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
      Oct 17 06:09:47 unRAID kernel: ? nf_nat_ipv4_in+0x1d/0x64 [nf_nat_ipv4]
      Oct 17 06:09:47 unRAID kernel: br_nf_pre_routing+0x2fc/0x321
      Oct 17 06:09:47 unRAID kernel: ? br_nf_forward_ip+0x352/0x352
      Oct 17 06:09:47 unRAID kernel: nf_hook_slow+0x37/0x96
      Oct 17 06:09:47 unRAID kernel: br_handle_frame+0x290/0x2d3
      Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
      Oct 17 06:09:47 unRAID kernel: ? br_handle_local_finish+0xe/0xe
      Oct 17 06:09:47 unRAID kernel: __netif_receive_skb_core+0x466/0x798
      Oct 17 06:09:47 unRAID kernel: ? udp_gro_receive+0x4c/0x134
      Oct 17 06:09:47 unRAID kernel: __netif_receive_skb_one_core+0x31/0x69
      Oct 17 06:09:47 unRAID kernel: netif_receive_skb_internal+0x9f/0xba
      Oct 17 06:09:47 unRAID kernel: napi_gro_receive+0x42/0x76
      Oct 17 06:09:47 unRAID kernel: igb_poll+0xb96/0xbbc [igb]
      Oct 17 06:09:47 unRAID kernel: net_rx_action+0x10b/0x274
      Oct 17 06:09:47 unRAID kernel: __do_softirq+0xce/0x1e2
      Oct 17 06:09:47 unRAID kernel: irq_exit+0x5e/0x9d
      Oct 17 06:09:47 unRAID kernel: do_IRQ+0xa9/0xc7
      Oct 17 06:09:47 unRAID kernel: common_interrupt+0xf/0xf
      Oct 17 06:09:47 unRAID kernel: </IRQ>
      Oct 17 06:09:47 unRAID kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141
      Oct 17 06:09:47 unRAID kernel: Code: ff 45 84 ff 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 ae 0c be ff fb 66 0f 1f 44 00 00 <48> 2b 1c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cb
      Oct 17 06:09:47 unRAID kernel: RSP: 0018:ffffc900064b3ea0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffd2
      Oct 17 06:09:47 unRAID kernel: RAX: ffff88903faa0b00 RBX: 00006e7edafc25ba RCX: 000000000000001f
      Oct 17 06:09:47 unRAID kernel: RDX: 00006e7edafc25ba RSI: 0000000035652651 RDI: 0000000000000000
      Oct 17 06:09:47 unRAID kernel: RBP: ffff88903faab400 R08: 0000000000000002 R09: 00000000000203c0
      Oct 17 06:09:47 unRAID kernel: R10: 00000000006ac764 R11: 0015366f8d483640 R12: 0000000000000004
      Oct 17 06:09:47 unRAID kernel: R13: 0000000000000004 R14: ffffffff81e5a018 R15: 0000000000000000
      Oct 17 06:09:47 unRAID kernel: do_idle+0x192/0x20e
      Oct 17 06:09:47 unRAID kernel: cpu_startup_entry+0x6a/0x6c
      Oct 17 06:09:47 unRAID kernel: start_secondary+0x197/0x1b2
      Oct 17 06:09:47 unRAID kernel: secondary_startup_64+0xa4/0xb0
      Oct 17 06:09:47 unRAID kernel: ---[ end trace 6ab3ef74e0b3e5a4 ]---

      unraid-diagnostics-20191017-1333.zip
  17. I've been noticing the same messages in my Unraid log: everything seems to keep working, but these errors keep coming back. Any ideas?
  18. Write amplification is *&%^$#& as well! https://en.wikipedia.org/wiki/Write_amplification
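     For anyone wondering what that actually means: write amplification is simply flash writes divided by host writes. A quick back-of-the-envelope sketch (the numbers are made up for illustration):

         # Write amplification factor = data written to flash / data written by host.
         host_writes_gb = 100     # what the OS asked the SSD to write (made up)
         flash_writes_gb = 250    # what the controller actually wrote (made up)
         waf = flash_writes_gb / host_writes_gb
         print(f"write amplification factor: {waf:.1f}x")   # 2.5x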
  19. As long as you have anything but Intel SSDs; these tend to go read-only when they reach their predicted life, all to protect the customer (read: Intel's sales)!
  20. The English datasheet says 200 TBW: https://www.samsung.com/semiconductor/global.semi.static/Samsung_SSD_960_EVO_Data_Sheet_Rev_1_2.pdf You can always try to claim warranty based on the German site... I found myself some cheap enterprise NVMe drives on eBay and highly recommend them: power-loss protection, very consistent write speed, and above all else 1366 TBW.
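     To put those TBW ratings in perspective, a rough calculation (the 50 GB/day write rate is just an assumed workload):

         # Years until a TBW rating is consumed at an assumed 50 GB/day of host writes.
         daily_writes_tb = 0.05   # 50 GB/day, purely an assumption

         for label, tbw in [("960 EVO, 200 TBW", 200),
                            ("enterprise drive, 1366 TBW", 1366)]:
             years = tbw / daily_writes_tb / 365
             print(f"{label}: ~{years:.0f} years at 50 GB/day")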
  21. It could be an option to add a check to the Update Assistant from the Fix Common Problems plugin; that community tool already supports plugin compatibility checks.
  22. Same here, just with 4x NVMe in RAID 1, and both Sonarr and Plex use /mnt/user/appdata. I've been running 6.7.1 RC, and 6.7.0 before that, and never had any corruption issues!
  23. There is a plugin to disable them, link: plugin-disable-security-mitigations