Posts posted by LammeN3rd

  1. I think this is a long overdue change! Chasing more and more new users is just not sustainable in the long run!

    All current users being grandfathered in is great, but as a user of a Pro licence for the last 6 years I would recommend looking at the model of Nabu Casa (Home Assistant) and developing additional services for Unraid that do require a subscription for legacy users!

    I love Unraid and hope to use it for many more years; the company behind it being healthy is crucial to make that happen.

  2. Is there a compelling reason that there is a red error notification when there is an update for the plugin?

    From my perspective this should not be red, since it's just a plugin update, not something really bad...

    There have been quite a few updates over the last couple of weeks and I still jump every time I see a red error 😬

     

    [Screenshot: the plugin update shown as a red error notification]

  3. 11 hours ago, Hoopster said:

    Updated to 6.9.2 easily with no issue so far.

     

    A couple of posts were made earlier about 6.9.2 containing a kernel patch which would potentially address the macvlan/broadcast call traces on br0 networks when docker containers are assigned a specific IP address.  Creating a VLAN for docker containers solved the problem for me three years ago when I first came across the problem.

     

    I am curious to know what patch that is and what it addresses to resolve the problem (assuming it has been solved).

    I was wondering the same thing :)

  4. To be honest, I don't think it makes much sense to test more than the first 10% of an SSD; that would bypass this issue on all but completely empty SSDs.

    SSDs also don't show any speed difference between different positions of the used flash during a pure read-speed test. For a spinning disk position matters, but from a flash perspective a read workload performs the same everywhere, as long as there is data there.
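    Something like the sketch below is what I have in mind: read only the first 10% of the device sequentially and report the throughput. This is a minimal illustration, not whatever the benchmark tool actually does; the device path is a placeholder and it needs root access.

    import os
    import time

    DEVICE = "/dev/sdX"        # placeholder block device path, adjust to the SSD under test
    CHUNK = 1024 * 1024        # read in 1 MiB chunks
    FRACTION = 0.10            # only benchmark the first 10% of the device

    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)   # total device size in bytes
        os.lseek(fd, 0, os.SEEK_SET)
        target = int(size * FRACTION)

        read_total = 0
        start = time.time()
        while read_total < target:
            data = os.read(fd, CHUNK)
            if not data:
                break
            read_total += len(data)
        elapsed = time.time() - start
    finally:
        os.close(fd)

    print(f"Read {read_total / 1e9:.2f} GB in {elapsed:.1f} s "
          f"({read_total / 1e6 / elapsed:.0f} MB/s)")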

     

  5. 26 minutes ago, Fireball3 said:
    11 hours ago, LammeN3rd said:
    That's the result of TRIM: when data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased. The controller does that and marks those blocks/pages as zeroes. When you try to read from those blocks the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the Unraid parity array cannot use TRIM, since that would invalidate the parity.
     

    Just to make sure I understand it right. The flat line that basically indicates the max. interface throughput is trimmed (empty) space on the SSD?

    Yes, the controller or interface is probably the bottleneck.

  6. On 1/4/2021 at 5:50 AM, jbartlett said:

    That's a trend across pretty much every SSD and I don't have an answer for you as to why. On the HDDB, I take the peak speed and report that as the transfer speed.

    That's the result of TRIM: when data is deleted from a modern SSD, TRIM is used to tell the controller that those blocks are free and can be erased. The controller does that and marks those blocks/pages as zeroes. When you try to read from those blocks the SSD controller does not actually read the flash, it just sends you zeroes as fast as it can. This is the reason SSDs used in the Unraid parity array cannot use TRIM, since that would invalidate the parity.
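    To illustrate the parity point, here is a toy sketch assuming a simple single-parity (XOR) layout like Unraid's: if TRIM silently zeroes a data block without a matching parity update, a later parity check fails. The disk contents are made-up byte strings, not anything from a real array.

    from functools import reduce

    def xor_blocks(blocks):
        # XOR a list of equal-sized byte blocks into a single parity block
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    # Three data disks holding some data at the same block position
    data = [b"\xaa" * 8, b"\x55" * 8, b"\x0f" * 8]
    parity = xor_blocks(data)          # parity as written when the data was written

    # TRIM zeroes the block on disk 2 behind the array's back (no parity update)
    data[1] = b"\x00" * 8

    # A parity check recomputes XOR over the data disks and compares with stored parity
    print(xor_blocks(data) == parity)  # False: the stored parity is now invalid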

     

  7. Hi,

     

    I would recommend against this workaround. Besides the complications and performance impact of running this VM, the main reason not to do this and put your router in bridge mode is that you lose access to your Unraid server if anything at all goes wrong;
    that could be the routing VM or anything with Unraid or the hardware. It sounds like a too complicated solution for a simple problem.

  8. On 10/10/2019 at 10:27 PM, em1917 said:

    Here are the diagnostic files. There are no hardware errors. This is running on a Dell R710 server and I checked for ECC issues and other hardware failures that the DRAC would have picked up.

    utgard01-diagnostics-20191010-2026.zip

    I suggest looking at your BIOS! From the log above it seems you are running a 2011 version; the latest version available is from February 2019:

    https://www.dell.com/support/home/nl/nl/nlbsdt1/drivers/driversdetails?driverid=0f4yy&oscode=ws8r2&productcode=poweredge-r710

     

    For my PowerEdge T630 the BIOS from February 2019 fixed my issue!
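    If you want to double-check which BIOS your server is actually running without digging through a diagnostics zip, the Linux kernel exposes the DMI fields under /sys/class/dmi/id. A small sketch (generic Linux, nothing Unraid-specific):

    from pathlib import Path

    # DMI information exported by the kernel; present on most x86 servers
    DMI = Path("/sys/class/dmi/id")

    for field in ("bios_vendor", "bios_version", "bios_date", "product_name"):
        path = DMI / field
        value = path.read_text().strip() if path.exists() else "unknown"
        print(f"{field}: {value}")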

  9. Quick update: I've done a lot of testing, removing plugins and disabling virtual machines and Docker, before I rolled back the firmware of my Intel NIC (i350) and the BIOS that I had updated recently. After the BIOS downgrade to 2.9.1 I've not seen the issue for days; upgrading to 2.10.5 brings back the crashes!

     

    I've not reinstalled my plugins, so I can't be 100% sure it's the BIOS, but if anyone else has this issue I highly recommend looking at your BIOS!

     

  10. I've been noticing the same kind of kernel warnings; the system keeps running though. (I also run Unraid on a Dell PowerEdge T630.)

    I've been running 6.7.2 since its release and these errors are more recent, so they can't be due to a core Unraid change; only updated plugins or Docker containers are suspect (see the sketch after the trace below).

     

     

    Oct 17 06:09:47 unRAID kernel: WARNING: CPU: 18 PID: 0 at net/netfilter/nf_nat_core.c:420 nf_nat_setup_info+0x6b/0x5fb [nf_nat]
    Oct 17 06:09:47 unRAID kernel: Modules linked in: xt_CHECKSUM veth ipt_REJECT ip6table_mangle ip6table_nat nf_nat_ipv6 xt_nat iptable_mangle ip6table_filter ip6_tables macvlan vhost_net tun vhost tap ipt_MASQUERADE iptable_nat nf_nat_ipv4 iptable_filter ip_tables nf_nat nfsd lockd grace sunrpc md_mod ipmi_devintf igb i2c_algo_bit sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd glue_helper ipmi_ssif intel_cstate intel_uncore nvme intel_rapl_perf i2c_core mxm_wmi megaraid_sas nvme_core wmi ipmi_si acpi_power_meter acpi_pad pcc_cpufreq button [last unloaded: i2c_algo_bit]
    Oct 17 06:09:47 unRAID kernel: CPU: 18 PID: 0 Comm: swapper/18 Tainted: G        W         4.19.56-Unraid #1
    Oct 17 06:09:47 unRAID kernel: Hardware name: Dell Inc. PowerEdge T630/0NT78X, BIOS 2.10.5 08/07/2019
    Oct 17 06:09:47 unRAID kernel: RIP: 0010:nf_nat_setup_info+0x6b/0x5fb [nf_nat]
    Oct 17 06:09:47 unRAID kernel: Code: 48 89 fb 48 8b 87 80 00 00 00 49 89 f7 41 89 d6 76 04 0f 0b eb 0b 85 d2 75 07 25 80 00 00 00 eb 05 25 00 01 00 00 85 c0 74 07 <0f> 0b e9 ac 04 00 00 48 8b 83 90 00 00 00 4c 8d 64 24 30 48 8d 73
    Oct 17 06:09:47 unRAID kernel: RSP: 0018:ffff88903fa83808 EFLAGS: 00010202
    Oct 17 06:09:47 unRAID kernel: RAX: 0000000000000080 RBX: ffff88a0256ba640 RCX: 0000000000000000
    Oct 17 06:09:47 unRAID kernel: RDX: 0000000000000000 RSI: ffff88903fa838f4 RDI: ffff88a0256ba640
    Oct 17 06:09:47 unRAID kernel: RBP: ffff88903fa838e0 R08: ffff88a0256ba640 R09: 0000000000000000
    Oct 17 06:09:47 unRAID kernel: R10: 0000000000000158 R11: ffffffff81e8e001 R12: ffff88859e47ff00
    Oct 17 06:09:47 unRAID kernel: R13: 0000000000000000 R14: 0000000000000000 R15: ffff88903fa838f4
    Oct 17 06:09:47 unRAID kernel: FS:  0000000000000000(0000) GS:ffff88903fa80000(0000) knlGS:0000000000000000
    Oct 17 06:09:47 unRAID kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Oct 17 06:09:47 unRAID kernel: CR2: 000000c42017a000 CR3: 0000000001e0a001 CR4: 00000000003626e0
    Oct 17 06:09:47 unRAID kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Oct 17 06:09:47 unRAID kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Oct 17 06:09:47 unRAID kernel: Call Trace:
    Oct 17 06:09:47 unRAID kernel: <IRQ>
    Oct 17 06:09:47 unRAID kernel: ? ipt_do_table+0x58e/0x5db [ip_tables]
    Oct 17 06:09:47 unRAID kernel: nf_nat_alloc_null_binding+0x6f/0x86 [nf_nat]
    Oct 17 06:09:47 unRAID kernel: nf_nat_inet_fn+0xa0/0x192 [nf_nat]
    Oct 17 06:09:47 unRAID kernel: nf_hook_slow+0x37/0x96
    Oct 17 06:09:47 unRAID kernel: ip_local_deliver+0xa9/0xd7
    Oct 17 06:09:47 unRAID kernel: ? ip_sublist_rcv_finish+0x53/0x53
    Oct 17 06:09:47 unRAID kernel: ip_sabotage_in+0x38/0x3e
    Oct 17 06:09:47 unRAID kernel: nf_hook_slow+0x37/0x96
    Oct 17 06:09:47 unRAID kernel: ip_rcv+0x8e/0xbe
    Oct 17 06:09:47 unRAID kernel: ? ip_rcv_finish_core.isra.0+0x2e2/0x2e2
    Oct 17 06:09:47 unRAID kernel: __netif_receive_skb_one_core+0x4d/0x69
    Oct 17 06:09:47 unRAID kernel: netif_receive_skb_internal+0x9f/0xba
    Oct 17 06:09:47 unRAID kernel: br_pass_frame_up+0x123/0x145
    Oct 17 06:09:47 unRAID kernel: ? br_port_flags_change+0x29/0x29
    Oct 17 06:09:47 unRAID kernel: br_handle_frame_finish+0x330/0x375
    Oct 17 06:09:47 unRAID kernel: ? ipt_do_table+0x58e/0x5db [ip_tables]
    Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
    Oct 17 06:09:47 unRAID kernel: br_nf_hook_thresh+0xa3/0xc3
    Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
    Oct 17 06:09:47 unRAID kernel: br_nf_pre_routing_finish+0x239/0x260
    Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
    Oct 17 06:09:47 unRAID kernel: ? nf_nat_ipv4_in+0x1d/0x64 [nf_nat_ipv4]
    Oct 17 06:09:47 unRAID kernel: br_nf_pre_routing+0x2fc/0x321
    Oct 17 06:09:47 unRAID kernel: ? br_nf_forward_ip+0x352/0x352
    Oct 17 06:09:47 unRAID kernel: nf_hook_slow+0x37/0x96
    Oct 17 06:09:47 unRAID kernel: br_handle_frame+0x290/0x2d3
    Oct 17 06:09:47 unRAID kernel: ? br_pass_frame_up+0x145/0x145
    Oct 17 06:09:47 unRAID kernel: ? br_handle_local_finish+0xe/0xe
    Oct 17 06:09:47 unRAID kernel: __netif_receive_skb_core+0x466/0x798
    Oct 17 06:09:47 unRAID kernel: ? udp_gro_receive+0x4c/0x134
    Oct 17 06:09:47 unRAID kernel: __netif_receive_skb_one_core+0x31/0x69
    Oct 17 06:09:47 unRAID kernel: netif_receive_skb_internal+0x9f/0xba
    Oct 17 06:09:47 unRAID kernel: napi_gro_receive+0x42/0x76
    Oct 17 06:09:47 unRAID kernel: igb_poll+0xb96/0xbbc [igb]
    Oct 17 06:09:47 unRAID kernel: net_rx_action+0x10b/0x274
    Oct 17 06:09:47 unRAID kernel: __do_softirq+0xce/0x1e2
    Oct 17 06:09:47 unRAID kernel: irq_exit+0x5e/0x9d
    Oct 17 06:09:47 unRAID kernel: do_IRQ+0xa9/0xc7
    Oct 17 06:09:47 unRAID kernel: common_interrupt+0xf/0xf
    Oct 17 06:09:47 unRAID kernel: </IRQ>
    Oct 17 06:09:47 unRAID kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141
    Oct 17 06:09:47 unRAID kernel: Code: ff 45 84 ff 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 ae 0c be ff fb 66 0f 1f 44 00 00 <48> 2b 1c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cb
    Oct 17 06:09:47 unRAID kernel: RSP: 0018:ffffc900064b3ea0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffd2
    Oct 17 06:09:47 unRAID kernel: RAX: ffff88903faa0b00 RBX: 00006e7edafc25ba RCX: 000000000000001f
    Oct 17 06:09:47 unRAID kernel: RDX: 00006e7edafc25ba RSI: 0000000035652651 RDI: 0000000000000000
    Oct 17 06:09:47 unRAID kernel: RBP: ffff88903faab400 R08: 0000000000000002 R09: 00000000000203c0
    Oct 17 06:09:47 unRAID kernel: R10: 00000000006ac764 R11: 0015366f8d483640 R12: 0000000000000004
    Oct 17 06:09:47 unRAID kernel: R13: 0000000000000004 R14: ffffffff81e5a018 R15: 0000000000000000
    Oct 17 06:09:47 unRAID kernel: do_idle+0x192/0x20e
    Oct 17 06:09:47 unRAID kernel: cpu_startup_entry+0x6a/0x6c
    Oct 17 06:09:47 unRAID kernel: start_secondary+0x197/0x1b2
    Oct 17 06:09:47 unRAID kernel: secondary_startup_64+0xa4/0xb0
    Oct 17 06:09:47 unRAID kernel: ---[ end trace 6ab3ef74e0b3e5a4 ]---

     

    unraid-diagnostics-20191017-1333.zip
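    For anyone who wants to check how often these traces show up and whether they really are recent, here is a rough sketch that counts the nf_nat warnings per day in the syslog. The log path and the match pattern are my assumptions based on the trace above:

    import re
    from collections import Counter

    LOG = "/var/log/syslog"          # adjust if your syslog lives somewhere else
    PATTERN = re.compile(r"WARNING: CPU: \d+ PID: \d+ at net/netfilter/nf_nat_core")

    per_day = Counter()
    with open(LOG, errors="replace") as fh:
        for line in fh:
            if PATTERN.search(line):
                # syslog lines start with e.g. "Oct 17 06:09:47"; keep month + day
                per_day[" ".join(line.split()[:2])] += 1

    for day, count in sorted(per_day.items()):
        print(f"{day}: {count} call trace(s)")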

  11. On 9/15/2019 at 12:25 PM, weirdcrap said:

    I have been noticing this as well, I have been ignoring it with no consequences but am curious if there is a resolution...

     

    Edit: for about a week now I've also been noticing issues where randomly an item chosen from the ui will spin for up to 10 seconds before loading the poster and Metadata. Sometimes the chosen item will spin for 10-15 seconds before it even shows a percentage (and it doesn't show as playing in the Plex ui). There doesn't seem to be any consistency on what content or content type I try. I've already scanned libraries, emptied trash, cleaned bundles, and optimized my libraries. 

     

    I see a few slow queries but they don't seem to correspond to when I experience these issues. 

    I've been noticing the same messages in my Unraid log:

    [Screenshot of the repeated log messages]

     

    Everything seems to keep working, but these errors keep coming back. Any ideas?

     

  12. 2 minutes ago, johnnie.black said:

    Yes.

     

    Not for now, SSDs are known to last way more than predicted life, but of course it can start failing at any time, keep an eye on it.

    That holds as long as you have anything but Intel SSDs; those tend to go read-only when they reach their predicted life, all to protect the customer (read: Intel's sales)!

  13. 6 minutes ago, Diggewuff said:

    I'm not sure about that; this website says 400 TBW for all 3 sizes. But I have also read 200 somewhere else.

    https://www.samsung.com/de/memory-storage/960-evo-nvme-m-2-ssd/MZ-V6E500BW/

    The English datasheet says 200 TBW: https://www.samsung.com/semiconductor/global.semi.static/Samsung_SSD_960_EVO_Data_Sheet_Rev_1_2.pdf

    You can always try to claim warranty based on the German site...

     

    [Screenshot of the Samsung specification page listing the TBW rating]

     

    I found myself some cheap enterprise NVMe drives on eBay and I highly recommend them: power-loss protection, very consistent write speed, and above all 1366 TBW.
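    As a rough way to see how much of a TBW rating you have used, you can take the "Data Units Written" value that smartctl reports for an NVMe drive (one unit is 1000 × 512 bytes per the NVMe spec) and compare it with the rating. The numbers in the sketch below are made up for illustration:

    # Hypothetical numbers for illustration only
    data_units_written = 150_000_000   # "Data Units Written" as reported by e.g. `smartctl -A /dev/nvme0`
    rated_tbw = 1366                   # endurance rating of the drive in TB written

    bytes_written = data_units_written * 512 * 1000   # NVMe data units are 512,000 bytes each
    tb_written = bytes_written / 1e12

    print(f"{tb_written:.1f} TB written, "
          f"{100 * tb_written / rated_tbw:.1f}% of the {rated_tbw} TBW rating used")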

     

     

  14. 1 hour ago, saarg said:

     

    I do not think we should add anything messing with the update check for unraid. That is just asking for trouble.

    It could be an option to add a check to the Update Assistant from the Fix Common Problems plugin; that community tool already supports plugin compatibility checks.

     

  15. 6 hours ago, cybrnook said:

    Upgraded from 6.7.0 to 6.7.1 without issue. I have 2 x NVME drives as cache in RAID1. Docker settings are set to default /mnt/user/appdata. No issues starting my Plex docker. Using the Plex docker managed by Plex.

     

    So far so good.

    Same here, just with 4x NVMe in RAID 1, and both Sonarr and Plex use /mnt/user/appdata.

    I've been running the 6.7.1 RC, and 6.7.0 before that, and never had any corruption issues!
