Nicktdot

Members
  • Posts: 36

  1. I upgraded to 6.9.0 last night after running 180+ days on the previous release. I've been experiencing constant crashes since the upgrade, landing on a different CPU each time. My max uptime has been about 5 hours.
     [ 1758.031275] ------------[ cut here ]------------
     [ 1758.031286] WARNING: CPU: 5 PID: 519 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
     [ 1758.031287] Modules linked in: tun veth macvlan xt_nat xt_MASQUERADE iptable_nat nf_nat nfsd lockd grace sunrpc md_mod xfs hwmon_vid ipmi_devintf ip6table_filter ip6_tables iptable_filter ip_tables bonding igb i2c_algo_bit i40e sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl mpt3sas i2c_i801 intel_cstate ahci i2c_smbus i2c_core intel_uncore nvme libahci raid_class scsi_transport_sas nvme_core wmi button [last unloaded: i2c_algo_bit]
     [ 1758.031343] CPU: 5 PID: 519 Comm: kworker/5:1 Not tainted 5.10.19-Unraid #1
     [ 1758.031345] Hardware name: Supermicro X10SRA/X10SRA, BIOS 2.1a 10/24/2018
     [ 1758.031352] Workqueue: events macvlan_process_broadcast [macvlan]
     [ 1758.031357] RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6
     [ 1758.031360] Code: e8 64 f9 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 d5 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 5d f3 ff ff e8 30 f6 ff ff e9 22 01
     [ 1758.031362] RSP: 0018:ffffc90000304d38 EFLAGS: 00010202
     [ 1758.031365] RAX: 0000000000000188 RBX: 0000000000003bd9 RCX: 0000000009abba5f
     [ 1758.031367] RDX: 0000000000000000 RSI: 0000000000000232 RDI: ffffffff8200a7a4
     [ 1758.031369] RBP: ffff888586659540 R08: 0000000061fe0175 R09: ffff888103c5d800
     [ 1758.031371] R10: 0000000000000158 R11: ffff8885d3cc1e00 R12: 000000000000fa32
     [ 1758.031373] R13: ffffffff8210db40 R14: 0000000000003bd9 R15: 0000000000000000
     [ 1758.031375] FS: 0000000000000000(0000) GS:ffff88903f340000(0000) knlGS:0000000000000000
     [ 1758.031377] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [ 1758.031379] CR2: 000014eba5200000 CR3: 000000000200c004 CR4: 00000000003706e0
     [ 1758.031381] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     [ 1758.031383] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     [ 1758.031384] Call Trace:
     [ 1758.031387] <IRQ>
     [ 1758.031393] nf_conntrack_confirm+0x2f/0x36
     [ 1758.031422] nf_hook_slow+0x39/0x8e
     [ 1758.031429] nf_hook.constprop.0+0xb1/0xd8
     [ 1758.031434] ? ip_protocol_deliver_rcu+0xfe/0xfe
     [ 1758.031437] ip_local_deliver+0x49/0x75
     [ 1758.031441] ip_sabotage_in+0x43/0x4d
     [ 1758.031445] nf_hook_slow+0x39/0x8e
     [ 1758.031449] nf_hook.constprop.0+0xb1/0xd8
     [ 1758.031453] ? l3mdev_l3_rcv.constprop.0+0x50/0x50
     [ 1758.031456] ip_rcv+0x41/0x61
     [ 1758.031464] __netif_receive_skb_one_core+0x74/0x95
     [ 1758.031474] process_backlog+0xa3/0x13b
     [ 1758.031482] net_rx_action+0xf4/0x29d
     [ 1758.031489] __do_softirq+0xc4/0x1c2
     [ 1758.031495] asm_call_irq_on_stack+0x12/0x20
     [ 1758.031500] </IRQ>
     [ 1758.031507] do_softirq_own_stack+0x2c/0x39
     [ 1758.031518] do_softirq+0x3a/0x44
     [ 1758.031524] netif_rx_ni+0x1c/0x22
     [ 1758.031530] macvlan_broadcast+0x10e/0x13c [macvlan]
     [ 1758.031540] macvlan_process_broadcast+0xf8/0x143 [macvlan]
     [ 1758.031548] process_one_work+0x13c/0x1d5
     [ 1758.031554] worker_thread+0x18b/0x22f
     [ 1758.031559] ? process_scheduled_works+0x27/0x27
     [ 1758.031564] kthread+0xe5/0xea
     [ 1758.031567] ? __kthread_bind_mask+0x57/0x57
     [ 1758.031571] ret_from_fork+0x22/0x30
     [ 1758.031575] ---[ end trace 485f3428373b5ba8 ]---
  2. Currently 193TB in a Chenbro NR40700 enclosure converted into a JBOD box attached to an LSI 9206-16e. 4x 960GB NVMe SSDs in a BTRFS cache. Misc disk mounts for DB, Plex, Docker images, etc.
  3. Interesting. I'm running the upcoming Skylake Purley Xeon. I guess that's what the note calls the Xeon E5 v5, although the nomenclature for this upcoming CPU has changed. Judging from the CPU instruction set flags ( http://i.imgur.com/o6Y8LWp.png ), it definitely supports HyperThreading (ht), so it looks like it's affected by the bug. I've hammered the box pretty hard but have not encountered any stability issues... maybe I should run unRAID on it for a bit.
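     (For anyone wanting to check their own chip from the shell rather than a screenshot, hyper-threading shows up as the "ht" flag in /proc/cpuinfo. A minimal sketch, nothing here is specific to this box:)
     grep -m1 '^flags' /proc/cpuinfo | grep -ow ht   # prints "ht" if hyper-threading is advertised
     nproc                                           # logical CPU count; roughly 2x the core count with HT enabled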
  4. Thanks!!!!!! (Make sure the crc32 kernel module is included; I believe the kernel currently used in 6.3.1 requires it.)
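     (A quick way to verify on a running system — the second command assumes the kernel exposes its config via /proc/config.gz, which not every build does:)
     lsmod | grep -i crc32                # shows crc32/crc32c modules if they are loaded
     zgrep -i crc32 /proc/config.gz      # lists CONFIG_CRC32* options when the in-kernel config is available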
  5. BUMP. Seriously, enable this in the kernel config and ship it as a module. It's not a big deal and it helps those of us who use it: make menuconfig -> File systems -> F2FS -> [M]. I don't know why it takes two years of asking to get this.
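     (Roughly what that looks like against a stock kernel source tree; the source path here is just an example:)
     cd /usr/src/linux                     # wherever the kernel source is unpacked
     make menuconfig                       # File systems -> F2FS filesystem support -> <M>
     grep F2FS_FS .config                  # should now read CONFIG_F2FS_FS=m
     make modules && make modules_install
     modprobe f2fs                         # load it on the running kernel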
  6. I run something similar. Go for it!
  7. Quote: "Could you try another Flash Drive and see if you have the same problem? Just to see if it boots. With two of you having virtually the same problem, I can't believe it is cockpit error!"
     The issue is the Cruzer Fit. I have the same thing. Here's how to test it manually: when it scans for the UNRAID label, remove and reinsert the USB key. You'll see it get picked up right away and it'll boot normally. That's obviously not a long-term solution, but it will let you boot manually.
  8. Sounds more like a Tu-95! I swapped out the fans for Noctuas, so now it's super quiet.
  9. Saw this earlier in the week and ordered one to check it out. My existing system has a higher-end CPU, so I swapped in my own board when I upgraded to it from a Norco RPC4224, but this is still a pretty good deal for a 48-drive complete system. It's a Chenbro NR40700, which has two integrated 24-bay expanders in the drive backplane: http://www.chenbro.com/en-global/products/RackmountChassis/4U_Chassis/NR40700 The systems come complete with an LSI 9211-8i, a Xeon X3450 and 32GB RAM, so for a full system the asking price is pretty good. See the link below: http://www.ebay.com/itm/Chenbro-48-Bay-Top-Loader-4U-Chassis-w-Rail-Kit-Drive-Brackets-COMPLETE-SYSTEM-/252334824504 Cheers
  10. Would it be possible to add this kernel module to the unRAID build? I use it constantly for the flash drives and SSDs attached to the system.
  11. M.2 supports both SATA and PCIe interfaces. Do you know which one your system has?
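      (If the drive is already installed, a rough way to tell from the shell — a SATA M.2 stick shows up like any other SATA disk, while a PCIe one appears under the nvme driver:)
      lsblk -d -o NAME,TRAN,MODEL        # TRAN column reads "sata" vs "nvme"
      ls /dev/nvme*                      # only exists for PCIe/NVMe devices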
  12. So it looks like you removed four-letter device support (sdaa and beyond) between b12 and b14. I guess my thread wasn't constructive...
  13. Then my request is simple: have the cacheID device count against the license as it does now, but don't count the cacheId.XXX disks against it. Being penalized on data array capacity while still using one cache, just pooling it for optimization, is a letdown. The reality is that unRAID is knocking the disk count against the license strictly for using the UI. I can already do manually what I'm requesting become a feature: if I mount the cache drive manually, and it happens to be pooled in BTRFS RAID1, it still counts as one disk against the license as far as unRAID is concerned, because I am still system-mounting a single disk. However, if I use the UI to do the same thing, you ding me four disks of usage and still mount the same single disk. So here I am doing it manually, having swung the 24-disk allocation back to the data array in the UI so I can run 24 array disks, while still running a BTRFS RAID1 cache device and mounting a single disk:
      # df /mnt/cache
      Filesystem      1K-blocks     Used  Available Use% Mounted on
      /dev/sdg1       976771588 23204008 1211440272   2% /mnt/cache
      # btrfs filesystem show
      Label: none  uuid: a97f3ee8-7459-4518-bc4e-8012fae4f360
              Total devices 4 FS bytes used 22.11GiB
              devid    1 size 465.76GiB used 136.03GiB path /dev/sdh1
              devid    2 size 465.76GiB used 136.00GiB path /dev/sde1
              devid    3 size 465.76GiB used 136.03GiB path /dev/sdf1
              devid    4 size 465.76GiB used 136.00GiB path /dev/sdg1
      So I hope you understand what I'm trying to show. The subsystem uses one disk for cache, but the UI takes a four-disk allocation for a feature (BTRFS pooling) that isn't native to it.
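      (For completeness, the manual route described above is just the standard multi-device BTRFS mount; the device name matches the output shown, everything else is a sketch:)
      btrfs device scan                       # register every member of the pool with the kernel
      mount -t btrfs /dev/sdg1 /mnt/cache     # mounting any one member brings up the whole RAID1 pool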
  14. I'm doing something similar, cache pooling 4x 500GB SSDs in BTRFS RAID-1. The UI is advanced enough that you can do it all from there, by increasing the allowed number of drives available for cache (and thereby decreasing the quantity available for data). By default, as you add cache drives, the pooling type will be RAID1-like, which BTRFS is smart enough to handle by automatically allocating half the total space (unlike a true RAID-1); it looks something like this. The drawback is that as you pull drive count from the data array to add drives to the cache pool, you're decreasing the number of drives available for the data array, which is wrong IMO because the cache 'device' is still just one drive as far as the MD driver is concerned... you still mount just one drive. Case in point, the mount point from the array pictured above looks like this:
       # df /mnt/cache
       Filesystem      1K-blocks     Used  Available Use% Mounted on
       /dev/sdg1       976771588 91412472 1054255408   8% /mnt/cache
       and the BTRFS filesystem looks like this:
       # btrfs filesystem show
       Label: none  uuid: a97f3ee8-7459-4518-bc4e-8012fae4f360
               Total devices 4 FS bytes used 87.13GiB
               devid    1 size 465.76GiB used 126.03GiB path /dev/sdh1
               devid    2 size 465.76GiB used 126.00GiB path /dev/sde1
               devid    3 size 465.76GiB used 126.03GiB path /dev/sdf1
               devid    4 size 465.76GiB used 126.00GiB path /dev/sdg1
       I would like the UI to not take away array drive allocation to allow cache drive pooling. I'll try to circumvent the UI and see how it works. Some have said that unRAID is hard-coded to only allow three-letter drive naming (sda) and thus couldn't support this because drives would go beyond 26 total physical devices (sdaa, sdab, etc.), but that's not true at all, although it keeps being repeated. The super.dat config file uses device ID names, not lettering (which is why you can move disks around your array and they'll keep the correct disk# assignment), and I already run my parity disk as "sdaa" (the 27th disk) and it runs fine. More details here: http://lime-technology.com/forum/index.php?topic=38189.0
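       (If anyone wants to reproduce that pool layout outside the UI, this is roughly the stock BTRFS way to do it; treat the device names as placeholders, since mkfs wipes whatever is on them:)
       mkfs.btrfs -d raid1 -m raid1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1   # create the 4-device RAID1 pool
       mount /dev/sdg1 /mnt/cache                                             # any member mounts the whole pool
       btrfs filesystem usage /mnt/cache                                      # shows roughly half the raw space as usable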