Nicktdot

Everything posted by Nicktdot

  1. I'm very glad to know it works. I was researching the error/mask 00000001/0000e000 in the message and found out it has to do with the PCIe end device not responding to an ASPM command. So while turning off AER masks the problem by not logging the errors, it doesn't solve the actual PCIe errors. I then went down the rabbit hole of what ASPM is all about ( https://en.wikipedia.org/wiki/Active_State_Power_Management ) and saw there is a kernel boot flag to turn the feature off. I don't think we need it anyway, seeing as my server runs 24h/day and never goes into a sleep state. I figured disabling the unused feature might help avoid the error altogether. I'll check my own server next time it reboots!
  2. Could you try pcie_aspm=off ? This seems to disable the power-management mode that is throwing the error. I've put it in my config for the next time I reboot (see the syslinux.cfg sketch at the end of this list).
  3. Let me know how it works out for you. I have 1 of 4 SK Hynix NVMe drives on an Asus Hyper M.2 x16 Gen 4 card throwing this error constantly.
  4. It appears there's a schema upgrade screw-up when moving to the latest Plex. I'm told on the Plex website that it's due to a corrupted DB, but I get the same behavior with DB backups as well. It appears I'm not alone in this boat. See: https://forums.plex.tv/t/loading-libraries-fails-error-got-exception-from-request-handler-bad-cast/795400
  5. I updated to the latest Plex Server 1.27 ( 1.27.0.5849-99e933842 ) and the libraries were gone on startup. I checked for permission issues, but that wasn't the case. The log files are now filled with these (libc++?) messages whenever a transaction occurs: Got exception from request handler: std::bad_cast. I reverted back to Plex Server 1.26 ( PlexMediaServer-1.26.0.5715-8cf78dab ) and the issue disappears. Same filesystem, same Plex database. Example output:
     Jun 02, 2022 10:55:21.399 [0x7f7e37163b38] DEBUG - [com.plexapp.system] HTTP reply status 200, with 0 bytes of content.
     Jun 02, 2022 10:55:21.399 [0x7f7e37d1eb38] DEBUG - Completed: [127.0.0.1:38308] 200 GET /system/messaging/clear_events/com.plexapp.agents.fanarttv (4 live) GZIP 7ms 280 bytes
     Jun 02, 2022 10:55:21.472 [0x7f7e36433b38] ERROR - Got exception from request handler: std::bad_cast
     Seems like every transaction gets this error, or:
     Jun 02, 2022 10:55:21.611 [0x7f7e37140b38] ERROR - Got exception from request handler: Cannot convert data to std::tm.
  6. I upgraded to 6.9.0 last night after running 180+ days on the previous release. I've been experiencing constant crashes across different CPUs since the upgrade. My max uptime has been about 5 hours.
     [ 1758.031275] ------------[ cut here ]------------
     [ 1758.031286] WARNING: CPU: 5 PID: 519 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
     [ 1758.031287] Modules linked in: tun veth macvlan xt_nat xt_MASQUERADE iptable_nat nf_nat nfsd lockd grace sunrpc md_mod xfs hwmon_vid ipmi_devintf ip6table_filter ip6_tables iptable_filter ip_tables bonding igb i2c_algo_bit i40e sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl mpt3sas i2c_i801 intel_cstate ahci i2c_smbus i2c_core intel_uncore nvme libahci raid_class scsi_transport_sas nvme_core wmi button [last unloaded: i2c_algo_bit]
     [ 1758.031343] CPU: 5 PID: 519 Comm: kworker/5:1 Not tainted 5.10.19-Unraid #1
     [ 1758.031345] Hardware name: Supermicro X10SRA/X10SRA, BIOS 2.1a 10/24/2018
     [ 1758.031352] Workqueue: events macvlan_process_broadcast [macvlan]
     [ 1758.031357] RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6
     [ 1758.031360] Code: e8 64 f9 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 d5 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 5d f3 ff ff e8 30 f6 ff ff e9 22 01
     [ 1758.031362] RSP: 0018:ffffc90000304d38 EFLAGS: 00010202
     [ 1758.031365] RAX: 0000000000000188 RBX: 0000000000003bd9 RCX: 0000000009abba5f
     [ 1758.031367] RDX: 0000000000000000 RSI: 0000000000000232 RDI: ffffffff8200a7a4
     [ 1758.031369] RBP: ffff888586659540 R08: 0000000061fe0175 R09: ffff888103c5d800
     [ 1758.031371] R10: 0000000000000158 R11: ffff8885d3cc1e00 R12: 000000000000fa32
     [ 1758.031373] R13: ffffffff8210db40 R14: 0000000000003bd9 R15: 0000000000000000
     [ 1758.031375] FS: 0000000000000000(0000) GS:ffff88903f340000(0000) knlGS:0000000000000000
     [ 1758.031377] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [ 1758.031379] CR2: 000014eba5200000 CR3: 000000000200c004 CR4: 00000000003706e0
     [ 1758.031381] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     [ 1758.031383] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     [ 1758.031384] Call Trace:
     [ 1758.031387] <IRQ>
     [ 1758.031393] nf_conntrack_confirm+0x2f/0x36
     [ 1758.031422] nf_hook_slow+0x39/0x8e
     [ 1758.031429] nf_hook.constprop.0+0xb1/0xd8
     [ 1758.031434] ? ip_protocol_deliver_rcu+0xfe/0xfe
     [ 1758.031437] ip_local_deliver+0x49/0x75
     [ 1758.031441] ip_sabotage_in+0x43/0x4d
     [ 1758.031445] nf_hook_slow+0x39/0x8e
     [ 1758.031449] nf_hook.constprop.0+0xb1/0xd8
     [ 1758.031453] ? l3mdev_l3_rcv.constprop.0+0x50/0x50
     [ 1758.031456] ip_rcv+0x41/0x61
     [ 1758.031464] __netif_receive_skb_one_core+0x74/0x95
     [ 1758.031474] process_backlog+0xa3/0x13b
     [ 1758.031482] net_rx_action+0xf4/0x29d
     [ 1758.031489] __do_softirq+0xc4/0x1c2
     [ 1758.031495] asm_call_irq_on_stack+0x12/0x20
     [ 1758.031500] </IRQ>
     [ 1758.031507] do_softirq_own_stack+0x2c/0x39
     [ 1758.031518] do_softirq+0x3a/0x44
     [ 1758.031524] netif_rx_ni+0x1c/0x22
     [ 1758.031530] macvlan_broadcast+0x10e/0x13c [macvlan]
     [ 1758.031540] macvlan_process_broadcast+0xf8/0x143 [macvlan]
     [ 1758.031548] process_one_work+0x13c/0x1d5
     [ 1758.031554] worker_thread+0x18b/0x22f
     [ 1758.031559] ? process_scheduled_works+0x27/0x27
     [ 1758.031564] kthread+0xe5/0xea
     [ 1758.031567] ? __kthread_bind_mask+0x57/0x57
     [ 1758.031571] ret_from_fork+0x22/0x30
     [ 1758.031575] ---[ end trace 485f3428373b5ba8 ]---
  7. Currently 193TB in a Chenbro NR40700 enclosure converted into a JBOD box, attached to an LSI 9206-16e. 4x 960GB NVMe SSDs in a BTRFS cache. Misc disk mounts for DB, Plex, Docker images, etc.
  8. Interesting. I'm running the upcoming Skylake Purley Xeon. I guess that's what they call the Xeon E5 v5 in the note; however, the nomenclature for this upcoming CPU has changed. The current chip has this CPU info: and from the CPU instruction set flags ( http://i.imgur.com/o6Y8LWp.png ) it definitely supports HyperThreading (ht), so it looks like it's affected by the bug. I've hammered the box pretty hard but have not encountered any stability issues... maybe I should run unRAID on it for a bit.
  9. Thanks!!!!!! (Make sure the crc32 kernel module is included; I believe the kernel currently used in 6.3.1 requires it.)
  10. BUMP. Seriously, enable this in the kernel config and provide it as a module. It's not a big deal and it helps those of us who use it: make menuconfig -> File systems -> F2FS -> [M] (see the config sketch at the end of this list). I don't know why it takes two years of asking for this.
  11. I run something similar. Go for it!
  12. Could you try another flash drive and see if you have the same problem? Just to see if it boots. With two of you having virtually the same problem, I can't believe it is cockpit error! The issue is the Cruzer Fit. I have the same thing. Here's how to test it manually: when it scans for the UNRAID label, remove and reinsert the USB key. You'll see it gets picked up right away and it'll boot normally. That's obviously not a longer-term solution, but it will allow you to boot manually.
  13. Sounds more like a Tu-95! I swapped out the fans with Noctuas, so now it's super quiet.
  14. Saw this earlier in the week and ordered one to check it out. My existing system has a higher-end CPU, so I swapped the board out; I upgraded to this chassis from a Norco RPC-4224, but this is still a pretty good deal for a 48-drive complete system. It's a Chenbro NR40700, which has 2 integrated 24-bay expanders in the drive backplane. http://www.chenbro.com/en-global/products/RackmountChassis/4U_Chassis/NR40700 The systems come complete with an LSI 9211-8i, a Xeon X3450 and 32GB RAM, so for a full system the asking price is pretty good. See the link below: http://www.ebay.com/itm/Chenbro-48-Bay-Top-Loader-4U-Chassis-w-Rail-Kit-Drive-Brackets-COMPLETE-SYSTEM-/252334824504 Cheers
  15. Would it be possible to add this kernel module to the unRAID build? I use it constantly for my flash drives and the SSDs attached to the system.
  16. M.2 supports both SATA and PCIe interfaces. Do you know which one your system has?
  17. So it looks like you removed 4-letter device name support (sdaa) between b12 and b14. I guess my thread wasn't constructive...
  18. Then my request is simple: make the cacheID device count against the license as it does, while the cacheId.XXX disks do not count against the license. Being penalized on data array capacity while still using 1 cache, but pooling it for optimization, is a let-down. The reality is unRAID is knocking down the disk usage against the license strictly for using the UI. I can already manually do what I'm requesting become a feature: if I mount the cache drive manually, and it so happens to be pooled in BTRFS RAID1, it still counts as 1 disk against the license as far as unRAID works, because I am still system-mounting a single disk. However, if I use the UI to do the same thing, you ding me 4 disks of usage and still mount the same single disk. So here I am doing it manually, having swung 24 disk allocations back to the data array in the UI so I can run 24 array disks, and still running a BTRFS RAID1 cache device, and mounting a single disk.
     # df /mnt/cache
     Filesystem 1K-blocks Used Available Use% Mounted on
     /dev/sdg1 976771588 23204008 1211440272 2% /mnt/cache
     # btrfs filesystem show
     Label: none uuid: a97f3ee8-7459-4518-bc4e-8012fae4f360
     Total devices 4 FS bytes used 22.11GiB
     devid 1 size 465.76GiB used 136.03GiB path /dev/sdh1
     devid 2 size 465.76GiB used 136.00GiB path /dev/sde1
     devid 3 size 465.76GiB used 136.03GiB path /dev/sdf1
     devid 4 size 465.76GiB used 136.00GiB path /dev/sdg1
     So I hope you understand what I'm trying to show. The subsystem uses 1 disk for cache, but the UI takes a 4-disk allocation for a feature (BTRFS pooling) not native to it.
  19. I'm doing something similar, cache-pooling 4x 500GB SSDs in BTRFS RAID-1. The UI is advanced enough that you can do it all from there, by increasing the allowed # of drives available for cache (by decreasing the qty available for data). By default, as you add cache drives, the BTRFS pooling type will be RAID1(-like), and BTRFS is smart enough to automatically allocate 1/2 the total space (unlike true RAID-1). It looks something like this: The drawback is that as you pull drive count from the data array to add drives to the cache pool, you're decreasing the qty of drives available for the data array, which is wrong IMO, because the cache 'device' is still just 1 drive as far as the MD driver is concerned... you still mount just one drive (case in point, the mount point from the array pictured above looks like this):
     # df /mnt/cache
     Filesystem 1K-blocks Used Available Use% Mounted on
     /dev/sdg1 976771588 91412472 1054255408 8% /mnt/cache
     and the btrfs filesystem looks like this:
     # btrfs filesystem show
     Label: none uuid: a97f3ee8-7459-4518-bc4e-8012fae4f360
     Total devices 4 FS bytes used 87.13GiB
     devid 1 size 465.76GiB used 126.03GiB path /dev/sdh1
     devid 2 size 465.76GiB used 126.00GiB path /dev/sde1
     devid 3 size 465.76GiB used 126.03GiB path /dev/sdf1
     devid 4 size 465.76GiB used 126.00GiB path /dev/sdg1
     (See the btrfs command sketch at the end of this list for checking or converting the pool profile.) I would like the UI to not take away array drive allocation to allow cache drive pooling. I'll try to circumvent the UI and see how it works. Some have said that unRAID is hard-coded to only allow 3-letter drive naming (sda) and thus couldn't support this because drives would go beyond 26 total physical devices (sdaa, sdab, etc.), but that's not true at all, though it keeps being repeated. The super.dat config file uses device ID names, not lettering (which is why you can move disks around your array and they'll start at the correct disk# assignment), and I already run my parity disk as "sdaa" (the 27th disk) and it runs fine. More details here: http://lime-technology.com/forum/index.php?topic=38189.0
  20. You're trying hard, but even that doesn't address my request. I'm not trying to go beyond 24-26 devices in the array... I'm highlighting clearly that when I use BTRFS in RAID-1 as an array cache device, it uses up 4 drives in Linux (e-f-g-h), but really, as far as the md driver is concerned, only 1 drive is used (sdg1 in this case). So I'm not trying to go over the 24-drive limit; I find that when pairing a BTRFS pool as a cache device, the UI decreases the # of array drives available for data, when it appears the drive naming convention ISN'T the limiting factor. Again, I'm not going beyond 24 data drives... so not asking to tweak buffers, device assignment, etc. And all the statements to the effect that "sdaa" and "sdab" won't work, I can show are wrong, because it's already running OK on my system and everything starts up fine... So the only valid statement I read from Tom is that the specific feature I'm requesting hasn't been tested.
  21. I read a lot of hearsay in those links. Well, I also moved my parity disk to it: and look, it's rebuilding properly...
  22. I know there's currently a limit of array disks + cache, and the UI allows us to allocate more to cache at the cost of array disks. I run a 4-SSD BTRFS cache in RAID-1 (btrfs mode). Would it be possible to allow BTRFS disks to not count against the array disk limit?
  23. I used needo's docker template and saved it as a local template, then edited it. Once the template was used, it was put on your USB stick: /boot/config/plugins/dockerMan/templates-user/my-SickRage.xml I then edited it to point Docker Hub at my own repo, which fixes the repo that needo uses, which is deprecated. If you edit that xml file, make these changes at the top:
     </Description>
     <Registry>https://registry.hub.docker.com/u/nicktdot/sickrage</Registry>
     <Repository>nicktdot/sickrage</Repository>
     Works fine for me and installed the latest commit instead of being 490 commits behind. (See the template sketch below for how those elements sit in context.)
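
For the pcie_aspm=off flag mentioned in posts 1-2, here is a minimal sketch of where it would go on the Unraid flash drive, assuming a stock /boot/syslinux/syslinux.cfg boot entry (your label name and any existing append arguments may differ):

     label Unraid OS
       menu default
       kernel /bzimage
       append pcie_aspm=off initrd=/bzroot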
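
For the F2FS request in post 10, the menuconfig selection boils down to this kernel .config fragment (a sketch; CONFIG_F2FS_FS=m is the essential line, the other two are optional features):

     CONFIG_F2FS_FS=m
     CONFIG_F2FS_FS_XATTR=y
     CONFIG_F2FS_FS_POSIX_ACL=y

Once built, the module loads with modprobe f2fs.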
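
For the cache pool discussion in posts 18-19, here is a sketch of the btrfs commands for checking and converting the pool's RAID profile, assuming the pool is mounted at /mnt/cache as in the output shown there:

     # show how data and metadata are allocated (single / raid1 / raid10)
     btrfs filesystem df /mnt/cache
     # convert an existing multi-device pool to raid1 for both data and metadata
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache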
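
For the template edit in post 23, here is a sketch of how the changed elements sit inside /boot/config/plugins/dockerMan/templates-user/my-SickRage.xml; apart from Description, Registry and Repository (quoted in the post), the surrounding element names are assumptions based on a typical dockerMan user template, so keep whatever your file already contains:

     <Container>
       <Name>SickRage</Name>
       <Description>...existing description, unchanged...</Description>
       <Registry>https://registry.hub.docker.com/u/nicktdot/sickrage</Registry>
       <Repository>nicktdot/sickrage</Repository>
       ...remaining elements unchanged...
     </Container>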