ephigenie

Members
  • Posts: 41
  • Joined
  • Last visited


  1. Yes, I can confirm I still have the same issue. In fact it really does seem to be related to the dedicated IP I had set before. I am still monitoring it, but I have not enabled any containers with dedicated IPs, and so far it's working.
  2. In the meantime I updated to 6.9.2 but still have the same issue. I disabled all Docker containers and left only a few running in order not to trigger this. Is there anything else recommended to check?
  3. The kernel part is included, but there is an nvidia-driver plugin that you need to install via "Apps". That will allow you to download the driver / software package you need. In my case, once that was installed, all containers that needed the GPU started working as before, just with newer drivers etc.
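     If it helps, here is a rough command-line sketch of what the container setup ends up doing once the plugin and its runtime are installed - the image name and the device selection are placeholders, not taken from my setup, and on Unraid you would normally put the same values into the container template instead:
       # Pass the GPU into a container via the Nvidia container runtime
       # (linuxserver/plex and "all" devices are just examples):
       docker run -d --name=plex \
         --runtime=nvidia \
         -e NVIDIA_VISIBLE_DEVICES=all \
         -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
         linuxserver/plex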
  4. Is there anything I can do - can I build a newer kernel and install it? Is there a repo from Unraid somewhere? I would like to contribute to solving this, since it is quite annoying... And since it seems to happen in nf_nat_setup_info, I think it is not related to macvlan only. I had considered NAT to be stable since 2.0.36... not something that becomes unstable with 5.x. I will now try with all containers off except Plex. It is currently crashing every 4-6 hours.
  5. Just got another kernel panic with a full system lock. This one is in nf_nat_setup, so it doesn't have much to do with the macvlan issue - or does it?
  6. Thank you for that info. I just shut down the one Docker container that has a fixed IP; all other highly active containers are on the server IP. I will try with a separate VLAN soon.
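     The rough idea for the separate VLAN would be something like this - interface name, VLAN ID, subnet and container are made-up examples, and on Unraid this is normally configured through the network and Docker settings pages rather than by hand:
       # VLAN sub-interface on the bridge plus a macvlan Docker network on top of it:
       ip link add link br0 name br0.5 type vlan id 5
       ip link set br0.5 up
       docker network create -d macvlan \
         --subnet=192.168.5.0/24 --gateway=192.168.5.1 \
         -o parent=br0.5 vlan5
       # Attach the fixed-IP container to that network instead of br0:
       docker run -d --name=pihole --network=vlan5 --ip=192.168.5.10 pihole/pihole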
  7. Just got the full one:
     [ 2743.152154] kvm: already loaded the other module
     [ 6110.534616] ------------[ cut here ]------------
     [ 6110.534628] WARNING: CPU: 8 PID: 37032 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x99/0x1e1
     [ 6110.534629] Modules linked in: ccp macvlan nfsv3 nfs nfs_ssc veth xt_nat iptable_filter xfs nfsd lockd grace sunrpc md_mod tun nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper drm backlight agpgart syscopyarea sysfillrect sysimgblt nvidia_uvm(PO) fb_sys_fops nvidia(PO) iptable_nat xt_MASQUERADE nf_nat ip_tables wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha bonding igb i2c_algo_bit sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl mpt3sas ipmi_ssif ahci intel_cstate input_leds acpi_power_meter raid_class i2c_core led_class wmi scsi_transport_sas megaraid_sas intel_uncore libahci button acpi_pad ipmi_si [last unloaded: i2c_algo_bit]
     [ 6110.534757] CPU: 8 PID: 37032 Comm: kworker/8:1 Tainted: P O 5.10.1-Unraid #1
     [ 6110.534760] Hardware name: Dell Inc. PowerEdge T620/0658N7, BIOS 2.8.0 06/26/2019
     [ 6110.534780] Workqueue: events macvlan_process_broadcast [macvlan]
     [ 6110.534784] RIP: 0010:__nf_conntrack_confirm+0x99/0x1e1
     [ 6110.534787] Code: e4 e3 ff ff 8b 54 24 14 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 54 e1 ff ff 84 c0 75 b8 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 89 de ff ff e8 af e0 ff ff e9 1f 01
     [ 6110.534789] RSP: 0018:ffffc900065a8dd8 EFLAGS: 00010202
     [ 6110.534792] RAX: 0000000000000188 RBX: 000000000000107c RCX: 00000000cbc5d8ed
     [ 6110.534793] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff82009cd4
     [ 6110.534795] RBP: ffff88812e579e00 R08: 00000000bfa88d6d R09: ffff88909caa38a0
     [ 6110.534797] R10: 0000000000000098 R11: ffff888120d81c00 R12: 0000000000001925
     [ 6110.534799] R13: ffffffff8210da40 R14: 000000000000107c R15: ffff88812e579e0c
     [ 6110.534802] FS: 0000000000000000(0000) GS:ffff888fff900000(0000) knlGS:0000000000000000
     [ 6110.534804] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [ 6110.534806] CR2: 0000000000f03078 CR3: 000000000200c001 CR4: 00000000000606e0
     [ 6110.534808] Call Trace:
     [ 6110.534811] <IRQ>
     [ 6110.534815] nf_conntrack_confirm+0x2f/0x36
     [ 6110.534819] nf_hook_slow+0x39/0x8e
     [ 6110.534824] nf_hook.constprop.0+0xb1/0xd8
     [ 6110.534842] ? ip_protocol_deliver_rcu+0xfe/0xfe
     [ 6110.534846] ip_local_deliver+0x49/0x75
     [ 6110.534851] __netif_receive_skb_one_core+0x74/0x95
     [ 6110.534855] process_backlog+0xa3/0x13b
     [ 6110.534860] net_rx_action+0xf4/0x29d
     [ 6110.534865] __do_softirq+0xc4/0x1c2
     [ 6110.534872] asm_call_irq_on_stack+0xf/0x20
     [ 6110.534874] </IRQ>
     [ 6110.534879] do_softirq_own_stack+0x2c/0x39
     [ 6110.534885] do_softirq+0x3a/0x44
     [ 6110.534889] netif_rx_ni+0x1c/0x22
     [ 6110.534894] macvlan_broadcast+0x10e/0x13c [macvlan]
     [ 6110.534899] macvlan_process_broadcast+0xf8/0x143 [macvlan]
     [ 6110.534904] process_one_work+0x13c/0x1d5
     [ 6110.534908] worker_thread+0x18b/0x22f
     [ 6110.534911] ? process_scheduled_works+0x27/0x27
     [ 6110.534915] kthread+0xe5/0xea
     [ 6110.534918] ? kthread_unpark+0x52/0x52
     [ 6110.534922] ret_from_fork+0x1f/0x30
     [ 6110.534927] ---[ end trace aa399fc3a4d4c0e8 ]---
     root@Tower:~#
  8. I am observing similar problems here - netfilter related. Kernel panic on high network traffic... reproducible. tower-diagnostics-20210130-1530.zip
  9. That's also an assumption in itself. It actually came across like GTFO from the start. I guess Limetech will only feel it once there is much less community development, making the platform just another storage server... well, we'll see. I discovered Unraid not because of Limetech, but because of recommendations that the community is great and supportive and that all the good software is here - software I previously had running on my self-baked home setup. I consider my investment sunk cost now and will wait for the first release of the above-mentioned software; a quick read sounds so much better than anything I could hope for. And to me this seems more and more like a dead end. @limetech, why aren't you happy about this vast amount of contributions? This huge amount of effort and manpower that went into this project - much more than you would likely be able to finance from your own cash flow? You were the ones profiting from it directly and foremost. Maybe a community board would be one way of trying to set things right, with a community-driven feature map into which you contribute manpower? But I guess that's too late?
  10. Unraid is nothing without the community (add-ons). Please change the general attitude dramatically in terms of the approach to criticism and enhancements from the community. We are paying to keep the development going, to include those enhancements, and to make sure everything is updated and stable. The community is what makes this project strong.
  11. Did you try looking with iotop at which process is causing that amount of IO? Can you also trace it with docker stats across your containers? Just to try to identify the culprit...
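     Something along these lines, for example (the option combinations are just one reasonable choice):
       # Show only processes actually doing IO, per process, with accumulated totals:
       iotop -oPa
       # One-shot per-container view of block and network IO:
       docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}\t{{.NetIO}}"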
  12. Well, in part I can agree - individual filesystems are an advantage. Unfortunately, what I have seen while debugging shfs is that it is highly inefficient, and this, together with the mover, causes a lot of issues. I get that the overhead in IO comes from being extra cautious and double-checking everything. However, since neither the array configuration nor the cache can be extended during live operation, the "only" configurable thing - and the one that actually causes most of the confusion - is the set of settings around the cache and the involvement of the mover.
     Why couldn't this be made a transition process, where once all criteria are fulfilled a progress meter shows the status of the transition? For example: I change a share from "prefer" to "cache only" and nothing happens in terms of the mover (isn't that unexpected?). Compare the other direction: I change a share from "use cache" to "prefer" and the data is copied from the array to the cache - but based on what pattern? MRU? LRU?
     Especially the case of a share being converted to "cache only" has some of the biggest workflow problems. Why not stop VMs and Docker and trigger the move with some rsync-based tool on the shell (roughly like the sketch below)? Even later, when the share is already "cache only", shfs - triggered by the mover - still insists on seeking the share's filesystem on ALL disks over and over. That definitely needs to be avoided to give the cache any sort of decent performance; otherwise the disks that are supposed to be relieved are still needed for every run. I validated this with strace........
     In terms of cache and ZFS: why would I not prefer having snapshots or a block-wise cache? I think there is almost no reason not to. And in terms of different HDD sizes on ZFS: no issue at all; multiple arrays are possible as well.
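     What I mean by an rsync-based move, as a rough sketch only - assuming the standard Unraid mount points, a made-up share name, and with VMs and Docker stopped first:
       # /mnt/user0/<share> is the array-only view of a user share,
       # /mnt/cache/<share> is the same share on the cache pool.
       # "appdata" is just an example share name.
       rsync -avX --remove-source-files /mnt/user0/appdata/ /mnt/cache/appdata/
       # And a syscall summary of shfs itself, which is how I looked at its behavior:
       strace -f -c -p "$(pgrep -o shfs)"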
  13. You can try creating a RAID1 / RAID0 out of your 2 SSDs and putting XFS on top. Everyone please read https://en.wikipedia.org/wiki/ZFS ...
     - up to triple-device parity per pool
     - of course multiple pools per host
     - built-in encryption
     - live extension
     - built-in deduplication
     - built-in hierarchical caching (L1 RAM, L2 e.g. SSD), block-wise and without possible data loss if the cache device dies, with cache devices that can be added and removed live, plus a separate cache for fast write confirmation (SLOG)
     - built-in "self-healing"
     - snapshots...
     The only downside is that pools cannot easily be downsized. Really, in short, 98% of everything we dream about. I myself like the comfort of the interface, the VM and Docker handling, and the ease of configuration of NFS, SMB etc. - and virtually none of that would fall apart. Filesystems are hugely complex beasts, and the number of forum entries here connected to performance issues of shfs / the mover is really large. A quick sketch of what some of this looks like on the command line is below.
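     For example, roughly (device names are placeholders, and this is not meant as Unraid-specific advice):
       # Mirrored pool out of two SSDs:
       zpool create ssdpool mirror /dev/sdb /dev/sdc
       # Add a block-wise read cache (L2ARC) and a separate log device (SLOG):
       zpool add ssdpool cache /dev/nvme0n1p1
       zpool add ssdpool log /dev/nvme0n1p2
       # Cache devices can be removed live again:
       zpool remove ssdpool /dev/nvme0n1p1
       # Compression and deduplication are per-dataset properties:
       zfs set compression=lz4 ssdpool
       zfs set dedup=on ssdpool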
  14. No. If you look at their latest blog - and the video that was posted there - you will see that they are indeed considering ZFS. And I can tell you from my analysis of the shfs processes via strace, and of their behavior, that shfs itself has big performance issues. Guess why plugins like the directory cache and others exist. ZFS is the superior filesystem and it has decent block-wise caching built in, among other features such as snapshots, RAID etc. So imagine we had the performance of XFS with the flexibility of BTRFS, plus snapshots and something like "dm-cache" built in - but all with the nice interface of Unraid and the easy handling of Docker containers and VMs etc. With ZFS on top, multiple pools wouldn't be an issue. Not to mention the amount of attention any bug in ZFS gets from the worldwide community, whereas a serious bug in shfs is only in the hands of a few - and writing a filesystem is a very sophisticated task that needs a lot of time and resources. We would all profit from it, as would LT. Snapshots alone, for instance, are basically one-liners - see the sketch below. The video in question: https://unraid.net/blog/upcoming-home-gadget-geeks-unraid-show
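     Just as an illustration of the snapshot side (pool, dataset and snapshot names are made up):
       # Instant, space-efficient snapshot of a dataset:
       zfs snapshot ssdpool/appdata@before-update
       # Roll back if something goes wrong:
       zfs rollback ssdpool/appdata@before-update
       # Or replicate it to another pool for backup:
       zfs send ssdpool/appdata@before-update | zfs receive backuppool/appdata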