ephigenie

Members
  • Content Count: 41
  • Joined

  • Last visited

Community Reputation: 3 Neutral

About ephigenie

  • Rank: Newbie

  1. Yes, I can confirm, I still have the same issue. But in fact it really seems to be related to a dedicated IP I had set before. I am still monitoring it, but I have not enabled containers with dedicated IPs and so far it's working.
  2. In the meantime I updated to 6.9.2 but have the same issue. I disabled all Docker containers and left just a few running in order not to trigger this. Is there anything else recommended to check?
  3. The kernel part is included, but there is an nvidia-driver plugin that you need to install via "Apps". That will allow you to download the driver / software package you need. In my case, once that was installed, all containers that needed the GPU started working as before, just with newer drivers etc. (There is a hedged container sketch after this list.)
  4. Is there anything I can do? Can I build a newer kernel and install it? Is there a repo from Unraid somewhere? I would like to contribute in order to solve this, since it is quite annoying... And since it seems to be in "NAT-SETUP_INFO", I think it's not related only to macvlan. I considered NAT to be stable since 2.0.36..., not unstable with 5.x. I will try now with all containers off except Plex. It's currently crashing every 4-6 hours.
  5. Just got another kernel panic with a full system lock. This is in nf_nat_setup, which does not have much to do with the macvlan issue - or does it?
  6. Thank you for that info. I just shut down the one Docker container that has a fixed IP. All other highly active containers are on the server IP. I will try with a separate VLAN soon.
  7. Just got the full one:
     [ 2743.152154] kvm: already loaded the other module
     [ 6110.534616] ------------[ cut here ]------------
     [ 6110.534628] WARNING: CPU: 8 PID: 37032 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x99/0x1e1
     [ 6110.534629] Modules linked in: ccp macvlan nfsv3 nfs nfs_ssc veth xt_nat iptable_filter xfs nfsd lockd grace sunrpc md_mod tun nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper drm backlight agpgart syscopyarea sysfillrect sysimgblt nvidia_uvm(PO) fb_sys_fops nvidia(PO) iptable_nat xt_MASQUERADE nf_nat ip_tables wireguard curve2
  8. I am observing similar problems here - netfilter related. Kernel panic on high network traffic... reproducible. tower-diagnostics-20210130-1530.zip
  9. That's also an assumption in itself. It actually came across like GTFO from the start. I guess Limetech won't feel it until there is much less community development, making the platform just another storage server... well, we'll see. I discovered Unraid not because of Limetech, but because of recommendations that the community is great and supportive and that all the good software is here - software I had previously been running on my self-built home setup. I consider my investment now a sunk cost and will wait for the first release of the above-mentioned software. A quick read sounds so much better than anything I
  10. Unraid is nothing without the community (add-ons). Please change the general attitude dramatically in terms of how criticism and enhancement requests from the community are handled. We are paying to keep development going, to include those enhancements, and to make sure everything is updated and stable. The community is what makes this project strong.
  11. Did you try looking with iotop to see which process is causing that amount of IO? Can you trace it as well with docker stats across your containers? Just to try to identify the culprit... (see the command sketch after this list)
  12. Well, in part I can agree - individual filesystems are an advantage. Unfortunately, what I have seen while debugging shfs is that it is highly inefficient. This, along with the "mover", is causing a lot of issues. I get that the overhead in IO is caused by being extra-cautious and double-checking everything. However, neither the array configuration nor the cache can be extended during live operation... The "only" configurable thing that is actually causing most of the confusion is the settings around the cache, and the involvement of the mover. To t
  13. You can try creating a RAID 1 / RAID 0 out of your 2 SSDs and putting XFS on top (see the sketch after this list). Everyone please read https://en.wikipedia.org/wiki/ZFS ...
      - up to triple-device parity per pool
      - of course, multiple pools per host
      - built-in encryption
      - live extension
      - built-in deduplication
      - built-in hierarchical caching (L1 RAM, L2 e.g. SSD), block-wise and without possible data loss if the cache device dies, with cache devices able to be added and removed live, plus a separate cache for fast write confirmation (SLOG)
      - built-in "self-heal
  14. No. If you look at their latest blog - and the video that was posted there - you will see that they are indeed considering ZFS. And I can tell you from my analysis of the SHFS processes via strace and their behavior that SHFS in itself has big performance issues (see the strace sketch after this list). Guess why plugins like the directory cache and others exist. ZFS is the superior filesystem, and it has decent caching (block-wise) built in, amongst other features such as snapshots, RAID, etc... So imagine we would have the performance of XFS with the flexibility of BTRFS and snapshots and
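
A minimal sketch for post 3, assuming the nvidia-driver plugin from "Apps" has already installed the host driver: the container is started with the NVIDIA runtime so it can see the GPU. The image name and environment values below are placeholders, not taken from the original post.

    # Hedged example: run a GPU-enabled container once the host driver is in place.
    # "linuxserver/plex" and "all" are placeholders; adjust to your own container and GPU.
    docker run -d \
      --name=plex \
      --runtime=nvidia \
      -e NVIDIA_VISIBLE_DEVICES=all \
      -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
      linuxserver/plex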
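
A command sketch for post 11, showing how iotop and docker stats can be used to narrow down which process or container is generating the IO; run as root.

    # Only show processes actually doing IO; -a accumulates totals since iotop started.
    iotop -o -a

    # One snapshot of per-container resource usage, including block IO and network IO.
    docker stats --no-stream

    # Same snapshot, reduced to the IO-related columns.
    docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}\t{{.NetIO}}"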
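
A sketch for post 13, assuming the two SSDs are /dev/sdX and /dev/sdY (placeholders; these commands destroy whatever is on them). The first part builds the suggested mirrored pair with XFS on top; the second shows a ZFS pool with a live-attachable read cache (L2ARC) and a separate log device (SLOG), as examples of the listed features.

    # Mirrored pair via mdadm, with XFS on top. The mount point is a placeholder.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/ssdpool

    # ZFS equivalent: a mirrored pool plus cache and log devices (device names are placeholders).
    zpool create tank mirror /dev/sdX /dev/sdY
    zpool add tank cache /dev/nvme0n1   # L2ARC read cache; can be added/removed live
    zpool add tank log /dev/nvme1n1     # SLOG for fast synchronous write confirmation
    zfs set compression=lz4 tank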
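
A sketch for post 14 of the kind of analysis mentioned there: attaching strace to the running shfs process to get a per-syscall summary, plus the ZFS snapshot commands referenced as a built-in feature. The process name pattern, dataset name, and snapshot label are assumptions.

    # Attach to the oldest shfs process for 60 seconds and print a syscall summary
    # (-c counts, -f follows forks). The pgrep pattern is an assumption about the process name.
    timeout 60 strace -c -f -p "$(pgrep -o shfs)"

    # ZFS snapshots as an example of the built-in features mentioned in the post.
    zfs snapshot tank/appdata@before-upgrade
    zfs rollback tank/appdata@before-upgrade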