tapodufeu

Members
  • Posts: 52
  • Joined
  • Last visited

tapodufeu's Achievements

Rookie (2/14)

7 Reputation

  1. I have reconfigured all my dockers to use only the bridge and host networks, and deleted br0. After some reading on the internet, I think the issue may be caused by my network card (embedded on the motherboard), an Intel® I219-V 1Gb Ethernet. I also saw a few posts about the same issue with some Broadcom network chipsets. I had no issue prior to 6.10. Maybe a kernel update? Does anyone know?
  2. Argh, after 2 weeks... a new kernel panic... I had to reboot my server, and within 2 hours (just using Plex) another kernel panic. It is really a nightmare. Please help. How can I downgrade to 6.9?
  3. I have moved all dockers connected to the internet onto the default br0 using macvlan, and it looks like my server has no more kernel panics. I only have a week of observation so far... will see after the holidays. Kernel panics happened often when I used more than one custom docker network (host and bridge do not count). Using ipvlan on br0 just does not work in my case. I don't know why.
  4. Hi Kilrah, I have tried creating new ipvlan and macvlan custom networks multiple times. Every time with ipvlan it just does not work (even though it looks like it works), and I lose internet connectivity on my Unraid server. With macvlan it works, but whether on br0 or any custom network, I get kernel panics every 48 hours. Do I have to create custom routing or port forwarding with ipvlan to make it work? Macvlan works out of the box. From what I read in other posts, ipvlan should work just as easily as macvlan; in my case it does not. For example, my Nextcloud on macvlan is up and reachable... with ipvlan, with exactly the same configuration, the docker is up but no traffic comes in. Do I need to create a custom network with specific parameters? What do you recommend? (The kind of ipvlan network I keep trying to create is sketched after this post list.)
  5. And when I was writing this post, I just had a new kernel panic: does it help?
     Jul 15 22:54:28 Tower kernel: ------------[ cut here ]------------
     Jul 15 22:54:28 Tower kernel: WARNING: CPU: 9 PID: 7702 at net/netfilter/nf_nat_core.c:594 nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     Jul 15 22:54:28 Tower kernel: Modules linked in: veth xt_nat xt_tcpudp macvlan xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter nvidia_uvm(PO) xfs md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag nct6775 nct6775_core hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs bridge stp llc bonding tls nvidia_drm(PO) nvidia_modeset(PO) x86_pkg_temp_thermal intel_powerclamp coretemp si2157(O) kvm_intel si2168(O) nvidia(PO) kvm drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mei_hdcp mei_pxp aesni_intel tbsecp3(O) gx1133(O) tas2101(O) i2c_mux dvb_core(O) videobuf2_vmalloc(O) videobuf2_memops(O) videobuf2_common(O) wmi_bmof
     Jul 15 22:54:28 Tower kernel: crypto_simd cryptd rapl mei_me nvme i2c_i801 intel_cstate syscopyarea i2c_smbus mc(O) ahci sysfillrect e1000e intel_uncore nvme_core sysimgblt mei i2c_core libahci fb_sys_fops thermal fan video tpm_crb tpm_tis wmi tpm_tis_core backlight tpm intel_pmc_core button acpi_pad acpi_tad unix
     Jul 15 22:54:28 Tower kernel: CPU: 9 PID: 7702 Comm: kworker/u24:10 Tainted: P S W O 6.1.36-Unraid #1
     Jul 15 22:54:28 Tower kernel: Hardware name: ASUS System Product Name/PRIME B560M-K, BIOS 1605 05/13/2022
     Jul 15 22:54:28 Tower kernel: Workqueue: events_unbound macvlan_process_broadcast [macvlan]
     Jul 15 22:54:28 Tower kernel: RIP: 0010:nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     Jul 15 22:54:28 Tower kernel: Code: a8 80 75 26 48 8d 73 58 48 8d 7c 24 20 e8 18 bb fd ff 48 8d 43 0c 4c 8b bb 88 00 00 00 48 89 44 24 18 eb 54 0f ba e0 08 73 07 <0f> 0b e9 75 06 00 00 48 8d 73 58 48 8d 7c 24 20 e8 eb ba fd ff 48
     Jul 15 22:54:28 Tower kernel: RSP: 0018:ffffc9000030cc78 EFLAGS: 00010282
     Jul 15 22:54:28 Tower kernel: RAX: 0000000000000180 RBX: ffff88818325ea00 RCX: ffff888104c26780
     Jul 15 22:54:28 Tower kernel: RDX: 0000000000000000 RSI: ffffc9000030cd5c RDI: ffff88818325ea00
     Jul 15 22:54:28 Tower kernel: RBP: ffffc9000030cd40 R08: 00000000870aa8c0 R09: 0000000000000000
     Jul 15 22:54:28 Tower kernel: R10: 0000000000000158 R11: 0000000000000000 R12: ffffc9000030cd5c
     Jul 15 22:54:28 Tower kernel: R13: 0000000000000000 R14: ffffc9000030ce40 R15: 0000000000000001
     Jul 15 22:54:28 Tower kernel: FS: 0000000000000000(0000) GS:ffff888255c40000(0000) knlGS:0000000000000000
     Jul 15 22:54:28 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Jul 15 22:54:28 Tower kernel: CR2: 0000147e36709840 CR3: 000000000420a005 CR4: 00000000003706e0
     Jul 15 22:54:28 Tower kernel: Call Trace:
     Jul 15 22:54:28 Tower kernel
  6. Since 6.12, my Unraid server has been quite unstable: every 48 hours I have to reboot it, and every time I see a network-related kernel panic in the syslog. For a long time I have used docker with macvlan. I have read multiple posts about macvlan kernel issues, so I tried to use ipvlan instead. I tried so many things, new custom networks, etc., but nothing works with ipvlan. Moreover, when I switch br0 to ipvlan, after a couple of minutes my whole Unraid server can no longer reach the internet (it still works locally). Just switching back to macvlan fixes the issue. Please, could you help me diagnose my situation? I don't know where I should look first. I have a SWAG server proxying a Nextcloud; with macvlan it works like a charm, with ipvlan the host is unreachable even though I see all dockers running fine and they can even ping each other (a quick connectivity check for narrowing this down is sketched after this post list). Thanks. tower-diagnostics-20230715-2254.zip
  7. I've just tried to reproduce it. Not possible. I re-upgraded to 6.12.2, and now all my dockers are present and working. Forget my post... I cannot send you diagnostics. Thanks for your response. I surely would not have attempted the upgrade again without your request for diagnostics.
  8. I just did the update to 6.12.2 and I lost all my docker configs.... I immediately reverted to 6.12.1, rebooted, and they are back. So the configs are there, but cannot be used/read, maybe because the docker engine was downgraded in 6.12.2. What did I miss? Thanks. PS: I have had issues with dockers since 6.12... sometimes I experience kernel panics.
  9. Hi JorgeB, I am on 6.11. I don't know whether performance improved from 6.10 to 6.11... because the only time last year that I worked remotely I was on 6.9 and it worked well.
  10. I also have the same issue. It used to be far better on 6.9.x, and I can confirm performance has been worse since 6.10.x. During the holidays I had planned to work on my Unraid (editing videos, photos, etc.). The last time I did that was in July, and I was able to work on my Unraid easily. Last week was a nightmare; I had to move files onto my local computer just to be able to work. A simple search across 12TB of files (big files, 1GB minimum) can take up to 2 minutes. Editing is just not possible. It has always been slower remotely... but this time it is just a nightmare. I have also noticed a drop in performance when I multitask on the same share. For example, if I have a download running (direct torrent download on Unraid with the qbittorrent docker) and access files on the same drive over SMB, the performance drop is dramatic: from 110MB/s down to 20-30MB/s, just because I use SMB at the same time.
      Watching a movie over local SMB in VLC + direct torrent download => 30MB/s download speed.
      Watching a movie with local Plex (HTTP) + direct torrent download => 110MB/s download speed.
      I assume SMB uses so much IO that it completely strangles the server (a rough read-throughput probe for measuring this is sketched after this post list). FYI, I am the only user of my Unraid... so I can imagine that with dozens and dozens of users it is just not usable at all. Of course, there is no hardware bottleneck on my server: Intel i7 10k, 512GB NVMe SSD cache + 4 x 4TB XFS SATA drives, 1 drive for parity. I assume it is a standard entry-level configuration used by most people for a home server. Let me know if I can help (tests, etc.). Do not hesitate.
  11. Hi, I experience the same behaviour with my Quadro P600. The card is in state P8, idle for hours... and the fan speed stays between 34 and 36%. The card is at 30°C... cold as ice. Latest drivers installed. I wonder if we can control the fan speed with a script (a small monitoring sketch is after this post list). 36% is a bit noisy... during idle time, nothing is active except this fan.
  12. Hi b00,
      1/ If you plan to never use more than 4 disks, the J or T series do the job. You can really build an efficient NAS server with an Intel Celeron J5005... but never plan to add anything!! Or just an NVMe cache drive on a PCIe slot with an adapter.
      2/ Well, a lot of cases are cheaper... HDPlex delivers beautiful cases, and my case sits under my TV, so design is important for me.
      3/ OMG yes... a lot quieter. In 2021 I built a second NAS with Seagate IronWolf NAS 3.5" hard drives. If you really want a quiet configuration, choose 2.5" hard drives. BTW, everybody is selling 2.5" hard drives because they are being replaced by SSDs, so you can find really good Seagate drives for less than 15€ per TB. NEVER choose Western Digital!!
      4/ Yes, the 4 disks use 2 PCIe lanes (1 per controller). Far enough for 4 drives. I don't remember where I found this information, probably on intel.com.
      5/ No difference for 4 drives or fewer. Huge difference with 6 drives or more.
      6/ Today I have 2 NAS builds. 1/ 12 x 2.5" 1TB Seagate drives. Very quiet, works like a charm. 2 disks for parity, and still less than 30W most of the time. I added an Nvidia P600 for hardware encoding. This server has DVB, hardware encoding, a SAS card for 8 drives + NVMe cache. At full load I hardly consume more than 40W... with encoding. 2/ Celeron J4105 series, 4 x 3.5" 4TB Seagate drives, 1 for parity. In terms of storage performance it is better: 150MB/s on average, with power usage between 25 and 40W. I run just a Plex server, a VPN and DNS. But the noise is not the same at all: you really hear the hard drives spin up/down and work. I do not recommend this setup if you sleep/work/live close to it. If I compare performance, write speed on my NAS 2 is around 50% better, but when I store a video file of around 4GB... I don't care whether it takes 25 seconds or 45 seconds. Today I use my first NAS a lot more, because of the far better CPU and the ability to use the DVB or encoding card. Just some friends and family use Plex on my second NAS, and if more than 2 people are connected it is not possible to transcode correctly! With my first NAS and its slower storage, I can easily support 4 or more concurrent Plex transcodes at 1080p and 2 or 3 concurrent transcodes at 4K.
  13. Hi, I am selling my LSI HBA 9211-8i controller, with the LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03) chipset, PCIe x4. Already flashed in IT mode, ready to use. I am selling it with: - 2 Mini SAS 36-pin SFF-8087 cables, so you can directly connect all 8 drives. - 2 x 1-to-4 SATA power cables (star layout, not in a row...). I can ship within France, or direct pick-up in Paris. Regards
  14. I have built an HTPC config which sits under my TV in my living room. Very silent, low power (less than 50W, 30W on average) and very cheap. First of all, when I say cheap, I mean everything is cheap except the case. I chose a beautiful HDPlex H5 2nd gen fanless case, metallic black finish. Very silent, because it is fanless, but also built from heavy metal so the hard drive noise cannot be heard outside the case. The case cost 300€ because I bought extra hard drive racks: it is delivered for 2 x 3.5" or 4 x 2.5" drives, and I installed 12 x 2.5"...
      For the power supply, I do not need a PSU with more than 80 or 90W, so I chose an 80W picoPSU, found on Amazon for 20€. That kind of picoPSU provides just 1 SATA and 1 Molex cable... so I'll let you imagine the power cable extensions you have to add. The motherboard is a Gigabyte H110 with 8GB of RAM. The CPU is an i5 6400T. Please note that this kind of CPU requires the external CPU power cable.
      I have also installed: 1x TBS 6281 DVB-T tuner card, 1x LSI 9211-8i HBA, 1x NVMe PCIe card for cache with a 512GB Toshiba NVMe drive, and 12 HDDs (only 8 were installed when I took the pictures). They are all connected to the LSI card; the 4 remaining drives are connected directly to the motherboard. All drives are attached to their racks with plastic O-rings to prevent vibration noise, and the hard drive racks are stacked vertically.
      Finally, everything works perfectly and consumes 30W with 8 drives and around 32W with 12 drives (with Nextcloud and OpenVPN). During a parity check I consume 52W. DVB-T recording: 38W. Plex only: 35W. Plex + TBS: 40-42W.
      In the pictures you can see a mix of WD (CMR) and Seagate (SMR) drives. I have resold the WD drives; they underperformed compared to the Seagate drives. Now I have only 12 ST1000LM035 drives (it took me a month to find brand new or almost new Seagate drives on the second-hand market... 20€ per drive maximum). I will maybe change the motherboard to a full ATX board with a Z170 or B150 chipset in order to add more PCIe slots. I am missing an Ethernet router, so I will surely add a quad-NIC Intel network card and use it with a pfSense VM. With a bit of DIY work I can also add 4 more 2.5" drives, but then I will also need a second LSI 9211-8i card (so one more PCIe 2.0 x8 slot).
      In the end I am at around 600€ for the full configuration, completely silent. Please note that the WD drives are quieter than the Seagate ones and consume less power... but they also perform a lot worse. Completely avoid buying SMR drives from WD (for example the WD10SPZX...). You can use the WD10JPVZ, which is about as efficient as the ST1000LM024 (35% less than the LM035). The ST1000LM048 performs better (10-15% less than the LM035). The best one today at 5400 RPM is the ST1000LM035!! I have not tried the LM049 (7200 RPM), but you can easily find some on the second-hand market at the same price as the 5400 RPM drives.
  15. Thanks for your feedback. I understand my issue now. You are totally right, it is the NAT feature of OpenVPN. I tried disabling it, and then it behaves exactly like ZeroTier. So when I am at home, with just the fiber modem/router from my ISP (no advanced routing inside), OpenVPN is my only option: with NAT included in the OpenVPN server I can do whatever I want. It would be a great option to add a "kind of admin" access in ZeroTier with NAT included... I would then completely remove OpenVPN and just use ZeroTier. This is exactly the kind of option that devops or infrastructure managers need. For example, since March, with COVID, thankfully not every day, I have connected to and switched VPNs maybe 30 times per day!!
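
Sketch for post 4: a minimal example of the kind of custom ipvlan network in question, written with the Docker Python SDK rather than the Unraid GUI. The network name, subnet, gateway, parent interface, container name and fixed IP are all assumptions; they would have to match the real LAN and the addresses the existing macvlan setup uses.

    # Hypothetical recreation of the custom network as ipvlan (Docker SDK).
    # All addresses, the parent NIC and the container name are placeholders.
    import docker

    client = docker.from_env()

    ipam = docker.types.IPAMConfig(
        pool_configs=[docker.types.IPAMPool(
            subnet="192.168.1.0/24",   # assumed LAN subnet
            gateway="192.168.1.1",     # assumed LAN gateway
        )]
    )

    net = client.networks.create(
        "ipvlan_lan",                  # placeholder network name
        driver="ipvlan",
        options={
            "parent": "eth0",          # physical interface the containers share
            "ipvlan_mode": "l2",       # L2 mode is the closest analogue to macvlan
        },
        ipam=ipam,
    )

    # Attach an existing container (e.g. the Nextcloud one) with a fixed IP,
    # ideally the same address it had on the macvlan network.
    net.connect(client.containers.get("nextcloud"), ipv4_address="192.168.1.50")

One relevant difference from macvlan: with ipvlan every container shares the parent interface's MAC address, so a router or ISP box that tracks clients per MAC address may treat the traffic differently than it does with macvlan.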
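
Sketch for post 6: a minimal connectivity check, assuming the gateway is 192.168.1.1, to run on the Unraid host (and again inside a container) right after switching br0 to ipvlan. It separates DNS resolution, reaching the internet by raw IP, and reaching the LAN gateway, which helps show where the traffic actually stops.

    # Minimal connectivity probe: DNS resolution, TCP reach to a public IP,
    # and TCP reach to the (assumed) LAN gateway. Compare host vs container.
    import socket

    CHECKS = [
        ("DNS resolve google.com",
         lambda: socket.gethostbyname("google.com")),
        ("TCP 1.1.1.1:443 (internet by IP)",
         lambda: socket.create_connection(("1.1.1.1", 443), timeout=3).close()),
        ("TCP 192.168.1.1:53 (gateway, assumed address/port)",
         lambda: socket.create_connection(("192.168.1.1", 53), timeout=3).close()),
    ]

    for label, check in CHECKS:
        try:
            check()
            print(f"OK   {label}")
        except OSError as exc:
            print(f"FAIL {label}: {exc}")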
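
Sketch for post 10: a rough sequential-read probe to put numbers on the SMB slowdown, run once while the torrent download is idle and once while it is active. The file path is a placeholder for any large file on the share.

    # Crude sequential-read throughput probe (MB/s) over a large file.
    import time

    def read_throughput(path, chunk_size=8 * 1024 * 1024, max_bytes=2 * 1024**3):
        """Read up to max_bytes sequentially and return the speed in MB/s."""
        read = 0
        start = time.monotonic()
        with open(path, "rb") as f:
            while read < max_bytes:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                read += len(chunk)
        elapsed = time.monotonic() - start
        return read / (1024 * 1024) / elapsed

    if __name__ == "__main__":
        # Placeholder path: point it at a big file on the share being tested.
        print(f"{read_throughput('/mnt/user/media/bigfile.mkv'):.1f} MB/s")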
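
Sketch for post 11: a small monitor that only logs what the P600 fan does at idle, using nvidia-smi query fields. Actually changing the fan speed normally requires nvidia-settings with Coolbits and a running X session, which a headless Unraid box may not have, so this sketch observes rather than controls.

    # Poll nvidia-smi and log fan speed, temperature and power state.
    import subprocess
    import time

    QUERY = ["nvidia-smi",
             "--query-gpu=fan.speed,temperature.gpu,pstate",
             "--format=csv,noheader"]

    def poll(interval=60):
        while True:
            out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
            print(time.strftime("%H:%M:%S"), out.stdout.strip())
            time.sleep(interval)

    if __name__ == "__main__":
        poll()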