tapodufeu


Posts posted by tapodufeu

  1. I have reconfigured all my Dockers to use only the bridge and host networks... and deleted br0.

     

    After some reading on the internet, I found the issue may be caused by my network card (embedded on the motherboard), an Intel® I219-V 1Gb Ethernet.

    I also saw a few posts about the same issue with some Broadcom network chipsets.

     

    I had no issue prior to 6.10. Maybe a kernel update? Anyone know?

  2. I have moved all Dockers connected to the internet onto the default br0 using macvlan, and it looks like my server no longer kernel panics.

    I only have a week of analysis so far... I will see after the holidays.

    Kernel panics happened often when I used more than one Docker network (host and bridge do not count).

    Using ipvlan on br0 just does not work in my case. I don't know why.

     

  3. Hi Kilrah,

     

    I have tried new ipvlan and macvlan custom networks multiple times. Every time with ipvlan it just does not work (even though it looks like it works), and I lose internet connectivity on my Unraid server. With macvlan it works, but whether on br0 or any custom network, I get kernel panics every 48 hours.

     

    Do I have to create custom routing or port forwarding with ipvlan to make it work?

     

    Macvlan works out of the box. When I read posts, it looks like ipvlan should work just as easily as macvlan. In my case, it does not.

     

    For example, my Nextcloud on macvlan is up and reachable... with ipvlan, with exactly the same configuration, the Docker is up but no traffic comes in.

     

    Do I need to create a custom network with specific parameters? What do you recommend?
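
    For reference, the kind of custom network I create looks roughly like this (subnet, gateway and parent are placeholders, not my real values):

    docker network create -d ipvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=br0 my_ipvlan
    # the macvlan equivalent (-d macvlan, same options) is the one that works for me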

     

     

     

     

  4. And while I was writing this post, I just had a new kernel panic:

     

    Does it help?


    Jul 15 22:54:28 Tower kernel: ------------[ cut here ]------------
    Jul 15 22:54:28 Tower kernel: WARNING: CPU: 9 PID: 7702 at net/netfilter/nf_nat_core.c:594 nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
    Jul 15 22:54:28 Tower kernel: Modules linked in: veth xt_nat xt_tcpudp macvlan xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter nvidia_uvm(PO) xfs md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag nct6775 nct6775_core hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs bridge stp llc bonding tls nvidia_drm(PO) nvidia_modeset(PO) x86_pkg_temp_thermal intel_powerclamp coretemp si2157(O) kvm_intel si2168(O) nvidia(PO) kvm drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 mei_hdcp mei_pxp aesni_intel tbsecp3(O) gx1133(O) tas2101(O) i2c_mux dvb_core(O) videobuf2_vmalloc(O) videobuf2_memops(O) videobuf2_common(O) wmi_bmof
    Jul 15 22:54:28 Tower kernel: crypto_simd cryptd rapl mei_me nvme i2c_i801 intel_cstate syscopyarea i2c_smbus mc(O) ahci sysfillrect e1000e intel_uncore nvme_core sysimgblt mei i2c_core libahci fb_sys_fops thermal fan video tpm_crb tpm_tis wmi tpm_tis_core backlight tpm intel_pmc_core button acpi_pad acpi_tad unix
    Jul 15 22:54:28 Tower kernel: CPU: 9 PID: 7702 Comm: kworker/u24:10 Tainted: P S      W  O       6.1.36-Unraid #1
    Jul 15 22:54:28 Tower kernel: Hardware name: ASUS System Product Name/PRIME B560M-K, BIOS 1605 05/13/2022
    Jul 15 22:54:28 Tower kernel: Workqueue: events_unbound macvlan_process_broadcast [macvlan]
    Jul 15 22:54:28 Tower kernel: RIP: 0010:nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
    Jul 15 22:54:28 Tower kernel: Code: a8 80 75 26 48 8d 73 58 48 8d 7c 24 20 e8 18 bb fd ff 48 8d 43 0c 4c 8b bb 88 00 00 00 48 89 44 24 18 eb 54 0f ba e0 08 73 07 <0f> 0b e9 75 06 00 00 48 8d 73 58 48 8d 7c 24 20 e8 eb ba fd ff 48
    Jul 15 22:54:28 Tower kernel: RSP: 0018:ffffc9000030cc78 EFLAGS: 00010282
    Jul 15 22:54:28 Tower kernel: RAX: 0000000000000180 RBX: ffff88818325ea00 RCX: ffff888104c26780
    Jul 15 22:54:28 Tower kernel: RDX: 0000000000000000 RSI: ffffc9000030cd5c RDI: ffff88818325ea00
    Jul 15 22:54:28 Tower kernel: RBP: ffffc9000030cd40 R08: 00000000870aa8c0 R09: 0000000000000000
    Jul 15 22:54:28 Tower kernel: R10: 0000000000000158 R11: 0000000000000000 R12: ffffc9000030cd5c
    Jul 15 22:54:28 Tower kernel: R13: 0000000000000000 R14: ffffc9000030ce40 R15: 0000000000000001
    Jul 15 22:54:28 Tower kernel: FS:  0000000000000000(0000) GS:ffff888255c40000(0000) knlGS:0000000000000000
    Jul 15 22:54:28 Tower kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Jul 15 22:54:28 Tower kernel: CR2: 0000147e36709840 CR3: 000000000420a005 CR4: 00000000003706e0
    Jul 15 22:54:28 Tower kernel: Call Trace:
    Jul 15 22:54:28 Tower kernel

  5. Since 6.12, my Unraid server has been quite unstable. Every 48 hours I have to reboot it.

     

    Every time I see a network-related kernel panic in the syslog.

    For a long time I have used Docker with macvlan. I have read multiple posts about macvlan kernel issues, so I tried to use ipvlan instead.

     

    I have tried so many things, new custom networks, etc... but nothing works with ipvlan.

    Moreover, when I switch br0 to ipvlan, after a couple of minutes my whole Unraid server is unable to reach the internet (but works locally).

    Just switching back to macvlan fixes the issue.

     

    Could you please help me diagnose my situation? I don't know where I should look first.

    I have a SWAG server proxying a Nextcloud; with macvlan it works like a charm, with ipvlan the host is unreachable even though I see all Dockers running fine and I can even ping the Dockers from each other.
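
    For anyone wanting to reproduce, the quick checks I run after switching look roughly like this (the IP is a placeholder and the container names are just my SWAG/Nextcloud ones; ping must be available in the image):

    ping -c3 192.168.1.50                # container's ipvlan IP, from another machine on the LAN
    docker exec swag ping -c3 1.1.1.1    # outbound traffic from inside the container
    docker exec swag ping -c3 nextcloud  # container-to-container still works for me on ipvlan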

     

     

    thanks

     

    tower-diagnostics-20230715-2254.zip

  6. I just did the update to 6.12.2 and I lost all my Docker configs...

    I immediately reverted to 6.12.1, rebooted, and they are back.

     

    So the configs are there, but cannot be used/read??? Maybe because in 6.12.2 you downgraded the Docker engine.

     

    What did I miss ?
     

    thanks

     

    PS: I have had issues with Dockers since 6.12... sometimes I experience kernel panics.

  7. Hi, I experience the same behaviour with my Quadro P600. The card is in state P8, idle for hours... and the fan speed is between 34 and 36%. The card is at 30°C... cold as ice.

     

    Latest drivers installed. I wonder if we can control the fan speed with a script.

     

    36% is a bit noisy... during idle time, nothing is active except this fan.
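
    For monitoring at least, nvidia-smi can report the values (actually changing the fan curve normally needs nvidia-settings and a running X server, which I have not tried on Unraid):

    nvidia-smi --query-gpu=pstate,temperature.gpu,fan.speed --format=csv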

  8. Hi b00,

     

    1/ If you plan to never use more than 4 disks, the J or T series do the job. You can really build an efficient NAS server with an Intel Celeron J5005... But never plan to add anything!! Or just a cache NVMe drive on the PCIe slot with an adapter.

    2/ Well, a lot of cases are cheaper... HDPlex delivers beautiful cases, and mine sits under my TV, so design is important for me.

    3/ OMG yes... a lot quieter. In 2021 I built a second NAS with Seagate IronWolf NAS 3.5" hard drives. If you really want a quiet configuration, choose 2.5" hard drives. BTW, everybody sells 2.5" hard drives because they are being replaced by SSDs, so you can find really good Seagate drives for less than 15€ per TB. NEVER choose Western Digital!!

    4/ Yes, the 4 disks use 2 PCIe lanes (1 per controller). Far more than enough for 4 drives. I don't remember where I found this information, probably on intel.com.

    5/ No difference with 4 drives or fewer. Huge difference with 6 drives or more.

    6/ Today I have 2 NAS,

       1/ 12 x 2.5" 1TB Seagate drives. Very quiet, works like a charm. 2 disks for parity, and still less than 30W most of the time. I added an Nvidia P600 for hardware encoding. This server has DVB, hardware encoding, a SAS card for 8 drives + NVMe cache. At full load, I hardly consume more than 40W... with encoding.

      2/ Celeron J4105 series, 4 x 3.5" 4TB Seagate hard drives, 1 for parity. In terms of storage performance it is better: 150MB/s average, and power usage is between 25 and 40W. I have just a Plex server, VPN and DNS. But the noise is not the same at all: you really hear the hard drives start/stop spinning and working. I do not recommend this setup if you sleep/work/live close to it.

     

    If I compare performance, write speed on my NAS 2 is around 50% better. But when I store a video file of around 4GB... I don't care whether it takes 25 seconds or 45 seconds. Today I use my first NAS a lot more, because of the far better CPU and the ability to use the DVB and encoding cards. Just some friends and family use Plex on my second NAS, and if more than 2 people are connected, it is not possible to transcode correctly! With my first NAS and its slower storage, I can easily support 4 or more concurrent Plex transcodes at 1080p and 2 or 3 concurrent transcodes at 4K.

     

  9. Hi, I am selling my LSI HBA 9211-8i controller, with the LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03) chipset.

    PCIe x4

    Already flashed in IT mode.

     

    Ready to use.

     

    Sold with:

     - 2 Mini SAS 36-pin SFF-8087 cables, so you can directly connect all 8 drives.

     - 2 x 1-to-4 SATA power cables (in a star layout, not in a row...)

     

    I can ship within France, or direct pick-up in Paris.

     

    Regards

  10. I have built an HTPC config which sits just under my TV in my living room. Very silent, low power (less than 50W, 30W average) and very cheap.

     

    First of all, when I say cheap, I mean everything is cheap except the case. I chose a beautiful HDPlex H5 2nd gen fanless case with a metallic black finish.

    Very silent, because it is fanless, but also built from heavy metal so the noise of the hard drives cannot be heard outside the case.

    The case cost 300€ because I bought extra hard drive racks. It ships with racks for 2 x 3.5" or 4 x 2.5" drives. I installed 12 x 2.5"...

     

    For the power supply, I do not need a PSU with more than 80 or 90W, so I chose an 80W picoPSU, found on Amazon for 20€.

    That kind of picoPSU provides just 1 SATA and 1 Molex cable... so I let you imagine the power cable extensions you must add :)

     

    IMG_7246.jpg

     

    The motherboard is a Gigabyte H110 with 8GB of RAM. The CPU is an i5-6400T. Please note that this kind of CPU requires the separate CPU power cable.

     

    IMG_7248.jpg

     

     

    I have also installed

    1x TBS 6281 DVB-T tuner card

    1x LSI 9211-8i HBA

    1x NVMe PCIe adapter card for cache, with a 512GB Toshiba NVMe drive

     

    and 12 HDDs. Just 8 were installed when I took the pictures. They are all connected to the LSI card.

    The 4 missing drives are connected directly to the motherboard.

    IMG_7247.jpg

     

    All drives are attached to the racks with plastic O-rings to prevent vibration noise.

    IMG_7249.jpg IMG_7253.jpg

     

    The hard drive racks are stacked vertically.

    IMG_7250.jpg IMG_7256.jpg

     

     

     

    Finally, everything is working perfectly, consuming 30W with 8 drives and around 32W with 12 drives (with Nextcloud and OpenVPN).

    During a parity check, it consumes 52W.

    DVB-T recording: 38W.

    Plex only 35W.

    Plex + TBS: 40-42W

     

    In the pictures you can see a mix of WD (CMR) and Seagate (SMR) drives.

    I have since resold the WD drives; they underperformed compared to the Seagate drives.

    Now I have only 12 ST1000LM035 drives. (It took me 1 month to find brand new or almost new Seagate drives on the second-hand market... maximum 20€ per drive.)

    IMG_7255.jpg

    IMG_7257.jpg

     

    I will maybe change the motherboard to a full ATX MB with Z170 or B150 chipset in order to add more PCIe slots.

    I lack an Ethernet router, and I will surely add a quad-NIC Intel network card and use it with a pfSense VM.

     

    With a bit of DIY work I can also add 4 more 2.5" drives, but then I will also need a second LSI 9211-8i card (so it requires one more PCIe 2.0 x8 slot).

     

    In the end I am at around 600€ for the full configuration, completely silent.

    Please note that WD drives are quieter than Seagate and consume less power... but they also perform a lot worse. Completely avoid buying SMR drives from WD (for example the WD10SPZX...).

    You can use the WD10JPVZ; it performs about the same as the ST1000LM024 drives (35% slower than the LM035).

    The ST1000LM048 performs better (10-15% slower than the LM035).

    The best one today at 5400 RPM is the ST1000LM035!!

     

    I have not tried the LM049 (7200 RPM), but you can easily find some on the second-hand market at the same price as the 5400 RPM drives.

     

     

    IMG_7251.jpg

    IMG_7254.jpg

  11. Thanks for your feedback. I understand my issue now. You are totally right, this is the NAT feature of OpenVPN. I tried disabling it and then it behaves exactly like ZeroTier.

     

    So when I am at home, with just the fiber modem/router from my ISP (no advanced routing inside), OpenVPN is my only option; with NAT included in the OpenVPN server I can do what I want.

     

    It would be a great option to add a "kind of admin" access in ZeroTier with NAT included... I would then completely remove OpenVPN and use ZeroTier only.

    This is exactly the kind of option that DevOps or infra managers need. For example, since March, with Covid (not every day, fortunately), I have connected to and switched VPNs maybe 30 times per day!!

     

     

     

     

     

  12. You are totally right if I want to completely interconnect both LANs. And I will try to do it, you gave me a very interesting idea :)

     

    But in my case, I just want to access devices on the remote LAN, such as printers, NAS, routers, etc., from my laptop (with the ZeroTier client).

    For sure, if remote devices on 10.10.20.x want to connect to me (and they have no ZeroTier client running), routes must be set properly to pass through a peer that has the ZeroTier interconnection.

     

    For example, Tower2 is 10.10.20.10 and has an OpenVPN server (Docker).

    If I connect with the OpenVPN client from my laptop (on 10.10.10.xxx) to Tower2 (10.10.20.10)... I can access ALL devices on the LAN 10.10.20.x.

    If I use ZeroTier, only the server is accessible.

     

    Apparently many people get it to work properly, but not me... and I really wonder what I am missing.

     

  13. On the server 10.10.10.10, those routes already exist

    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    default         GEN8            0.0.0.0         UG    0      0        0 br0
    10.10.10.0      0.0.0.0         255.255.255.128 U     0      0        0 shim-br0
    10.10.10.0      0.0.0.0         255.255.255.0   U     0      0        0 br0
    10.10.10.128    0.0.0.0         255.255.255.128 U     0      0        0 shim-br0
    10.10.20.0      Tower-2.local   255.255.255.0   UG    0      0        0 ztmjfbsomh
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-853fe7d63fa3
    172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-312be3d41a1c
    192.168.191.0   0.0.0.0         255.255.255.0   U     0      0        0 ztmjfbsomh

     

    So we can see that the route to 10.10.20.x exists, and the route to 192.168.191.x too. Flag G (gateway) on 10.10.20.x means IP packets are redirected to the ZeroTier interface.

     

    on 10.10.20.10:

    root@Tower:~# route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    default         livebox.home    0.0.0.0         UG    0      0        0 br0
    10.10.10.0      Tower.local     255.255.255.0   UG    0      0        0 ztmjfbsomh
    10.10.20.0      0.0.0.0         255.255.255.0   U     0      0        0 br0
    172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
    172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-83a6ea76a1ec
    192.168.191.0   0.0.0.0         255.255.255.0   U     0      0        0 ztmjfbsomh

     

    AFAIK, it looks good on that part.

    I am also not sure about masquerading. If I remember my telco studies well (and I am a telco engineer but never worked in telco :))... it should not be required.
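
    Side note for anyone following: a device on 10.10.20.x that has no ZeroTier client would also need return routes pointing at the ZeroTier host, roughly like this on a Linux box (addresses taken from the tables above):

    ip route add 10.10.10.0/24 via 10.10.20.10     # reach the other LAN through Tower-2
    ip route add 192.168.191.0/24 via 10.10.20.10  # reach the ZeroTier network itself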

     

  14. I am trying to set up LAN-to-LAN access, but it constantly fails and I am running out of solutions.

     

    I have 2 Unraid servers with the ZeroTier Docker installed. ZeroTier is working correctly; all peers can connect to the other peers. In this network I have 3 peers: the 2 servers and my laptop with ZeroTier installed. Then I have a dozen computers, routers, NAS and printers on each LAN.

     

    Each server is in a private LAN: 10.10.20.x and 10.10.10.x.

    10.10.20.10 is the server running docker in the LAN 10.10.20.x

    10.10.10.10 is the server running docker in the LAN 10.10.10.x

    My laptop is also in 10.10.10.x (on the weekend) or 10.10.20.x (during the week), and sometimes during the week connected to an external network (cell phone or private wifi).

     

    My problem is that I can only connect to the servers, and not to the other devices on the LANs.

    On both servers I have enabled IP forwarding and updated iptables as follows:

     

    PHY_IFACE=eth0; ZT_IFACE=ztmjfbsomh
    iptables -t nat -A POSTROUTING -o $PHY_IFACE -j MASQUERADE
    iptables -A FORWARD -i $PHY_IFACE -o $ZT_IFACE -j ACCEPT
    iptables -A FORWARD -i $ZT_IFACE -o $PHY_IFACE -j ACCEPT
     

    ZT_IFACE is the name of my ZeroTier network adapter, and it is the same name on the 2 servers.
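
    The IP forwarding part is just the standard sysctl, something like this (my exact startup script may look a bit different):

    sysctl -w net.ipv4.ip_forward=1    # enable routing between interfaces
    sysctl net.ipv4.ip_forward         # verify: should print net.ipv4.ip_forward = 1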

     

    For example, when I try to ping the WAN router of that LAN, 10.10.20.1:

    failed from 10.10.10.10

    failed from 10.10.10.160

    works from 10.10.20.10 (of course, in the same LAN, no zerotier)

     

    When I ping the ZeroTier server of the LAN 10.10.20.x:

    works from 10.10.10.10

    works from 10.10.10.160

     

    So both servers are interconnected successfully on the ZeroTier network, and from my laptop I can successfully access the Unraid HTTP interfaces.

    Only LAN access is not working. ZeroTier works well to interconnect peers that have ZeroTier running.

     

    What am I missing?

    The ZeroTier Dockers are running on the host network.

     

    please help.

     

    zerotier1.png

    zerotier2.png

  15. Yes, all are on PCIe slots... and I was investigating just a couple of minutes ago and I found:

     

    01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
            Subsystem: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
    ...
                    LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+

     

    04:00.0 Multimedia controller: TBS Technologies DVB Tuner PCIe Card
            Subsystem: Device 6281:0002
    ...
                    LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+

     

    05:00.0 Non-Volatile memory controller: Toshiba Corporation Device 011a (prog-if 02 [NVM Express])
            Subsystem: Toshiba Corporation Device 0001
    ...
                    LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk+

     

    ASPM is only enabled for my NVMe drive on PCIe.

    Any idea ?
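
    (The snippets above come from lspci; something like this shows just the ASPM state per device — the bus addresses are the ones from my system, adjust to yours:)

    lspci -vv -s 01:00.0 | grep -i aspm    # LSI HBA
    lspci -vv -s 05:00.0 | grep -i aspm    # NVMe adapter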

     

    in dmesg I found the following lines:

    [    0.151347] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it

    [   19.362570] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control

    [   19.385178] mpt3sas 0000:01:00.0: can't disable ASPM; OS doesn't have ASPM control

     

    r8169 is the network card, and I am happy ASPM is disabled for it.

    mpt3sas is related to the LSI HBA.

     

    I see some posts about linux kernel and ASPM issues with PCIe... still investigating

     

    About the power usage of HBA controllers, I found this post listing many LSI cards:

    https://www.servethehome.com/lsi-host-bus-adapter-hba-power-consumption-comparison/

     

    For the TBS 6281 SE, when not used it is difficult to say, maybe 1 or 2 watts. My wattmeter is not accurate enough; I don't really see any change with or without it.

    When recording + transcoding with Plex (HW transcode), it consumes approximately 10W. Recording without transcoding failed; I was not able to view the file even with VLC. It was the first time I tried, so maybe I used the wrong settings...

    Posts on the internet report up to 25W for the TBS DVB-T card; I was not able to reproduce this consumption at all. I tested on French DVB-T on the channel France 5.

     

  16. Bingo... just to be sure I restarted my server and checked the BIOS. Gigabyte Platform Power Management was disabled!!

    I have also disabled the embedded audio controller of the motherboard.

     

    So now, just a couple of minutes after reboot, I am around 25-28W. 

    If I spin down all disks, 24-25W.

     

    I have already shared in my previous post the results of undervolting my CPU under load.

     

     

  17. Just tested powertop and undervolt. Great tools !

     

    My configuration is described in my signature (and in this post):

    With 

    undervolt --gpu -75 --core -100 --cache -100 --uncore -100 --analogio -100

    and your startup script to enable autosleep mode on devices, I see some results, but not that much.
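
    (For reference, a minimal version of such a startup tweak, assuming powertop is installed; this is probably not the exact script referenced above:)

    powertop --auto-tune    # apply all of powertop's "Good" power-saving tunables at boot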

     

    First, 15 minutes after reboot my server uses 28-30W. Before the modifications it was 29-31W.

    But now my power meter seems a bit "crazy" because the power usage varies a lot more than before. It is even difficult to read a number!! It changes so quickly.

    Be careful, my "idle" state is maybe not yours. My server never sleeps. I always have my seedbox running (torrents) + Nextcloud + VPN and a dozen users behind it (my family's phones + laptops, etc.). They do not generate that much server load, but they prevent the disks from entering sleep mode. For example, I hardly ever see more than 2 data disks in sleep mode; often I can see one, and most of the time none are asleep.

     

    Under load (recording live TV + watching 1 movie with HW transcoding), my server uses 39-42W. Before the modifications it was around 42-44W.

    With no HW transcoding, up to 50W.

     

    I already did some tests with spin-down, and even if I stop the array, power usage is around 25-27W. So with your modifications I am maybe around 24-26W now...

    Most of the power optimization is not in the disks (I have 8 x 2.5" disks).

     

    Maybe I missed something, but powertop does not show me the wattage per device. I would like to really estimate the power usage per device, for example my LSI HBA controller and my TBS DVB-T tuner.

     

     

     

     

  18. If I stop torrents... and wait 15 minutes, I am at 26.6W.

     

    8 drives on the LSI + SSD cache on SATA, and you are in exactly the same configuration as I was a couple of months ago.

    I have also tried 6 drives on the LSI + 2 parity drives on onboard SATA + SSD on onboard SATA; you will improve performance a bit, but not that much.

    Especially during heavy load or a parity check you do not see any difference. But at least processes stop hanging :) haha.

     

    For my use case, a J5005 or an i5-6400T costs the same to set up; the i5 costs 8 watts more in usage (so 8€ per year). But in the second case I have an NVMe drive for cache, an LSI HBA and a TBS DVB-T tuner card. You know, I wonder if the T series is more "adaptive" than the J series. Maybe the CPU's power usage can drop further with a T series than with the J series (which maybe always stays around 8 to 10W).

     

    I will add 4 new 2.5" 1TB drives in a couple of weeks. When you start using Unraid for NAS, you cannot stop. LOL

    The next step is of course to use 2.5" 2TB drives; they are becoming affordable.

     

    I have also posted a pic showing my HTPC server under my TV, if people want to know how it looks.

     

    Great, I will take a look at powertop!! Very interesting!!

    IMG_7112.jpg

    IMG_7113.jpg

  19. If I remember correctly, the HBA consumes 5 to 7 watts in use, based on specifications I found.

     

    The 22W was with the SATA RAID controller and the J5005 (4-port Marvell controller) and 6 disks. I stayed a couple of months in this configuration.

    With the i5-6400T, LSI and 8 disks, it is more like 25-27W.

    Then I added the TBS and the NVMe disk, and now I am between 29 and 31 watts.

    The NVMe disk consumes a lot of power compared to a SATA disk.

    During recording + watching, I see some jumps to 42-44 watts.

     

    I have an 80W picoPSU. When my server is off, I can see 1.3 watts of usage, just from the PSU.

    So under load, maybe 4 to 5 watts are used by the PSU itself. Up to 10 watts at 80 watts of draw, maybe.

    My picoPSU is not Gold/Platinum/Silver etc... it is Chinese-made :) and my watt meter is also not very "accurate" (10 bucks on Amazon...).

     

    Only 4 disks are plugged into your HBA? So you don't see a big bottleneck yet.

     

    Are you using the ASRock J5005-mITX?

     

    PS: I have edited my post and my signature to use the correct numbers. Thanks CSO1.

    IMG_7111.jpg

    controller-conf.png

    controller-benchmark.png

    controller-footer.png

  20. I hope my experience will help some others, especially if you plan to buy a motherboard with an embedded Intel Celeron CPU etc...

     

    My objective was to build a silent, low-power HTPC for home use.

    In Europe (I am in France), 1W running all year costs about 1€ per year. So if I build a configuration that draws 250W, the cost is 250€ per year. Which is crazy just for Plex!! Netflix + Amazon Prime are cheaper per year ;)

     

    My usage is 90% Plex/Emby & seedbox, and 10% Nextcloud and VPN.

     

    So the first thing I did was buy an HTPC case from HDPlex. I chose the H5 gen 2 case: https://hdplex.com/hdplex-h5-fanless-computer-case.html

    Not cheap, but beautiful. I see it every day under my TV, so it is important.

     

    I do not need a lot of power and speed. Most of the time I am the only user of this server, but occasionally there can be 2 or 3 of us watching videos at the same time.

    So do I need the fastest disks? No!

    The latest CPU? No!

    10Gb/s network? No!

    Critical response time? No!

    A big cache? No! An old 256GB SATA SSD will do the job.

     

    For my first configuration I bought an all-in-one setup: an Intel J5005 ITX motherboard with 4 x 2.5" 1000GB SATA disks. A very cheap motherboard, around 110€. The disks can be found easily on the second-hand market; everybody replaces the 2.5" HDD in their laptop with an SSD, so you can find brand new 2.5" 1TB disks for less than 30€. 2.5" disks are slower than 3.5" but very silent, and they consume just a couple of watts during use and around 0.1W at idle. Perfect for my usage.

    It worked just fine with 4 disks. The disks ran at their maximum speed (average 90MB/s, max 120MB/s, min 60MB/s depending on where you read/write on the disk) on the 4 onboard SATA connectors. The CPU has enough power to run everything easily and simultaneously. I totally recommend this configuration if you only plan to use 4 disks, and if your case allows it, because my HDPlex H5 is a bit small and it is difficult (not impossible) to fit more than 4 disks in it.

    But very soon I needed to add more disks, and I jumped to 6 disks, then 8. It was the beginning of a lot of issues.

     

    First and biggest problem: the Celeron J5005 is limited in terms of PCIe lanes. Just 6 are available: 2 are used by the SATA controllers and 1 by the onboard PCIe x1 connector; the 3 others are used by the network card, USB, etc... So the more disks I add, the more this single PCIe lane is shared.

    So whatever I do, only 1 PCIe lane is used for ALL disks beyond the 4th. With 6 disks, the read/write speed was around 80MB/s. With 8 disks, it was around 50-60MB/s, etc...
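
    (Back-of-envelope, my own rough math: a PCIe 2.0 x1 link is about 500 MB/s raw, roughly 400 MB/s usable, and that budget is shared by every disk sitting behind the lane. During a parity check, with 4 disks behind that one lane, that is at best ~100 MB/s per disk before controller overhead, and it only gets worse as you add disks. The speeds above are measurements, not a calculation.)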

     

    It was time to invest in a real SAS controller (LSI 9xxx series HBA, PCIe x4, etc.). I found one easily on eBay for $20 and waited 3 weeks to receive it.

     

    This card was not usable with my motherboard, which only had 1 PCIe x1 extension slot. So let's buy a new motherboard/CPU!
    And my brightest idea of the year was to buy a Celeron J4105 embedded motherboard in mATX format with 1 PCIe (2.0) x16 slot and 1 PCIe (2.0) x1 slot. With the PCIe x16 slot I could use my new LSI card and really exploit this powerful controller, instead of sharing a single PCIe 2.0 x1 lane across all disks. But of course it did not work at all: the damn PCIe x16 slot runs at PCIe x1 speed (it was time to read the Intel specifications on intel.com and the ASRock specifications). So the situation was exactly the same as before, but a lot more expensive, because I had bought the LSI controller, new cables and a new motherboard... for nothing... congrats me!!!

    Thanks to eBay and Leboncoin, I was able to resell this unusable motherboard and not lose a lot of €.

     

    Maybe you are asking why I wanted to change my configuration. I said just before that I do not need fast disks, a fast CPU, etc., so why do all that? It was slower, but acceptable, you might think!!

    Answer: with 8TB on 8 disks, the weekly parity check ran for 16 to 20 hours!!! Downloading torrents at 100MB/s used almost 100% of the bandwidth of my PCIe lane. My configuration was a single-threaded server. Even Pi-hole was slow during disk checks or heavy torrent downloads, and sometimes not able to answer DNS in time :)

    Everything depended on this PCIe x1 lane used by ALL processes on my server.

     

    From a very usable, cheap and low-cost server with 4 disks, I jumped to a nightmare server, most of the time stuck trying to deal with existing processes instead of responding to new ones!!! Moving everything to the cache helped a bit BUT :) my SSD cache was on a SATA port, so it helped just a bit... or not at all, difficult to say.

     

    After this amazing experience of making the same mistake twice, I planned to really build an extendable configuration.

    Instead of buying an embedded Intel 10W Celeron platform, I chose an Intel T series at 30W max.

     

    It took me weeks to find a good opportunity at the right price; not a lot of people sell them. I bought an Intel i5-6400T for 60€.

    Then I bought a brand new GA-H110M-S2H motherboard for 50€, with 1 real PCIe (3.0) x16 slot and 2 PCIe (2.0) x1 slots.

    Now I can really use my LSI controller.

     

    After all these changes and disappointments, it was also time to maybe stop using the SATA SSD cache disk. I had a 512GB Toshiba NVMe SSD from an old laptop sleeping somewhere, so I bought a PCIe x1 adapter card for NVMe (AliExpress, $8, 3 weeks delivery).

     

    And then the dream came true. Everything was running perfectly. Today, with 8 disks, I do not even use the onboard SATA controllers. All disks are plugged into my LSI 9211 controller, all running at their maximum speed, and the controller's PCIe (2.0) x4 link has enough bandwidth to absorb all activity (Plex/torrents/Pi-hole, etc.) simultaneously on different disks. Using a cache disk on a PCIe (2.0) x1 lane is also a better idea than a cache on SATA: the bandwidth is a lot larger and it has direct access to the CPU and bridge. I noticed a BIG improvement immediately when I switched.

     

    So today I have a working configuration with 8TB on 8 disks. Very silent and low power usage (40-45W under load, 29W idle). With the help of powertop and ASPM enabled, I reduced the idle usage to 25W.

    I can say that the cost of an Intel T series + motherboard is the same as a brand new Celeron J5005/J4105 embedded on a motherboard. So DO NOT BUY an embedded Intel Celeron if you imagine using more than 4 disks... or use 3.5" disks so you can really increase your storage without adding new disks (but bye bye silence and low power usage).

     

    The LSI 92xx controller is perfect: a must-have, and cheap, for more than 4 SATA disks. SATA controllers on a PCIe x1 lane are not that bad, but they completely strangle the bandwidth available to your disks. If you really have simultaneous processes, you feel the difference immediately.

     

    Same for the cache: prefer an SSD on a PCIe adapter card. The bandwidth is a lot higher than on SATA. I immediately saw a big difference when I switched to this cache.

     

    Overall the configuration, without the case, cost less than 350€ for 8 disks, CPU/motherboard/RAM and LSI controller + PCIe NVMe cache.

    The case cost 300€!!! But I cannot compromise on design (and on my wife asking me what this ugly box under the TV is).

     

    I recently added a DVB-T tuner card to record TV content. It works perfectly. I recommend the TBS 6281 SE, perfect to use with Plex.

     

    Hope it helps.

     

     

     

  21. I have never used Emby yet... BTW, why did you jump from Plex to Emby?

     

    My feedback: Plex is really a wonderful solution for movies, TV shows (rips and already recorded content...), cartoons, etc... I notice the DVB and live TV side is not at the same level of maturity and integration... some limitations with the DVB are a bit annoying. But overall I really love Plex.

  22. It works perfectly. The TBS 6281 SE dual DVB-T tuner is detected perfectly with a kernel provided by Unraid.

    I simply used the Unraid DVB plugin provided at

    https://forums.unraid.net/topic/46194-plugin-linuxserverio-unraid-dvb/

    or directly https://github.com/linuxserver/Unraid-DVB-Plugin

     

    I just installed the kernel with TBS support, restarted my server, and the card was immediately detected.

     

    Then I just added the device /dev/dvb to the Plex Docker config.
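
    (In Unraid this is just an extra Device entry in the container template; on the command line the equivalent would be the standard --device flag. The image name and paths below are my guess at a typical linuxserver setup, adjust to yours:)

    # pass the DVB adapters through to the Plex container
    docker run -d --name=plex \
      --device=/dev/dvb \
      -v /mnt/user/appdata/plex:/config \
      lscr.io/linuxserver/plex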

     

    In Plex, I simply did a full channel scan, configured the calendar a bit, etc... and in less than 5 minutes I was able to record everything.

     

    I tested this configuration with an Intel i5-6400T and a Celeron J4105. The results are completely different.

     

    With the i5-6400T, recording and transcoding is not even noticeable (less than 10% load...).

    With the J4105, recording and transcoding require more than 20-30% of the CPU. Meanwhile, if you are also playing content, you jump to 100%!!

     

    My main concern was multitasking. Plex with a Celeron J4105 (or 4005 or 5005... same thing) has multiple limitations. The main one: the Quick Sync feature is not active (despite Intel saying it works). Consequence: playing content that requires transcoding while recording DVB pushes the platform to its limits.

    With the Intel i5-6400T... you can do all that and a lot more at the same time.

     

    So pay attention to your architecture if you plan to use a TBS DVB tuner.

     

  23. I've just bought a TBS 6281SE dual tuner, PCIe x1. I will let you know later how it works.

    If anyone has something to share about it, please do not hesitate.