tapodufeu

Members
  • Content Count

    38
  • Joined

  • Last visited

Community Reputation

5 Neutral

About tapodufeu

  • Rank
    Advanced Member

  1. I have built an HTPC config which sits just under my TV in my living room. Very silent, low power (less than 50W, 30W average) and very cheap. First of all, when I say cheap, I mean everything is cheap except the case. I chose a beautiful HDPlex H5 2nd gen fanless case with a metallic black finish. Very silent because it is fanless, but also built from heavy metal so the noise of the hard drives cannot be heard outside the case. The case cost 300€ because I bought extra hard drive racks; it is delivered with racks for 2x3.5" or 4x2.5" drives, and I installed 12x2.5"....

For the power supply, I do not need a PSU with more than 80 or 90W, so I chose an 80W picoPSU, found on Amazon for 20€. That kind of picoPSU provides just 1 SATA and 1 Molex cable... so I let you imagine the power cable extensions you have to add. The motherboard is a Gigabyte H110 with 8GB of RAM. The CPU is an i5-6400T; please note that this kind of CPU requires the separate CPU power cable. I have also installed: 1x TBS 6281 DVB-T tuner card, 1x LSI 9211-8i HBA, 1x NVMe PCIe adapter card with a 512GB Toshiba NVMe drive, and 12 HDDs (only 8 were installed when I took the pictures). They are all connected to the LSI card; the 4 remaining drives are connected directly to the motherboard. All drives are mounted on racks with plastic O-rings to prevent vibration noise, and the racks are stacked vertically.

Finally, everything works perfectly and consumes 30W with 8 drives and around 32W with 12 drives (with Nextcloud and OpenVPN running). During a parity check it draws 52W. DVB-T recording: 38W. Plex only: 35W. Plex + TBS: 40-42W.

In the pictures you can see a mix of WD (CMR) and Seagate (SMR) drives. I have since resold the WD drives; they underperformed compared to the Seagate drives. Now I have only 12 ST1000LM035 drives (it took me 1 month to find brand new or almost new Seagate drives on the second-hand market... maximum 20€ per drive). I will maybe change the motherboard to a full ATX board with a Z170 or B150 chipset in order to get more PCIe slots. I am missing a proper Ethernet router, so I will surely add a quad-port Intel NIC and use it with a pfSense VM. With a bit of DIY work I can also add 4 more 2.5" drives, but then I will also need a second LSI 9211-8i card (which requires one more PCIe 2.0 x8 slot). In total I am around 600€ for the full configuration, completely silent.

Please note that the WD drives are quieter than the Seagate drives and consume less power... but they are also a lot less efficient. Completely avoid buying SMR drives from WD (for example the WD10SPZX...). You can use the WD10JPVZ; it is about as efficient as the ST1000LM024 (35% slower than the LM035). The ST1000LM048 performs better (10-15% slower than the LM035). The best 5400 RPM drive today is the ST1000LM035!! I have not tried the LM049 (7200 RPM), but you can easily find them on the second-hand market at the same price as the 5400 RPM drives.
  2. Thanks for your feedback. I understand my issue now. You are totally right, it is the NAT feature of OpenVPN. I tried disabling it, and then it behaves exactly like ZeroTier. So when I am at home with just the fiber modem/router from my ISP (no advanced routing inside), OpenVPN is my only option: with NAT included in the OpenVPN server I can do whatever I want. It would be a great option to add a kind of "admin" access to ZeroTier with NAT included... I would have completely removed OpenVPN and used ZeroTier only. This is exactly the kind of option that devops and infrastructure managers need. For example, since March, with COVID (fortunately not every day), I have connected to and switched VPNs maybe 30 times per day!!
  3. You are totally right if I want to completely interconnect both LANs, and I will try to do it; you gave me a very interesting idea. But in my case, I just want to access, from my laptop (with the ZeroTier client), devices on the remote LAN such as printers, NAS, routers, etc. For sure, if remote devices on 10.10.20.x want to connect to me (and they have no ZeroTier client running), routes must be set properly to pass through a peer with the ZeroTier interconnect. For example, Tower2 is 10.10.20.10 and has an OpenVPN server (Docker). If I connect with the OpenVPN client from my laptop (on 10.10.10.x) to Tower2 (10.10.20.10), I can access ALL devices on the LAN 10.10.20.x. If I use ZeroTier, only the server is accessible. Apparently many people get it to work properly, but not me... and I really wonder what I am missing.
  4. On the server 10.10.10.10, those routes already exist:

Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref Use Iface
default         GEN8            0.0.0.0          UG    0      0   0   br0
10.10.10.0      0.0.0.0         255.255.255.128  U     0      0   0   shim-br0
10.10.10.0      0.0.0.0         255.255.255.0    U     0      0   0   br0
10.10.10.128    0.0.0.0         255.255.255.128  U     0      0   0   shim-br0
10.10.20.0      Tower-2.local   255.255.255.0    UG    0      0   0   ztmjfbsomh
172.17.0.0      0.0.0.0         255.255.0.0      U     0      0   0   docker0
172.18.0.0      0.0.0.0         255.255.0.0      U     0      0   0   br-853fe7d63fa3
172.19.0.0      0.0.0.0         255.255.0.0      U     0      0   0   br-312be3d41a1c
192.168.191.0   0.0.0.0         255.255.255.0    U     0      0   0   ztmjfbsomh

So we can see that the route to 10.10.20.x exists, as does the route to 192.168.191.x. The G flag (gateway) on 10.10.20.0 means IP packets are redirected to the ZeroTier interface on 10.10.20.10:

root@Tower:~# route
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref Use Iface
default         livebox.home    0.0.0.0          UG    0      0   0   br0
10.10.10.0      Tower.local     255.255.255.0    UG    0      0   0   ztmjfbsomh
10.10.20.0      0.0.0.0         255.255.255.0    U     0      0   0   br0
172.17.0.0      0.0.0.0         255.255.0.0      U     0      0   0   docker0
172.18.0.0      0.0.0.0         255.255.0.0      U     0      0   0   br-83a6ea76a1ec
192.168.191.0   0.0.0.0         255.255.255.0    U     0      0   0   ztmjfbsomh

AFAIK, it looks good on that part. I am not sure about masquerading either. If I remember my telco studies correctly (I am a telecom engineer, but I never worked in telecom)... it should not be required.
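In case it helps anyone checking the same thing, this is how I am verifying whether masquerading and forwarding are actually active (standard commands; the packet counters tell you whether the rules are matching anything):

# is IPv4 forwarding enabled? (should print 1)
sysctl net.ipv4.ip_forward
# show the NAT and FORWARD rules with packet/byte counters
iptables -t nat -L POSTROUTING -n -v
iptables -L FORWARD -n -v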
  5. I am trying to set up LAN-to-LAN access, but it constantly fails and I am running out of solutions. I have 2 Unraid servers with the ZeroTier Docker container installed. ZeroTier itself works correctly: all peers can connect to the other peers. In this network I have 3 peers: the 2 servers and my laptop with ZeroTier installed. Then I have a dozen computers, routers, NAS and printers on each LAN. Each server is in a private LAN: 10.10.20.x and 10.10.10.x. 10.10.20.10 is the server running Docker in the LAN 10.10.20.x, and 10.10.10.10 is the server running Docker in the LAN 10.10.10.x. My laptop is also in 10.10.10.x (on weekends) or 10.10.20.x (during the week), and sometimes during the week it is connected to an external network (cell phone or private wifi). My problem is that I can only connect to the servers, not to the other devices in each LAN. On both servers I have enabled IP forwarding and updated iptables as follows:

PHY_IFACE=eth0; ZT_IFACE=ztmjfbsomh
iptables -t nat -A POSTROUTING -o $PHY_IFACE -j MASQUERADE
iptables -A FORWARD -i $PHY_IFACE -o $ZT_IFACE -j ACCEPT
iptables -A FORWARD -i $ZT_IFACE -o $PHY_IFACE -j ACCEPT

ZT_IFACE is the name of my ZeroTier network adapter, and it is the same name on the 2 servers. For example, when I try to ping the WAN router of the remote LAN, 10.10.20.1: it fails from 10.10.10.10, it fails from 10.10.10.160, and it works from 10.10.20.10 (of course, that is the same LAN, no ZeroTier involved). When I ping the ZeroTier server of that LAN (10.10.20.10): it works from 10.10.10.10 and it works from 10.10.10.160. So both servers are interconnected successfully over the ZeroTier network, and from my laptop I can successfully access the Unraid web interfaces. Only LAN access is not working. ZeroTier works well to interconnect peers that have ZeroTier running. What am I missing? The ZeroTier Docker containers are running on the host network. Please help.
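Edit: for completeness, this is roughly how I enabled IP forwarding on both servers (standard Linux sysctl; I assume on Unraid it has to be re-applied at boot, for example from the go file, I have not checked persistence):

# enable IPv4 forwarding between interfaces, effective immediately
sysctl -w net.ipv4.ip_forward=1
# verify it is on (should print net.ipv4.ip_forward = 1)
sysctl net.ipv4.ip_forward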
  6. Yes, all are on PCIe slots... and I was investigating just a couple of minutes ago and I found:

01:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
    Subsystem: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
    ...
    LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
04:00.0 Multimedia controller: TBS Technologies DVB Tuner PCIe Card
    Subsystem: Device 6281:0002
    ...
    LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
05:00.0 Non-Volatile memory controller: Toshiba Corporation Device 011a (prog-if 02 [NVM Express])
    Subsystem: Toshiba Corporation Device 0001
    ...
    LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk+

ASPM is only enabled for my NVMe drive on PCIe. Any idea? In dmesg I found the following lines:

[ 0.151347] ACPI FADT declares the system doesn't support PCIe ASPM, so disable it
[ 19.362570] r8169 0000:02:00.0: can't disable ASPM; OS doesn't have ASPM control
[ 19.385178] mpt3sas 0000:01:00.0: can't disable ASPM; OS doesn't have ASPM control

r8169 is the network card, and I am happy ASPM is disabled for it. mpt3sas is the LSI HBA. I see some posts about Linux kernel ASPM issues with PCIe... still investigating. About the power usage of the HBA controller, I found this post listing many LSI cards: https://www.servethehome.com/lsi-host-bus-adapter-hba-power-consumption-comparison/ For the TBS 6281 SE, when not in use, it is difficult to say, maybe 1 or 2 watts; my wattmeter is not accurate enough, and I don't really see any change with or without it. When recording + transcoding with Plex (HW transcode), it consumes approx 10W. Recording without transcoding failed: I was not able to play the file, even with VLC. It was the first time I tried, so maybe I used the wrong settings... Posts on the internet report up to 25W for the TBS DVB-T card; I was not able to reproduce that consumption at all. I tested on French DVB-T on the channel France 5.
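If anyone wants to dig into the same thing, these are the commands I have been using to look at ASPM state (standard Linux tooling; the pcie_aspm=force boot parameter is something I have only seen suggested online and have not validated on my board, so treat it as an experiment):

# show the current ASPM policy used by the kernel
cat /sys/module/pcie_aspm/parameters/policy
# show per-device link power management capability and status
lspci -vv | grep -E "ASPM|LnkCtl"
# optional experiment: force ASPM even when the FADT claims it is unsupported,
# by adding this to the kernel boot line (use with caution):
#   pcie_aspm=force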
  7. Bingo... just to be sure, I restarted my server and checked the BIOS. Gigabyte Platform Power Management was disabled!! I have also disabled the embedded audio controller of the motherboard. So now, just a couple of minutes after reboot, I am around 25-28W. If I spin down all disks, 24-25W. I already posted the results of undervolting my CPU under load in my previous post.
  8. Just tested powertop and undervolt. Great tools! My configuration is described in my signature (and in this post). With undervolt --gpu -75 --core -100 --cache -100 --uncore -100 --analogio -100 and your startup script to enable autosleep mode on devices, I see some results, but not that much. First, 15 minutes after reboot, my server uses 28-30W; before the modification it was 29-31W. But now my power meter seems a bit "crazy", because the power usage varies a lot more than before. It is even difficult to read a number, it changes so quickly. Be careful, my "idle" state is maybe not yours. My server never sleeps. I always have my seedbox running (torrents) + Nextcloud + VPN, with a dozen users behind it (my family's phones + laptops, etc.). They do not add that much server load, but they prevent the disks from entering sleep mode. For example, I hardly ever see more than 2 data disks in sleep mode; often I can see one, and most of the time none are asleep. Under load (recording live TV + watching 1 movie with HW transcoding), my server uses 39-42W; before the modifications it was around 42-44W. With no HW transcoding, up to 50W. I already did some tests with spin-down, and even if I stop the array, power usage is around 25-27W. So with your modifications I am maybe around 24-26W now... Most of the power optimization is not in the disks (I have 8x2.5" disks). Maybe I missed something, but powertop does not show me watt usage per device. I would really like to estimate the power usage per device, for example for my LSI HBA controller and my TBS DVB-T tuner.
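For reference, this is roughly what I run (a sketch; the undervolt offsets are the ones from my own testing above, and your CPU may not be stable with the same values):

# apply the voltage offsets (in mV) mentioned above
undervolt --gpu -75 --core -100 --cache -100 --uncore -100 --analogio -100
# check what is currently applied
undervolt --read
# let powertop apply all of its "good" power-saving tunables in one go
powertop --auto-tune
# or generate an HTML report to see which devices and processes keep waking the CPU
powertop --html=powertop-report.html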
  9. If I stop the torrents and wait 15 minutes, I am at 26.6W. With 8 drives on the LSI + SSD cache on SATA, you are in exactly the same configuration I was in a couple of months ago. I have also tried 6 drives on the LSI + 2 parity drives on onboard SATA + SSD on onboard SATA; you improve performance a bit, but not that much. Especially during heavy load or a parity check, you do not see any difference. But at least processes stop hanging, haha. For my use case, a J5005 and an i5-6400T cost the same to set up; the i5 costs 8 watts more in use (so 8€ per year), but in return I have an NVMe drive for cache, an LSI HBA and a TBS DVB-T tuner card. You know, I wonder if the T series is more "adaptive" than the J series. Maybe the power usage of the CPU can drop further with a T series than with the J series (which maybe always stays around 8 to 10W). I will add 4 new 2.5" 1TB drives in a couple of weeks. Once you start using Unraid for a NAS, you cannot stop, LOL. The next step is of course to use 2.5" 2TB drives; they are becoming affordable. I have also posted a pic showing my HTPC server under my TV, if people want to know how it looks. Great, I'll take a look at powertop!! Very interesting!!
  10. If I remember correctly, the HBA consumes 5 to 7 watts in use, based on the specifications I found. The 22W figure was with the SATA RAID controller on the J5005 (4-port Marvell controller) and 6 disks; I stayed in that configuration for a couple of months. With the i5-6400T, the LSI and 8 disks, it is more like 25-27W. Then I added the TBS and the NVMe disk, and now I am between 29 and 31 watts. An NVMe disk consumes a lot of power compared to a SATA disk. During a recording + watching session, I see jumps to 42-44 watts. I have an 80W picoPSU. When my server is off, I can see 1.3W of usage, just from the PSU. So under load, maybe 4 to 5 watts are used by the PSU itself, perhaps up to 10 watts at 80 watts of load. My picoPSU is not Gold/Platinum/Silver etc.; it is Chinese made, and my watt meter is not very accurate either (10 bucks on Amazon...). Only 4 disks are plugged into your HBA? Then you won't see a bottleneck yet. Are you using the ASRock J5005-ITX? PS: I have edited my post and my signature to use the correct numbers. Thanks CSO1.
  11. I hope my experience will help some others, especially if you plan to buy a motherboard with an embedded Intel Celeron CPU. My objective was to build a silent, low-wattage HTPC for home usage. In Europe (I am in France), 1W running all year costs roughly 1€ per year. So if I built a configuration drawing 250W, the cost would be 250€ per year, which is amazing just for Plex!! Netflix + Amazon Prime are cheaper per year. My usage is 90% Plex/Emby & seedbox, and 10% Nextcloud and VPN.

So the first thing I did was to buy an HTPC case from HDPlex. I chose the H5 gen 2 case: https://hdplex.com/hdplex-h5-fanless-computer-case.html Not cheap, but beautiful. I see it every day under my TV, so that matters. I do not need a lot of power or speed. Most of the time I am the only user of this server, but occasionally 2 or 3 of us watch videos at the same time. So do I need the fastest disks? No. The latest CPU? No. 10Gb/s networking? No. Critical response times? No. A big cache? No; an old 256GB SATA SSD will do the job.

For my first configuration I bought an all-in-one Intel J5005 ITX motherboard with 4 SATA 2.5" 1TB disks. Very cheap motherboard, around 110€. The disks can be found easily on the second-hand market; everybody replaces the 2.5" HDD in their laptop with an SSD, so you can find nearly new 2.5" 1TB disks for less than 30€. 2.5" disks are slower than 3.5" but very silent, and they only consume a couple of watts in use and around 0.1W at idle. Perfect for my usage. It worked just fine with 4 disks. The disks ran at their maximum speed (average 90MB/s, max 120MB/s, min 60MB/s depending on where you read/write on the disk) on the 4 onboard SATA connectors, and the CPU has enough power to run everything simultaneously without trouble. I totally recommend this configuration if you only plan to use 4 disks, and if your case allows it; my HDPlex H5 is a bit small, and it is difficult (not impossible) to fit more than 4 disks in it.

But very soon I needed to add more disks, and I jumped to 6 and then 8 disks. That was the beginning of a lot of issues. The first and biggest problem: the Celeron J5005 is limited in PCIe lanes. Only 6 are available: 2 are used by the SATA controller, 1 by the onboard PCIe x1 connector, and the other 3 by the network card, USB, etc. So the more disks I added, the more that single PCIe lane was shared; whatever I did, only 1 PCIe lane served ALL the disks beyond the 4th. With 6 disks, read/write speed was around 80MB/s. With 8 disks, read/write speed was around 50-60MB/s, and so on. It was time to invest in a real SAS HBA (LSI 9xxx series, PCIe x4, etc.). I found one easily on eBay for $20 and waited 3 weeks to receive it. But this card was not usable with my motherboard, which only had 1 PCIe x1 extension slot.

So let's buy a new motherboard/CPU! And my brightest idea of the year was to buy a J4105 embedded-Celeron motherboard in mATX format with 1 PCIe (2.0) x16 slot and 1 PCIe (2.0) x1 slot. With the x16 slot, I could use my new LSI card and really exploit this powerful controller, instead of sharing a single PCIe 2.0 x1 lane across all disks. But of course it did not work at all: the damn x16 slot runs at x1 speed (that was the moment to actually read the Intel specifications on intel.com and the ASRock specifications). So the situation was exactly the same as before, but a lot more expensive, because I had bought the LSI controller, new cables and a new motherboard... for nothing. Congrats to me!!! Thanks to eBay and leboncoin, I was able to resell this unusable motherboard without losing much money.

Maybe you are asking why I wanted to change my configuration at all. I said just before that I do not need fast disks, a fast CPU, etc., so why do it? It was slower, but acceptable, you might think. The answer: with 8TB on 8 disks, the weekly parity check ran for 16 to 20 hours!!! Downloading torrents at 100MB/s used almost 100% of the bandwidth of my single PCIe lane. My configuration had become a single-threaded server. Even Pi-hole was slow during disk checks or heavy torrent downloads, and sometimes unable to answer DNS queries in time. Everything depended on this PCIe x1 lane used by ALL the processes on my server. From a very usable, cheap and low-cost server with 4 disks, I had jumped to a nightmare server, spending most of its time stuck dealing with existing processes instead of responding to new ones!!! Moving everything to the cache helped a bit, BUT my SSD cache was on a SATA port, so it only helped a little... or not at all, it is difficult to say.

After this amazing experience of making the same mistake twice, I planned to build a genuinely extendable configuration. Instead of an embedded 10W Intel Celeron platform, I chose an Intel T series CPU at 30W max. It took me weeks to find a good opportunity at the right price; not many people sell them. I bought an Intel i5-6400T for 60€, then a brand new Gigabyte GA-H110M-S2H motherboard for 50€ with 1 real PCIe (3.0) x16 slot and 2 PCIe (2.0) x1 slots. Now I can really use my LSI controller. After all these changes and disappointments, it was also time to stop using the SATA SSD cache disk. I had a 512GB Toshiba NVMe SSD from an old laptop sleeping somewhere, so I bought a PCIe x1 adapter card for NVMe (AliExpress, $8, 3 weeks delivery). And then the dream came true. Everything runs perfectly. Today, with 8 disks, I do not even use the onboard SATA controllers. All disks are plugged into my LSI 9211 controller, all running at their maximum speed, and the PCIe (2.0) x4 link of the controller has enough bandwidth to handle all activity (Plex/torrents/Pi-hole etc.) simultaneously on different disks. Using a cache disk on a PCIe (2.0) x1 lane is also a better idea than a cache on SATA: the bandwidth is a lot larger and it has direct access to the CPU and chipset. I noticed a BIG improvement immediately when I switched.

So today I have a working configuration with 8TB on 8 disks, very silent and low wattage (40-45W under load, 29W idle). With the help of powertop and ASPM enabled, I reduced it to 25W at idle. I can say that the cost of an Intel T series CPU + motherboard is the same as a brand new Celeron J5005/J4105 embedded on a motherboard. So DO NOT BUY an embedded Intel Celeron if you imagine using more than 4 disks... or use 3.5" disks so you can increase your storage without adding new disks (but say goodbye to silence and low power usage). The LSI 92xx controller is perfect: a must-have, and cheap, for more than 4 SATA disks. SATA controllers on a PCIe x1 lane are not that bad, but they completely strangle the bandwidth available to your disks; if you really run simultaneous processes, you feel the difference immediately. Same for the cache: prefer an SSD on a PCIe adapter, the bandwidth is a lot higher than on SATA, and I saw a big difference immediately when I switched. Overall the configuration, without the case, cost less than 350€ for 8 disks, CPU/motherboard/RAM, the LSI controller and the PCIe NVMe cache. The case cost 300€!!! But I cannot compromise on design (and on my wife asking me what this ugly box under the TV is). I recently added a DVB-T tuner card to record TV content. It works perfectly; I recommend the TBS 6281 SE. Perfect to use with Plex. Hope it helps.
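One lesson from the J4105 episode: you can see whether a mechanically x16 slot is actually running at x1 directly from Linux, without trusting the spec sheet. Something like this (standard lspci; the device address is just the one from my own box):

# show supported vs negotiated link speed/width for the HBA
lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"
# LnkCap is what the card supports, LnkSta is what was actually negotiated;
# "Width x1" in LnkSta on a physically x16 slot means you only get one lane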
  12. I have never used Emby yet... by the way, why did you jump from Plex to Emby? My feedback: Plex is really a wonderful solution for movies, TV shows (ripped or already recorded...), cartoons, etc. I notice the maturity of the DVB and live TV features is not at the same level of integration... some limitations with the DVB side are a bit annoying. But overall I really love Plex.
  13. It works perfectly. The TBS 6281 SE dual DVB-T tuner is detected perfectly with the kernel provided by Unraid. I simply used the Unraid DVB plugin from https://forums.unraid.net/topic/46194-plugin-linuxserverio-unraid-dvb/ or directly https://github.com/linuxserver/Unraid-DVB-Plugin : I installed the kernel build with TBS support, restarted my server, and the card was immediately detected. Then I just added the device /dev/dvb to the Plex Docker config. In Plex, I simply did a full channel scan, configured the guide a bit, etc., and in less than 5 minutes I was able to record everything. I tested this configuration with an Intel i5-6400T and a Celeron J4105, and the results are completely different. With the i5-6400T, recording and transcoding are barely noticeable (less than 10% load...). With the J4105, recording and transcoding require more than 20-30% of the CPU; meanwhile, if you are also playing content, you jump to 100%!! My main concern was multitasking. Plex with a Celeron J4105 (or J4005 or J5005... same thing) has multiple limitations, the main one being that the Quick Sync feature is not active (despite Intel saying it works). Consequence: playing content that requires transcoding while recording DVB pushes the platform to its limits. With the Intel i5-6400T you can do all of that and a lot more at the same time. So pay attention to your architecture if you plan to use a TBS DVB tuner.
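For what it's worth, passing the tuner to the container just means mapping the DVB device node. Roughly the docker run equivalent of my Unraid template would look like this (a sketch only; it assumes the linuxserver/plex image and host networking, and the appdata/media paths are examples to adapt to your own shares):

# illustrative docker run equivalent of the Unraid Plex template with the tuner mapped
docker run -d \
  --name=plex \
  --net=host \
  --device=/dev/dvb:/dev/dvb \
  -e VERSION=docker \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  linuxserver/plex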
  14. I've just bought a TBS 6281 SE dual tuner, PCIe x1. I'll let you know later how it works. If anyone has something to share about it, please do not hesitate.
  15. Hey guys, same for me. I wonder if I can add a fresh new feature to my favorite Unraid NAS: DVR, etc. I have already used Plex for years and I have premium access. Now I am looking for a PCIe x1 DVB-T card to add to my server so I can use DVR with my Samsung TV. No HDHomeRun or any other kind of network tuner: I am short on power plugs, power cords and Ethernet ports... and it is absolutely horrible to see tons of cables around/behind your TV. So I just need a simple DVB tuner card to be able to record a live DVB-T stream. I have the latest Unraid with the Plex Docker container. Can anyone share their experience in this thread, good or bad? Thanks.