Everything posted by jit-010101

  1. There are two parts to this issue: 1. Google Chrome is really becoming a pain when it comes to security vs. obscurity -> switch to Firefox for saner defaults; heck, even Edge is better nowadays. Google is at the point of blocking ad blockers (YouTube), masking search results and blocking various "potentially" unwanted subdomains. Yes, it sucks, but that's just Google nowadays - prepare for the migration sooner rather than later. 2. I would never advise opening a port to expose Unraid or its Docker containers / VMs to the Internet. It's just not architected for that. As soon as a port is open you open a can of worms - there's a ton to consider and prevent that the usual clueless home user cannot and will not see. It's really time-consuming and super risky. Establishing a trusted VPN network with WireGuard or Tailscale is literally the only thing you should ever do here. Tailscale is a lot simpler to set up, mind you.
  2. Seems like you're wasting quite some space there with RAID-Z2, considering it sounds like you don't need dual parity. A big upside of the Unraid array is mixing different drive sizes, that's true - but keep in mind that the parity drive must be at least as large as the largest data drive; you can't have a data drive bigger than parity. So with a single array, if you upgrade to 20 TB you'll have to buy at least 2x to make use of it (or move one 10 TB drive from parity to the array and live with the 10 TB loss for the time being, until you upgrade the array to 20 TB drives too). However, you also have the option of multiple pools with Unraid - since 6.12 introduced ZFS you can, for example, mix different levels of parity for more and less important data. One thing to keep in mind: storage performance by default is not the highest and the system isn't architected for absolute performance. I really like the options and flexibility that Unraid has. Let me throw in something unusual: 8 TB 870 QVO SSDs if you want low-power media storage. Prices are still higher than spinning rust, but they're at a level where you can argue HDDs are becoming obsolete, especially since newer HDDs are increasingly power hogs in standby. That's not something you'd want in a ZFS RAID by default, and it's one of the places where Unraid really shines. You can just give it a go with a trial - the USB boot disk design and the trial (can be extended twice) are really painless and useful (though customization takes a bit of time to get your head around; that's hard to document and comes with experience only).
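The parity sizing rule above boils down to simple arithmetic: with a single parity drive, the largest drive in the set effectively becomes parity and drops out of the usable capacity. A quick sketch (my own helper for illustration, not an Unraid tool):

```shell
# Rough usable-capacity estimate for an Unraid array with single parity:
# the largest drive must be parity, everything else is data.
usable_tb() {
  # args: drive sizes in TB; prints usable data capacity in TB
  local total=0 max=0 d
  for d in "$@"; do
    total=$(( total + d ))
    [ "$d" -gt "$max" ] && max=$d
  done
  echo $(( total - max ))
}
```

So `usable_tb 10 10 20` prints 20: the 20 TB drive holds parity and the two 10 TB drives hold data.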
  3. The Intel N100 has just 9 PCIe 3.0 lanes, and the ASRock N100DC board is a waste of money if you ask me. You can go with the classic Topton/Kingnovy green-PCB N5105 NAS mainboard with onboard ASM1166 and 8 PCIe 3.0 lanes (just 1 less won't make much of a difference); that one at least has everything you might need and is also capable of Plex HW acceleration (sold on Amazon). Neither is a performance monster and they will not scale if you want to do more. If that's your only NAS, I'd really go with the recommendation of LGA1200/LGA1700 instead. Here's some really good insight about that: https://mattgadient.com/7-watts-idle-on-intel-12th-13th-gen-the-foundation-for-building-a-low-power-server-nas/
  4. Keep in mind that the M2 slot on this board is only connected via PCIe 3.0 x1 - so you'll top out at ~1 GB/s. In any case, I've read reports of people having trouble with either the M2 slot or a PCIe card carrying a second ASM1166 controller (there's already an x1-connected ASM1166 for the onboard SATA ports). So if you're going that route, make sure you can return the cards without any trouble or repurpose them. I agree, however, that the ASM1166 is a good option - if you get it to work (I have the same board, just haven't gone down that path yet).
  5. I'm wondering how you manage to hit 110W. My Ryzen Pro 5750G with 64GB of ECC RAM runs at 21W - without a PicoPSU or anything else, and that's with constant load from a Windows VM and quite a lot of Docker containers - though I use only 5x SSDs on that machine, plus parity. Using the AMD iGPU in VMs is practically impossible (there's a good thread on that here, with the general conclusion: hands off; if you plan to do that, take Intel or a low-power eGPU (an RTX 2050, for example)) - but maybe that works better with the new AM5 generation, or simply with a different board, and is more an issue of my particular board. My strong guess would be the 4x4TB HDD pool for Storj? You'd carry that over even if you bought completely new hardware. To me it sounds more like you're trying to force a change of concept without actually getting to the bottom of the core problem. Have you ever worked with powertop and the BIOS settings, or thought about what the problem might be? Do your disks ever spin down? Do you use ASPM? Do you use an efficient power supply, or is there a 1000W monster hanging off it? Also: server mainboards are NOT designed for power saving - with exceptions like Kontron, which achieve it through a clearly reduced component count, i.e. they focus on the essentials instead of trying to be a jack of all trades. You should ask yourself these questions first. I've moved my HDDs out to a second low-power system with a Kingnovy NAS mainboard on an N5105 basis ... though purely in theory it would of course be even more efficient to have everything in one system (not an option for me, e.g. because I want backups strictly separated). It currently draws 11.8W at idle (75% of the day) with 3x HDDs and 2x SSDs in it. Without the optimizations - spindown, ASPM, system/docker/VM on SSD, the Folder Caching plugin for the HDDs and a PicoPSU - it would have run at a constant 40W. Almost 4x, without any special hardware.
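On the spindown point: Unraid sets this in Disk Settings, but the underlying `hdparm -S` encoding is worth knowing - values 1 to 240 mean multiples of 5 seconds. A small sketch of the conversion (the device path in the comment is just a placeholder):

```shell
# hdparm -S encodes standby (spindown) timeouts: values 1..240 mean
# N*5 seconds, so 12 = 1 minute and 240 = 20 minutes. This helper
# converts minutes to that encoding.
spindown_value() {
  # arg: timeout in minutes; prints the -S value
  echo $(( $1 * 60 / 5 ))
}
# Example (not executed here): hdparm -S "$(spindown_value 20)" /dev/sdX
```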
  6. Well, that at least gives you the option to go with Jellyfin instead of Plex, if that's an option for you, because Jellyfin supports the Ryzen 5700G iGPU natively, unlike Plex. That way you could skip the eGPU.
  7. Why the heck would anyone ever buy an I225 of their own free will? Don't you know the giant revision circus Intel pulled, because when you run it at 2.5 Gbit/s the thing falls over due to hardware design faults? This started in 2020 and didn't stop until the I226 in 2023 ... after 5 revisions and a pile of driver updates. They even admitted some of the hardware design faults behind it (at least the first one): https://www.tomshardware.com/news/raptor-lake-motherboard-ethernet-flaw They made a straight cut after that and released the I226 because of all the drama - apparently the I226 also had drama at the start, but they FINALLY tackled it after a few million driver updates. At least I'm not seeing any issues with the ones I do have ... the I226-V is the lowest you should ever get. In fact, if you can, skip 2.5GbE entirely (Realtek apparently has issues here too) and go straight for 10GbE SFP. No, Intel 2.5GbE is not plug and play like their 1GbE NICs were back in the pfSense days ... the same goes for their WiFi NICs: the AX210 is unreliable as crap (albeit the AX200 was good) ... I had to switch back to a Realtek one to get a stable connection again because of constant drop-outs. The days of "buy any Intel and it will be painless" are long gone (well, at least their 10GbE NICs seem to be trouble-free so far).
  8. Welp, wow - that makes no sense; remove my vote for that then - an x16 slot with an x2 interface is just blatantly stupid. Go for LGA1700 then ...
  9. It's all dependent on the board - that's why I ruled out the N100DC too ... not only because it has a poor setup overall but also because of power consumption issues. Make sure to get a good and efficient PSU! I'm personally using the Inter-Tech 88882190 PicoPSU 200W ... however, I only use 3 HDDs with it and 1 NVMe so far ... there's also the HDPlex 250/500W GaN, which are much more efficient than most ATX PSUs at the lower wattage levels too (albeit just as expensive as the Titanium-rated ones). However, ASRock's N100M variant is vastly different - not only because it has a full-sized x16 PCIe port, but also because power consumption with a little bit of tuning can easily be below 10W (at least that's mentioned in multiple reviews on Amazon.de). So my vote would go for that if you can fit an mATX board in. ASRock isn't really known as a vendor that produces many good low-power boards - quite the opposite: they're rather known for having a built-in C3-state wall that you can't cross ... Edit: Dafuq - x16 port @ x2. Gigabyte and ASUS are better known for boards that can achieve lower power consumption, reach lower C-states more easily and have lower overall wattage. Otherwise, if you really want LGA1700, maximum customizability and top of the line, you could go for a Kontron board. Those are mentioned many times in regard to sub-10W consumption too - but they're quite expensive, similar to Supermicro. As for the Excel spreadsheet list ... I found that to be of almost no use, because the components there are hardly comparable - not only are many of them hardly performant, but e.g. none of them use many SATA ports or show the need for them. It's more or less a random/unchecked list of the lowest low-power small desktop/SFF systems ...
If I needed anything like that, I'd go for an OrangePi 5 / Rock Pi 5 RK3588S SBC with an embedded M2 NVMe slot, which reaches <2W idle and ~6W at load - with an M2 slot and 8 cores/16GB for a competitive price nowadays. Or, heck, x86-based: an HP/Dell/Lenovo SFF repurposed from eBay for next to nothing, if price is a concern ... with similar power use levels.
  10. Well, I reckon it's the SATA controller / PCIe device / mainboard component differences at play here in terms of hardware. Maybe it's related to the firmware of the controllers too ... apparently ASRock is known for blocking anything lower than C3 on many of their boards (I can confirm that for my AM4 mainboard, even with ASPM enabled), so there seems to be a limit to what you can do to improve idle consumption. But apparently not all China boards are bad either! I have this one here: https://www.amazon.de/gp/product/B0BYVMNMR9/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 My board (green PCB - BKHD-NAS-XXXX label onboard) is almost the same as the Changwang NAS board (black PCB - CW-NAS-XXXX label onboard); the biggest difference was that mine has an ASM1166 SATA controller for the 6x onboard SATA ports. All PCIe slots had ASPM disabled and I was stuck at C3 - after modding the BIOS and then enabling it per port (auto), I'm idling at 11.8W now, successfully moving into the C8 state with the N5105 with 3 HDDs and 1 NVMe connected (spindown, obviously) after all the powertop tuning (that's with a PicoPSU 200W, mind you).
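If you want to check where your own board stands before any BIOS modding, the kernel exposes the active ASPM policy via sysfs. A guarded sketch (the paths are the standard Linux ones; availability depends on the kernel build):

```shell
# Print the kernel's current ASPM policy, degrading gracefully on
# systems where the pcie_aspm module entry isn't exposed.
aspm_policy() {
  if [ -r /sys/module/pcie_aspm/parameters/policy ]; then
    # The active policy is shown in [brackets], e.g. default [powersave]
    cat /sys/module/pcie_aspm/parameters/policy
  else
    echo "unknown (pcie_aspm not available)"
  fi
}
aspm_policy
# Per-device link state (needs root): lspci -vv | grep -E 'ASPM (Enabled|Disabled)'
```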
  11. I haven't yet tried what exactly the GPIO settings are for ... but in theory I can well imagine they're meant for configuring the pins of the headers here - unfortunately there's no manual for it. The SSD is definitely also connected via PCIe x1 - see (available in the BIOS): ... but whether it makes much sense to bolt on another 5/6 SSDs here is something I very much doubt, especially with SSDs ... that takes a noticeably bigger toll again. Even if you can surely tweak something via the menu that hasn't been unlocked yet: in that case it would be much more sensible to put a suitable card into the PCIe x2 port - and then you could complain again that there's no room left for a 10GbE NIC. There's just no jack of all trades ... You also have to consider the price/performance for 150€, and no N100/200/305 will push into high-end territory in terms of maximum storage performance - 9 PCIe Gen3 lanes are not a huge step forward compared to 8 PCIe Gen3 lanes ... and you'd need a 10GbE NIC for that as well. That's much more the territory of Kontron/Supermicro/ASRock Rack high-end without onboard SoCs ... https://www.intel.de/content/www/de/de/products/sku/231805/intel-core-i3n305-processor-6m-cache-up-to-3-80-ghz/specifications.html By the way, there's a direct successor to this board from the same manufacturer with 2x M2 ports; maybe there are also other variants of the CW-NAS board with newer controllers that solved this differently/better ... anyway ... let's see whether I break it via undervolting/modding, or get below 14.4W idle and deeper than C3. Edit: There's even a manual on the manufacturer page linked above - with some helpful info and exploded diagrams, especially regarding the headers. None of this was supplied by Kingnovy ... so that would perhaps explain the GPIO settings.
However, TXD/RXD are actually defined as dedicated pins. Oh well ... anyway. All in all you can probably blame Kingnovy for not passing on this info, even though it actually exists.
  12. Nothing is botched there - they mainly hid the configs and maybe set some things suboptimally, which is why I'm currently modding the BIOS and unlocking all the menus that are missing by default (relatively simple; I'll test it shortly and I'm happy to share it). The PCIe port on the board runs PCIe 3.0 x2 - go figure ... PCIe 3.0 x1 should in theory deliver 1000 MB/s - with 6x HDDs that would still be 166 MB/s simultaneously across all drives (which Unraid isn't designed for anyway - even if it's theoretically possible, especially with ZFS). For SSDs the CPU performance would be too low for me anyway ... For that I have a main server, also running Unraid, with a Ryzen 5750 Pro. After I replaced the thermal paste with Thermal Grizzly Kryonaut, the thing now idles at only 3x °C ... and otherwise I've had zero problems so far - though I haven't needed the iGPU yet, since it serves purely as a backup server with 32GB of RAM I still had lying around.
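The back-of-the-envelope math above, as a tiny helper (the 1000 MB/s figure is the approximate usable bandwidth of a PCIe 3.0 x1 link):

```shell
# Rough per-drive throughput when N drives share one PCIe link:
# a PCIe 3.0 x1 link gives ~1000 MB/s usable, so 6 HDDs reading at
# once still get about 166 MB/s each - above what most HDDs sustain.
per_drive_mbps() {
  # args: link bandwidth in MB/s, number of drives
  echo $(( $1 / $2 ))
}
```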
  13. Yes, absolutely sure it's the ASM1166! I don't know exactly how many ports are attached through it ... I haven't looked that closely yet. In this case because I don't have the CW-XX NAS board variant but the BKHD one - all the CW BIOS variants didn't work for me for that reason, and yesterday I was glad to find the original BIOS again - now it's running again ... and I'm going over it with AMIBCP, i.e. modding the BIOS together myself. https://www.bkipc.com/en/product/BK-NAS-N510X-MB.html I bought it here on Amazon.de: https://www.amazon.de/gp/product/B0BYVMNMR9/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1 As I said - it's a different variant from the usual one floating around elsewhere ... in this case probably the better/more current one (apart from the modest cooling performance; there's also a slightly more expensive variant here which, similar to the CW variants, changes the heatsink/cooler a bit and adds a second M2 slot). I suspect BKHD simply had CW manufacture it but changed the components accordingly, which is why it doesn't work with the regular BIOS (neither LAN nor the SATA ports were detected anymore). This variant of the board is, by the way, sold by both Topton and Kingnovy on Amazon and AliExpress ... not just the CW-XX variant ... I hadn't read that anywhere before either. I can post detail photos if needed, but the case is currently closed ... after getting through yesterday's odyssey, so it may take a while ... ---- By the way, there are definitely ASPM settings hidden in the BIOS ... I read something earlier about CTRL+F1 in the BIOS for a "hidden mode" ... I'll try that in a bit. Edit: Nope, doesn't work - although all the entries are supposedly set to Show, they aren't displayed ... and CTRL+F1 does nothing for me ...
Otherwise you could probably just change the defaults via AMIBCP - they're all set to DISABLED by default -> that would also explain the state not going deeper than C3 ... see: I only came across this at all through this guide: https://forums.servethehome.com/index.php?threads/topton-jasper-lake-quad-i225v-mini-pc-report.36699/page-103#post-359615 Edit: It will probably come down to a BIOS mod ... apparently the ACCESS LEVEL has to go from DEFAULT -> USER. Then in theory I can also update the microcode myself, and see whether the ASM1166 firmware can be extracted/replaced with UBB as well ...
  14. A dumb question ... my board has an ASM1166 controller onboard next to the N5105. Powertop says: PCI Device: ASMedia Technology Inc. ASM1166 Serial ATA Controller. In theory that's attached via PCIe as well ... just soldered on. Has anyone ever tried flashing an onboard controller with new firmware (or is that rather the BIOS's job)? I don't even know which firmware is running on it; at the moment I don't have a bootable Windows ready ... --- In any case, I don't dare let powertop loose on the ASM1166 controller along with the actual disks (sdX) - with 3 pretty slow HDDs attached, that could get quite unpleasant ... but my current problem is that I can't get deeper than C3, and I suspect it's precisely that ASM1166 controller. Onboard I also have 4x Intel I226-V controllers, 3 of which are disabled in the BIOS, plus the x2 PCIe port with nothing currently attached (also disabled). Or could it possibly be down to the Enhanced C-States?
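To see which C-states the CPU actually exposes (powertop's Idle Stats view shows residency for the same states), the cpuidle sysfs entries can be read directly. A guarded sketch using the standard Linux paths:

```shell
# List the C-states cpu0 exposes (POLL, C1, C3, C6, ...), degrading
# gracefully on systems without the cpuidle sysfs interface.
list_cstates() {
  if ls /sys/devices/system/cpu/cpu0/cpuidle/state*/name >/dev/null 2>&1; then
    cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
  else
    echo "cpuidle not available"
  fi
}
list_cstates
```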
  15. There are two options to get more space out of it that also provide sufficient power: 1. the Inter-Tech PicoPSU 200W as a budget option (cables can be unplugged), or 2. the HDPlex GaN 250W, which can be daisy-chained, or its 500W variant - the latter ones are apparently much more expensive (but also of much higher quality). Both options will likely deliver power more efficiently than any SFX power supply available right now. You will likely get down to 20-25W at best with the most expensive SFX PSUs (30W with Gold), while either of the above makes almost single-digit wattages possible (10W). Just wanted to drop that info here, because SFX units with good low-power efficiency are apparently almost impossible to get nowadays - and it sucks that they're mostly not even tested for it. Edit: Even the Corsair SF600 Titanium-rated power supply has sub-par efficiency below 25W (as far as I can see): https://www.cybenetics.com/d/cybenetics_4v1.pdf The HDPlex GaN is much better suited for sub-50W - see: https://smallformfactor.net/forum/threads/hdplex-250w-gan.17801/page-2
  16. Honestly speaking? Sell the Synology - they are much more price-stable - and use the i5-13500 for Unraid. Maybe the 8700K is sufficient too ... Your old Q9500 doesn't have QuickSync, so it will be largely useless for transcoding with Plex and generally anything other than backups ... As far as OMV goes: I used to run both Unraid and OMV and switched the second one to Unraid too. I don't regret it one second - largely because of the painless encryption implementation, which is worth so much ...
  17. Edit - a little more accurate data: Mine is a green PCB - Kingnovy v1.2 - N5105 with 6x SATA and 4x Intel I226-V. It's a white-label BKHD - as printed on the mainboard itself. Be careful: there are two variants sold under both Kingnovy and Topton! BKHD: https://www.bkipc.com/en/product/BK-NAS-N510X-MB.html CW (Chenwen): http://chenwen.com/ (no direct link available anymore) The black ones are usually Chenwen - check the product pictures before you order! Only Chenwen provides BIOS updates ... and over/undervolting, e.g. I almost bricked mine - but BKHD had the original BIOS file there; I took the bin, put it on top of the Chenwen N5105 NAS BIOS files, and was able to successfully flash back to the old version. The additional SATA controller(s) on board are all ASM1166 - since the N5105 by default can only do 2x SATA. It came with a BIOS from 20.04.2022 preinstalled in my case. Ordered from Amazon.de here: https://www.amazon.de/gp/product/B0BYVMNMR9/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1 Had to redo the thermal paste and it's now idling at 36°C (before: 50ish). Power consumption with 3x Intel I226-V disabled via BIOS and a PicoPSU is 14.5W in standby ... nothing special, nor what I expected, but I guess it's OK for 6 SATA ports with 3 HDDs + 1 NVMe attached ... --- Update - 08.11.23 - 00:30: I successfully modded the original BIOS to enable both the Advanced -> OverClocking Performance menu and Chipset -> PCH-IO Configuration (where you can enable ASPM, for example). I used this tool here: UEFI Editor (boringboredom.github.io) to disable the OpCodes hiding the respective menu entries, after extracting it with UEFITool and replacing the Setup.sct again ... In case somebody is interested, I will pack up a zip file that can be placed on a FAT32 USB stick with that modded BIOS (ONLY FOR THE BKHD VARIANT). A first test shows that enabling ASPM -> auto on all PCIe ports seemingly did something ...
down to 12.6W right now (before: ~14.5W min), and I can also enable ASPM L1 substates now and haven't even undervolted yet ... Let me do a quick powertop ... YAY! Power states are now finally working!
  18. No, I personally stayed with my /boot/config/go solution, because all it does is create crontab entries - and that works fine the way I did it. I really don't have any need to wait for array start either ... however, I do check in the individual cron jobs whether the /mnt/user/[share] actually exists and cancel the execution if it doesn't.
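The guard described above can be sketched as a small wrapper; the share path and command below are examples, not my actual setup:

```shell
# Only run a job if the Unraid user share is actually present; cron
# entries can call this so jobs silently skip when the array is down.
run_if_share_exists() {
  local share="$1"; shift
  [ -d "$share" ] || return 0   # share missing -> cancel silently
  "$@"                          # share present -> run the real command
}
# Example idea for a cron job body (path is illustrative):
# run_if_share_exists /mnt/user/backups /usr/bin/resticprofile backup
```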
  19. How to fix Grafana permission errors properly (on first start and later): It's not OK to set the permissions of the appdata/grafana folder to anything other than root, or to blindly loosen the permissions. To fix this correctly, an extra parameter needs to be added so that the internal user runs with the same id as the root user on Unraid: Grafana Container -> Edit -> Advanced -> Extra Parameters -> add: --user 0 After that, make sure the owner of /mnt/user/appdata/Grafana is root:root and Grafana will start just fine: chown -R root:root /mnt/user/appdata/Grafana
  20. I guessed as much ... how do you spawn the shell, and should it create a log file (if not, could you add that - same as for cron jobs)? I wonder what's preventing it from writing to log files and crontab ... it almost seems like a permission error or something.
  21. "At Startup of Array" and "At First Start of Array Only" are not working for me on two machines with 6.12.4. If you use either and then try to run a script in the background, it simply won't do anything. For example, put a simple echo "hello world" 1> /var/log/hello.log in a new script and then cat /var/log/hello.log in the terminal afterwards - whether you run it in the background or via array start -> nothing is produced. You have to actually run the user script manually in the foreground - otherwise it won't work. I reckon this has something to do with scheduling? Happens for me on two systems - both on 6.12.4 ... one of them brand new, with very few plugins at all besides User Scripts. No clue - maybe it's just me, or I'm missing something very basic, but both don't seem to do anything at all ... even with freshly created scripts of any name. I have since moved to adding this to my /boot/config/go ... works just fine (albeit before the array starts): # wait 30 seconds then enable the crond jobs ... /bin/bash -c 'sleep 30; /usr/bin/resticprofile --log /var/log/resticprofile schedule --all' & Edit: forgot to mention: if I run the same script with a custom schedule every 5 minutes, for example, it works just fine.
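One way to make such background runs observable is to do the logging inside the script itself, redirecting stdout and stderr to a file and checking it afterwards. A minimal sketch (the log path here is a temp file purely for demonstration; on Unraid you'd pick something like /var/log/myscript.log):

```shell
# Redirect everything a script section produces into a log file, so a
# background invocation leaves evidence even when nothing reaches the UI.
LOG=$(mktemp)
log_demo() {
  {
    echo "hello world"          # stand-in for the real script body
  } >>"$LOG" 2>&1               # capture stdout AND stderr
}
log_demo
cat "$LOG"
```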
  22. This app is not properly maintained ... otherwise the link for the config would point to https://raw.githubusercontent.com/grafana/loki/v2.9.1/cmd/loki/loki-docker-config.yaml and specifically tagging against latest is a bad idea that's advised against by the Grafana documentation (even though, at the current time of writing, it works with latest too): https://raw.githubusercontent.com/grafana/loki/main/cmd/loki/loki-docker-config.yaml
  23. Maybe it's time to improve the Mover then, instead of staying with that legacy code for ages ... and make it use something like rclone sync/copy, which works multi-threaded, not single-threaded like rsync does. I had a lot of small files in the cache and the share ... this makes the Mover do only ~700 KB/s - for a few hundred GiB this will take ages ... Using rclone copy -> 130 MB/s over the whole set with hardly any drops in speed ... and that's with an 8TB SMR drive (Seagate Archive) as the target!
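For reference, the rclone invocation I mean looks roughly like this. The function only assembles the command string so the paths and flags are visible - the paths are examples, and `--transfers` is the knob that gives you parallel file transfers:

```shell
# Build (not execute) an rclone command for moving many small files in
# parallel; cache/destination paths are illustrative placeholders.
mover_cmd() {
  # args: source directory, destination directory
  echo "rclone copy $1 $2 --transfers 8 --checkers 16 --progress"
}
mover_cmd /mnt/cache/share /mnt/disk1/share
```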
  24. As someone with really extensive ARM experience ... from ARMv7-A to ARM64 SBCs powered by the RK3399 ... forget ARM. No SBC, not even the Raspberry Pi 4B, will ever catch up on the software side. Yes, a ton of support has been added, but that's for server boards run at Amazon, for example - nothing on the consumer side. There are so many issues at a large scale that I can't even begin to say where to start and why. Linux, BSD ... there's nothing that's really stable. All these consumer ARM platforms have one thing in common, and that's clusterfuck bootloader support - horrible U-Boot (I hate it) and all the crap involved when it comes to kernels (forget mainline). You clearly have no clue how hard it would be to add support for a closed-source board like the Apple devices. I can also warn against using the Odroid H2/H3 boards (I had 4 of them testing a bare-metal Kubernetes cluster) - they suffer from the same design issues and limitations that all SBCs and mini PCs do: storage IO (and not only storage IO) is simply crap, as is the power circuit design. Stick to Mini-ITX and call it a day. Simply get a Jonsbo N2 + PicoPSU 90W + Leike 90W power supply + Mini-ITX board if you want something power-efficient. I have wasted SO much time of my life on ARM and x86 SBCs - don't do it ... they will never be stable and you will suffer, if not today then tomorrow or the year after. Maybe RISC-V will change all that one day, because it's built with open source in mind, but ARM will never take off - it's way too clusterfucked for that (even in the smartphone world).
  25. Thanks for the explanation. So far I'm quite happy with 6.11 already ... got everything working that I wanted, and everything is fast and snappy even without my 10GbE NIC, which hasn't arrived yet ... I'll have to switch the USB stick anyway, so I might as well start from scratch if things go south, but it's also a good way to try backup/restore/migration and evaluate the backup routines I'm going to have with Unraid. As far as ZFS - specifically ZoL with Docker - goes: my experience with that was pretty abysmal on a really high-end Dell workstation with Ubuntu LTS, at least for development with a lot of Docker builds and containers ... On openSUSE I switched to XFS for Docker straight from the start (instead of BTRFS) because it had the least overhead. Right now I'm evaluating btrfs because of the subvolume, bitrot and snapshot possibilities. It's also what I'm using day to day on openSUSE Tumbleweed for everything else, and I haven't had any issues all this time.