pappaq

Everything posted by pappaq

  1. There are so many factors that it is already hard to justify all the effort for a few watts per year... But I'm now planning with:
     Board: https://www.amazon.de/dp/B0BPMK4MJV/?coliid=I1UEIFGGN3I31F&colid=1Y80W1IS2T62H&psc=1&ref_=list_c_wl_lv_ov_lig_dp_it
     PCIe controller: https://www.amazon.de/dp/B08F56WKW7/?coliid=I1MHSRJ23IIVBI&colid=1Y80W1IS2T62H&psc=1&ref_=list_c_wl_lv_ov_lig_dp_it
     M.2 controller: https://www.amazon.de/dp/B0BVMC37SX/?coliid=I1DQ62N6SKOF3A&colid=1Y80W1IS2T62H&psc=1&ref_=list_c_wl_lv_ov_lig_dp_it
     I'm on Unraid 6.12.5, since I'm using ZFS these days. My Dell H310 HBA draws a lot of power and has to go. As for the Realtek NIC, there is unfortunately no alternative that would fit the budget in any way. So the goal is to reach C8 under Unraid 6.12.5 with a total of 10 HDDs, 6 SATA SSDs and one NVMe on the board... let's see if that works out! At the moment everything runs on an ASUS PRIME B760-PLUS Gaming (https://www.amazon.de/dp/B0BRYY69YD?psc=1&ref=ppx_yo2ov_dt_b_product_details) with DDR5, which I unfortunately can't get past C3, even without the HBA. With both controllers (HBA + old 4x SATA) the whole thing idles at 33 W with almost 30 Docker containers. I think that's already quite good, but I'd like to get under 20 W. Stripped of everything, the system manages 14.8 W at idle while reaching C3 at best. The PSU is a Seasonic PRIME FOCUS Modular (80+ Platinum) 550 W, which should be enough for sub-20 W. Even sub-25 W would be absolutely great. My previous Ryzen system idled at 60 W, so this is already a win.
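     To verify which package C-states the system actually reaches, watching the idle statistics is the quickest check; this is a generic sketch, assuming powertop is installed (it is not part of stock Unraid, e.g. the NerdTools plugin provides it):
         # interactive view: the "Idle Stats" tab shows C-state residency
         powertop
         # or write a one-off 30-second report to an HTML file
         powertop --html=powertop-report.html --time=30
         # list the idle states the kernel exposes for core 0
         cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name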
  2. Hey, thanks for the quick reply! Which SSD was that? I currently have a WD Black SN850X, but I can still return it. I've already read here that the Samsungs are supposed to work well? It would of course be amazing to be able to populate the board with all those components and still reach C8!
  3. I'm currently switching to an Intel i3 12100 build. In the photo I can see that there is also an NVMe in the M2A_CPU slot on the board. Doesn't that prevent C8? I need 16 SATA ports + 1x NVMe in my current configuration, and I'm wondering whether, in your opinion, I could populate the board accordingly and still reach C8?
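     One thing I would check first is whether the NVMe actually negotiates ASPM, since a device without ASPM tends to hold the package above the deep C-states; a generic sketch (the 01:00.0 address below is just a placeholder for the NVMe):
         # print each device header followed by its ASPM enabled/disabled status
         sudo lspci -vv | awk '/^[0-9a-f]/{dev=$0} /ASPM/ && /abled/{print dev; print}'
         # or query one device directly
         sudo lspci -s 01:00.0 -vv | grep -i aspm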
  4. Hey there, I woke up to some strange behavior on my server. The following is shown on the screen (attached, because I'm currently debugging other problems...):
         BUG: Bad page map in process conn3  pte:ffff8881548afd80 pmd:21e24b067
         addr:00005605ce9d3000 vm_flags:00000075 anon_vma:0000000000000000 mapping:ffff888583473d28 index:2b1b
         file:mongod fault:filemap_fault mmap:btrfs_file_map read_folio:btrfs_read_folio
     I can't find anything on Google or here in the forums that comes close to this. dringenet-ms-diagnostics-20231201-0910.zip
  5. I'd like to know that too; I'm currently eyeing a Ryzen 5 or Ryzen 3 Pro from the 4xxx series... Or I'll switch completely to an i3 12100 system, even though I'd lose ECC RAM... but my Ryzen 1700 system is now at 60 W idle...
  6. Hey, I've read your posts with great interest and I'm looking at the Kontron boards. I haven't quite understood what you mean by the AST1166; I can't find anything about it online. Is that an M.2 SATA controller? My system is approaching 65-70 W idle and I'm considering whether to switch to a combination like this one. My current system:
     CPU: Ryzen 7 1700
     Board: Asus ROG Strix B450
     RAM: 32 GB unregistered ECC
     HDDs: 10x 8 TB WD
     SSDs: 6x SATA
     M.2 NVMe: 1x 1 TB
     dGPU: 1050 Ti for Emby transcoding
     SATA controller 1: Dell H310 flashed to plain SATA mode, 8x SATA
     SATA controller 2: 4x SATA
     2.5 Gbit LAN card
     5x Noctua fans
     I intend to throw out the dGPU and remove the 2.5 Gbit card. The question is how I can keep driving all of those SATA ports. The M.2 can go onto one of the M.2 slots that doesn't affect the C10 state, and with the 4 onboard SATA ports + 8 from one controller + 4 from the other I get to the 16 ports I need. I could probably also toss my M.2, put a 6-port SATA controller into that slot and drop the 4-port PCIe card instead; SATA SSDs would be perfectly sufficient, and that would give me 18 ports. The system would of course draw more than yours, but anything under 20 W would be great. If I reckon with 65 W idle now and ended up at about 25 W idle, that would be €75 less per year. With the old system sold off, the new system would pay for itself in 3 years. What's your/everyone's opinion on this?
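     For what it's worth, the €75/year figure works out roughly like this, assuming an electricity price of about €0.21/kWh (my assumption, adjust for your own tariff):
         # 40 W saved, running 24/7, at an assumed 0.21 EUR/kWh
         awk 'BEGIN { watts = 40; price = 0.21; printf "%.0f EUR/year\n", watts/1000 * 24 * 365 * price }'
         # prints: 74 EUR/year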
  7. Turned the ZFS ARC down to 8 GB, because swap was causing a lot of CPU stress as my RAM was filling up. Disabled Duplicati, as it was another source of unwanted high CPU usage. Getting there. The system seems to handle array and cache loads so much better now. Pretty much no unwanted iowait. Keeping an eye on it for a few more days. Maybe it is now gone for good.
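     In case it helps anyone else, one way to cap the ARC is via the ZFS module parameter; a sketch of the idea (the value is 8 GiB in bytes, adjust to taste, and the echo line would need to go into /boot/config/go to survive a reboot):
         # cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes)
         echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
         # verify the current limit
         cat /sys/module/zfs/parameters/zfs_arc_max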
  8. It was not the solution. Currently I'm trying to pin it down by booting in safe mode. I've moved the docker image and the VMs to a btrfs encrypted cache pool and the iowait is gone again, for over 12 hours now. Will keep a close eye on it. Steps so far:
     - Updated BIOS to the newest version
     - Increased ZFS ARC from 4 to 16 GB
     - Moved the docker image (not appdata) and the VMs off of the ZFS encrypted cache pool to a btrfs encrypted cache pool
  9. I've set my ZFS ARC from 4 GB to 16 GB. The IO issue seems to be gone... time will tell.
  10. I've tried CPU pinning... every Docker container that I pinned is now an orphaned image and has been deleted... this is ridiculous.
  11. This is the iowait when I invoke the mover to move files from my encrypted ZFS cache pool to my encrypted XFS array: I could understand the CPU usage being high because of decrypting and encrypting while moving, but I do not understand the high iowait?! Could a reason for this be using too many PCIe lanes of the system? This is the output of:
         sudo lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:"
     The LnkSta values of the blue entries are partially downgraded, but that should still be plenty of bandwidth for the job, right?
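     To see what a downgraded link actually means for a given controller, the negotiated link (LnkSta) can be compared against what the device is capable of (LnkCap); 01:00.0 below is just a placeholder address:
         # negotiated vs. maximum link speed/width for one device
         sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'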
  12. I think I've isolated one of the issues down to my EmbyServer Docker. This high iowait only occurs when playing a really big movie via Infuse directly. The thing is, it does not occur when I play the file back via VLC over SMB... The funny thing is that the red line marks the point where I stop the playback via Infuse, and it really takes some time to go back to normal. I rule out transcoding, because it's a direct play. So my thought on this is that the use of /mnt/user is the issue? But actually the playback via VLC over SMB uses a rootshare network share, which is set in the SMB config of Unraid and utilizes /mnt/user as well... This is the config of the EmbyServer Docker: Maybe the fault is not on the Unraid side but on Emby's, occupying the system even after the playback has stopped? Does anybody have a thought on this?
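     One crude way I could test whether the FUSE layer behind /mnt/user is the bottleneck would be to read the same large file once through the user share and once straight from the pool it lives on, and compare throughput; the paths below are placeholders:
         # read through the FUSE user share
         dd if="/mnt/user/Movies/big_movie.mkv" of=/dev/null bs=1M status=progress
         # drop the page cache so the second run isn't served from RAM
         sync; echo 3 > /proc/sys/vm/drop_caches
         # read directly from the cache pool, bypassing FUSE
         dd if="/mnt/cache/Movies/big_movie.mkv" of=/dev/null bs=1M status=progress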
  13. Not a solution in my opinion. And it also appears when nothing is trying to access the disks... the wa value goes up to and beyond 90, rendering the system completely unresponsive... Yesterday it worked normally for hours on end... I don't know what happened. I've changed nothing.
  14. It's reaching ridiculous levels...
  15. Hello everyone, I'm at the end of my tether... for weeks now I've been trying to fix iowait problems with my Unraid system. Whenever I think I have found the reason for the iowait, the problem reappears after a short time. I don't know what to do anymore. First I changed my btrfs cache to ZFS, which improved the performance a bit at first (incl. an upgrade from 6.9.2 to 6.12.4). Then, when high iowait reappeared, I suspected my download cache SSD. I replaced it; it was a Crucial P2 with QLC memory, which was very slow at writing. Now I have a WD Black in it, which has worked fine so far. Tonight I suddenly have completely absurd iowait numbers that I can't explain. Unraid is becoming less and less usable for me. The number of hours of fixing I've put in over the last few weeks is unbelievable. Please, could someone help me with this? Whatever information you need, I'll try to provide it. One or more cores are always spiking to 100% because of iowait. dringenet-ms-diagnostics-20231115-1952.zip
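     When one of these spikes hits, it may help to see which processes are stuck in uninterruptible sleep (state D), since those are what push the wa value up; a generic sketch (iotop may need to be installed separately, e.g. via NerdTools):
         # list processes currently blocked on IO and what they are waiting on
         ps -eo state,pid,comm,wchan | awk '$1 ~ /^D/'
         # optionally, accumulated IO per process
         iotop -oPa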
  16. Hey, thanks for rubbing my nose in it... I needed the extra kick. I deleted my 5-year-old FileBot Docker, installed this one and bought a license. Everything works like before, I just had to look properly.
  17. Hey, maybe I'm searching for the wrong words, but I can't find any information about running this docker headless, i.e. without the GUI. Is it possible to just give it the right renaming scheme plus input and output paths, let it run, and have everything done automatically? My super old FileBot instances do that, but they are running very old Java versions...
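     For anyone searching for the same thing: FileBot itself has a command-line mode, so a headless run boils down to something like the sketch below (paths and the format expression are only examples; how this docker exposes the CLI is something to check in its docs):
         # rename everything under /input into /output using a custom naming scheme
         filebot -rename /input --output /output \
             --db TheTVDB --format "{n}/Season {s}/{n} - {s00e00} - {t}" \
             --action move -non-strict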
  18. Hey there, I've got a problem with my Duplicati configuration. Since I changed my cache pool to ZFS, Duplicati can't back up the folders that are stored on the cache. In the backup configuration window it looks like this: The folders in the red box are not shown as folders, and the backups can't run. Any ideas?
  19. The stutter only occurs when copying to the cache pool, and only when bigger files are copied. I'm fine with this, because the overall performance of the system has increased dramatically.
  20. After moving away from btrfs for the caches there is nearly no iowait. The performance is so much better, it's absurd. Thank you JorgeB for your support!
  21. I've got no problem with the encryption overhead but the stuttering in transfers. That seems odd...
  22. I did start the dockers one by one, and it seems like it only occurred when Emby was transcoding. But not every time. If ZFS is not the solution, I'm going to rebuild my Emby installation from scratch.
  23. Hello there, in the process of converting my cache pool to encrypted ZFS I noticed spikes in CPU usage while the mover is moving appdata etc. back onto the cache. I then tried to copy a file from an SSD to the cache. When writing to the cache, the CPU spikes and the copy process pauses for a few seconds, kworker tasks ramp up, and then the copy process continues. I've got no issue with ZFS using CPU, but I do with sluggish writes. On the other hand, copying FROM the cache pool to the said SSD does not trigger this behavior and everything runs smoothly. My concern is that this stuttering behavior will have an impact on future use by dockers and VMs etc. Is there anything I can do about it, or is it expected behavior? I've intentionally downgraded to 6.12.3 because of known issues regarding AVX2. dringenet-ms-diagnostics-20231022-1026.zip
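     Since the pool is encrypted, I also want to rule out the crypto layer itself being the slow part; a quick generic sanity check (assuming the pool uses Unraid's usual LUKS-based encryption):
         # confirm the CPU exposes AES-NI
         grep -m1 -o aes /proc/cpuinfo
         # benchmark the kernel crypto throughput available to LUKS
         cryptsetup benchmark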
  24. Currently moving all data off my btrfs caches to convert them to ZFS... let's see if this helps.
  25. I've updated Unraid to 6.12.4, changed all possible shares to exclusive shares, and updated everything I could. The same problem persists. I've even moved Emby transcoding to RAM. It seems like the system is randomly doing memory writeback, and that's causing the iowait that locks up the system. But I can't pin down what is causing it. Any ideas? dringenet-ms-diagnostics-20231021-1605.zip