casperse

Everything posted by casperse

  1. Hi Vr2Io. Sorry for the late reply, I have been waiting for new cables and also the fan controller "HUBs". I wasn't happy with the control of all the fans on one MB fan controller "HUB". The sound before getting fan control moved away from the backplane WAS LOUD !!! After getting the power cable for the IPMI controller I now have all 8 fans connected to the controller: 6 big 120mm (red cables, picture above) and 2 x Noctua NF-A8 PWM (black cables, above), 8 in total. The CPU fan is on the MB and set to 100% because it's a Noctua low-noise fan. I have set the curve to 100% for the silent 80mm fans and to the lowest setting, 20%, for the "turbine" ones. Also, the CPU temp is controlling the 6 x 120mm fans, so keeping that under 30C keeps them at 20%. I did try the enterprise server fans from Noctua (NF12 3000 PWM), but even at 100% they didn't manage to keep things cool. So best to keep the high-pressure fans; even if they are loud and draw more power, they get the job done! It's pretty cold outside now, but summertime will be interesting, I don't have an air-conditioned room for my home server 🙂
  2. Can anyone tell me why I have "AMI_Virtual_HDisk" on this motherboard? Is it a Bios setting?
  3. I know this, but recreating my custom docker LAN and then selecting only the ones I used before (it seems I have many old dockers in my Previous Apps section) is a lot of work (50-60+ dockers). Also it seems that some of my dockers have lost their repository, so I can't reinstall them 😞 Maybe the new ZFS with snapshots could be used in the future for the vdisk file? (I can't remember if it was only the appdata? Or a script to copy the vdisk every week?) Sorry, I was so sure that this plugin had the possibility to take a backup of the vdisk, maybe it was another one?
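      For reference, a user-defined Docker network like the custom docker LAN can be recreated from the Unraid console before re-adding containers to it; a minimal sketch (the network name and subnet are assumptions, adjust to your setup):

        # Recreate a user-defined bridge network for the containers to join
        docker network create --driver bridge --subnet 172.20.0.0/24 proxynet

        # Confirm it exists and check which containers are attached
        docker network ls
        docker network inspect proxynet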
  4. I seem to have experienced a corrupt docker.img vdisk, and I think this plugin used to/does back up the whole vdisk file? I just can't find it? Is it compressed in between the many App backups?
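      In case it helps, the docker.img vdisk can also be copied manually while the Docker service is stopped; a minimal sketch (the paths and the rc.docker service script are assumptions based on a default Unraid setup, adjust to your shares):

        #!/bin/bash
        # Copy the docker vdisk while Docker is stopped so the file is consistent
        SRC=/mnt/user/system/docker/docker.img
        DST=/mnt/user/backups/docker.img.$(date +%Y%m%d)

        /etc/rc.d/rc.docker stop
        rsync -ah --progress "$SRC" "$DST"
        /etc/rc.d/rc.docker start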
  5. Has anyone revised the recommended settings with the new Bios above? (Efficient Cores)
  6. The only change I can think of is changing the HBA card, no Bios changes, but maybe something has happened? I will get a monitor and check the Bios.
  7. No, it only works with 1 NVMe and my goal was to have 2 (8x slot). I think it's because the card would need the slot to bifurcate to 4x/4x, and not the 8x/8x available on this MB. I am now looking into installing a dual U.2 PCIe card (much longer lifetime and the IO performance is great) and keeping my NVMe drives on the board (3 of them). And prices are going down on some U.2 drives (8TB). Also looking into using the SlimSAS on the MB for a U.2 drive.
  8. Sorry, I didn't press upload 🙂 diagnostics-20240122-1309.zip
  9. Very strange, my iGPU stopped showing status on the dashboard. I can see that it works (Live TV transcoding), and the iGPU top tool is installed. Command: intel_gpu_top. Again, it is working, it does HW transcoding?
  10. I can see it's listed in the cronjob, but I never got any updated EPG guide. So I tried to just do a script in the User Scripts plugin with:

      #!/bin/bash
      docker exec xteve_g2g_owi owi2plex.py -h 192.168.0.250 -b "PLEX (TV)" -o /owi2plex/stue-epg.xml >> /dev/null
      docker exec xteve_g2g_owi owi2plex.py -h 192.168.0.200 -b "PLEX (TV)" -o /owi2plex/sov-epg.xml >> /dev/null
      echo "EPG guide updated"
      exit

      And now I get the EPG every day without problems, don't know why xTeve didn't work?
  11. No and Yes - did the Bios update first and then booted into Windows 11 to update with the "MEUpdateTool"; it worked, but I did it in the wrong order. Why the i5? My thought is that with a platform that is end of life now (this socket will not get a new CPU) I might as well "future proof it", and with the tweaks I currently have, temps are at 36-52 C running the Intel burn test tool - haven't measured the power usage yet. Passmark CPU gave me around 30,000 and the official number is above 60,000, so I might turn it up a bit. (Hmm, my existing server had a Passmark score of 13586, so maybe I should just leave it as it is :-) )
  12. I am also trying to get the power consumption down on my i9, but the latest Bios update (3101) has a new setting for E-cores? Do you, or anyone here, know what settings are recommended here (trying to follow the guide below)? Also I would like to know what U.2 NVMe drives you are using and with what adapter? I have seen these (Intel DC P4510 Series SSDPE2KX080T801 8TB) go down in price lately; even used ones are worth it since they last a very, very long time. I guess it could be running in a 4x PCIe slot? And then I could scrap my NVMe adapter on the 8x slot 🙂
  13. Great write-up! I just got the MB but with Bios v. 3101, and I can't find a way to set (1-2) AI Tweaker > Efficiency Core Ratio: By Core Usage and for all E-Cores apply 36 anymore. Instead I have this, and it's not possible to input 36. IMPORTANT: I just installed the GIGABYTE AORUS Gen4 AIC Adaptor, PCIe 4.0 GC-4XM2G4, in the 8x bifurcation slot and it doesn't work on this MB. No matter what I do, even setting it to 16x and removing all other cards, it only sees 1 NVMe 😞 IF ANYONE HERE has an NVMe expansion card that works on this motherboard then please write me! 🙂
  14. Thanks @Vr2Io. I haven't found industrial server fans from Noctua above 2000 rpm. Comparing the Noctua to the SILENT WINGS PRO 4 (there is literally almost no price difference between them!), rpm, airflow & dB spring to mind, but I think we are missing the static pressure?

      Noctua 2K: 3,94 mm H₂O
      Noctua 3K: 7,63 mm H₂O
      Wings 4: 5,31 mm H₂O

      Yes, the mounting is very thick; I think I can secure them in the "fan dock" anyway - adhesive 3M tape? Or would the tight fit be enough? Correction: I actually found two NF12 3000 PWM from my old Synology fan-upgrade project; even at low speed they moved a lot of air in a very small case. https://noctua.at/en/nf-f12-industrialppc-3000-pwm/specification I think these fans with the fan controller you showed me would resolve the problem, keep the server cool & keep the power level down 24/7.
  15. Thanks! - It's not that I want to use macvlan 🙂 Seems like I have to!? I am missing an understanding of what solution I need for my setup. I have a proxynet defined for my Dockers (Proxynet). I also need/use bridge + host when running two Plex servers side by side. I have read the posts below, but the right solution eludes me.... I also use the Nginx proxy and Authelia as a gateway to access my services (with ipvlan I can't fetch new certificates). Right now I only have two LAN connections (I am in the middle of an upgrade so this could change). All my network HW is Unifi... Update: just changed it back to ipvlan again (it breaks some dockers, but so far no crash of the server; I didn't get errors about macvlan before the crash though?). https://docs.unraid.net/unraid-os/release-notes/6.12.4/#fix-for-macvlan-call-traces
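      As a reference point, it can help to see which containers sit on each custom network before switching between macvlan and ipvlan, so it is clear which ones are likely to break; a minimal sketch (the network names are examples):

        # List the containers attached to each custom network
        for net in proxynet br0; do
          echo "== $net =="
          docker network inspect "$net" --format '{{range .Containers}}{{.Name}} {{end}}'
          echo
        done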
  16. +1 YES! And for the APP (I know there is a script/way to do this, but native functionality is much better).
  17. Thanks! - Yes, it's 3A. I think my contact is a sales rep, so no, I don't think there is any way to control the fans from the backplane to the MB 😞 I got a message that even at 30% load the fans will be LOUD! So even though I already ordered 5 of the fan controllers (for other, different PCs), I don't think that will have enough impact. So I think I need to go the Noctua way: NF-F12 industrialPPC-2000 PWM - https://noctua.at/en/nf-f12-industrialppc-2000-pwm/specification These are so far the best I have been able to find (2000 rpm and highest pressure). Do you have alternative fans to recommend?
  18. Update: my server crashed at 22:43 yesterday and again right now. I have enabled syslog. I did try to change the macvlan, but that made many things not work. Nginx Proxy Manager couldn't fetch certificates, and my proxynet didn't work 100%, so I changed it back. (I didn't have a crash for the last 2 months before, so this might have caused it?)
  19. HELP - So I did a reboot yesterday, but today I got this error e-mail:

      * **/var/log is getting full (currently 96 % used)**
      * **Out Of Memory errors detected on your server**

      error: Compressing program wrote following message to stderr when compressing log /var/log/nginx/error.log.1:
      gzip: stdout: No space left on device
      error: failed to compress log /var/log/nginx/error.log.1

      and in the logs I see this again:
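      For context, /var/log on Unraid is a small tmpfs (typically 128 MB), so once it fills up both logging and log compression fail; a minimal sketch for finding and clearing the offender (the nginx log path comes from the error above, the rest are assumptions):

        # How full is the log filesystem, and what is taking the space?
        df -h /var/log
        du -ah /var/log | sort -rh | head -20

        # Truncate the largest log in place to free space without deleting the file
        : > /var/log/nginx/error.log.1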
  20. I will try to take a picture of the fan when I get home (there should be some sticker with data on them). I might end up swapping them all with Noctua fans like many others.... Just got a strange message from the vendor, claiming that I should be able to control all fans by a cable between the backplane and the MB using the CHA_FAN pin? But I know that each fan can draw 3 amps and I have 6 of them, so I wouldn't feel comfortable trying that. They say to use the third empty fan connector. That would require an IoT Grove cable - 4-pin male to 4-pin male, not exactly a standard cable!
  21. Hi All. Having this problem, and I haven't found any docker that doesn't have a restriction on logging size. I don't have the memory error anymore in Fix Common Problems, but I can see it in the logs. diagnostics-20240107-1355.zip
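      If it helps, a log-size cap can also be enforced per container through Docker's json-file logging options; a minimal sketch (the container name and image are placeholders - on Unraid the same flags can go into a template's Extra Parameters):

        # Cap this container's logs at 3 rotated files of 10 MB each
        docker run -d \
          --name example-container \
          --log-driver json-file \
          --log-opt max-size=10m \
          --log-opt max-file=3 \
          example/image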
  22. Excellent! I can see that all my fans have the above 4 wires. Looks like they are glued to the backplane. I was just hoping that I could control the speed by one of the wires on the backplane, but I guess not. UPDATE: Found one https://www.aliexpress.com/item/1005003951908311.html?spm=a2g0o.cart.0.0.373a38daAnMrdk&mp=1 Thanks!
  23. Hi All. I got myself a new "home-made" "Storinator case", but with bays, and it meets my needs: a bigger build height for a bigger air CPU cooler 🙂 Case: S865-48. Anyway, I am trying to figure out how to control the case fans? (THEY ARE VERY VERY LOUD !!!) I can see the 3 x 3 fans are connected to each of the two backplanes - backplane 1 and backplane 2 (yellow circles; there are 4 but only 3 are used). The extra cable pins on the case are listed below:

      PWR Fail +/- ?
      POWER LED +/- (Power indicator?)
      NIC 1 LED (Network?)
      NIC 2 LED (Network?)
      HDD LED (Attached to MB)
      UID LED
      OH/FAN Fail +/- ?
      ID SW +/- ?

      So far I only have the "normal" pins below attached to the MB:

      POWER SW +/- --> PWRSW PIN BELOW
      POWER LED +/- --> PLED PIN BELOW
      RESET SW +/- --> RESET PIN BELOW
      HDD LED +/- --> HDD LED PIN BELOW

      My MB, the ASUS Pro WS W680-ACE IPMI, should have plenty of fan control possibilities. Or are these server case fans just supposed to run at 100% speed by design, 24/7? It just looks like the backplane would have some control option for the fans.

      DUAL PSU ISSUE: Also, this is my first server with redundant server PSUs 🙂 Are they supposed to keep running the fans when you turn the server off? Even when I cut the power it seems like it shuts down slowly, like an inbuilt UPS/battery? Any special cable needed between the PSU and the MB?

      Sorry if I am asking stupid questions. I am in new HW territory, not 100% server-grade HW - I am still building with new HW so it's a mix. And yes, this can cause problems, I know. But thanks to Unraid and all the great people here I decided to go for it 🙂 (I selected the MB from the comments about it in this forum 🙂) I have attached the backplane data sheet PDF below: 24bays 12Gb Expander Specs V1 0 - MC.pdf
  24. Hi All. I am trying to figure out the best way to move all my drives (array) to a new server, while still keeping the old one running with my existing cache drives and VM + Dockers on cache, and still keeping all my defined shares (the old server would get smaller new-old drives for its new array). I have successfully cloned my existing Unraid USB and I bought a new Pro license 🙂 I changed the static IP in the conf file on the USB, and Unraid booted up (but I haven't moved the drives/array yet!).

      So what would be the right way to do this with minimal downtime? Is it possible to create a new cache pool (new server) and then transfer the cache data from the old server to the new server over LAN, without starting the array? Or do I need to "move" all data from the cache drives to the array (OLD server), then afterwards do an rsync to the existing old cache pool, move the old USB and drives/array to the new server, create a new cache pool there and "move" everything back from the array? Then I would use my new Unraid USB to boot my OLD server with a new array (all new-old smaller drives, no data) and hopefully still keep all my share folders and configuration, and from there copy data back from the new server.

      Sorry if this is a stupid Q. But my goal is to build a new server while still keeping my existing cache pools and dockers + VM running on the old one, and have a "clone" on the new one on a new pool (ZFS) 🙂 (Except for the data; I will have to copy the critical data to the new server array afterwards, like normal.) I know that "moving" data away from cache and "moving" it back again is going to take a very long time, and since this is done while the dockers and VM are offline it would result in very long downtime. Regards Casperse
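      For the over-LAN part, the cache-resident shares can be copied between servers with rsync over SSH once the new pool exists; a minimal sketch (the destination IP and share paths are assumptions, and the dockers/VMs should be stopped first so the data is consistent):

        # Copy the usual cache-resident shares from the old server to the new one
        rsync -avh --progress /mnt/cache/appdata/  root@192.168.0.251:/mnt/cache/appdata/
        rsync -avh --progress /mnt/cache/domains/  root@192.168.0.251:/mnt/cache/domains/
        rsync -avh --progress /mnt/cache/system/   root@192.168.0.251:/mnt/cache/system/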