frodr


Everything posted by frodr

  1. My hope lies in Unraid supporting multiple partitions in ZFS, or in Mach.2 being supported in the Linux kernel. Which one should I put my money on? Well, kernel support alone is not enough, I guess; Unraid must also support multiple partitions in ZFS. I can hardly expect Unraid to do that on one man's request. What if I ship 2 drives to 10 people, then demand increases........ I guess I'll just set them up as-is, and maybe one day.........
  2. From the FAQ: Q: How can I configure an Exos® 2X SATA drive in my Linux system? A: "You can partition both actuators, stripe the actuators into a software RAID, or use as-is. Using the drive as-is would be a sufficient solution if you are migrating data to fill (or almost fill) the whole drive so that both actuators will be kept sufficiently busy. If you would like to treat each actuator as an individual device, then simple partitioning is an easy way to utilize Exos 2X SATA." So, Seagate says it can be used as-is in Linux, "if you are migrating data to fill (or almost fill) the whole drive so that both actuators will be kept sufficiently busy". My understanding of this is that the second actuator doesn't kick in until the drive fills up??? I had the drives in a Z2 pool, but the speed was that of a standard HDD.
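     For reference, the "simple partitioning" route the FAQ mentions is usually described like this: the lower half of the LBA range is served by one actuator and the upper half by the other, so splitting the drive at the midpoint gives one partition per actuator. A minimal sketch, with /dev/sdX as a placeholder for the drive (verify the actuator boundary for the exact model first; this wipes the disk):

        # WARNING: destroys all data on the target drive.
        DEV=/dev/sdX                                 # placeholder -- set to the real device
        parted -s "$DEV" mklabel gpt
        parted -s "$DEV" mkpart actuator0 0% 50%     # lower LBA half -> first actuator
        parted -s "$DEV" mkpart actuator1 50% 100%   # upper LBA half -> second actuator
        parted -s "$DEV" print                       # sanity-check the layout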
  3. Is it likely that Unraid will support multiple partitions on the same ZFS pool member? Links: MACH.2 FAQ; Leveraging Seagate's Mach.2® to Accelerate Enterprise Storage
  4. I have 6 x Exos 2X18 (SATA version) ready for a ZFS pool, refurbished from ServerPartDeals at 220 USD. The link also includes a script for the SATA drives. I would need a step-by-step how-to.
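     For the record, the bare-bones version of what I have in mind, on plain Linux at least, would be to give ZFS the per-actuator partitions instead of whole disks. A rough sketch, assuming each drive was split in two as in the Seagate FAQ and that sda..sdf are the six Exos drives (placeholders). One caveat: with both halves of each drive in one raidz2 vdev, a single physical drive failure takes out two pool members at once:

        # 12 partitions from 6 dual-actuator drives in one raidz2 vdev.
        zpool create tank raidz2 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 \
          /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
        zpool status tank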
  5. Seagate's Mach.2 tech doubles HDD read/write speed. There are some scripts out there, but I guess an adaptation for Unraid is probably needed. A plugin would be nice. https://www.seagate.com/gb/en/products/enterprise-drives/exos-x/2x18/
  6. Then I changed to a Supermicro X13SAE-F (W680 chipset). No more problems with IPMI and other things. Power usage seems 1-3 W lower than the Asus Pro WS W680-ACE IPMI, but I had to add an HBA (LSI 9400-8i) to hold 10 SSDs. I have ordered a 10-port SATA PCIe card; maybe that reduces power usage a little bit. Another 15-20 W of power usage comes from the Intel E810-XXVDA2 NIC. The HBA and NIC cards make it impossible to come down in power usage. The server is at 58-62 W. Running the lspci command, it seems that LnkCtl ASPM is disabled on a few items:
     PCI bridge: Intel Corporation Device a70d (rev 01) (prog-if 00 [Normal decode])
       LnkCap: Port #2, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
       LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
     What does the LnkCtl line mean? "ASPM not supported" shows only on the NIC, if I understand the readout correctly?
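     For what it's worth: LnkCap lists what the link supports, while LnkCtl shows what is currently enabled, so "ASPM Disabled" under LnkCtl means L1 is supported on that bridge but switched off. A quick way to survey all devices (run as root; a sketch, not Unraid-specific):

        # Print each PCIe function that mentions ASPM, keeping capability vs. current state.
        lspci -vv | awk 'BEGIN{RS=""} /ASPM/' | grep -E "^[0-9a-f]|LnkCap:|LnkCtl:"
        # Kernel-wide ASPM policy currently in effect:
        cat /sys/module/pcie_aspm/parameters/policy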
  7. Thank you very much for the advice. It was a BIOS setting I missed. The iGPU is now there, and I do not have to buy a new CPU. I am running non-turbo, so there is no risk of frying the mobo.
  8. After switching to the SM W680 mobo, I can't "find" the iGPU. The Intel GPU TOP plugin is installed (also deleted and reinstalled). Then I discovered that this mobo supports up to 150 W TDP, while the i7-13700 runs at 219 W TDP. Is it possible/likely that the iGPU got disabled? The UHD 770 is not in the System Devices list either. kjell-diagnostics-20230929-2329.zip
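     A couple of console checks that should tell whether the iGPU is exposed to Linux at all (standard commands; i915 is the stock Intel graphics driver):

        lspci | grep -iE "vga|display"   # the UHD 770 should appear as an Intel VGA/Display device
        ls /dev/dri                      # card0 / renderD128 show up once i915 has bound
        modprobe i915 && dmesg | grep -i i915 | tail   # driver messages, if any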
  9. The mobo (Server1) is totally stuck on C6. I talked to Asus Support, a very strange conversation. They say it can (suddenly, after a BIOS update) be the RAM sticks, and refer to a QVL list of tested/approved RAM. Going through this item by item (a long list), there are only 2 R-DIMMs at 3200, one of them not available anywhere. Asking about this, I got this answer: "We have forwarded this feedback to our HQ along with the parts and BIOS information. Regarding the QVL, it is only a recommended list, and other memory modules can work as well, but that is not guaranteed in those cases. We have asked them to check if there is any upcoming updated QVL with more modules; we understand why it might be problematic. The problem is often that since our Research and Development is located in Taiwan only, they often cannot get memory modules from the Nordic market to test, but I inquired about this to see if they could help with that. The sad thing is that our market here in the Nordics is very small compared to others when it comes to ECC memory, and it is more common that people buy non-ECC memory even for workstation products, or that many buy pure server products, which is another type of market completely." According to Asus HQ Support, we have RAM sticks only for the Nordic market, and Asus HQ does not know how to order products worldwide. What a strange company.
  10. I updated the BIOS. That went well....... Now the mobo is stuck on C6. No matter what I do, it stops there. I removed every NVMe and PCIe card possible. C6 is not referenced in the manual, but I see a few people out there have had this problem. Hmmm. I am in contact with Asus support........., for what it's worth.... An RMA has been requested at the reseller.
  11. Thanks, I will do a BIOS update, then we'll see.
  12. How do I add diagnostics that show the shutdown? After shutting down the server, the GUI drops, but the PC stays on. The only way to get it running again is to hold the on/off button for a few seconds.
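     One way to capture what happens during shutdown is to make the log survive the power cycle. Unraid has a built-in option for this (Settings -> Syslog Server -> "Mirror syslog to flash", if I remember the wording right); a crude manual equivalent, with /boot being the Unraid flash drive:

        # Mirror new syslog lines to flash until the box dies, so the tail end of the
        # shutdown attempt can be read after a forced power-off.
        mkdir -p /boot/logs
        nohup tail -F /var/log/syslog >> /boot/logs/syslog-mirror.txt &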
  13. My server (Server1) will not shut down, nor reboot. I do not have the correct cable to connect a monitor to see any shutdown info, and the Asus IPMI Remote Control is not connecting either. Diagnostics from Tools and from flash/logs are included. helmaxx-diagnostics-20230920-1637.zip unbalance.log maxx-diagnostics-20230417-2127.zip
  14. That's right. IPMI onboard. Nice to get rid of 3 untidy cables.
  15. I think I saw a recommendation for a 10-port SATA PCIe card in this thread, but now I can't find it. Anyone?
  16. You have the fans I'm missing in my setup. I wonder why?
  17. I have ordered this Supermicro. I want the IPMI functionality if possible.
  18. I have this board, and I guess I'm the unlucky duck here. I've had all kinds of trouble: the startup sequence randomly gets stuck on F6, especially after adding/removing PCIe cards; the startup sequence randomly stays on the OData Server info screen forever, or is fully stuck; the IPMI card loses contact with the mainboard, and sometimes I have to physically remove/reinstall it to get it up and running; the IPMI remote connection drops out all the time; the IPMI remote control does not follow the startup procedure to the end; the BIOS resets itself to defaults, mostly after turning main power off. I'm in contact with Asus Support, which is sometimes like talking to a person with dementia, asking the same question over and over again. A question about this board: are the 2 PCIe 5.0 slots "connected"? My understanding is that if I run a 16-lane card in slot 1, there are only 4 lanes to the CPU from PCIe slot 2. Does this mean an 8-lane card in PCIe slot 2 will not work? Or will it work with 4 lanes of bandwidth? //
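     Follow-up on the lane question: PCIe devices normally train down to the lanes that are physically wired, so an x8 card in a slot with only 4 lanes to the CPU should run at x4 bandwidth rather than not at all. Whether that actually happened can be read from lspci (run as root; the bus address is a placeholder):

        # LnkCap = what the card supports, LnkSta = what was actually negotiated.
        # 01:00.0 is a placeholder -- find the real address with plain lspci first.
        lspci -vv -s 01:00.0 | grep -E "LnkCap:|LnkSta:"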
  19. Thanks. OK, yes, the Intel GPU Top plugin is installed. I cannot see any reference to iGPU stats at the bottom, see picture. I see. I will test that one day, after I get this mobo working correctly. It has a lot of issues: boot stops at F6 or A0, and it loses the IPMI card's physical connection (I often need to remove the card, start the mobo, stop the mobo, and reinsert the IPMI card to get it up again). So I'm wrestling with Asus support already.
  20. Sorry guys, this is a long one. Thanks @mgutt for this great thread. I've been through all 19 pages of posts to get an understanding of the topic.
     I started with an AMD gamer PC which I almost never used. It ran at 160-180 W without doing much. So I terminated the game PC, put the Nvidia GPU in Server1, and set up the gamer VM there. The plan was then to set up a (somewhat) power-efficient Server2 to run 24/7 for Plex, RoonServer, plus a few other Dockers and possibly some Forex-trading Windows VMs that have to be up 24/7. I bought an LGA1700 mobo and an i7 processor with an iGPU for transcoding. I set up a ZFS pool of 5 HDDs, plus 4 NVMe SSDs for the special cache. I acquired a HighPoint 1508 for 8 NVMe SSDs, due to LGA1700 not supporting PCIe x4/x4/x4/x4 bifurcation; the 1508 has switching on the card. The power usage came down to around 100 W, if I remember correctly. But the rig was still too beefy. I then removed the water cooling, a 20+ W pump and 10 x 120 mm fans. The strange thing is this made only a 10 W difference; my measurements were probably not that precise. I have had (and still have) major problems with the mobo; sometimes it resets the BIOS and all the settings. Then I moved from HDDs to 8 TB SATA SSDs, pulled the 4 NVMe SSD cache drives and the 1508 HBA, removed the X540-T2 NIC and, today, the ConnectX-3 40 GbE NIC. The ConnectX-3 draws 10 W at idle.
     So, now the power draw is about 30 W idle with Plex and RoonServer running (no streaming). I want to add a high-speed NIC, maybe an Intel XXV710. That will pull some. But the big question: can I reduce power draw further from 30 W on the current HW?
     I am running powertop --auto-tune; the powertop tunables are all Good. Am I correct that powertop does not support the newest CPUs? Idle stats: Pkg only reaches C2 and CPU (OS) only C3. ASPM is enabled in the BIOS, with the specific ASPM items mostly set to a value other than Auto. Turbo and Asus Performance are disabled in the BIOS. Asus tweaker (BIOS) cannot be disabled, only set to 3 different settings. I measure power draw with a TP-Link smart plug. There is no PCIe card left to remove, only the x1 IPMI card that came with the mobo. The ASPM status shows L1 enabled on everything except the ASPEED AST1150 and an Intel device; see the ASPM report. For a detailed HW listing, see Server2 in the signature. Under "Commands" I ran "Enable IEEE 802.3az" and "Enable SATA link power management".
     As usual, happy for any feedback. //
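     For anyone comparing notes, the manual tunables above boil down to something like this (a sketch; eth0 is a placeholder interface name, and not every NIC or kernel supports every knob):

        powertop --auto-tune                         # flip all powertop tunables to Good
        for h in /sys/class/scsi_host/host*/link_power_management_policy; do
          echo med_power_with_dipm > "$h"            # SATA link power management, per host
        done
        ethtool --set-eee eth0 eee on                # IEEE 802.3az, if the NIC supports it
        cat /sys/module/pcie_aspm/parameters/policy  # confirm which ASPM policy is active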
  21. System Temp plugin: the plugin finds a driver, but no sensors. It worked at times, but not anymore. Is there a fix for this?
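     In case it helps anyone with the same symptom: running the detection by hand sometimes shows where it goes wrong (standard lm-sensors tools; nct6775 is just an example of a common Super I/O driver):

        sensors-detect     # answer the prompts and note which kernel modules it suggests
        modprobe nct6775   # example module -- load whatever sensors-detect recommended
        sensors            # should list temps/fans once the right driver is bound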
  22. I removed the HighPoint 1508, the 4 x 1 TB NVMe drives, the Intel X540-T2, and a USB PCIe card. At first, power usage was exactly the same as before, but when the SATA drives went to sleep, power consumption was 60-65 W. Tomorrow I will take down the water cooling, a pump running at 20+ W, and 10 fans. After that, the only hardware change left is the power supply, which is a 1200 W Xilence today.
  23. The plan this summer was to build Server2, kjell, into a fairly power-efficient server on the latest tech. Spinners were changed to SSDs and the GPU was removed; the server is running at 100-110 W. Auto-tune changed lines from Bad to Good but did not change power usage by more than 2-3 W. Powersave is set in the BIOS as well as in the server settings. I had hoped for 70-80 W at this point, but I guess water cooling with a pump drawing 20+ W and 10 fans, plus a HighPoint 8x NVMe HBA with 5 NVMe drives, needs some power. I will change to passive cooling, remove the special/logs NVMe drives from the pool, and change to a smaller, hopefully more efficient power supply. No Corsair RM550 to be found, so maybe a Corsair RM750e V2.