

Posts posted by frodr

  1. My hope lies in Unraid supporting multiple partitions in ZFS, or in Mach.2 getting supported in the Linux kernel. Which one should I put my money on? Well, kernel support is not enough, I guess; Unraid must also support multiple partitions in ZFS. I can't expect Unraid to do that on one man's request. What if I ship 2 drives to 10 people, then demand increases........

     

    I guess I'll just set them up as-is, and maybe one day.........

  2. From the FAQ:

     

    Q: How can I configure an Exos ® 2X SATA drive in my Linux system?
    A: You can partition both actuators, stripe the actuators into a software RAID, or use as-is.
    Using the drive as-is would be a sufficient solution if you are migrating data to fill (or almost
    fill) the whole drive so that both actuators will be kept sufficiently busy. If you would like to
    treat each actuator as an individual device, then simple partitioning is an easy way to utilize
    Exos 2X SATA.
     

    So, Seagate says it can be used as-is in Linux, if you are migrating data to fill (or almost fill) the whole drive so that both actuators will be kept sufficiently busy. My understanding of this is that both actuators don't kick in until it's filled up??? I had the drives in a Z2 pool, but speed was that of a standard HDD.
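For reference, the "simple partitioning" route from the FAQ is a short parted job. A minimal sketch, assuming the second actuator starts at exactly half the LBA range (which matches Seagate's dual-actuator documentation for the 2X series, but verify against your drive's datasheet); /dev/sdX is a placeholder device node:

```shell
#!/bin/sh
# Sketch (not an official Seagate tool): split a dual-actuator Exos 2X
# drive into one partition per actuator. Assumes the second actuator
# begins at exactly half the LBA range -- verify before running.
DRIVE=/dev/sdX   # placeholder

# First LBA of the second actuator = total sector count / 2.
halfway_sector() {
    echo $(( $1 / 2 ))
}

# Destructive: wipes the drive's partition table. Not run by default.
split_actuators() {
    total=$(blockdev --getsz "$DRIVE")        # total 512-byte sectors
    mid=$(halfway_sector "$total")
    parted -s "$DRIVE" mklabel gpt
    parted -s "$DRIVE" mkpart actuator0 2048s $(( mid - 1 ))s
    parted -s "$DRIVE" mkpart actuator1 "${mid}s" 100%
}

# An 18 TB Exos reports 35156656128 sectors, so the second actuator
# would start at sector 17578328064:
halfway_sector 35156656128
```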

     

     

     

  3. 1 hour ago, ich777 said:

    Do you plan on using SATA or SAS devices?

     

    This script is only for ZFS and SAS drives; as far as I can tell SATA cannot present two drives over one connection, so you would need to create two separate partitions, where you don't get the same speed bump as over SAS.

     

    This script should even work fine for Unraid as far as I can tell.

     

    I have 6 x Exos 2X18 (SATA version) ready for a ZFS pool, refurbished from ServerPartDeals at 220 USD. In the link there is also a script for the SATA drives. I would need a step-by-step how-to.
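Not the linked script, but a hedged sketch of the general step-by-step for six partitioned SATA 2X18s: after each drive is split into two per-actuator partitions, feed all twelve partitions to zpool so both actuators of every drive stay busy. Device names and the pool name "tank" are placeholders; note the redundancy caveat in the comments:

```shell
#!/bin/sh
# Sketch: build a ZFS pool from six dual-actuator SATA drives, each
# already split into two per-actuator partitions (1/2).
#
# Redundancy caveat: both partitions of one physical drive fail
# together, so a raidz2 over these 12 members only survives ONE
# failed physical drive. Size redundancy with that in mind.
DISKS="sda sdb sdc sdd sde sdf"   # placeholder names

pool_members() {
    for d in $DISKS; do
        printf '/dev/%s1 /dev/%s2 ' "$d" "$d"
    done
}

# Destructive: not run by default.
create_pool() {
    # shellcheck disable=SC2046
    zpool create -o ashift=12 tank raidz2 $(pool_members)
}

pool_members; echo
```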

     

     

  4. Then I changed to a Supermicro X13SAE-F (W680 chipset). No more problems with IPMI and other things. Power usage seems 1-3W lower than with the Asus Pro WS W680-ACE IPMI. But I had to add an HBA (LSI 9400-8i) to hold 10 SSDs. I have ordered a 10-port SATA PCIe card; maybe that reduces power usage a little.

     

    Another 15-20W of power usage comes from the Intel E810-XXVDA2 NIC. The HBA and NIC cards make it impossible to get power usage down further. The server is at 58-62W.

     

    Running the lspci command, it seems that LnkCtl ASPM is disabled on a few items.

     

    PCI bridge: Intel Corporation Device a70d (rev 01) (prog-if 00 [Normal decode])
                    LnkCap: Port #2, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                    LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+

     

    What does the sentence in italics mean?

     

    "ASPM not Supported" applies only to the NIC, if I understand the readout correctly?
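One way to check this per device is to pair each device line from lspci -vv with its LnkCtl state. A small awk sketch (the 00:06.0 bus address in the demo input is made up; the rest is the readout quoted above):

```shell
#!/bin/sh
# Print each PCIe device together with its LnkCtl ASPM state, to spot
# links where ASPM is disabled (run lspci as root so the link-control
# registers are readable).
aspm_report() {
    awk '/^[0-9a-f]/ { dev = $0 }
         /LnkCtl:/ && /ASPM/ { sub(/^[ \t]+/, ""); print dev " -> " $0 }'
}

# On a live system:
#   lspci -vv 2>/dev/null | aspm_report

# Demo on the quoted readout (hypothetical bus address):
aspm_report <<'EOF'
00:06.0 PCI bridge: Intel Corporation Device a70d (rev 01)
                LnkCap: Port #2, Speed 32GT/s, Width x8, ASPM L1, Exit Latency L1 <16us
                LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
EOF
```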

     

    [Screenshot 2023-09-30 at 19.04.24 attached]

  5. 4 hours ago, ich777 said:

    I don't know, I would recommend that you visit the support page from your motherboard and see if your CPU is listed as compatible or if there is any additional information.

     

    Your Diagnostics show that only the integrated ASPEED GPU is active:

    09:00.0 PCI bridge [0604]: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge [1a03:1150] (rev 06)
        Subsystem: Super Micro Computer Inc AST1150 PCI-to-PCI Bridge [15d9:1c48]
    0a:00.0 VGA compatible controller [0300]: ASPEED Technology, Inc. ASPEED Graphics Family [1a03:2000] (rev 52)
        DeviceName:  ASPEED Video AST2600
        Subsystem: Super Micro Computer Inc ASPEED Graphics Family [15d9:1c48]
        Kernel driver in use: ast
        Kernel modules: ast

     

    I would rather recommend that you double-check in the BIOS whether you have set your iGPU as the primary graphics output, and/or whether you can deactivate the integrated one in the BIOS or with a dedicated jumper on the motherboard itself.

     

    Thank you very much for the advice. It was a BIOS setting I missed. The iGPU is now there, and I do not have to buy a new CPU. I am running non-turbo, so no risk of frying the mobo.

    • Like 1
  6. The mobo (server1) is totally stuck on C6. I talked to Asus Support, a very strange conversation. They say it can (suddenly, after a BIOS update) be the RAM sticks, and refer to a QVL list of tested/approved RAM. Going through it item by item (a long list), there are only 2 R-DIMMs at 3200, one of them not available anywhere. Asking about this, I get this answer:

     

    "

    We have forwarded this feedback to our HQ along with the parts and BIOS information.

    Regarding the QVL, it is only a recommended list, and other memory modules can work as well, but that is not guaranteed in those cases. We have asked them to check if there is an upcoming updated QVL with more modules; we understand why it might be problematic.

    The problem is often that since our Research and Development is located in Taiwan only, they often cannot get memory modules from the Nordic market to test, but I inquired about this to see if they could help with that.

    The sad thing is that our market here in the Nordics is very small compared to others when it comes to ECC memory, and it is more common that people buy non-ECC memory even for workstation products, or that many buy pure server products, which is another type of market completely."

     

    So according to Asus HQ Support, the RAM sticks exist only for the Nordic market, and Asus HQ does not know how to order products worldwide. What a strange company.

     

     

     

  7. I updated the BIOS. That went well....... Now the mobo is stuck on C6. No matter what I do, it stops there. I removed every NVMe and PCIe card possible. C6 is not referenced in the manual, but I see a few people out there who have had this problem. Hmmm. I am in contact with Asus support........., for what it's worth .... An RMA is requested at the reseller.

  8. 5 hours ago, JorgeB said:

    If Unraid fails to do a clean shutdown it will save the diags on the flash drive, in the /logs folder; if it's just the server that stays on after Unraid shuts down there's not much you can do, look for a BIOS update.

     

    Thanks, I will do a BIOS update, then we'll see.

  9. 14 minutes ago, JimmyGerms said:

    Hey all!

     

    I wanted to share my experience as well. Just got it up and running. Took a bit, but things seem to be smooth with the IPMI (for those who are pulling their hair out trying to get to the web interface: put 'https://' in front of your BMC IP).

     

    I did have issues with my RAM when I updated to the latest BIOS. It kept down-clocking it to 3200 instead of 4800, and I could not get the speeds back up without going back to BIOS version 2602.

     

    One question I had for the group: is there a way to tie the fan speed to HDD temps? I currently don't see a way without buying separate temp probes and using those in the fan settings. There doesn't seem to be a way to use anything other than CPU or temp probes. I learned the Fan tab in the IPMI tools plugin only appears for ASRock and Supermicro boards. Wonder if there's a way to hack that?

    This one in IPMI?

     

    [Screenshot 2023-09-18 at 20.12.02 attached]
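On tying fans to HDD temps without extra probes: a common workaround is a small cron script that reads drive temps with smartctl and pushes a duty cycle via ipmitool raw. The raw bytes below follow the Supermicro X10/X11 OEM convention (zone 0, duty 0-100); Asus and ASRock BMCs use different commands, so treat this as a sketch under that assumption, with placeholder disk names:

```shell
#!/bin/sh
# Sketch: map the hottest HDD temperature to a fan duty cycle.
# The ipmitool raw bytes are Supermicro-specific -- check your BMC docs.
DISKS="/dev/sda /dev/sdb"   # placeholder list

hottest_temp() {
    # Highest Temperature_Celsius raw value across DISKS (needs smartctl).
    for d in $DISKS; do
        smartctl -A "$d" | awk '/Temperature_Celsius/ { print $10 }'
    done | sort -n | tail -1
}

duty_for_temp() {
    # Linear ramp: 30% duty below 35 C, 100% at 50 C and above.
    t=$1
    if   [ "$t" -le 35 ]; then echo 30
    elif [ "$t" -ge 50 ]; then echo 100
    else echo $(( 30 + (t - 35) * 70 / 15 ))
    fi
}

apply_duty() {
    ipmitool raw 0x30 0x70 0x66 0x01 0x00 "$1"   # Supermicro OEM, zone 0
}

# e.g. from cron every few minutes:
#   apply_duty "$(duty_for_temp "$(hottest_temp)")"
duty_for_temp 42   # -> 62
```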

  10. I have this board, and I guess I'm the unlucky duck here. I had all kinds of trouble:

    • Startup sequence randomly stuck on F6, especially after adding/removing PCIe cards.
    • Startup sequence randomly stays on the OData Server info screen forever, or is fully stuck.
    • IPMI card loses contact with the mainboard; sometimes I have to physically remove/reinstall it to get it up and running.
    • IPMI remote connection drops out "all the time".
    • IPMI remote control does not follow the startup procedure to the end.
    • BIOS resets itself to default, mostly after turning main power off.

    I'm in contact with Asus Support, which is sometimes like talking to a demented person asking the same question over and over again.

     

    A question about this board: are the 2 PCIe 5.0 slots "connected"? My understanding is that if I run a 16-lane card in slot 1, there are only 4 lanes to the CPU from PCIe slot 2? Does this mean an 8-lane card in PCIe slot 2 will not work? Or will it work with 4 lanes of bandwidth?

     

    // 

  11. 10 minutes ago, mgutt said:

     

    Yes. The version you have installed seems to detect the package C-states but has problems with the CPU C-states:

     

    [image attached]

     

    Is this a problem? No. Powertop is mainly a monitoring tool that also sets some power-state functions of the hardware. So if it displays "C3_ACPI" it means something between C3 and C10, but they still work.

    Thanks. OK.

    10 minutes ago, mgutt said:

     

    Did you install the Intel GPU Top plugin to install the Intel iGPU drivers? Package C-states won't work on some hardware as long as the iGPU driver is not installed and the iGPU is not in the RC6 state. Scroll down the "Idle stats" page; at the bottom you find the iGPU status.

    Yes, the Intel GPU Top plugin is installed. I cannot see any reference to iGPU state at the bottom, see picture.

    [Screenshot 2023-09-10 at 23.14.44 attached]

     

     

     

     

    10 minutes ago, mgutt said:

     

    I'd say the BIOS settings look good. I think the problem is the board itself. I suggest:

    - remove all disks

    - boot Ubuntu through USB flash drive

    - execute powertop --auto-tune

    - execute powertop

    - confront Asus with the results

     

     

    I see. I will test that one day, after I get this mobo working correctly. It has a lot of issues: boot stops at F6 or A0, it loses the IPMI card's physical connection (often I need to remove the card, start the mobo, stop the mobo, and reinsert the IPMI card to get it up again). So I'm wrestling with Asus support already.

     

     

  12. Sorry guys, this is a long one.

     

    Thanks @mgutt for this great thread. I've been through all 19 pages of posts to get an understanding of the topic. I started with an AMD gamer PC which I almost never used. It ran at 160-180W without doing much. So I terminated the gaming PC, put the Nvidia GPU in Server1 and set up the gamer VM there.

     

    Then the plan was to set up a (somewhat) power-efficient Server 2 to run 24/7 for Plex, RoonServer, plus a few other dockers and possibly some Forex-trading Windows VMs that have to be up 24/7. I bought an LGA1700 mobo and an i7 processor with iGPU for transcoding. I set up a ZFS pool of 5 HDDs including 4 NVMe SSDs as special cache. I acquired a Highpoint 1508 for 8 NVMe SSDs because LGA1700 does not support PCIe x4/x4/x4/x4 bifurcation; the 1508 has switching on the card. The power usage came down to around 100W, if I remember correctly. But the rig was still too beefy. I then removed the water cooling, a 20W+ pump and 10 x 120 mm fans. The strange thing is this made only a 10W difference; my measurements were probably not that precise. I have had (and still have) major problems with the mobo. Sometimes it resets the BIOS and all the settings.

     

    Then I moved from HDDs to 8TB SATA SSDs, pulled the 4 NVMe cache drives and the 1508 HBA, removed the X540-T2 NIC and, today, the ConnectX-3 40GbE NIC. The ConnectX-3 draws 10W at idle.

     

    So, now power draw is about 30W idle with Plex and RoonServer running (no streaming). I want to add a high-speed NIC, maybe an Intel XXV710. That will pull some.

     

    But the big question: can I reduce power draw further from 30W on the current HW?

     

    • I am running powertop --auto-tune.
    • Powertop tunables are all Good.
    • Am I correct that Powertop does not support the newest CPUs?
    • Idle stats: Pkg reaches only C2 and CPU(OS) only C3.
    • ASPM enabled in BIOS; the specific ASPM entries are mostly set to a value other than Auto.
    • Turbo and Asus Performance disabled in BIOS.
    • The Asus tweaker (BIOS) cannot be disabled, only switched between 3 different settings.
    • Power draw is measured with a TP-Link smart plug.
    • No PCIe cards to remove - only the x1 IPMI card that came with the mobo.
    • ASPM status shows L1 enabled everywhere except the ASPEED AST1150 and an Intel device, see the ASPM report.
    • For a detailed HW listing, see server 2 in the signature.
    • Under "Commands" I ran "Enable IEEE 802.3az" and "Enable SATA link power management".
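For the record, the tunables in the list above (auto-tune, SATA link power management, 802.3az) can be pinned in a boot script so they survive reboots instead of being re-applied by hand. A sketch with a placeholder interface name; med_power_with_dipm is the usual AHCI policy, but check that your controller supports it:

```shell
#!/bin/sh
# Sketch: apply the power tunables at boot (e.g. from Unraid's go file).
apply_tunables() {
    powertop --auto-tune

    # Same effect as powertop's "Enable SATA link power management":
    for h in /sys/class/scsi_host/host*/link_power_management_policy; do
        [ -w "$h" ] && echo med_power_with_dipm > "$h"
    done

    # Energy Efficient Ethernet (IEEE 802.3az), if the NIC supports it
    # (eth0 is a placeholder):
    ethtool --set-eee eth0 eee on 2>/dev/null || true
}
# apply_tunables   # uncomment on the server itself
```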

     

    As usual, happy for any feedback.

     

    //

     

     

     

    [Screenshots attached: 2023-09-10 at 19.51.14, 16.34.07, 22.31.07, 22.31.30, and 22.32.12]

  13. On 9/2/2023 at 10:51 PM, frodr said:

    The plan this summer was to build Server2, kjell, into a fairly power-efficient server on the latest tech. Spinners were changed to SSDs and the GPU was removed. The server is running at 100-110W. Autotune changed lines from Bad to Good, but did not change power usage by more than 2-3W. Powersave is set in the BIOS as well as in the server settings. I had hoped for 70-80W at this point, but I guess the water cooling, with a pump drawing 20W+ and 10 fans, and a Highpoint 8-NVMe HBA with 5 NVMe drives need some power.

     

    I will change to passive cooling, remove the special/log NVMe drives from the pool and change to a smaller, hopefully more efficient power supply. No Corsair RM550 to be found, so maybe a Corsair RM750e V2.

     

    [Screenshot 2023-09-02 at 22.22.45 attached]

     

    I removed the Highpoint 1508, the 4 x 1TB NVMe drives, the Intel X540-T2 and a USB PCIe card. At first power usage was exactly the same as before, but when the SATA drives went to sleep power consumption was 60-65W. Tomorrow I will take down the water cooling, a pump running at 20W+ and 10 fans. After that the only hardware change left is the power supply, which today is a 1200W Xilence.

     

     

     

     

     

  14. The plan this summer was to build Server2, kjell, into a fairly power-efficient server on the latest tech. Spinners were changed to SSDs and the GPU was removed. The server is running at 100-110W. Autotune changed lines from Bad to Good, but did not change power usage by more than 2-3W. Powersave is set in the BIOS as well as in the server settings. I had hoped for 70-80W at this point, but I guess the water cooling, with a pump drawing 20W+ and 10 fans, and a Highpoint 8-NVMe HBA with 5 NVMe drives need some power.

     

    I will change to passive cooling, remove the special/log NVMe drives from the pool and change to a smaller, hopefully more efficient power supply. No Corsair RM550 to be found, so maybe a Corsair RM750e V2.

     

    [Screenshot 2023-09-02 at 22.22.45 attached]
