
24-Bay ITX Server in 16L case (SSD-only)


Recommended Posts

It's time for a new project. This time it's SSD-only. I mean, theoretically I already finished it 😅

image.thumb.png.1bcb7e8246b4847e1b1f82fe3a218287.png

 

Jokes aside: although I installed my second M.2 ASM1166 card and could connect up to 13 SSDs, I'm fairly sure this Lenovo M920q is already above its 5V amperage limit. 🙈

 

And as it's already consuming 17W at idle, I don't see a reason to stick with this "efficient tiny PC" 😅

 

The targets

- 24 SATA SSDs

- ECC RAM

- 2.5G Ethernet (could be onboard, a Wi-Fi slot adapter or a USB adapter, but it should allow deep C-states)

- compact and nice looking 

- undercut 17 watts 😅

 

Where to put all those SSDs

First I thought about using these trio adapters (EZ-Fit MB610SP) to install 3 SSDs in a single 3.5-inch HDD slot, but they are sold out, and for 24 SSDs I would need a case with 8 HDD slots. This would work with the big 22-liter Silverstone DS380, but then I would have to solve the cabling, as the backplane sits at the back of the slots while the SSD connectors would end up at the front?!

https://www.techspot.com/review/826-silverstone-ds380-nas/page2.html

image.png.337d0071b1e7c9e84cbdb376624a036b.png

 

In addition, it only accepts SFX power supplies (and we don't know of any that is efficient at low load).

 

Alternatively, we could install additional 3.5-inch cages into a Lian Li PC-Q25B, if you can still get one, as it has been out of production for many years:

https://forums.unraid.net/topic/45070-hardware-question-build-skylake-and-lian-li-pc-q25b/#comment-442862

image.png.60ca685d249abd1ab30b30a3173df4e8.png

 

By the way, I own the bigger Lian Li PC-Q26B. It has 10x 3.5-inch slots, but it is way bigger at 32 liters. By using Icy Dock EZ-Fit Pro MB082SP dual adapters I could install up to 22 SSDs. I'm keeping this option open...

 

With much less hassle, we could use 5.25-inch cages with 8x 2.5-inch hot-swap bays. For example:

- Icy Dock ExpressCage MB038SP-B https://de.icydock.com/product_1359.html ~ 125 €

- Icy Dock ToughArmor MB998IP-B https://de.icydock.com/product_169.html ~ 250 € (feature: SFF-8643, so much less cabling needed)

- Fantec MR-SA1082 https://www.fantec.de/produkte/serverprodukte/backplane-systeme/backplane-systeme-mr-serie/produkt/details/artikel/2196_fantec_mr_sa1082-1/ ~ 110 €

 

I started by buying the MB038SP-B. I mean... wow, these kinds of parts are expensive. I thought Chia mining was dead 🫣

image.thumb.png.614b1d78b7cf363d484508fc9e790597.png

 

How to connect all those SSDs

 

Aside from the 6 SATA ports of an ASM1166 card (of which we would need three to connect 24 SSDs), I did some research in the past on power-efficient HBA controllers, but they were extremely expensive. In the meantime the prices have dropped, and a few days ago I was able to buy a Broadcom 9500-16i for 300 € from another Unraid user. I'm curious whether it allows any deep C-states.

image.thumb.png.85fcccb519d2553c5457e792724edd35.png
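For reference, a minimal way to watch this later is to read the core idle states straight from sysfs; the package C-states (the "PC" numbers) are easier to read from a tool like powertop or turbostat:

# cumulative residency per core idle state since boot (microseconds), cpu0 as an example
for s in /sys/devices/system/cpu/cpu0/cpuidle/state*; do
  printf '%-8s %15s us\n' "$(cat "$s/name")" "$(cat "$s/time")"
done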

 

Which ITX mainboard with 8 SATA ports and ECC RAM support

As far as I know, there is no power-efficient HBA supporting 24 SATA ports (?), so a mainboard with 8 onboard SATA ports is required (16 from the HBA + 8 onboard). Another requirement of mine is ECC RAM support. Which leaves us with the following:

- Gigabyte C246N-WU2, only available as a Chinese import for 300-400 €

- ASRock Industrial IMB-X1231 + M2X4-SATA-4P, presumably more than 500 € if you find a reseller, and the power consumption is unknown

- ASRock Rack C246 WSI, sold out; I had it in the past and it needed more than 12 watts, which is worse than the Gigabyte

 

But let's not beat around the bush: I bought a very expensive Gigabyte C246N-WU2 🙈

 

This video fits this problem perfectly:

 

I will install a Xeon E-2126G (6 cores, no HT, bought used for 120 €) and 64GB of ECC RAM.

 

Which ITX Case offers enough space for multiple 5.25" bays

Surprisingly only two:

- iStarUSA S-35EX, 16 liters, https://www.istarusa.com/en/istarusa/products.php?model=S-35EX

- Urack P4003N0000, 12 liters, no shop found, https://www.urack.com.tw/product/mini-itx-nas-chassis/p4003n0000

 

So I ordered the iStarUSA S-35EX. Here is an installation video:

 

Sadly it does not officially support installing an ATX power supply. But from the images it seems one could fit (in a similar position as in the Lian Li PC-Q25B, though this case is bigger). Nevertheless, I will try to squeeze it in 😅

 

If it does not fit, at least I could install a huge CPU cooler:

https://www.tweaktown.com/reviews/5936/istarusa-s35-mini-itx-nas-tower-chassis-review/index.html

image.png.fdf8732b0763154d8bdd3f927c0636eb.png

 

 

How to power all those SSDs

That will be a problem. As the iStarUSA case only supports a Flex ATX power supply, I ordered the FSP FlexGURU PRO FSP500-50FDB:

https://www.fsplifestyle.com/en/product/Flexguru500W.html

image.png.4242385a6684ff4b8a9cc398c4736bdb.png

 

According to the specs, it delivers 16 amps on the 5V rail:

https://gzhls.at/blob/ldb/8/3/1/3/9643c7f2aaf12c55234d12d850b1afaa5c25.pdf

image.png.b581f5096ce80e4637ee4763066bb396.png

 

According to Samsung, an 8TB QVO needs 5.5 watts in "Burst Mode", which I assume means random write:

https://www.samsung.com/de/memory-storage/sata-ssd/ssd-870-qvo-sata-3-2-5-inch-8tb-mz-77q8t0bw/

image.png.f0f38586953e27b75ad5c88521de986a.png

 

Which means:

24 SSDs x 5.5 W = 132 W; 132 W / 5 V = 26.4 A

 

Even if the Corsair RM550x (2021) ATX power supply fits, its 20 amps would be too little as well. Then again, it should be very unusual to see random writes on all SSDs in parallel inside the Unraid array. I think I will measure a pack of 8 SSDs once everything is set up. Otherwise I could use a 12V to 5V step-down converter like this one:

https://www.ebay.de/itm/276038390560 (this one is crap, read some posts later!)

image.png.f247df6762838eaf9914e76b0f9e2afe.png

 

Yeah, that's the plan for now. As soon as the parts arrive I will start assembling.

 

2.5G network

Maybe I'll use the Intel I226-T1 2.5G card, but the PCIe slot is already occupied by the HBA card. I have to think about that. Or I will use a bifurcation adapter, an M.2 adapter or a USB adapter. 🤷

  • Like 7
Link to comment
  • mgutt changed the title to ITX Server with 24 SATA SSDs

The 5V requirement is the most troublesome part; you're gambling if you provision insufficient power for it.

If each SSD draws 1A+, that means an 8-bay cage has to handle 8A+, which is really a challenge.

 

I always wish for the day when HDDs / SSDs only need a single 12V input.

Edited by Vr2Io
Link to comment

I will translate the first post and move it to the international section later...

 

Today I received the iStarUSA S-35EX. The build quality is very good, but I don't understand why they install heavy painted 5.25-inch steel covers when they end up in the trash anyway 🤔 In addition, they still don't provide USB 3, not to mention the missing USB-C 🙈 Moreover, I'm missing the option to install a 120mm fan in the top (this would allow creating more negative pressure, which might make it possible to remove the tiny fans from the SSD cages).

 

I started the project by installing the three Icy Dock MB038SP-B 8-Bay Cages:

 

image.thumb.png.1fd7536f78ce5b548bfb2353a2d24864.png

image.thumb.png.44b55105ead0936c38bd4f1dd8395e59.png

 

Then I tried to find a position for an ATX power supply:

 

Horizontal positioning

- this will cause problems with the SATA connectors

- could maybe be solved if the top row of fans is removed and 90-degree SATA connectors are used

- partially covers the 120mm fan

- could be solved if the 120mm fan is placed outside of the case and the power supply is moved to the left

- or we use a flat 120mm fan like the Noctua NF-A12x15, but it could be too weak for this project 🤔

- requires drilling holes in the top of the lid so the hot air of the power supply can leave the case, or we need to rotate the power supply so its fan blows towards the CPU cooler

- the power supply does not get cold air from outside of the case

- leaves plenty of room for a CPU cooler

 

image.thumb.png.fa9f895dc10049554d02a0dc4aa6c6f5.png

 

image.thumb.png.4788e119d095975d4f2cfd894b53002e.png

 

 

Vertical positioning

- we only have ~65 mm of space for a CPU cooler (73mm total - 6mm standoffs - 2mm mainboard)

- my planned CPU cooler, the Noctua NH-L12S, does not fit (needs 70 mm clearance)

- the Noctua NH-L9i needs only 37 mm clearance and should be OK (I use the Xeon E-2126G with 80W TDP, which seems to be within the allowed range)

- leaves plenty of room between the SATA connectors and the power supply

- allows installing a 25mm thick 120mm fan

- the power supply blows its hot air onto the 120mm fan (should be OK)

- the power supply does not get cold air from outside of the case (drilling holes in the side of the lid could disturb the general airflow of the case)

- the ventilation holes are partially covered by the internal SSD bracket (which can't be removed, but maybe we could cut some holes into it)

 

image.thumb.png.239aadcba16bdfeaa70cab9b0c648300.png

 

image.png.c2e6521f514345729d8a90a20d9d333d.png

 

But first I will measure the FSP FlexGURU PRO 500W power supply and compare its low-load efficiency with the Corsair RM550x (2021). Maybe I don't need this mod at all 🤷‍♂️

 

 

 

Link to comment

I installed the Broadcom 9500-16i and the power consumption (with the RM550x power supply) rose from 5.8 to 11.6 watts while still reaching C7, which is nice.

image.thumb.png.f872323eb7840f420e7a18cf52d8a835.png

 

Sadly I'm not able to connect any SSDs to the HBA card, as many of the Aliexpress sellers use "SFF-8654" in their descriptions although the cables are equipped with SFF-TA-1016 connectors, and I received such a wrongly labeled cable 😠

image.thumb.png.f7f9aecdd804ce60f257da106e334654.png

 

It's really crazy how many wrong listings exist:

image.thumb.png.cca8fbfae2c7135bb9e180aee179903e.png

 

 

SFF-8654 spec:

https://members.snia.org/document/dl/26744

image.png.399d863e3869a9dc21d1b5ca26223da2.png

 

SFF-TA-1016 spec:

https://members.snia.org/document/dl/33768

image.png.e9658f6cddd2234e3d1401b52ea3ab73.png

 

This means I have to wait an additional 2 weeks, and now I'm trying to get my money back 😒

Link to comment

The next power measurement covers an ExpressCage MB038SP-B with two SSDs, which I connected to the onboard SATA ports:

 

2024-09-1123_12_04.thumb.png.5a3c7c5dbab42e6e6244416230d372a8.png

 

In this scenario we reach C10 and the power consumption rose from 5.8 watts to 8 watts.

 

2024-09-1123_14_31.png.7f9452e6e45f8195b97eeaba382eb02a.png

 

After disconnecting both mini fans it dropped to 6.9 watts. Maybe I won't use them at all. I will decide this at a later stage of the project. But if they are needed, I will definitely replace them with Noctua NF-A4x20 FLX fans, as they are much quieter.

Link to comment

Today I received the Noctua NH-L9i, so I decided to assemble as much as possible.

 

First I drilled some holes and cut the SSD bracket to allow better airflow for the power supply:

image.thumb.png.6c11720ca0d3dd41e32135200194e736.png

 

This result was promising, but ... stay tuned 😅

 

The next step was to install the mainboard with its components:

- Gigabyte C246N-WU2 mainboard

- Intel Xeon E-2126G (6 cores, no Hyper-Threading) CPU

- Samsung 2666 MHz 64GB ECC DDR4 RAM

- Noctua NH-L9i chromax black CPU cooler

- Noctua NF-A12x25 PWM chromax black 120mm case fan

- CableDeconn Mini SAS (SFF-8643) to 4x SATA

 

image.thumb.png.6de0b05629ca107f923c6df88164052a.png

 

Closed the ventilation holes of the case lid. My targets:

- the fresh air should only flow through the SSD racks

- no need for extra SSD rack fans (as you can see in the last picture I already removed them)

image.thumb.png.400c621a57def3abbdb6743db7e0f7ee.png

image.thumb.png.ee55db92c82bea5e7ced86af84d93b40.png

 

 

At this point of the project I thought I could even add an x8x4x4 bifurcation adapter to fit the 2.5G card, but ...

image.thumb.png.7b403f155d626428c0cb22a583a517d5.png

 

... Houston we have a problem 🤪

image.thumb.png.0b358e3ec90dc057500a878743187b1d.png

 

So I removed the bifurcation adapter and ... this is not a tight fit... it's bent to fit 🙈

image.thumb.png.694bf57d1aa8de88b981525176880df8.png

 

But hey, it works 😅

image.thumb.png.a2c98866ce560d3fc940ab7a279740a8.png

 

image.thumb.png.2df334cb960f41385b5c2a53d057b3cc.png

 

image.thumb.png.7ecb086024bd6e46d8021798ffc9ce42.png

 

To solve this, I plan to remove most of the SSD bracket and install two square tubes vertically:

image.thumb.png.4eec561650fb84906f18b84dfcf2b6c3.png

 

8 SATA SSDs are already working 😁

image.thumb.png.746900a04b188e3b75de5c765cb4c938.png

 

 

Power consumption at idle: 16.8 watts

 

No joke. That's a little less than the M920q Frankenstein setup, and it already includes the 9500-16i... though to be fair, without a 2.5G card. I'm already excited to see how much it will consume in the end.

 

Regarding the temps: they look promising at idle after a few hours:

image.png.156f7fe6486146bf1f198a0798eab4d4.png

 

image.thumb.png.1fe7a70866e8de955606fb4ab4b1e060.png

 

After I receive my 8TB M.2 SSD, I will post the results from the parity creation.

 

Link to comment

Since I installed the File Integrity plugin, I was able to check the temps while files were being hashed.

 

After around 2 hours:

 

image.png.77ed412711ad18599f2695fc1da5df59.png

 

image.png.d5fd978bdbf3f294a491edaa28498092.png

 

So I never received a warning. My warning temp is 65 °C:

image.png.2b7109e186bc49a8f6397d0a3cec9625.png

 

This is because the maximum allowed temperature of a Samsung QVO is 70 °C:

image.png.1574fefe467c8087b834cda0d8f174f0.png

 

So for 8 SATA SSDs, a single 120mm fan seems to be sufficient. But lower temperatures are always better. Maybe I'm able to lower the position of the ATX power supply, so a second 120mm fan could be installed in the top 🤷‍♂️
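For a quick spot check from the shell (assuming smartmontools is installed; the device range below is only an example and the temperature attribute name differs between models):

# print the raw temperature attribute of each drive; adjust the device list to your setup
for dev in /dev/sd{b..i}; do
  echo -n "$dev: "
  smartctl -A "$dev" | awk '/Temperature/{print $10; exit}'
done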

Link to comment

I'm very proud to present my idea for converting 12V to 5V:

 

image.png.50bf83bab7c632235385d7b41be577be.png

 

image.png.b57215ccafd10ae80d033f570befabbb.png

 

image.png.16000e3235736ab74c05e51deb15f31a.png

 

5V and 12V are working:

image.thumb.png.3618e834ed029eef5897341717ed5b31.png

 

I plugged in the 8-bay rack and ... the step-down converter switches off. Not sure why. I even tried disconnecting 3.3V and 12V, and I tried to avoid any ground connection by removing the rack from the case, but still no luck. Maybe the step-down converter produces too much ripple noise?!

 

I now ordered two much more expensive and bigger DC-DC converters:

 

Bauer Electronics DC-DC 8V-36V to 5V 10A/50W

https://amzn.to/3Tw0y2i

I think this is just a cheap Chinese part with a German sticker, but I'll give it a try.

 

Mean Well RSD-60G-5 9V-36V to 5V 12A/60W

https://amzn.to/3zhV0S8

If this doesn't work, nothing will work.

 

 

Link to comment

I received the first SFF-8654 to 8x SATA cable (the expensive one from Areca). I connected an additional SSD rack and 3 SSDs:

image.png.d5106171ca5d1d82dc8ebb0f20c66e43.png

 

The power consumption rose by 2 watts to 19.9 watts:

image.png.ce150c32cabc27fa7c7d3fd811150563.png

 

The C-state is still C7, which should be good for a setup with an HBA card (Broadcom 9500-16i):

image.png.2072b86c2fc3a17ccf59ca9a3cfe2ad0.png

 

Spindown works, but only with the "SAS Spindown" Plugin:

image.thumb.png.0544d1fcd53bf234ed12ab0993d7bc19.png
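Just for illustration (this is not necessarily what the plugin does internally, and for SSDs "spindown" simply means a low-power idle state): a SCSI stop/start can also be sent manually to a drive behind the HBA with sg3_utils, e.g.:

sg_start --stop /dev/sdf          # send STOP UNIT to an (example) device behind the HBA
smartctl -n standby -i /dev/sdf   # '-n standby' skips the query if the drive is already sleeping
sg_start --start /dev/sdf         # wake it up again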

 

But I had some AER errors 😒

Sep 17 14:42:58 horus kernel: pcieport 0000:00:01.0: AER: Corrected error message received from 0000:01:00.0
Sep 17 14:42:58 horus kernel: mpt3sas 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID)
Sep 17 14:42:58 horus kernel: mpt3sas 0000:01:00.0:   device [1000:00e6] error status/mask=00001000/0000e000
Sep 17 14:42:58 horus kernel: mpt3sas 0000:01:00.0:    [12] Timeout 

 

Because of that I shut down and restarted the server. I hope those errors do not appear again.
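A simple way to keep an eye on whether they return is to grep the kernel log from time to time:

dmesg | grep -iE 'AER|PCIe Bus Error' | tail -n 20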

 

 

Link to comment
On 9/17/2024 at 3:03 PM, mgutt said:

But I had some AER errors 😒

...

Because of that I shut down and restarted the server. I hope those errors do not appear again.

OK, they haven't come back so far. Nice.

 

On 9/15/2024 at 9:19 PM, mgutt said:

Bauer Electronics DC-DC 8V-36V to 5V 10A/50W

https://amzn.to/3Tw0y2i

Installed and working. This DC-DC converter now delivers the 5V power for the top rack, which means 8 SSDs at up to 6.25 watts each:

image.thumb.png.32fd399096a9c44f0b1b6ef00fe730b7.png

 

image.thumb.png.f9ad75fe5846416e27054d88e1a65489.png

 

According to my measurement it has no impact on idle power consumption, which is even nicer.

 

The next step was a little bit risky because I already drilled the hole and installed the 80 mm fan:

image.thumb.png.6ea856a6ea40bef3575cddf0966db9d2.png

 

image.thumb.png.cbe02f99aeb0af219cf8928f169baa75.png

 

It's risky because at the moment I'm not able to close the lid anymore, as the new power supply mounting bracket is still on its way. So I hope it turns out as planned 😬

 

With the lid closed, in theory, the fan's position should be here:

image.thumb.png.4e106326f64c1f7d03e6ea2c73bc1cfc.png

 

Used for this mod:

- NF-A8 PWM chromax.black.swap 80 mm fan

- Euroharry hex fan grille 80mm (I only needed the grille, the plate went to the trash ^^)

 

And last but not least I installed all SSDs I found:

image.thumb.png.a93aa8f3c35c5f9c67b953aac5152e9e.png

 

If the bifurcation card fits, I could finally add the 2.5G card and two 8TB NVMe SSDs to get dual parity protection.

 

P.S. These are my File Integrity settings:

image.thumb.png.0282a11ec5821dee8906189f647b9ff0.png

 

As you can see, I do weekly checks and every SSD has its own task, which means it takes 14 weeks until a single SSD gets rechecked. Maybe I will change this to two SSDs per week 🤔

Link to comment
  • mgutt changed the title to 24-Bay ITX Server in 16L case (SSD-only)

Still waiting for the power supply bracket, but I received the extra-short 15cm SATA cables, the SFF-8654 to 8x SATA breakout cable from Aliexpress and the WD Black SN850X 8TB NVMe:

image.thumb.png.91f963611a88e0e897d3c70ca8eeba86.png

 

Cabling:

image.thumb.png.fc7164ed37a29ebe5473096c49946135.png

 

image.thumb.png.6db123d9f83cdc38d5a1eec4aa035b99.png

 

 

But: The machine does not boot anymore. Removed the NVMe. Boots... 😒

 

As I knew that mainboards with the C246 chipset had problems booting with a Seagate IronWolf NVMe in the past, I tried updating the BIOS from F1 to F4:

 

image.thumb.png.16938408ef0e18aba1b5357963f8c80f.png

 

But sadly: Still black screen.

 

I tested a Samsung Evo Plus 2TB and WD Black SN750 1TB without problems. So the slot is working properly.

 

Last chance: I opened a support ticket via the Gigabyte eSupport. I hope they have a Custom BIOS which covers this problem. 🙏

 

Link to comment

OK, I found a "solution". It still does not boot with the WD Black 8TB in the M.2 slot (chipset), but it works in the x8x4x4 bifurcation riser:

 

image.thumb.png.9ad01d32e45cd16c7e7b04dc5d44853b.png

 

But as mentioned above: I need more clearance to use it. At the moment, the power supply is sitting outside of the case 😅

 

If it works, I could even install a second NVMe on the other side of the card:

image.thumb.png.c98ee856a7a1f2fff540475979d09b8c.png

 

As everything was already disassembled, I even found a "solution" for the 2.5G network problem. With an M.2 to PCIe x4 riser I was able to run a 2.5G I225 card in the onboard M.2 chipset slot, so I ordered this card:

https://de.aliexpress.com/item/1005005575635263.html

image.thumb.png.63e75ac6163a787c6f021bc228c45a67.png

 

It has the advantage of being installed directly without any riser, and I'm able to move the Ethernet port to the back panel of the case.

 

 

P.S. One of my last tests will be to replace the HBA card with two ASM1166 cards. This would only allow 12 additional ports = 20 SATA ports in total, but I want to know how much energy I could save by using those instead.

 

Building parity is limited to 4.3 GB/s. I'm not sure which part causes the limitation (I'm fine with it, it's just a matter of interest)

image.png.c5e1be26caeb725c677aab90b607cdc6.png

 

9 of the SSDs are currently connected to the HBA card:

image.thumb.png.b571471d313d8c43b6644c6710871679.png

 

 

 

 

 

Link to comment
4 hours ago, mgutt said:

Building parity is limited to 4.3 GB/s. I'm not sure which part causes the limitation (I'm fine with it, it's just a matter of interest)

At first I thought it was the memory bandwidth limit per CPU core, but after it had created more than 4TB of parity data and started skipping the 4TB SSDs, the read speed didn't become as much faster as I would have expected:

image.png.f89dff3b026118677a329c5d940d52ed.png

 

So there must be a different reason for this limitation 🤔

 

I started checking the bandwidth of the HBA card with lspci -vv and yes, it seems I have a problem:

LnkSta: Speed 8GT/s (downgraded), Width x4 (downgraded)

image.png.2b87e69d4b12ec2ccac137e952a53d10.png

 

The downgrade to 8 GT/s means I'm using PCIe 3.0, which is expected, as my board does not support PCIe 4.0 (16 GT/s). But the width of only x4 is not correct. It should be x8. After the parity is created I will try a different bifurcation riser and/or run without the riser to find the reason for this behaviour.
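A quick way to narrow this down is to compare what the card advertises (LnkCap) with what was actually negotiated (LnkSta), on both the HBA and its upstream root port (the addresses below are taken from the AER log above; adjust them to your topology):

lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'   # the 9500-16i
lspci -vv -s 00:01.0 | grep -E 'LnkCap:|LnkSta:'   # its upstream root port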

 

But: although 8 GT/s at x4 already means a limit of about 4 GB/s... I'm actually limited to 3.2 - 0.3 = 2.9 GB/s (8 of the currently active SSDs are connected to the HBA and one to the mainboard's chipset).

 

So I think there must still be another reason.

 

The NVMe looks fine (downgraded from PCIe 4.0 to PCIe 3.0, but X4 width):

LnkSta: Speed 8GT/s (downgraded), Width x4

 

 

The next idea was to test the speed of every SSD with the following command, to rule out that a single SSD is limiting the total read speed of the array:

dd if=/dev/sdX of=/dev/null bs=128k iflag=count_bytes count=10G

 

In parallel I checked the dashboard, and no... every SSD is able to reach more than 500 MB/s.

 

I then paused the parity creation to obtain the maximum read speed in total:

for letter in e f g h i k l m n; do dd if="/dev/sd$letter" of=/dev/null bs=128k iflag=count_bytes count=10G & done

 

Which was around 4.1 GB/s according to the dashboard. Here are the individual results:

10737418240 bytes (11 GB, 10 GiB) copied, 16.5498 s, 649 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 19.2425 s, 558 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 19.922 s, 539 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 21.3399 s, 503 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 21.7646 s, 493 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 22.1851 s, 484 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 22.5027 s, 477 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 23.2565 s, 462 MB/s
10737418240 bytes (11 GB, 10 GiB) copied, 24.1979 s, 444 MB/s

 

Some data was already in the RAM, so I repeated it, but emptied the caches first:

echo 3 >/proc/sys/vm/drop_caches

 

But again it reached 4.1 GB/s in total.
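An alternative to dropping the caches is to bypass the page cache entirely with O_DIRECT; the same parallel test (same drive letters as above) would look like this:

# O_DIRECT reads never touch the page cache, so no drop_caches is needed;
# 'wait' blocks until all dd processes have finished
for letter in e f g h i k l m n; do
  dd if="/dev/sd$letter" of=/dev/null bs=128k iflag=direct,count_bytes count=10G &
done
wait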

 

So where do I lose ~1 GB/s?!

 

The next step was to check the CPU load. And I think that's the real cause of the problem: the process "unraidd0" is single-threaded, and one CPU core is not able to calculate more than 330 to 360 MB/s of parity data:

 

image.thumb.png.bfcf5c37db2be3114cc78442b2d70937.png
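For reference, the per-thread load can be checked like this (the thread name is taken from the screenshot above; adjust it if yours differs):

top -b -H -n 1 | grep -i unraidd          # batch mode, show threads, one iteration
ps -eLo pcpu,comm --sort=-pcpu | head     # or: the busiest threads system-wide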

 

I will discuss this point in this thread:

 

Feedback regarding the wrong HBA link width will follow in a few days, once I've tested everything.

 

 

 

Link to comment

The plan was good, but sadly the ATX power supply does not fit.

 

 

But at first what I tried:

 

I bought another power supply bracket with fast shipping, as the one from Aliexpress is not expected to arrive for several weeks 🤨

 

But this other bracket turned out to be good enough, too. It is a bracket for the Cooler Master NR200 and looks like this:

image.png.d849cb7df0d1a279999803be9ab3a287.png

 

After some cutting:

image.thumb.png.588e15da4ee1ca8d544cc9744af1aadc.png

 

image.png.05237ae166fd49ba94e6cf36372caca0.png

 

As explained above, I wanted to place it on the bar, and surprisingly the two original threaded holes fit perfectly:

image.thumb.png.1aae1b65f1ec686eb08d61e392dc0d45.png

 

So I shortened the SSD bracket and at first it seemed to work as planned:

image.thumb.png.b9d1ef3bf4cc9431d18e74a408a4c473.png

 

But sadly the clearance did not increase enough 😭

 

I even tried to mount the custom bracket underneath the bar, to move the power supply even more to the side, but it is still not enough:

image.thumb.png.795fe7ca20a0a175691a67d6ccaaa060.png

 

image.thumb.png.2f703e3e9cf821d3c6ade59dfd57f754.png

 

 

Now I have two options left:

 

1.) Move the PCIe slot. Yes, this would be possible by using an angled PCIe riser:

image.png.b69a40128e8a4801149d9dbe2d138db3.png

 

and Reverse Angled PCIe Riser:

image.png.858a342bf1cf6b5bfcb7070dbe540edd.png

 

But I'm not sure how stable this construction would be and if it even fits. Let's give it a try 😅

 

2.) Measure the efficiency of the RM750x Shift. This is the only power supply which has its connectors on the side:

https://www.corsair.com/de/de/p/psu/cp-9020251-eu/rm750x-shift-80-plus-gold-fully-modular-atx-power-supply-cp-9020251-eu

 

In my case they would be on the top, exactly as shown in this image:

image.png.124c60c9a7819992aa0aea9985c0ebdb.png

 

 

Link to comment

Finally everything turned out well...

 

First I measured the Corsair RM750x Shift. Although it has much more power, it is comparable to the RM550x (2021) and consumes only 0.4 watts more at idle:

 

Corsair RM550x (2021) 7.15 watts

image.thumb.png.143520adc899bfcd6a6d9ab4ee1c3b1e.png

 

Corsair RM750x Shift 7.51 watts

image.thumb.png.ec396fdda64bb85d29ed06aafa592bab.png

 

Bending the cables worked as planned:

image.thumb.png.6b425c3cce763e04ab501683230808b2.png

 

But the fan was a huge problem. As I was dumb and didn't wait for the final power supply position, the hole was now in a completely wrong position:

image.thumb.png.f6357bf2ac41989eaba5b633aafba20e.png

 

So I decided to buy the 15mm flat 92mm Noctua fan and drilled a new hole:

image.thumb.png.022cac60741dcb9b3dd0fd96d25e220a.png

 

And... the fan still didn't fit, as the power supply sat too high... 😒

 

But.. the long awaited delivery of the Aliexpress Power Supply bracket arrived:

https://de.aliexpress.com/item/1005006011082092.html (SYJ-ATX-E "Color: 6")

image.thumb.png.7d9395293d6fccaabd839aea70ad5130.png

 

and my luck came back, as it allowed lowering the power supply's mounting position by around 3 mm, which left enough clearance for the top fan 🙏

image.thumb.png.29be3e50f9447573ec42e0a2a17e4d10.png

 

The disadvantage is that I now only have 4mm of clearance between the CPU fan and the power supply, but I don't think this will cause any problems:

image.thumb.png.558baf8666fc210abcc4416f048e6380.png

 

A little side task: an unusual mounting method for the HBA card:

image.thumb.png.b87d2a508d821b04a8f8ecd5fa2cbf8d.png

 

Connected the 92mm fan before closing the case:

image.thumb.png.fd96fced0308033f0d4881938e732b33.png

 

Done:

image.thumb.png.6a8faa2a7681e1f1af28a7db547d0808.png

 

image.thumb.png.de59b5f03a3fa4879e35631a9c262c8b.png

 

This is the used fan cover (currently without the dust filter):

https://www.amazon.de/dp/B09LYBMP85

 

And after a 2 hour parity check the temps look good:

image.thumb.png.dfce7adfaccad9b548cca83504b0a288.png

 

Note: The fans are set to "quiet mode" in the BIOS (sadly it is not possible to control the fans through Linux).
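For anyone who wants to verify that on their own board: if the kernel's hwmon interface doesn't expose any pwm files for the Super I/O chip, there is simply nothing for a fan-control script to hook into:

# list detected hwmon chips and any PWM outputs they expose
for hw in /sys/class/hwmon/hwmon*; do
  echo "$hw: $(cat "$hw/name")"
  ls "$hw"/pwm[0-9] 2>/dev/null
done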

 

Next steps:

  • Install and measure a 2.5G M.2 card
  • Replace the HBA card with two ASM1166 cards (compare power consumption and performance)
  • Check whether the bifurcation adapter supports x8 and/or the HBA card has a defect (as it currently only runs at x4)

But at the moment I'm just happy with the result 🎉🥳

Link to comment
On 9/2/2024 at 8:06 AM, mgutt said:

Broadcom 9500-16i for 300 € from another Unraid user. I'm curious whether it allows any deep C-states.


Hi! Could you share your observations about this card? A few questions:

* What is the idle power consumption without any disks attached, and which C-states does it reach?

* What is the power consumption during an array rebuild, and which C-states?

 

I wonder if any SAS3808 controller will do. Does anyone know whether Chinese fakes of this device are as common as they are for LSI cards... I assume there are cheap fakes on eBay, so I have to be careful.

Link to comment
1 minute ago, TheLinuxGuy said:

Hi! Could you share your observations about this card? A few questions:

* What is the idle power consumption without any disks attached, and which C-states does it reach?

* What is the power consumption during an array rebuild, and which C-states?

I already measured it:

https://forums.unraid.net/topic/174221-24-bay-itx-server-in-16l-case-ssd-only/?do=findComment&comment=1463847

 

 

It consumed an additional 2 watts after connecting several SSDs. So the total consumption of the card should be 5 to 7 watts.

 

But: I'm not sure if the card was working at x8 in the x16 slot. At the moment the card only runs in x4 mode. Maybe this influences the C-states.

 

But in the end, ASM1166 SATA cards would be more efficient (they idle at 2 watts).
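Whether an add-in controller lets the package reach deep C-states mostly comes down to ASPM, which can be checked quickly (a rough sketch; the exact output depends on kernel and BIOS settings):

cat /sys/module/pcie_aspm/parameters/policy     # kernel-wide ASPM policy
lspci -vv | awk '/^[0-9a-f]/{dev=$0} /ASPM.*(Enabled|Disabled)/{print dev"\n   "$0}'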

  • Like 1
Link to comment

Hi @mgutt,

I can see from your various screenshots that you have chosen to use a traditional unRAID Array using (SATA) SSDs.

I thought SSDs within the unRAID Array were not fully supported (issues with TRIM?).

Do you know if these issues are no longer valid and SSDs are now suitable for use within a traditional unRAID Array?

Thanks.

Link to comment
17 minutes ago, PPH said:

 

I thought SSDs within the unRAID Array were not fully supported (issues with TRIM?).

As long as TRIM is disabled, it is fully supported. Some of the Unraid devs claim that TRIM can cause parity corruption, because a single SSD model returned unexpected data after a TRIM. It seems nobody did further tests, and since then SSDs have been flagged as "unsupported" in the array, which is a contradiction: without TRIM, this data corruption can't happen (and Unraid has since disabled TRIM in the array anyway). More info:

https://forums.unraid.net/topic/53433-ssds-as-array-drives-question/?do=findComment&comment=1088459
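To double-check on a live system that discard is really out of the picture for the array members, something like this works (the device name is just an example):

lsblk --discard /dev/sdf   # DISC-GRAN / DISC-MAX of 0 means no discard is possible on this path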

 

 

And here is the theoretical discussion about how Unraid could solve this, but obviously it was never implemented:

https://forums.unraid.net/topic/73110-ssd-array-for-unraid/page/2/#comment-923513

 

 

And performance impacts shouldn't be noticeable, as Unraid's parity creation in the array is extremely slow anyway. Even if we lost 50% of the write speed, we should still see up to ~1.8 GB/s for an NVMe PCIe 3.0 x4 drive, or, as in my example, 100% of the SATA read speed (~550 MB/s) during parity creation. But we don't reach the maximum, as Unraid's parity creation already fully utilizes a single CPU core. This can be discussed here:

https://forums.unraid.net/topic/102498-is-parity-check-rebuild-single-threaded/

 

 

Conclusion: SSDs can be used without any problems and they are not slower. 

  • Thanks 1
Link to comment
31 minutes ago, mgutt said:


Conclusion: SSDs can be used without any problems and they are not slower. 

Thank you for the detailed response. 👍

Link to comment
