mgutt Posted September 2 Share Posted September 2

It's time for a new project. This time SSD only. I mean, theoretically I already finished it 😅 Joking aside, although I installed my second M.2 ASM1166 card and could connect up to 13 SSDs, I'm fairly sure this Lenovo M920q is already above its 5V amperage limit. 🙈 And as it's already consuming 17W at idle, I don't see a reason to stay with this "efficient tiny PC" 😅

The targets
- 24 SATA SSDs
- ECC RAM
- 2.5G Ethernet (could be onboard, a WiFi-slot adapter or a USB adapter, but it should allow deep C-states)
- compact and nice looking
- undercut 17 watts 😅

Where to put all those SSDs
First I thought of using these trio adapters (EZ-Fit MB610SP) to install 3 SSDs in a single 3.5 inch HDD slot, but they are sold out, and for 24 SSDs I would need a case with 8 HDD slots. This would work with the big 22 liter Silverstone DS380, but then I'd have to solve the cabling, as the backplane is at the back of the slots while the SSD connectors would be at the front: https://www.techspot.com/review/826-silverstone-ds380-nas/page2.html In addition it only accepts SFX power supplies (we don't know any with good low-load efficiency). Alternatively we could install additional 3.5 inch cages into a Lian Li Q25B, if you can still get one, as it has been out of production for many years: https://forums.unraid.net/topic/45070-hardware-question-build-skylake-and-lian-li-pc-q25b/#comment-442862 By the way, I own the bigger Lian Li Q26B. It has 10x 3.5 inch slots, but it's way bigger with its 32 liters. By using Icy Dock EZ-Fit Pro MB082SP dual adapters I could install up to 22 SSDs. I'll keep this option open... With much less hassle we could use 5.25 inch cages with 8x 2.5 inch hot swap bays.
For example:
- Icy Dock ExpressCage MB038SP-B https://de.icydock.com/product_1359.html ~ 125 €
- Icy Dock ToughArmor MB998IP-B https://de.icydock.com/product_169.html ~ 250 € (feature: SFF-8643, so much less cabling needed)
- Fantec MR-SA1082 https://www.fantec.de/produkte/serverprodukte/backplane-systeme/backplane-systeme-mr-serie/produkt/details/artikel/2196_fantec_mr_sa1082-1/ ~ 110 €

I started by buying the MB038SP-B. I mean... wow, these kinds of parts are expensive. I thought Chia mining was dead 🫣

How to connect all those SSDs
Aside from the 6 SATA ports of an ASM1166 card (of which we would need three to connect 24 SSDs), I did some research in the past on power-efficient HBA controllers, but they were extremely expensive. In the meantime the prices have dropped, and a few days ago I was able to buy a Broadcom 9500-16i for 300 €. It was sold by another Unraid user. I'm curious whether it reaches any deep C-states.

Which ITX mainboard with 8 SATA ports and ECC RAM support
As far as I know there is no power-efficient HBA supporting 24 SATA ports (?), so a mainboard with 8 onboard SATA ports is required (16 from the HBA + 8 onboard). Another requirement of mine is ECC RAM support. Which leaves us with the following:
- Gigabyte C246N-WU2, only available as a Chinese import for 300-400 €
- ASRock Industrial IMB-X1231 + M2X4-SATA-4P, presumably more than 500 € if you find a reseller, and power consumption is unknown
- ASRock Rack C246 WSI, sold out; I had it in the past and it needed more than 12 watts, which is worse than the Gigabyte

But let's not beat around the bush: I bought a very expensive Gigabyte C246N-WU2 🙈 This video fits this problem perfectly: I will install a Xeon E-2126G (6 cores, no HT, used, 120 €) and 64GB of ECC RAM.
Which ITX case offers enough space for multiple 5.25" bays
Surprisingly only two:
- iStarUSA S-35EX, 16 liters, https://www.istarusa.com/en/istarusa/products.php?model=S-35EX
- Urack P4003N0000, 12 liters, no shop found, https://www.urack.com.tw/product/mini-itx-nas-chassis/p4003n0000

So I ordered the iStarUSA S-35EX. Here is an installation video: Sadly it does not officially support installing an ATX power supply. But from the images it seems it could fit (in a similar position as in the Lian Li PC-Q25B, but this case is bigger). Nevertheless I will try to squeeze it in 😅 If it does not fit, at least I could install a huge CPU cooler: https://www.tweaktown.com/reviews/5936/istarusa-s35-mini-itx-nas-tower-chassis-review/index.html

How to power all those SSDs
That will be a problem. As the iStarUSA case only supports a Flex ATX power supply, I ordered the FSP FlexGURU PRO FSP500-50FDB: https://www.fsplifestyle.com/en/product/Flexguru500W.html According to the specs it delivers 16 amps over 5V: https://gzhls.at/blob/ldb/8/3/1/3/9643c7f2aaf12c55234d12d850b1afaa5c25.pdf According to Samsung, an 8TB QVO needs 5.5 watts in "burst mode", which I assume is random write: https://www.samsung.com/de/memory-storage/sata-ssd/ssd-870-qvo-sata-3-2-5-inch-8tb-mz-77q8t0bw/ Which means: 24 SSDs x 5.5 watts = 132 watts / 5 volts = 26.4 amps. Even if the Corsair RM550x (2021) ATX power supply fits, its 20 amps are too little as well. Although it should be very unusual to hit random writes on all SSDs in parallel inside the Unraid array. I think I will measure a pack of 8 SSDs once everything is set up. Otherwise I could use a 12V to 5V step-down converter like this one: https://www.ebay.de/itm/276038390560 (this one is crap, read some posts later!) Yeah, that's the plan for now. As soon as the parts arrive I will start assembling.

2.5G network
Maybe I'd have used the Intel I226-T1 2.5G card, but the PCIe slot is already occupied by the HBA card. I have to think about that.
Or I will use a bifurcation adapter, an M.2 adapter or a USB adapter. 🤷 7 Quote Link to comment
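The 5V budget from the post above can be sanity-checked with a quick calculation. The 24 drives, the 5.5 W burst figure per SSD and the 16 A rail rating are all taken from the post; the script itself is just a sketch:

```shell
# Worst-case 5V rail load, using Samsung's 5.5 W "burst mode" figure per 870 QVO
ssds=24
watts_per_ssd=5.5
rail_volts=5
awk -v n="$ssds" -v w="$watts_per_ssd" -v v="$rail_volts" \
    'BEGIN { printf "worst case: %.1f A on the 5V rail\n", n * w / v }'
# prints "worst case: 26.4 A on the 5V rail" - well above the FSP's rated 16 A
```

In practice random writes rarely hit all 24 drives at once, but the calculation shows why a 12V-to-5V step-down converter (or splitting the load) is on the table.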
bagican Posted September 2 Share Posted September 2 Nice! I think many people would be glad if you reposted this in the global (EN) subforum too. Quote Link to comment
Vr2Io Posted September 2 Share Posted September 2 (edited) The 5V requirement is the most troublesome part; you're gambling if you accept insufficient power there. If each SSD draws 1A+, that means an 8-bay cage has to handle 8A+, a real challenge. I keep wishing for the day when HDDs/SSDs only need a single 12V input. Edited September 2 by Vr2Io Quote Link to comment
mgutt Posted September 10 Author Share Posted September 10

I will translate the first post and move it to the international section later... Today I received the iStarUSA S-35EX. It has very good build quality, but I don't understand why they install heavy painted 5.25" steel covers when they just end up in the trash 🤔 In addition they still don't provide USB3, not to mention the missing USB-C 🙈 Moreover I'm missing the option to install a 120mm fan in the top (this would allow producing more negative pressure, which might make it possible to remove the tiny fans from the SSD cages). I started the project by installing the three Icy Dock MB038SP-B 8-bay cages: Then I tried to find a position for an ATX power supply:

Horizontal positioning
- this will cause problems with the SATA connectors - could maybe be solved if the top row of fans is removed and 90 degree SATA connectors are used
- partially covers the 120mm fan - could be solved if the 120mm fan is placed outside the case and the power supply moved to the left - or we use a flat 120mm fan like the Noctua NF-A12x15, but it could be too weak for this project 🤔
- needs drilling holes in the top of the lid so the hot air of the power supply can leave the case, or we need to rotate the power supply so its fan blows against the CPU cooler
- the power supply does not get cold air from outside the case
- leaves plenty of room for a CPU cooler

Vertical positioning
- we only have ~65 mm of space for a CPU cooler (73mm total - 6mm standoffs - 2mm mainboard)
- my planned CPU cooler, the Noctua NH-L12S, does not fit (needs 70 mm clearance)
- the Noctua NH-L9i needs only 37 mm clearance and should be OK (I use the Xeon E-2126G with 80W TDP, which seems to be within the allowed range)
- leaves plenty of room between the SATA connectors and the power supply
- allows installing a 25mm thick 120mm fan
- the power supply blows its hot air onto the 120mm fan (should be OK)
- the power supply does not get cold air from outside the case (drilling holes in the side of the lid could cause problems with the general airflow of the case)
- the ventilation holes are partially covered by the internal SSD bracket (which can't be removed, but maybe we could cut some holes into it)

But first I will measure the FSP FlexGURU PRO 500W power supply and compare it with the Corsair RM550x (2021) regarding low-load efficiency. Maybe I don't need this mod at all 🤷♂️ Quote Link to comment
mgutt Posted September 11 Author Share Posted September 11 I installed the Broadcom 9500-16i and the power consumption (with the RM550x power supply) rose from 5.8 to 11.6 watts while still reaching C7, which is nice. Sadly I'm not able to connect any SSDs to the HBA card, as many Aliexpress sellers use "SFF-8654" in their descriptions although the cables are equipped with SFF-TA-1016 connectors, and I received such a wrongly labeled cable 😠 It's really crazy how many wrong listings exist: SFF-8654 spec: https://members.snia.org/document/dl/26744 SFF-TA-1016 spec: https://members.snia.org/document/dl/33768 This means I need to wait an additional 2 weeks, and now I'm trying to get my money back 😒 Quote Link to comment
mgutt Posted September 11 Author Share Posted September 11 The next power measurement concerns an ExpressCage MB038SP-B with two SSDs, which I connected to the onboard SATA ports: In this scenario we reach C10 and the power consumption rose from 5.8 watts to 8 watts. After disconnecting both mini fans it dropped to 6.9 watts. Maybe I won't use them at all. I will decide this in a later stage of the project. But if they are needed, I will definitely replace them with Noctua NF-A4x20 FLX fans, as they are much quieter. Quote Link to comment
mgutt Posted September 12 Author Share Posted September 12

Today I received the Noctua NH-L9i, so I decided to assemble as much as possible. At first I started by drilling some holes and cutting the SSD bracket to allow better airflow for the power supply: This result was promising, but... stay tuned 😅 The next step was to install the mainboard with its components:
- Gigabyte C246N-WU2 mainboard
- Intel Xeon E-2126G (6 cores, no Hyper-Threading) CPU
- Samsung 2666 MHz 64GB ECC DDR4 RAM
- Noctua NH-L9i chromax.black CPU cooler
- Noctua NF-A12x25 PWM chromax.black 120mm case fan
- CableDeconn Mini SAS (SFF-8643) to 4x SATA

I closed the ventilation holes of the case lid. My targets:
- the fresh air should only flow through the SSD racks
- no need for extra SSD rack fans (as you can see in the last picture, I already removed them)

At this stage of the project I thought I could even add an x8x4x4 bifurcation adapter for the 2.5G card, but... Houston, we have a problem 🤪 So I removed the bifurcation adapter and... this is not a tight fit... it's bent to fit 🙈 But hey, it works 😅 To solve this I plan to remove most of the SSD bracket and install two square tubes vertically: 8 SATA SSDs are already working 😁 Power consumption at idle: 16.8 watts. No joke. That's a little less than the M920q frankenstein thing, while including the 9500-16i... but to be fair, without a 2.5G card. I'm already curious how much it will finally consume. Regarding the temps: looks promising at idle after some hours: After I receive my 8TB M.2 SSD I will post the results while the parity is being created. Quote Link to comment
mgutt Posted September 14 Author Share Posted September 14 As I installed the File Integrity plugin, I was able to check the temps while files were being hashed. After around 2 hours: I never received a warning. My warning temp is 65 °C: This is because the maximum allowed temperature of a Samsung QVO is 70 °C: So for 8 SATA SSDs a single 120mm fan seems to be sufficient. But lower temperatures are always better. Maybe I'm able to lower the position of the ATX power supply, so it would be possible to install a second 120mm fan in the top 🤷♂️ Quote Link to comment
mgutt Posted September 14 Author Share Posted September 14 I ordered this "ATX-E" bracket (model 6): https://de.aliexpress.com/item/1005006011082092.html?spm=a2g0o.order_list.order_list_main.5.5dee6c1b0XHD4X&gatewayAdapt=glo2deu to install the power supply on the top bar as follows: By that I get the needed clearance between the HBA card and the power supply, and I don't have to build a custom bracket on my own. Quote Link to comment
mgutt Posted September 15 Author Share Posted September 15 Upgraded the iStarUSA S-35EX to USB 3.0: Removed the front panel: Delock 82941 19-pin to dual USB 3.0 panel mount unit: The cable is much shorter, which is perfect: Of course I will never use them 😅 Quote Link to comment
mgutt Posted September 15 Author Share Posted September 15 I'm very proud to present to you the idea of converting 12V to 5V: 5V and 12V are working: Plugged in the 8-bay rack and... the step-down converter switches off. Not sure why. I even tried disconnecting 3.3V and 12V. And I tried to avoid any ground connection by removing the rack from the case, but still no luck. Maybe the step-down converter produces too much ripple noise?! I have now ordered two much more expensive and bigger DC-DC converters: Bauer Electronics DC-DC 8V-36V to 5V 10A/50W https://amzn.to/3Tw0y2i I think this is only a cheap Chinese part with a German sticker, but I'll give it a try. Mean Well RSD-60G-5 9V-36V to 5V 12A/60W https://amzn.to/3zhV0S8 If this doesn't work, nothing will. Quote Link to comment
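A quick sizing sanity check for these replacement converters (a sketch; the 5.5 W per SSD burst figure comes from the Samsung spec quoted in the first post):

```shell
# 5V load of one 8-bay cage at Samsung's 5.5 W burst figure per SSD
ssds=8
watts_per_ssd=5.5
rail_volts=5
awk -v n="$ssds" -v w="$watts_per_ssd" -v v="$rail_volts" \
    'BEGIN { printf "cage worst case: %.0f W / %.1f A at 5V\n", n * w, n * w / v }'
# prints "cage worst case: 44 W / 8.8 A at 5V"
```

So one cage stays inside the 10 A (50 W) and 12 A (60 W) ratings of the two converters ordered above, though with little headroom on the 50 W unit.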
mgutt Posted September 17 Author Share Posted September 17 I received the first SFF-8654 to 8x SATA cable (the expensive one from Areca). I connected an additional SSD rack and 3 SSDs: The power consumption rose by 2 watts to 19.9 watts: The C-state is still C7, which should be good for a setup with an HBA card (Broadcom 9500-16i): Spindown works, but only with the "SAS Spindown" plugin: But I got some AER errors 😒 Sep 17 14:42:58 horus kernel: pcieport 0000:00:01.0: AER: Corrected error message received from 0000:01:00.0 Sep 17 14:42:58 horus kernel: mpt3sas 0000:01:00.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, (Transmitter ID) Sep 17 14:42:58 horus kernel: mpt3sas 0000:01:00.0: device [1000:00e6] error status/mask=00001000/0000e000 Sep 17 14:42:58 horus kernel: mpt3sas 0000:01:00.0: [12] Timeout Because of that I shut down and restarted the server. I hope those errors do not appear again. Quote Link to comment
mgutt Posted September 17 Author Share Posted September 17 On 9/17/2024 at 3:03 PM, mgutt said: But I had some AER errors 😒 ... Because of that I shut down and restarted the server. I hope those errors do not appear again. Ok, they haven't come back so far. Nice. On 9/15/2024 at 9:19 PM, mgutt said: Bauer Electronics DC-DC 8V-36V to 5V 10A/50W https://amzn.to/3Tw0y2i Installed and it works. This DC-DC converter now delivers the 5V power for the top rack, which means 8 SSDs at up to 6.25 watts each: According to my measurement it does not have any impact on idle power consumption, which is even nicer. The next step was a little risky, because I already drilled the hole and installed the 80 mm fan: It's risky because at the moment I'm not able to close the lid anymore, as the new power supply mounting bracket is still on its way. So I hope it will turn out as planned 😬 With the lid closed, in theory, the fan's position should be here: Used for this mod:
- NF-A8 PWM chromax.black.swap 80 mm fan
- Euroharry hex fan grille 80mm (I only needed the grille, the plate went to the trash ^^)

And last but not least I installed all the SSDs I could find: If the bifurcation card fits, I could finally add the 2.5G card and two 8TB NVMe SSDs to get dual parity protection. P.S. These are my File Integrity settings: As you can see, I do weekly checks and every SSD has its own task, which means a single SSD needs 14 weeks to get rechecked. Maybe I will change this to two SSDs per week 🤔 Quote Link to comment
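The recheck cadence above is simple to reason about: with one per-SSD task per week, the full cycle equals the number of tasks (14 here, matching the settings in the post). A quick sketch:

```shell
# File Integrity recheck cycle: one hash task runs per week
tasks=14           # one task per installed SSD (from the settings above)
per_week=1
echo "full recheck cycle: $(( tasks / per_week )) weeks"   # prints 14

# checking two SSDs per week halves the cycle
per_week=2
echo "with two per week: $(( tasks / per_week )) weeks"    # prints 7
```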
mgutt Posted September 21 Author Share Posted September 21 Still waiting for the power supply bracket, but I received the extra-short 15cm SATA cables, the SFF-8654 to 8x SATA breakout cable from Aliexpress and the WD Black SN850X 8TB NVMe: Cabling: But: the machine does not boot anymore. Removed the NVMe. It boots... 😒 As I knew that mainboards with the C246 chipset had problems booting with a Seagate IronWolf NVMe in the past, I tried updating the BIOS from F1 to F4: But sadly: still a black screen. I tested a Samsung Evo Plus 2TB and a WD Black SN750 1TB without problems, so the slot is working properly. Last chance: I opened a support ticket via Gigabyte eSupport. I hope they have a custom BIOS which covers this problem. 🙏 Quote Link to comment
mgutt Posted September 22 Author Share Posted September 22 Ok, I found a "solution". It still does not boot with the WD Black 8TB in the M.2 slot (chipset), but it works with the x8x4x4 bifurcation riser: But as mentioned above: I need more clearance to use it. At the moment the power supply is sitting outside the case 😅 If it works, I could even install a second NVMe on the other side of the card: As everything was already disassembled, I even found a "solution" for the 2.5G network problem. With an M.2 to PCIe x4 riser I was able to run a 2.5G I225 card in the onboard M.2 chipset slot, so I ordered this card: https://de.aliexpress.com/item/1005005575635263.html It has the advantage of being installed directly without any riser, and I'm able to move the Ethernet port to the backpanel of the case. P.S. One of my last tests will be to replace the HBA card with two ASM1166 cards. This would only allow 12 additional ports = 20 SATA ports in total, but I want to know how much energy I could save using those instead. Building parity is limited to 4.3 GB/s. I'm not sure which part causes the limitation (I'm fine with it, it's just a matter of interest). 9 of the SSDs are currently connected to the HBA card: Quote Link to comment
mgutt Posted September 22 Author Share Posted September 22 4 hours ago, mgutt said: Building parity is limited to 4.3 GB/s. I'm not sure which part causes the limitation (I'm fine with it, it's just a matter of interest) At first I thought it was the memory bandwidth limit per CPU core, but after it had created more than 4TB of parity data and started skipping the 4TB SSDs, the read speed didn't become much faster, as I would have expected: So there must be a different reason for this limitation 🤔 I started checking the bandwidth of the HBA card with lspci -vv, and yes, it seems I have a problem: LnkSta: Speed 8GT/s (downgraded), Width x4 (downgraded) The downgrade to 8 GT/s means I'm using PCIe 3.0, which is expected as my board does not support PCIe 4.0 (16 GT/s). But the width of only x4 is not correct; it should be x8. After the parity is created I will try a different bifurcation riser and/or go without the riser to find the reason for this behaviour. But: although 8 GT/s at x4 results in a limit of 4 GB/s... I'm currently limited to 3.2 - 0.3 = 2.9 GB/s (8 of the currently active SSDs are connected to the HBA and one to the mainboard's chipset). So I think there must still be another reason. The NVMe looks fine (downgraded from PCIe 4.0 to PCIe 3.0, but x4 width): LnkSta: Speed 8GT/s (downgraded), Width x4 The next idea was to test the speed of every SSD, to rule out that a single SSD is limiting the total read speed of the array, with the following command: dd if=/dev/sdX of=/dev/null bs=128k iflag=count_bytes count=10G In parallel I checked the dashboard, and no... every SSD is able to reach more than 500 MB/s. I then paused the parity creation to obtain the maximum total read speed: for letter in e f g h i k l m n; do dd if="/dev/sd$letter" of=/dev/null bs=128k iflag=count_bytes count=10G & done Which was around 4.1 GB/s according to the dashboard.
Here are the single results: 10737418240 bytes (11 GB, 10 GiB) copied, 16.5498 s, 649 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 19.2425 s, 558 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 19.922 s, 539 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 21.3399 s, 503 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 21.7646 s, 493 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 22.1851 s, 484 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 22.5027 s, 477 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 23.2565 s, 462 MB/s 10737418240 bytes (11 GB, 10 GiB) copied, 24.1979 s, 444 MB/s Some data was already in the RAM, so I repeated it, but emptied the caches first: echo 3 >/proc/sys/vm/drop_caches But again it reached 4.1 GB/s in total. So where do I lose ~1 GB/s?! The next step was to check the CPU load. And I think that's the real cause of the problem: the process "unraidd0" is single-threaded, and my CPU core is not able to calculate more than 330 to 360 MB/s of parity data: I will discuss this point in this thread: Feedback regarding the wrong HBA bandwidth will follow in a few days, after I have tested everything. Quote Link to comment
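A quick way to see the gap: summing the nine single-disk dd rates quoted above gives what the drives can deliver individually, versus the ~4.1 GB/s actually observed in parallel (the numbers are copied straight from the output above):

```shell
# Sum the per-disk dd read rates (MB/s) quoted above and convert to GB/s
printf '%s\n' 649 558 539 503 493 484 477 462 444 |
  awk '{ sum += $1 } END { printf "combined single-disk rate: %.1f GB/s\n", sum / 1000 }'
# prints "combined single-disk rate: 4.6 GB/s" - about 0.5 GB/s above the measured aggregate
```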
mgutt Posted September 25 Author Share Posted September 25 The plan was good, but sadly the ATX power supply does not fit. But first, what I tried: I bought an additional power supply bracket with fast delivery, as the other one from Aliexpress is announced to arrive in several weeks 🤨 But this one was good enough, too. It is a bracket for the Cooler Master NR200 and looks as follows: After some cutting: As explained above, I wanted to place it on the bar, and surprisingly the two original threads fit perfectly: So I shortened the SSD bracket, and at first it seemed to work as planned: But sadly the clearance did not increase enough 😭 I even tried to mount the custom bracket underneath the bar to move the power supply even further to the side, but it is still not enough: Now I have two options left: 1.) Move the PCIe slot. Yes, that would be possible by using an angled PCIe riser: and a reverse angled PCIe riser: But I'm not sure how stable this construction would be and if it even fits. Let's give it a try 😅 2.) Measure the efficiency of the RM750x Shift. This is the only power supply which has its connectors on the side: https://www.corsair.com/de/de/p/psu/cp-9020251-eu/rm750x-shift-80-plus-gold-fully-modular-atx-power-supply-cp-9020251-eu In my case they would be on top, exactly as shown in this image: Quote Link to comment
mgutt Posted October 1 Author Share Posted October 1

Finally everything turned out well... At first I measured the Corsair RM750x Shift. Although it has so much more power, it is comparable with the RM550x (2021) and consumes only 0.4 watts more at idle:
- Corsair RM550x (2021): 7.15 watts
- Corsair RM750x Shift: 7.51 watts

Bending the cables worked as planned: But the fan was a huge problem. As I was dumb and didn't wait for the final power supply position, the hole was now in a completely wrong position: So I decided to buy the 15mm flat 92mm Noctua fan and drilled a new hole: And... the fan still didn't fit, as the power supply sat too high... 😒 But... the long-awaited Aliexpress power supply bracket arrived: https://de.aliexpress.com/item/1005006011082092.html (SYJ-ATX-E "Color: 6") and my luck came back, as it allowed lowering the install position of the power supply by around 3 mm, which left enough clearance for the top fan 🙏 The disadvantage is that I now have only 4mm of clearance between the CPU fan and the power supply, but I don't think this will cause any problems: A little side task: an unusual mounting method for the HBA card: Connected the 92mm fan before closing the case: Done: This is the fan cover used (currently without the dust filter): https://www.amazon.de/dp/B09LYBMP85 And after a 2 hour parity check the temps look good: Note: the fans are set to "quiet mode" in the BIOS (sadly it is not possible to control the fans through Linux).

Next steps:
- Install and measure a 2.5G M.2 card
- Replace the HBA card with two ASM1166 cards (compare power consumption and performance)
- Check if the bifurcation adapter supports x8 and/or whether the HBA card has a defect (as it is only running at x4)

But at the moment I'm just happy with the result 🎉🥳 Quote Link to comment
TheLinuxGuy Posted October 2 Share Posted October 2 On 9/2/2024 at 8:06 AM, mgutt said: Broadcom 9500-16i for 300 €. It was sold from a different Unraid user. I'm excited if it reaches any deep C-states. Hi! Could you share your observations about this card? A few questions:
- What is the idle power consumption without any disks attached? And the C-states in that state?
- What is the idle power consumption during an array rebuild, and the C-states?

I wonder if any SAS3808 controller will do - does anyone know if Chinese fakes of this device are as common as they are for LSI cards? I assume there are cheap fakes on eBay, so I must be careful. Quote Link to comment
mgutt Posted October 2 Author Share Posted October 2 1 minute ago, TheLinuxGuy said: Hi! could you share your observations about this card? A few questions * What is the idle power consumption without any disks attached? C states on the same? * What is the idle power consumption during array rebuild and C? I already measured it: https://forums.unraid.net/topic/174221-24-bay-itx-server-in-16l-case-ssd-only/?do=findComment&comment=1463847 It consumed an additional 2 watts after connecting several SSDs. So the total consumption should be 5 to 7 watts. But: I'm not sure if the card worked at x8 in the x16 slot. At the moment the card runs only in x4 mode. Maybe this influences the C-states. But in the end, ASM1166 SATA cards would be more efficient (they idle at 2 watts). 1 Quote Link to comment
PPH Posted October 3 Share Posted October 3 Hi @mgutt, I can see from your various screenshots that you have chosen to use a traditional unRAID array with (SATA) SSDs. I thought SSDs within the unRAID array were not fully supported (issues with TRIM?). Do you know if these issues are no longer valid and SSDs are now suitable for use within a traditional unRAID array? Thanks. Quote Link to comment
mgutt Posted October 3 Author Share Posted October 3 17 minutes ago, PPH said: I thought SSDs within the unRAID Array was not fully supported (issues with TRIM?). As long as TRIM is disabled, it is fully supported. Some of the Unraid devs claim that TRIM can cause parity corruption, because a single SSD model returned unexpected data after TRIM. It seems nobody did more tests, and since then SSDs have been flagged as "unsupported" in the array, which is a contradiction, as without TRIM data corruption can't happen (TRIM was then disabled in the array by Unraid). More info: https://forums.unraid.net/topic/53433-ssds-as-array-drives-question/?do=findComment&comment=1088459 And here is the theoretical discussion of how Unraid could solve this, but obviously it was never realized: https://forums.unraid.net/topic/73110-ssd-array-for-unraid/page/2/#comment-923513 And performance impacts shouldn't be noticeable, as Unraid's parity creation in the array is extremely slow anyway. Even if we lost 50% of the write speed, we would still see up to ~1.8 GB/s for an NVMe PCIe 3.0 x4, or as in my example 100% of the SATA read speed (~550 MB/s) during parity creation. But we don't reach the maximum, as Unraid's parity creation already fully utilizes a single CPU core. This can be discussed here: https://forums.unraid.net/topic/102498-is-parity-check-rebuild-single-threaded/ Conclusion: SSDs can be used without any problems and they are not slower. 1 Quote Link to comment
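The parity concern can be illustrated with a toy XOR example (a sketch only - real parity covers whole sectors, but the principle is the same): parity is the XOR of all data disks, so if a trimmed SSD later returns different bytes than the ones parity was computed from, a rebuild reconstructs wrong data. With TRIM disabled, this mismatch can't occur.

```shell
# Two "data disk" bytes and their XOR parity, as Unraid would compute it
d1=170; d2=85                 # 0xAA and 0x55
parity=$(( d1 ^ d2 ))         # 0xFF

# Normal rebuild of d2 from d1 and parity
echo "rebuilt d2: $(( d1 ^ parity ))"              # prints 85 - correct

# If d1's region was trimmed and now reads zeros, the rebuild goes wrong
d1_after_trim=0
echo "rebuilt d2 after trim: $(( d1_after_trim ^ parity ))"   # prints 255 - corrupted
```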
PPH Posted October 3 Share Posted October 3 31 minutes ago, mgutt said: As long as TRIM is disabled, it is fully supported. ... Thank you for the detailed response. 👍 Quote Link to comment
wpm Posted October 13 Share Posted October 13 On 10/2/2024 at 12:46 AM, mgutt said: Note: The fans are set to "quiet mode" in the BIOS (sadly it is not possible to control the fans through Linux). @mgutt just a side note: install the ITE IT87 Driver plugin, and after a reboot the fans should be controllable via the Dynamix Auto Fan Control plugin. It's been working fine for me for a couple of years. Quote Link to comment