Posts posted by punk

  1. On 10/10/2019 at 11:51 PM, LordShaaver said:

    Hi.

     

    Did you ever replace the stock fans? I also have a SilverStone RM420 case and I replaced my stock fans with four Noctua 80mm fans (NF-A8 PWM). It works and the server is quiet, but in the summer, for example, the fans aren't quite enough.

    If you replaced the fans, what modification did you have to do on the case to get the fans to fit?

     

    Thanks!

     

    I did, but I went a somewhat unconventional route. I designed and 3D printed a replacement fan shroud that holds three 140mm fans and populated it with Noctua NF-A14 PPC-3000 PWM fans. The setup seemed to work well. The 8x 15K SAS HDDs I had installed hovered around 35-38C with low to moderate use, which remained GREEN temperature-wise on UnRAID's dashboard. Temperatures on the SSDs that were installed were inconsequential. The room it's in is around 23C. Unfortunately, the server is torn down / offline at the moment for an unrelated rebuild, so I'm unable to pull CPU temps, etc.

     

    Noise-wise, they remained audible at all times, but it was a low background hum that was almost unnoticeable. In fact, I didn't realize how used to them I'd gotten until I shut the server off ... the office is creepy quiet now, and the white noise from the fans really helped drown out the neighbors, etc.

     

    The NF-A8s are far too wimpy, unfortunately. 😞

     

    What I've found interesting is the different fans that different server/case manufacturers build into their chassis. It makes me curious where their data comes from, what kind of tests they perform, what they design for, etc. For example, a 2U Supermicro box that I'm working on right now has 4x 80mm fans up front that are 11,000 RPM, 116 CFM, 54.3 mmH2O, 62.5 dBA monsters - for a 2U box with 12 drives. Meanwhile, the 4U UnRAID box I built with the same specs (same CPUs, same RAM, etc.) came with four 80mm fans with the specs in my original post.

     

    EDIT: Both CPUs were actively cooled in my RM420 as well, which helped the Noctua fans pass muster.

  2. Confirmed a 9300-8i (P16) resolves the TRIM issues with BTRFS.

    root@arnold:~# fstrim -av
    /etc/libvirt: 920.7 MiB (965439488 bytes) trimmed
    /mnt/disks/vm_ssd_1: 219 GiB (235152539648 bytes) trimmed
    /mnt/cache: 202.6 GiB (217502179328 bytes) trimmed

     

  3. If I replace the intake fans in my chassis, how high a static pressure rating do I need to overcome the constraints of the chassis' HDD bays? Numerous Google searches have yielded no really helpful information, and I suspect the answer isn't as simple as I hope it is. I ran into similar issues quantifying what "high static pressure" meant when people made watercooling radiator fan suggestions - when comparing fan models, "high static pressure" would often mean around 2.0 mmH2O versus others at 0.5 or lower. Compared to a fan that can push 5 or 10 or 15 mmH2O, 2 mmH2O doesn't seem that high.
     
    I have a SilverStone RM420 that comes with 4x DYNATRON / Top Motor 80x80x40mm PWM fans, P/N DF128038BH-PWM. They're rated for:

    Speed		Noise		Airflow		Static pressure
    1000 RPM (20%)	18.2 dBA	17.1 CFM	0.62 mmH2O
    2500 RPM (50%)	38.1 dBA	42.8 CFM	3.95 mmH2O
    5000 RPM (100%)	53.2 dBA	85.5 CFM	15.8 mmH2O

     

    They're not running at 100%, but I'd be hard-pressed to guess where they're at. Fan control is handled by the SAS backplane's PWM fan headers and temperature sensor, so I can't tell the exact speed of the fans. I intend to leave them connected there for now, since that regulates fan speed based on the ambient temperature of the HDD bay in hardware. HDDs sit around 35C at idle, and the fans provide enough airflow to keep my magma-hot LSI HBAs at around 50C.

     

    I want to replace them with three 140mm fans to see if I can push the noise level down a little bit. I've seen quite a few posts about the Norco 4224 or similar where users replaced their fan wall with the 120mm variant (or the zip-tie variant), threw in some Noctua NF-F12s (or equivalents), and saw decent results both noise- and temperature-wise.

     

    If I replace them with 3x Noctua NF-A14 PPC-3000s I'm sure I'd be close enough, but I'm curious whether I could get by with a PPC-2000 or even a plain NF-A14 - and whether it's possible to determine that without going out and buying all three types of fans, trying them all, and returning the rest. 😁 (A rough back-of-the-envelope estimate follows the spec table below.)

     

    Fan			Speed (100%)	Noise		Airflow		Static pressure
    NF-A14 PPC-3000 PWM	3000 RPM	41.3 dBA	158.5 CFM	10.52 mmH2O
    NF-A14 PPC-2000 PWM	2000 RPM	31.5 dBA	107.4 CFM	4.18 mmH2O
    NF-A14 PWM		1500 RPM	24.6 dBA	82.5 CFM	2.08 mmH2O

     

    Thanks!

  4. I've tossed around the idea of moving my cache drive (redundancy isn't needed in my case) to a PCIe adapter card and leaving all 8 ports on the 9207-8i free for "other" use. It's that, or, for the sake of a skewed sense of "doing it right," I spend ~$300 on a 9300-8i - assuming I can find a genuine one.

  5. 17 minutes ago, johnnie.black said:

    This might help explain why I can't find many other Linux users complaining about the LSI trim issue: if they're using ext4, it looks like trim still works. I'd still expect at least some xfs users to complain, though maybe not so much with btrfs, since it's likely much less used.

    I failed to mention that the other "disks" that succeeded are also btrfs. I assume these are loop devices backed by image files that live somewhere else, so the trim punches holes in the files rather than hitting the HBA.

    root@Tower:/var/log# df -Th
    Filesystem     Type       Size  Used Avail Use% Mounted on
    ...
    /dev/loop2     btrfs       20G   17M   18G   1% /var/lib/docker
    /dev/loop3     btrfs      1.0G   17M  905M   2% /etc/libvirt
    ...
    /dev/sdc1      btrfs      224G   17M  223G   1% /mnt/cache
    /dev/sdb1      ext4       220G   60M  208G   1% /mnt/disks/vm_ssd_1

    If I could switch the cache to ext4, I'd be set.

  6. I'm building a box running unRAID 6.6.6 with a 9207-8i in IT mode (firmware 20.00.00.00 at the moment; I plan to update to 20.00.07.00) that has two Intel S3500s connected via the SAS backplane - one as cache and one as a VM boot disk. I've experienced the same issue reported here, but with a possible twist: despite being connected to the same controller (different ports), one SSD will TRIM and the other will not.

     

    root@Tower:/var/log# hdparm -I /dev/sdc1|grep TRIM
               *    Data Set Management TRIM supported (limit 4 blocks)
               *    Deterministic read ZEROs after TRIM
    root@Tower:/var/log# hdparm -I /dev/sdb1|grep TRIM
               *    Data Set Management TRIM supported (limit 4 blocks)
               *    Deterministic read ZEROs after TRIM

     

    (These are 'limit 4 blocks' while LSI states 'limit 8 blocks' - unsure if this matters, but the results below suggest it doesn't.)

     

    root@Tower:/var/log# fstrim -av
    /mnt/disks/vm_ssd_1: 0 B (0 bytes) trimmed
    fstrim: /mnt/cache: FITRIM ioctl failed: Remote I/O error
    /etc/libvirt: 922.2 MiB (966979584 bytes) trimmed
    /var/lib/docker: 18 GiB (19329028096 bytes) trimmed

     

    You can see that the cache disk fails, but the VM boot disk succeeds. Note this was a 2nd or 3rd run, so no data was TRIM'd on vm_ssd_1; on the first pass, ~200GB was TRIM'd. Other than the connected port, the only difference is the file system.

     

    root@Tower:/var/log# df -Th
    Filesystem     Type       Size  Used Avail Use% Mounted on
    ...
    /dev/sdc1      btrfs      224G   17M  223G   1% /mnt/cache
    /dev/sdb1      ext4       220G   60M  208G   1% /mnt/disks/vm_ssd_1

     

    I thought it was odd enough to mention. I'm a few onboard SATA ports short of making this work without the HBA, so it looks like a 9300-8i or bust at this point.
