Everything posted by Michael_P

  1. As long as it's not a molded connector, it should be fine. I had one catch fire on me and found that guy's videos a week or so after. It had been in my desktop machine for years, then one night, poof. Don't hang too many drives off of one run to the PSU, either - voltage sag can be a problem and cause drives to drop out of the array.
  2. You can also punch down a Molex or SATA connector to suit, as needed
  3. You can test the link itself with iperf
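     For example (a minimal sketch, assuming iperf3 is installed on both machines, and 192.168.1.10 is a placeholder for the server's address):

         # On one end, start an iperf3 server:
         iperf3 -s

         # On the other end, run the client against it:
         iperf3 -c 192.168.1.10

         # Add -R to test the reverse direction, or -P 4 for four parallel streams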
  4. Those drives are known for terrible write performance
  5. Try a brain-dead reset on it: https://www.apc.com/us/en/faqs/FA156611/
  6. Also, make sure you spread the load over multiple runs from the PSU to the drives. Don't hang them all off of one line with splitters.
  7. You may need to re-calibrate it: https://www.apc.com/us/en/faqs/FA284198/
  8. Here's a post to get you started
  9. My little 15U setup, from the top down:

     Shelf:
     • D-Link DKVM-4U KVM switch
     • AC Infinity Airplate T3 fan controller for the rack fans

     pfSense firewall/router - Supermicro SuperServer 5018A-MLTN4:
     • Intel Atom C2550
     • 16GB DDR3 1600
     • PNY 120GB SSD

     BV-Tech PoE switch (not in service)
     TP-Link JetStream T1700G-28TQ smart switch
     Netgear ProSafe GS724TPS PoE smart switch
     HP 1U KVM console
     CyberPower PR1500LCDRT2U UPS

     Unraid server - NORCO RPC-4224:
     • Thermaltake Toughpower Grand 850W PSU
     • GIGABYTE Z370XP SLI
     • Intel Core i7-8700
     • 32GB DDR4 2400
     • 2x LSI SAS 9207-8i
     • Intel RES2SV240 expander
     • Mellanox MNPA19-XTR 10Gb network card
     • 2x Samsung 860 EVO 500GB SSD
     • 12x WD 8TB, 3x Toshiba 6TB, 1x Toshiba 5TB, 1x WD 4TB
     • 1x WD Purple 3TB, unassigned, for the Blue Iris DVR

     All fans have been swapped out for Noctuas, except for the stock CPU cooler in the server and the AC Infinity fans I swapped into the rack. I've added 20mm fans to each card in the server just to be sure they stay cool and quiet. I left the stock cooler in place as the CPU isn't taxed very often, or for very long when it is.

     Docker containers: CrashPlanPro, mariadb, nextcloud, NginxProxyManager, piwigo

     Virtual machines:
     • Ubuntu - runs the Omada controller software, which manages the two TP-Link EAP245 wireless APs
     • Windows Home Server 2011 - backs up my Windows clients and runs my torrent and SABnzbd clients, along with Subsonic for remote music access. I'll move it all to Dockers eventually, but it works pretty darned well, so I leave it alone.
     • Windows 10 - Blue Iris host for monitoring and storage of eight 5MP PoE cameras and two wireless ones
     • Windows 10 - playing around with it for when I eventually have to move from W7
  10. It depends on the controller, connections, and cables - any communication problem and the drive will drop out of the array
  11. I'd do the parity upgrade rebuild in maintenance mode just in case SHTF
  12. It wouldn't explain the freezes, but it's worth investigating. I had a dodgy power button on my WMC box; it kept randomly (but cleanly) shutting down in the middle of the night. It took a few months for me to figure out what was happening.
  13. If you plan on using those for your Unraid build, you're in for a world of pain. You need a proper enclosure and an HBA to address them. Using USB and/or eSATA port multipliers isn't recommended, as they'll just drop drives out of the array randomly - if you can get them all to show up at all.
  14. You can disable it, but the server's shares may stop showing up under 'Network' on Windows 10 clients; you can still browse to them manually by server name or IP
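      For example, even with discovery broken you can still reach the shares directly from Explorer's address bar or the Run dialog (TOWER is a placeholder server name):

          \\TOWER
          \\192.168.1.10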
  15. FWIW - last weekend I was troubleshooting an issue, so I had to un-rack my server and, through sheer laziness, left the USB thumb drive hanging off the back. While re-racking, it snapped off... I was back up in less than 20 minutes, from download to re-installation.
  16. Look at the specs: https://www.asus.com/us/Motherboards/PRIME-X470-PRO/specifications/ - the CPU has 20 PCIe 3.0 lanes: 16 for the two x16 slots (either one full-speed x16 device like a GPU, or two devices at x8 each), plus 4 lanes for storage via the first M.2 slot. The chipset has 8 PCIe 2.0 lanes: 4 for the last x16 slot (it will run at x4), 3 for the x1 slots, and 2 for storage via the second M.2 slot, though really only 1 is available. So that's your budget: 28 lanes. One or two GPUs will need at most 16 (x16 for one, or x8 times two), an HBA needs x8 for best throughput, and that leaves x4 for a NIC or an M.2 drive. Just remember that the M.2 slots normally share PCIe lanes with other slots, so read the specs for the available configuration options.
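      As a hypothetical worked budget for a build like that (the slot assignments are illustrative; check the board manual for the actual lane-sharing rules):

          CPU - 20 lanes of PCIe 3.0:
            GPU in slot 1 ........ x8  (slots 1 and 2 drop to x8/x8 when both are filled)
            HBA in slot 2 ........ x8
            NVMe in M.2_1 ........ x4
          Chipset - 8 lanes of PCIe 2.0:
            10Gb NIC in slot 3 ... x4  (the slot runs at x4 electrically)
            x1 slots / M.2_2 ..... x4  (shared, so read the manual)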
  17. I figured the same thing, "it's an 850W power supply, what could go wrong..."
  18. This. I was having problems with drives dropping from the array and replaced HBAs, cables, and drives until I figured out that voltage drops were causing random drives to fall out of the array.
  19. Do you have all of the 4224's backplanes plugged into 1 cable coming off of the power supply?
  20. Not for that workload; you'd just be wasting power