Michael_P

Members · 669 posts

Everything posted by Michael_P

  1. May need to re-calibrate it https://www.apc.com/us/en/faqs/FA284198/
  2. Here's a post to get you started
  3. My little 15U setup, from the top down:

     Shelf:
       • D-Link DKVM-4U KVM switch
       • AC Infinity Airplate T3 fan controller for the rack fans

     pfSense firewall/router: Supermicro SuperServer 5018A-MLTN4
       • Intel Atom C2550
       • 16GB DDR3 1600
       • PNY 120GB SSD

       • BV-Tech POE switch (not in service)
       • TP-Link JetStream Smart Switch T1700G-28TQ
       • Netgear ProSafe GS724TPS POE smart switch
       • HP 1U KVM console
       • CyberPower PR1500LCDRT2U

     Unraid server: NORCO RPC-4224
       • Thermaltake Toughpower Grand 850W power supply
       • GIGABYTE Z370XP SLI
       • Intel Core i7-8700
       • 32GB DDR4 2400
       • 2x LSI SAS 9207-8i
       • Intel RES2SV240 expander
       • Mellanox MNPA19-XTR 10Gb network card
       • 2x Samsung 860 EVO 500GB SSD
       • 12x WD 8TB, 3x Toshiba 6TB, 1x Toshiba 5TB, 1x WD 4TB
       • 1x WD Purple 3TB, unassigned, for the Blue Iris DVR

     All fans have been swapped out for Noctuas, except for the stock CPU cooler in the server and the AC Infinity fans I swapped into the rack. I've added 20mm fans to each card in the server just to be sure they'll stay cool and quiet. I left the stock cooler in place as the CPU isn't taxed very often, or for very long when it is.

     Docker containers: CrashPlanPro, mariadb, nextcloud, NginxProxyManager, piwigo

     Virtual machines:
       • Ubuntu - runs the EAP Omada controller software, which controls the two TP-Link EAP245 wireless APs
       • Windows Home Server 2011 - backs up my Windows clients and runs my torrent and SABnzbd clients, along with Subsonic for remote music access. I'll move it all to dockers eventually, but it works pretty darned well so I leave it alone.
       • Windows 10 - Blue Iris host for monitoring and storage of eight 5MP POE cameras and two wireless ones.
       • Windows 10 - playing around with it for when I eventually have to move from W7.
  4. Depends on the controller, connections, cables - any communication problems and the drive will drop out of the array
  5. I'd do the parity upgrade rebuild in maintenance mode just in case SHTF
  6. Wouldn't explain the freezes, but worth investigating. I had a dodgy power button on my WMC box, it kept randomly (cleanly) shutting down in the middle of the night. Took a few months for me to figure out what was happening.
  7. If you plan on using those for your Unraid build, you're in for a world of pain. You need a proper enclosure and an HBA to address them. Using USB and/or eSATA multipliers isn't recommended, as they'll just randomly drop drives out of the array - if you can get them all to show up at all.
  8. You can disable it, but the server shares may no longer show up under 'Network' in Windows 10 clients; you can still browse them manually by server name or IP.
  9. FWIW - last weekend I was troubleshooting an issue so I had to un-rack my server, and through sheer laziness left the USB thumb drive hanging off the back. While re-racking, it snapped off... I was back up in less than 20 minutes, from download to re-installation.
  10. Look at the specs https://www.asus.com/us/Motherboards/PRIME-X470-PRO/specifications/ - the CPU has 20 PCIe 3.0 lanes: 16 for the two x16 slots (either one full-speed x16 device like a GPU, or two devices at x8), plus 4 lanes for storage via the first M.2 slot. The chipset has 8 PCIe 2.0 lanes: 4 for the last x16 slot (it will run at x4), 3 for the x1 slots, and 2 for storage via the second M.2 slot, though really only 1 is available. So that's your budget, 28 lanes: one or two GPUs will need at most 16 total (x16 for one, or x8 times two), HBAs need x8 for best throughput, and that leaves you x4 for a NIC or M.2 drive. Just remember the M.2 slots normally share PCIe lanes with other slots, so read the specs for the available configuration options.
  11. I figured the same thing, "it's an 850W power supply, what could go wrong..."
  12. This. I was having problems with drives dropping from the array, replaced HBAs, cables, drives, until I figured out there were voltage drops causing random drives to fall out of the array.
  13. Do you have all of the 4224's backplanes plugged into 1 cable coming off of the power supply?
  14. Not for that workload, you'd just be wasting power
  15. There's a bug in 6.7.2 where the rest of the system slows to a crawl while the mover is running. I didn't dig into the specifics, but it's fixed in 6.8.
  16. Dunno if this is expected behavior or not, but I was running 6.8.0-rc9; after logging in via RDP and shutting down both my WHS 2011 and Windows 10 VMs, then trying to log into the web interface for the Unraid server to do the update to 6.8.0 stable, the server crashed and rebooted. When it came back up it detected an unclean shutdown, as expected, and would have run a parity check if I'd started the array. I decided to just upgrade the server instead of bringing the array back online and waiting a day for the parity check. After upgrading and rebooting, the array showed parity as valid? I'm running a check anyway, but I just thought it was weird.
  17. Unraid has a 30 drive limit for the array, so there's that. As for your risk tolerance, well, you need backups for anything you can't afford to lose. Every storage OS is similar in that regard. Parity, RAID, whatever, it's not a backup. If you can't afford to lose it, you can't afford not to back it up.
  18. I'm pretty sure the OP was referring to the power draw, not the run time.
  19. Yep, install it on whatever clients you have that you want to shut down when on UPS power
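The nextcloud + mariadb + NginxProxyManager stack listed in the rack rundown (post 3) is commonly wired together with a compose file along these lines. This is only a sketch: the image tags, credentials, port, and host paths are placeholder assumptions, not the poster's actual configuration.

```yaml
# Illustrative docker-compose sketch for a nextcloud + mariadb pair.
# All credentials and paths below are placeholders - change them.
version: "3"
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - ./db:/var/lib/mysql
  app:
    image: nextcloud
    ports:
      - "8080:80"          # proxy this through NginxProxyManager instead of exposing it directly
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    depends_on:
      - db
    volumes:
      - ./nextcloud:/var/www/html
```

On Unraid most people install these as individual Community Applications containers rather than via compose, but the wiring (app container pointed at the database container's hostname) is the same idea.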
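For the client-side shutdown advice in post 19: on Linux clients, apcupsd running in network-client mode is one widely used option for APC units (NUT or the vendor's tool, e.g. CyberPower PowerPanel, would be the analogue for other brands). This fragment is a sketch with placeholder address and thresholds, assuming the UPS-attached host is already serving status over apcupsd's NIS port:

```
# /etc/apcupsd/apcupsd.conf on a client machine (illustrative values -
# the server address and shutdown thresholds are placeholders)
UPSCABLE ether
UPSTYPE net
DEVICE 192.168.1.50:3551   # host that has the UPS attached, running apcupsd NIS
BATTERYLEVEL 20            # shut down when charge drops below 20%
MINUTES 5                  # ...or when estimated runtime falls below 5 minutes
```

The client then shuts itself down cleanly when the master reports on-battery and either threshold is crossed.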
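The lane arithmetic in post 10 (20 CPU lanes plus 8 chipset lanes on the X470 board) can be sketched as a quick tally. The lane totals come from the post; the device list below is a hypothetical build, not anyone's actual hardware:

```python
# Rough PCIe lane budget tally for the X470 layout described above.
# Lane pool sizes are taken from the post; devices are illustrative.

CPU_LANES = 20      # 16 for the two x16 slots + 4 for the first M.2
CHIPSET_LANES = 8   # PCIe 2.0 lanes hung off the X470 chipset

def budget(devices):
    """Sum requested lanes per lane pool ('cpu' or 'chipset')."""
    used = {"cpu": 0, "chipset": 0}
    for name, lanes, pool in devices:
        used[pool] += lanes
    return used

# Hypothetical build: (device, lanes, pool)
build = [
    ("GPU in x16 slot (drops to x8)", 8, "cpu"),
    ("HBA in x16 slot (x8)",          8, "cpu"),
    ("NVMe M.2 #1",                   4, "cpu"),
    ("10Gb NIC in x4 slot",           4, "chipset"),
]

used = budget(build)
print(f"CPU lanes:     {used['cpu']}/{CPU_LANES}")          # 20/20
print(f"Chipset lanes: {used['chipset']}/{CHIPSET_LANES}")  # 4/8
assert used["cpu"] <= CPU_LANES
assert used["chipset"] <= CHIPSET_LANES
```

Swap in your own devices and pools; if either assertion fires, something has to move to a different slot or come out of the build.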